Please use this Handle URI to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62389

Full metadata record

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 吳家麟(Ja-Ling Wu) | |
| dc.contributor.author | Shun-Xuan Wang | en |
| dc.contributor.author | 王舜玄 | zh_TW |
| dc.date.accessioned | 2021-06-16T13:45:28Z | - |
| dc.date.available | 2018-07-26 | |
| dc.date.copyright | 2013-07-26 | |
| dc.date.issued | 2013 | |
| dc.date.submitted | 2013-07-09 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62389 | - |
| dc.description.abstract | 人們可以利用許多種類的攝影器材來為真實物件拍攝其可在電腦中使用的三維模型 (3-D volumetric model),但是沒有任何一種現存可以用來拍攝三維模型的攝影機可以從鏡面反射物體 (specular object) 上得到良好的三維模型,因此我們利用可同時獲取色彩資訊 (color map) 與深度資訊 (depth map) 的彩色暨深度攝影機 (RGB-D camera) 製作針對鏡面實物的三維掃瞄架構。即使彩色暨深度攝影機獲取的深度資訊存在以下兩個主要的問題:雜訊 (noise) 與空洞 (hole),而且這些問題在鏡面實物的拍攝時會更加嚴重,但是我們提出的系統可以整合我們在色彩資訊中得到的視覺線索 (visual cue) 來重建這些被破壞的深度資訊。我們的實驗結果顯示我們提出的系統能夠比其它先前的相關研究重建出更接近於原實物的三維模型。 | zh_TW |
| dc.description.abstract | 3-D volumetric models of real objects can be obtained with many types of cameras; however, none of them works well when the scanning target is a specular object. The proposed 3-D scanning system uses an RGB-D camera, which acquires color values (a color map) and depth values (a depth map) simultaneously. Although the depth maps captured by RGB-D cameras are noisy and contain holes, and specular surfaces make both problems worse, the proposed system integrates visual cues from the color maps to rebuild the damaged depth maps. Experimental results show that the rebuilt 3-D models are closer to the original objects than those produced by previous related works, even when laser scanners are used. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T13:45:28Z (GMT). No. of bitstreams: 1 ntu-102-R00944050-1.pdf: 1219131 bytes, checksum: 65f3449dfb0ac1794723c3ee48d3ded8 (MD5) Previous issue date: 2013 | en |
| dc.description.tableofcontents | 誌謝 (Acknowledgements); 摘要 (Chinese Abstract); Abstract; 1 Introduction; 1.1 3-D Scanning; 1.2 3-D Scanning for Specular Objects; 2 Related Works; 2.1 Related Image Processing; 2.2 RGB-D Camera Processing; 2.3 3-D Shape From Specular Objects; 3 The Proposed Approach; 3.1 Visual Cues Integration from Color and Depth Information; 3.2 Reflectance Similarity; 3.3 Surface Reconstruction; 3.3.1 Energy Function of the Labeling Problem; 3.3.2 Data Term; 3.3.3 Smooth Term; 3.3.4 Implementation Details; 3.4 Edge-Preserved Depth Value Inpainting; 3.5 3-D Formation; 4 Experiments and Results; 4.1 Experimental Setting; 4.2 Ground Truth; 4.3 The Weighting Factor α of the Energy Function; 4.4 Comparison With Other Algorithms; 4.5 Discussion; 5 Discussions and Conclusions; 5.1 Limitation and Future Work; 5.2 Conclusions; Bibliography | |
| dc.language.iso | en | |
| dc.subject | 雙邊濾波器 | zh_TW |
| dc.subject | 圖形切割演算法 | zh_TW |
| dc.subject | 鏡面反射物體 | zh_TW |
| dc.subject | 訊號重建 | zh_TW |
| dc.subject | 三維物件掃瞄 | zh_TW |
| dc.subject | 彩色攝影機 | zh_TW |
| dc.subject | 深度攝影機 | zh_TW |
| dc.subject | 圖形處理器 | zh_TW |
| dc.subject | Graph Cut Algorithm | en |
| dc.subject | 3D scanning | en |
| dc.subject | GPU | en |
| dc.subject | Depth Cameras | en |
| dc.subject | RGB Camera | en |
| dc.subject | Signal Reconstruction | en |
| dc.subject | Specular Objects | en |
| dc.subject | Bilateral Filter | en |
| dc.title | 使用彩色暨深度攝影機製作鏡面物體之三維掃描儀 | zh_TW |
| dc.title | An RGB-D Camera Based 3-D Scanner for Specular Objects | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 101-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 朱威達(Wei-Ta Chu),鄭文皇(Wen-Huang Cheng),胡敏君(Min-Chun Hu) | |
| dc.subject.keyword | 三維物件掃瞄,圖形處理器,深度攝影機,彩色攝影機,訊號重建,鏡面反射物體,雙邊濾波器,圖形切割演算法 | zh_TW |
| dc.subject.keyword | 3D scanning, GPU, Depth Cameras, RGB Camera, Signal Reconstruction, Specular Objects, Bilateral Filter, Graph Cut Algorithm | en |
| dc.relation.page | 41 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2013-07-09 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| Appears in Collections: | 資訊網路與多媒體研究所 | |
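
The abstract above describes repairing the noisy, hole-ridden depth maps of an RGB-D camera by integrating visual cues from the registered color map, and the keyword fields list the bilateral filter among the tools involved. The sketch below is a minimal illustration of that color-guided idea only, not the pipeline actually proposed in the thesis: it fills depth holes with a joint (cross) bilateral filter whose range weights come from the aligned color image. The function name, the zeros-as-holes convention, and all parameter values are assumptions made for this example.

```python
# Illustrative sketch only (not the thesis's method): color-guided hole filling
# for a depth map via a joint/cross bilateral filter. Assumes holes are stored
# as zeros and that the color image is registered to the depth map.
import numpy as np

def fill_depth_holes(depth, color, radius=5, sigma_space=3.0, sigma_color=10.0):
    """Replace zero-valued (hole) pixels in `depth` with a weighted average of
    nearby valid depths, weighted by spatial distance and color similarity."""
    h, w = depth.shape
    filled = depth.astype(np.float64).copy()
    gray = color.astype(np.float64).mean(axis=2)   # simple luminance guide image
    ys, xs = np.nonzero(depth == 0)                # coordinates of hole pixels
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch_d = depth[y0:y1, x0:x1].astype(np.float64)
        patch_g = gray[y0:y1, x0:x1]
        valid = patch_d > 0                        # only borrow measured depths
        if not valid.any():
            continue                               # no valid neighbour to use
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_space = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_space ** 2))
        w_color = np.exp(-((patch_g - gray[y, x]) ** 2) / (2 * sigma_color ** 2))
        weights = w_space * w_color * valid
        if weights.sum() > 0:
            filled[y, x] = (weights * patch_d).sum() / weights.sum()
    return filled
```

Given a Kinect-style depth map `D` (holes stored as zeros) and its registered color image `C`, `fill_depth_holes(D, C)` returns a depth map in which each hole pixel is replaced by a color-consistent average of nearby valid measurements; the keyword fields suggest the thesis itself goes further, combining such cues with a graph-cut formulation and GPU processing.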
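
The table of contents lists a labeling energy built from a data term, a smooth term, and a weighting factor α, and the keyword fields name the graph cut algorithm. In the standard graph-cut labeling formulation, which these entries appear to refer to, such an energy has the generic form below; the thesis's specific data and smooth terms are not reproduced here.

$$
E(l) = \sum_{p \in \mathcal{P}} D_p(l_p) \;+\; \alpha \sum_{(p,q) \in \mathcal{N}} V_{p,q}(l_p, l_q)
$$

Here $D_p$ measures how well label $l_p$ fits pixel $p$, $V_{p,q}$ penalizes label disagreement between neighbouring pixels, and α trades the two terms off against each other, matching the role of the weighting factor studied in Section 4.3 of the thesis.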
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-102-1.pdf (not authorized for public access) | 1.19 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.