NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Networking and Multimedia
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62389
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 吳家麟(Ja-Ling Wu)
dc.contributor.author: Shun-Xuan Wang [en]
dc.contributor.author: 王舜玄 [zh_TW]
dc.date.accessioned: 2021-06-16T13:45:28Z
dc.date.available: 2018-07-26
dc.date.copyright: 2013-07-26
dc.date.issued: 2013
dc.date.submitted: 2013-07-09
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62389
dc.description.abstract: Although many kinds of cameras can be used to capture 3-D volumetric models of real objects for use on a computer, none of the existing 3-D scanning cameras obtains a good model from a specular object. We therefore build a 3-D scanning framework for specular objects around an RGB-D camera, which acquires color information (color map) and depth information (depth map) simultaneously. The depth maps captured by RGB-D cameras suffer from two main problems, noise and holes, and both problems become more severe when scanning specular objects; nevertheless, the proposed system integrates visual cues obtained from the color maps to rebuild the corrupted depth information. Our experimental results show that the proposed system reconstructs 3-D models closer to the original objects than previous related works. [zh_TW]
dc.description.abstract: 3-D volumetric models of real objects can be obtained through many types of cameras; however, none of them operates well when the scanning target is a specular object. The proposed 3-D scanning system uses an RGB-D camera, in which both color values (color map) and depth values (depth map) are acquired simultaneously. Even though the depth maps captured by RGB-D cameras are both noisy and riddled with holes, and the specular surfaces of objects aggravate these defects, the proposed system integrates visual cues from the color maps to rebuild the damaged depth maps. Experimental results show that the rebuilt 3-D models are closer to the original objects than those produced by previous related works, even when laser scanners are used. [en]
dc.description.provenance: Made available in DSpace on 2021-06-16T13:45:28Z (GMT). No. of bitstreams: 1; ntu-102-R00944050-1.pdf: 1219131 bytes, checksum: 65f3449dfb0ac1794723c3ee48d3ded8 (MD5); Previous issue date: 2013 [en]
dc.description.tableofcontents:
Acknowledgements  ii
Abstract (Chinese)  iii
Abstract  iv
1 Introduction  1
  1.1 3-D Scanning  1
  1.2 3-D Scanning for Specular Objects  2
2 Related Works  5
  2.1 Related Image Processing  6
  2.2 RGB-D Camera Processing  6
  2.3 3-D Shape From Specular Objects  7
3 The Proposed Approach  8
  3.1 Visual Cues Integration from Color and Depth Information  9
  3.2 Reflectance Similarity  10
  3.3 Surface Reconstruction  14
    3.3.1 Energy Function of the Labeling Problem  14
    3.3.2 Data Term  15
    3.3.3 Smooth Term  16
    3.3.4 Implementation Details  16
  3.4 Edge-Preserved Depth Value Inpainting  17
  3.5 3-D Formation  17
4 Experiments and Results  19
  4.1 Experimental Setting  19
  4.2 Ground Truth  20
  4.3 The Weighting Factor α of the Energy Function  21
  4.4 Comparison With Other Algorithms  25
  4.5 Discussion  34
5 Discussions and Conclusions  35
  5.1 Limitation and Future Work  35
  5.2 Conclusions  36
Bibliography  37
dc.language.iso: en
dc.subject: 雙邊濾波器 (Bilateral Filter) [zh_TW]
dc.subject: 圖形切割演算法 (Graph Cut Algorithm) [zh_TW]
dc.subject: 鏡面反射物體 (Specular Objects) [zh_TW]
dc.subject: 訊號重建 (Signal Reconstruction) [zh_TW]
dc.subject: 三維物件掃瞄 (3-D Object Scanning) [zh_TW]
dc.subject: 彩色攝影機 (RGB Camera) [zh_TW]
dc.subject: 深度攝影機 (Depth Camera) [zh_TW]
dc.subject: 圖形處理器 (GPU) [zh_TW]
dc.subject: Graph Cut Algorithm [en]
dc.subject: 3D scanning [en]
dc.subject: GPU [en]
dc.subject: Depth Cameras [en]
dc.subject: RGB Camera [en]
dc.subject: Signal Reconstruction [en]
dc.subject: Specular Objects [en]
dc.subject: Bilateral Filter [en]
dc.title: 使用彩色暨深度攝影機製作鏡面物體之三維掃描儀 [zh_TW]
dc.title: An RGB-D Camera Based 3-D Scanner for Specular Objects [en]
dc.type: Thesis
dc.date.schoolyear: 101-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 朱威達(Wei-Ta Chu), 鄭文皇(Wen-Huang Cheng), 胡敏君(Min-Chun Hu)
dc.subject.keyword: 三維物件掃瞄, 圖形處理器, 深度攝影機, 彩色攝影機, 訊號重建, 鏡面反射物體, 雙邊濾波器, 圖形切割演算法 [zh_TW]
dc.subject.keyword: 3D scanning, GPU, Depth Cameras, RGB Camera, Signal Reconstruction, Specular Objects, Bilateral Filter, Graph Cut Algorithm [en]
dc.relation.page: 41
dc.rights.note: Authorized for a fee (有償授權)
dc.date.accepted: 2013-07-09
dc.contributor.author-college: College of Electrical Engineering and Computer Science (電機資訊學院) [zh_TW]
dc.contributor.author-dept: Graduate Institute of Networking and Multimedia (資訊網路與多媒體研究所) [zh_TW]
Appears in Collections: Graduate Institute of Networking and Multimedia

Files in This Item:
File: ntu-102-1.pdf (restricted; not authorized for public access)
Size: 1.19 MB
Format: Adobe PDF


Unless their copyright terms are otherwise specified, all items in this repository are protected by copyright, with all rights reserved.
