NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54667

Full metadata record (DC field [language]: value)
dc.contributor.advisor: 王傑智 (Chieh-Chih Wang)
dc.contributor.author [en]: HUNG-CHIH LU
dc.contributor.author [zh_TW]: 盧泓志
dc.date.accessioned: 2021-06-16T03:36:26Z
dc.date.available: 2015-08-11
dc.date.copyright: 2015-08-11
dc.date.issued: 2015
dc.date.submitted: 2015-06-11
dc.identifier.citation:
Arieli, Y., Freedman, B., Machline, M., & Shpunt, A. (2012). Depth mapping using projected patterns. US Patent 8,150,142.
Bascle, B., Blake, A., & Zisserman, A. (1996). Motion deblurring and super-resolution from an image sequence. In Computer Vision – ECCV '96 (pp. 571–582). Springer.
Cho, S. & Lee, S. (2009). Fast motion deblurring. ACM Transactions on Graphics (TOG), 28, 145.
Girod, B. & Scherock, S. (1990). Depth from defocus of structured light. In 1989 Advances in Intelligent Robotics Systems Conference (pp. 209–215).
Khoshelham, K. (2011). Accuracy analysis of Kinect depth data. In ISPRS Workshop Laser Scanning, volume 38 (pp. W12).
Kim, T. H., Ahn, B., & Lee, K. M. (2013). Dynamic scene deblurring. In 2013 IEEE International Conference on Computer Vision (ICCV) (pp. 3160–3167).
Kim, T. H. & Lee, K. M. (2014). Segmentation-free dynamic scene deblurring. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2766–2773).
Liu, R., Li, Z., & Jia, J. (2008). Image partial blur detection and classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008) (pp. 1–8).
Nayar, S. & Ben-Ezra, M. (2004). Motion-based motion deblurring. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6), 689–698.
Ringaby, E. & Forssén, P.-E. (2011). Scan rectification for structured light range sensors with rolling shutters. In 2011 IEEE International Conference on Computer Vision (ICCV) (pp. 1575–1582).
Scharstein, D. & Szeliski, R. (2003). High-accuracy stereo depth maps using structured light. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), volume 1 (pp. I-195).
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54667
dc.description.abstract [zh_TW]: Deblurring of 3D scenes captured with depth cameras is a novel topic in computer vision. Motion blur arises in many 3D cameras based on structured light. We analyze why structured-light 3D cameras produce motion blur and design a novel method for deblurring 3D scenes: the blurred part of the scene is replaced with a model of the object. Because we process consecutive 3D frames, the object model can be built before the object becomes blurred. The algorithm has two stages, motion blur detection and motion blur removal. In the detection stage, we decide whether motion blur has occurred from the object's speed. In the removal stage, we first judge the type of motion blur and then apply the iterative closest point (ICP) algorithm in a way that depends on that type. Experiments on three sets of real data successfully produce deblurred results.
dc.description.abstract [en]: Deblurring of 3D scenes captured by 3D sensors is a novel topic in computer vision. Motion blur occurs in a number of 3D sensors based on structured light techniques. We analyze the causes of motion blur captured by structured light depth cameras and design a novel algorithm that uses the speed cue and object models to deblur a 3D scene. The main idea is to use the 3D model of an object to replace the blurry object in the scene. Because we aim to deal with consecutive 3D frame sequences, i.e., 3D videos, an object model can be built in a frame where the object is not yet blurry. Our deblurring method is divided into two parts: motion blur detection and motion blur removal. For motion blur detection, we use the speed cue to locate the motion blur. For motion blur removal, we first judge the type of motion blur and then apply the iterative closest point (ICP) algorithm in different ways according to that type. The proposed method is evaluated on real-world cases and successfully accomplishes both motion blur detection and blur removal.
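The abstract describes a two-stage pipeline: detect motion blur from an object's speed, then remove it by aligning a clean object model (built from an earlier, unblurred frame) to the blurred observation with ICP and substituting the aligned model for the blurred points. The sketch below only illustrates that idea; it is not the thesis implementation, and names such as estimate_speed, speed_threshold, and replace_blurred_object are illustrative assumptions. It also uses a single plain point-to-point ICP, whereas the thesis applies ICP differently depending on the detected blur type.

```python
# Minimal sketch (not the thesis code) of the two-stage idea in the abstract:
# 1) flag an object as motion-blurred when its estimated speed exceeds a threshold,
# 2) align a clean object model to the blurred observation with basic point-to-point
#    ICP and substitute the aligned model for the blurred points.
import numpy as np
from scipy.spatial import cKDTree


def estimate_speed(centroid_prev, centroid_curr, dt):
    """Approximate object speed from centroid displacement between consecutive frames."""
    return np.linalg.norm(centroid_curr - centroid_prev) / dt


def is_motion_blurred(centroid_prev, centroid_curr, dt, speed_threshold=0.5):
    """Speed-cue detection: treat the object as blurred above a threshold speed (assumed, in m/s)."""
    return estimate_speed(centroid_prev, centroid_curr, dt) > speed_threshold


def icp_point_to_point(model, target, iterations=30):
    """Return a 4x4 rigid transform aligning the (N,3) model to the (M,3) target points."""
    src = model.copy()
    transform = np.eye(4)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)              # nearest-neighbour correspondences
        matched = target[idx]
        mu_src, mu_tgt = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_src).T @ (matched - mu_tgt)
        U, _, Vt = np.linalg.svd(H)           # closed-form rigid alignment (Kabsch)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_tgt - R @ mu_src
        src = src @ R.T + t                   # apply the incremental step
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        transform = step @ transform          # accumulate the total transform
    return transform


def replace_blurred_object(scene, blurred_mask, model, transform):
    """Substitute the aligned clean model for the points flagged as blurred."""
    aligned = model @ transform[:3, :3].T + transform[:3, 3]
    return np.vstack([scene[~blurred_mask], aligned])
```

In this sketch the blurred points are simply discarded and replaced by the aligned model; the choice of speed_threshold and the per-type ICP strategy would follow the thesis and are not reproduced here.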
dc.description.provenance [en]: Made available in DSpace on 2021-06-16T03:36:26Z (GMT). No. of bitstreams: 1. ntu-104-R02922127-1.pdf: 3248896 bytes, checksum: 5770083f8fc50d18094e492b7a7c89c7 (MD5). Previous issue date: 2015.
dc.description.tableofcontents:
CHAPTER 1. Introduction 1
CHAPTER 2. Related Work 3
CHAPTER 3. Motion Blur Detection 5
3.1. The Foundation of Structured Light 5
3.2. Causes of Motion Blur of Structured Light Depth Cameras 7
3.3. The Difference between Motion Blur in 2D Images and 3D Point Clouds 7
3.4. Our Blur Detection Method 12
CHAPTER 4. Deblurring 14
4.1. Building Object Model 14
4.2. Judge the Type of Motion Blur 14
4.3. Find the Correct Object Model Pose 17
CHAPTER 5. Experiment and Discussion 19
5.1. Experiment Setup 19
5.2. Experiment Results and Discussion 19
CHAPTER 6. Conclusion and Future Work 25
BIBLIOGRAPHY 27
dc.language.iso: en
dc.subject [zh_TW]: 結構光 (structured light)
dc.subject [zh_TW]: 深度相機 (depth camera)
dc.subject [zh_TW]: 去模糊 (deblurring)
dc.subject [en]: Deblurring
dc.subject [en]: Structured Light
dc.subject [en]: Depth Camera
dc.title [zh_TW]: 深度相機的模糊偵測與去模糊 (Blur detection and deblurring for depth cameras)
dc.title [en]: Structured Light Depth Camera Motion Blur Detection and Deblurring
dc.type: Thesis
dc.date.schoolyear: 103-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 胡竹生, 林文杰, 林惠勇
dc.subject.keyword [zh_TW]: 深度相機 (depth camera), 結構光 (structured light), 去模糊 (deblurring)
dc.subject.keyword [en]: Depth Camera, Structured Light, Deblurring
dc.relation.page: 28
dc.rights.note: 有償授權 (authorized with compensation)
dc.date.accepted: 2015-06-12
dc.contributor.author-college [zh_TW]: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept [zh_TW]: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering)
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-104-1.pdf
Access: Restricted (not authorized for public access)
Size: 3.17 MB
Format: Adobe PDF