NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65132
Full metadata record (DC field: value [language])
dc.contributor.advisor: 吳家麟
dc.contributor.author: Ming-Hung Tsai [en]
dc.contributor.author: 蔡明宏 [zh_TW]
dc.date.accessioned: 2021-06-16T23:26:26Z
dc.date.available: 2013-08-15
dc.date.copyright: 2012-08-15
dc.date.issued: 2012
dc.date.submitted: 2012-07-31
dc.identifier.citation:
[1] J. Assa and L. Wolf. Diorama construction from a single image. In Eurographics 2007. Eurographics Association, 2007.
[2] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), pages 417–424, New York, NY, USA, 2000. ACM Press/Addison-Wesley Publishing Co.
[3] Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), volume 1, pages 105–112, 2001.
[4] Y.-Y. Chuang, D. B. Goldman, K. C. Zheng, B. Curless, D. H. Salesin, and R. Szeliski. Animating pictures with stochastic motion textures. ACM Trans. Graph., 24(3):853–860, July 2005.
[5] D. A. Forsyth. Shape from texture and integrability. In Proceedings of the IEEE International Conference on Computer Vision, 27(5):447–453, Dec. 2008.
[6] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. ACM Transactions on Graphics, 23:689–694, 2004.
[7] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. Int. J. Comput. Vision, 59(2):167–181, Sept. 2004.
[8] M. Gerrits, B. de Decker, C. O. Ancuti, T. Haber, C. O. Ancuti, T. Mertens, and P. Bekaert. Stroke-based creation of depth maps. In ICME, pages 1–6, 2011.
[9] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341–2353, Dec. 2011.
[10] S. Lee, D. Feng, and B. Gooch. Automatic construction of 3D models from architectural line drawings. In Proceedings of the 2008 Symposium on Interactive 3D Graphics and Games (I3D '08), pages 123–130, New York, NY, USA, 2008. ACM.
[11] F. Meyer. Color image segmentation. IEEE ICIP, 1992.
[12] B. M. Oh, M. Chen, J. Dorsey, and F. Durand. Image-based modeling and photo editing. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01), pages 433–442, New York, NY, USA, 2001. ACM.
[13] S. Paris and F. Durand. A fast approximation of the bilateral filter using a signal processing approach. International Journal of Computer Vision, 81:24–52, 2009. doi:10.1007/s11263-007-0110-8.
[14] F. Porikli. Constant time O(1) bilateral filtering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pages 1–8, June 2008.
[15] B. C. Russell and A. Torralba. Building a database of 3D scenes from user annotations. In IEEE CVPR, 2009.
[16] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vision, 47(1-3):7–42, Apr. 2002.
[17] D. Sýkora, J. Dingliana, and S. Collins. LazyBrush: Flexible painting tool for hand-drawn cartoons. Computer Graphics Forum, 28(2):599–608, 2009.
[18] D. Sýkora, D. Sedlacek, S. Jinchao, J. Dingliana, and S. Collins. Adding depth to cartoons using sparse depth (in)equalities. Computer Graphics Forum, 29(2):615–623, 2010.
[19] T.-P. Wu, J. Sun, C.-K. Tang, and H.-Y. Shum. Interactive normal reconstruction from a single image. ACM Trans. Graph., 27(5):119:1–119:9, Dec. 2008.
[20] Q. Yang, K.-H. Tan, and N. Ahuja. Real-time O(1) bilateral filtering. In CVPR, pages 557–564, 2009.
[21] W. Yang, J. Cai, J. Zheng, and J. Luo. User-friendly interactive image segmentation through unified combinatorial user inputs. IEEE Transactions on Image Processing, 19(9):2470–2479, Sept. 2010.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65132
dc.description.abstract: 近幾年來由於3D技術的成熟,真實場景的深度資訊可以直接由深度相機取得,但是已經存在的繪畫若是想要為其附加深度值,唯一的方式只能由使用者來標記。我們提出了一個低時間複雜度的互動式系統來解決這個問題。以往的相關研究大多採用整張圖一次處理、minimize energy function的形式來進行,因此十分耗費時間;而我們的系統將這個問題轉換成filter-based algorithm,以達到低response time與合理滿意的結果。我們的靈感來自於雕刻的行為,以重複的刻畫行為來進行,在刻畫的過程當中使用者也可以即時檢視半成品的狀態。由於我們的系統能夠即時產生深度圖,藉由將2D影像加上深度資訊投影到3D當中,可以做出類似版畫的觀察模式來更直覺地掌握現在的情況,如此一來使用者就可以利用這個資訊判斷下一步的操作。最後我們將展示擁有depth map之後,除了能產生3D視覺效果之外,還能夠藉由這些深度資訊讓圖片編輯、區域強調甚至轉變成動畫這些原本複雜的行為變得更簡單。 [zh_TW]
dc.description.abstract: In the real world, the depth of objects can be acquired directly by existing depth cameras; however, the depth information of 2D paintings can only be generated by users. To deal with this depth generation problem, we devise a novel, low-complexity interactive depth generation approach for 2D paintings. In contrast to traditional approaches, which usually address this problem through a time-consuming global optimization framework, we formulate the problem as filter-based schemes to achieve reasonable interactive response time. Inspired by sculpturing, depth information generation is treated as an iterative stroking-and-viewing process. Our work achieves instant response for interactive generation of depth information, so the depth generation results can be visualized immediately and users can rapidly rectify them. Finally, we illustrate that the newly added depth information for 2D paintings can not only be applied to view 3D effects but can also support interesting applications such as editing, enhancement, and user-controlled animation of 2D paintings. [en]
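The abstract describes the filter-based approach only at a high level, and the record's subject keywords (Watershed, Bilateral Filter, Depth Map) name its main ingredients. As a rough illustration of how such a scheme can avoid a global optimization, the Python sketch below grows user depth strokes into segments with a watershed transform and then smooths the resulting depth map with a bilateral filter. This is a minimal sketch assuming OpenCV and NumPy; the function name propagate_stroke_depth, the input conventions, and the filter parameters are illustrative assumptions, not the thesis's actual implementation.

import numpy as np
import cv2

def propagate_stroke_depth(image_bgr, stroke_labels, stroke_depth):
    """image_bgr: H x W x 3 uint8 painting.
    stroke_labels: H x W int32 map, 0 for unlabeled pixels, k > 0 for stroke k.
    stroke_depth: dict mapping stroke id k to a depth value in [0, 1]."""
    # Grow each labeled stroke region until regions meet at image edges;
    # watershed marks boundary pixels with -1.
    markers = cv2.watershed(image_bgr, stroke_labels.copy())

    # Fill every segment with the depth assigned to the stroke that seeded it;
    # boundary pixels (-1) simply keep depth 0 in this sketch.
    depth = np.zeros(markers.shape, dtype=np.float32)
    for k, value in stroke_depth.items():
        depth[markers == k] = value

    # Edge-aware smoothing of the depth map, standing in for the local
    # filtering step named in the table of contents.
    return cv2.bilateralFilter(depth, 9, 0.1, 7)

In the thesis itself this role is played by the incremental segmentation, intra-segment depth propagation, and global depth refinement steps listed in the table of contents, applied repeatedly as the user strokes and inspects the intermediate depth map.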
dc.description.provenance: Made available in DSpace on 2021-06-16T23:26:26Z (GMT). No. of bitstreams: 1; ntu-101-R99922117-1.pdf: 4429012 bytes, checksum: 42bf5d25e2ffc52d7b0c1a3b1aee493d (MD5). Previous issue date: 2012 [en]
dc.description.tableofcontents:
致謝 i
中文摘要 ii
Abstract iii
1 Introduction 1
2 Related Work 4
3 Method 6
3.1 Formulation of Depth Map Generation by Global Optimization 6
3.2 Interactive Development of Depth Map 9
3.2.1 Incremental Segmentation 9
3.2.2 Intra Segment Depth Propagation by Local Filtering 11
3.2.3 Global Depth Refinement 15
3.3 Interactive Verification of Semi-Finished Depth Map 16
4 Experimental Results 17
4.1 Depth Information Creation 17
4.2 Applications 19
4.2.1 2D Painting Animation 19
4.2.2 Shallow Depth of Field Effect 19
4.2.3 Image Composition 20
5 Limitation and Future Works 22
6 Conclusion 23
Bibliography 24
dc.language.iso: en
dc.subject: 分水嶺切割法 [zh_TW]
dc.subject: 深度圖 [zh_TW]
dc.subject: 雙向濾波器 [zh_TW]
dc.subject: Watershed [en]
dc.subject: Bilateral Filter [en]
dc.subject: Depth Map [en]
dc.title: 二維繪畫之深度雕刻 [zh_TW]
dc.title: Depth Sculpturing of 2D Painting [en]
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 莊永裕, 朱威達, 許秋婷, 鄭文皇
dc.subject.keyword: 深度圖, 雙向濾波器, 分水嶺切割法 [zh_TW]
dc.subject.keyword: Depth Map, Bilateral Filter, Watershed [en]
dc.relation.page: 26
dc.rights.note: 有償授權
dc.date.accepted: 2012-07-31
dc.contributor.author-college: 電機資訊學院 [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 [zh_TW]
Appears in Collections: 資訊工程學系

Files in This Item:
File: ntu-101-1.pdf
Size: 4.33 MB
Format: Adobe PDF
Access: Restricted (未授權公開取用; not authorized for public access)