Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/16346

Full metadata record

DC Field: Value [Language]
dc.contributor.advisor: 陳炳宇 (Bing-Yu Chen)
dc.contributor.author: Kuan-Hung Liu [en]
dc.contributor.author: 劉冠宏 [zh_TW]
dc.date.accessioned: 2021-06-07T18:10:45Z
dc.date.copyright: 2020-08-25
dc.date.issued: 2020
dc.date.submitted: 2020-08-01
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/16346
dc.description.abstract: We present a system that assists users in creating clipart from different viewpoints. Inspired by designers' creation processes, the system generates a 3D model from the user's input clipart and renders it from the viewpoint the user needs, providing a visual reference for creation.
The main challenge is making the generated reference meet the user's expectations: the generated model must have correct part proportions and positions while preserving a geometric style similar to the input clipart. This thesis therefore proposes a user-assisted curve extrusion method and generates the reference with a consistent rendering style.
With the reference, users can create at their desired viewpoints more efficiently. A user study combining an intuitive interface with the generated references shows that, using our system, the clipart users design from different viewpoints resembles the input clipart in both geometric style and shape. [zh_TW]
dc.description.abstract: We present an assistive system for clipart design by providing visual scaffolds from unseen viewpoints. Inspired by the artists' creation process, our system constructs the visual scaffold by first synthesizing the reference 3D shape of the input clipart and rendering it from the desired viewpoint. The critical challenge of constructing this visual scaffold is to generate a reference 3D shape that matches the user's expectation in terms of object sizing and positioning while preserving the geometric style of the input clipart. To address this challenge, we propose a user-assisted curve extrusion method to obtain the reference 3D shape. We render the synthesized reference 3D shape with a consistent style into the visual scaffold. By following the generated visual scaffold, users can efficiently design clipart from their desired viewpoints. A user study conducted with an intuitive user interface and our generated visual scaffolds suggests that users are able to design clipart from different viewpoints while preserving the original geometric style without losing the original shape. [en]
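The abstract's core idea of extruding a 2D outline into a reference 3D shape can be sketched minimally as follows. This is an illustrative assumption, not the thesis's actual user-assisted method: `extrude_curve`, its parameters, and the uniform straight-line extrusion along z are hypothetical simplifications (the thesis adapts the extrusion with user assistance to preserve part proportions and style).

```python
import numpy as np

def extrude_curve(curve_xy, depth, n_slices=8):
    """Extrude a closed 2D outline along the z-axis into a quad-mesh surface.

    curve_xy: sequence of (x, y) points sampled along the closed outline.
    depth: total extrusion depth along z.
    Returns (vertices, faces); each face is a quad of vertex indices.
    """
    curve_xy = np.asarray(curve_xy, dtype=float)
    n = len(curve_xy)
    zs = np.linspace(0.0, depth, n_slices)
    # Stack one copy of the outline per z-slice.
    vertices = np.concatenate(
        [np.column_stack([curve_xy, np.full(n, z)]) for z in zs]
    )
    faces = []
    for s in range(n_slices - 1):
        for i in range(n):
            a = s * n + i
            b = s * n + (i + 1) % n             # next point on the same slice
            faces.append((a, b, b + n, a + n))  # quad bridging adjacent slices
    return vertices, np.array(faces)

# Extrude a unit square outline to depth 0.5.
verts, faces = extrude_curve([(0, 0), (1, 0), (1, 1), (0, 1)], depth=0.5)
print(verts.shape, faces.shape)  # (32, 3) (28, 4)
```

Rendering such a mesh from a new camera pose would then yield the visual scaffold the abstract describes.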
dc.description.provenance: Made available in DSpace on 2021-06-07T18:10:45Z (GMT). No. of bitstreams: 1
U0001-2907202016452100.pdf: 12055930 bytes, checksum: d3b6af399c21a994679d39f72b2845a9 (MD5)
Previous issue date: 2020 [en]
dc.description.tableofcontents:
口試委員會審定書 (Thesis Committee Certification) i
致謝 (Acknowledgements) ii
摘要 (Chinese Abstract) iii
Abstract iv
List of Figures viii
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Novel-view Synthesis 5
2.2 Clipart Synthesis 6
2.3 Assisting Authoring Tools 7
2.4 Geometric Stylization 7
Chapter 3 Method 9
3.1 Visual Scaffold Synthesis 9
3.1.1 Single-view Guiding Shape Synthesis 11
3.1.2 User-assisted Curve Extrusion 13
3.2 User Interface 17
Chapter 4 Results and Evaluation 19
4.1 3D Shape Comparison 21
4.2 User Study 22
Chapter 5 Conclusion 29
Bibliography 31
Appendices 35
Chapter A Differentiable Volumetric Renderer Experiment 1
Chapter B Result of User-assisted Curve Extrusion 3
Chapter C User Drawings 5
dc.language.iso: en
dc.subject: 影像處理 [zh_TW]
dc.subject: 參數化曲線 [zh_TW]
dc.subject: 曲面模型 [zh_TW]
dc.subject: Parametric curve [en]
dc.subject: Image Processing [en]
dc.subject: Surface model [en]
dc.title: 多視角美工圖案之輔助設計系統 [zh_TW]
dc.title: Multi-view Clipart Design [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 陳文進 (Wen-Chin Chen), 李明穗 (Ming-Sui Lee)
dc.subject.keyword: 參數化曲線, 曲面模型, 影像處理 [zh_TW]
dc.subject.keyword: Parametric curve, Surface model, Image Processing [en]
dc.relation.page: 52
dc.identifier.doi: 10.6342/NTU202002046
dc.rights.note: 未授權 (Not publicly authorized)
dc.date.accepted: 2020-08-03
dc.contributor.author-college: 管理學院 (College of Management) [zh_TW]
dc.contributor.author-dept: 資訊管理學研究所 (Graduate Institute of Information Management) [zh_TW]
Appears in Collections: 資訊管理學系 (Department of Information Management)

Files in this item:
File: U0001-2907202016452100.pdf (restricted access, not publicly authorized)
Size: 11.77 MB
Format: Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
