Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51203
Full metadata record
dc.contributor.advisor: 莊永裕 (Yung-Yu Chuang)
dc.contributor.author: Shuen-Huei Guan (en)
dc.contributor.author: 官順暉 (zh_TW)
dc.date.accessioned: 2021-06-15T13:27:24Z
dc.date.available: 2018-03-08
dc.date.copyright: 2016-03-08
dc.date.issued: 2016
dc.date.submitted: 2016-02-16
dc.identifier.citation: [1] Autodesk. Autodesk Maya®. http://www.autodesk.com/products/maya/overview, 2016.
[2] Pixar. Pixar’s RenderMan®. https://renderman.pixar.com/, 2016.
[3] The R Project for Statistical Computing. http://www.r-project.org/, 2013.
[4] Bernard Mendiburu. 3D Movie Making: Stereoscopic Digital Cinema from Script to Screen. Focal Press, 2010.
[5] G. Sun and N. S. Holliman. Evaluating methods for controlling depth perception in stereoscopic cinematography. In Proc. SPIE, Stereoscopic Displays and Applications XX, volume 7237, 2009.
[6] Piotr Didyk, Tobias Ritschel, Elmar Eisemann, Karol Myszkowski, and Hans-Peter Seidel. A perceptual model for disparity. ACM Trans. Graph., 30(4):96:1–96:10, July 2011.
[7] Piotr Didyk, Tobias Ritschel, Elmar Eisemann, Karol Myszkowski, Hans-Peter Seidel, and Wojciech Matusik. A luminance-contrast-aware disparity model and applications. ACM Transactions on Graphics (Proceedings SIGGRAPH Asia 2012, Singapore), 31(6), 2012.
[8] Song-Pei Du, Belen Masia, Shi-Min Hu, and Diego Gutierrez. A metric of visual comfort for stereoscopic motion. ACM Trans. Graph., 32(6):222:1–222:9, 2013.
[9] Gary Sharp. RealD discusses stereo contrast ratio (SCR), 2013.
[10] Guan-Ming Su, Yu-Chi Lai, Andres Kwasinski, and Haohong Wang. 3D Visual Communications. John Wiley and Sons, 2012.
[11] Wikipedia. Depth perception. http://www.wikiwand.com/en/Depth_perception, 2016.
[12] E. Bruce Goldstein. Sensation and Perception. Cengage Learning, 9th edition, February 2013.
[13] Daren Tang. Depth perception. http://faculty.pccu.edu.tw/~tdl/percept7.htm.
[14] Stephan Reichelt, Ralf Häussler, Gerald Fütterer, and Norbert Leister. Depth cues in human visual perception and their realization in 3D displays. Three-Dimensional Imaging, Visualization, and Display, pages 76900B–76900B–12, 2010.
[15] J. E. Cutting and P. M. Vishton. Perceiving layout and knowing distance: the integration, relative potency and contextual use of different information about depth. Perception of Space and Motion, pages 69–118, 1995.
[16] O. Ostberg. Accommodation and visual fatigue in display work. Displays, 2(2): 81–85, 1980.
[17] Andrew J. Woods, Tom Docherty, and Rolf Koch. Image distortions in stereoscopic video systems. In Proc. SPIE, Stereoscopic Displays and Applications IV, volume 1915, pages 36–48, 1993.
[18] Victor S. Grinberg, Gregg Podnar, and M. W. Siegel. Geometry of binocular imaging. In Proc. of the IS&T/SPIE Symp. on Electronic Imaging, Stereoscopic Displays and Applications, volume 2177, pages 56–65, 1994.
[19] S. Palmer. Vision Science: Photons to Phenomenology. The MIT Press, 2002.
[20] P. Howard and B. Rogers. Seeing in Depth. Oxford University Press, 2002.
[21] Frank L. Kooi and Alexander Toet. Visual comfort of binocular and 3D displays. Displays, 25(2):99–108, 2004.
[22] Sumio Yano, Masaki Emoto, and Tetsuo Mitsuhashi. Two factors in visual fatigue caused by stereoscopic HDTV images. Displays, 25(4):141–150, 2004.
[23] B. Julesz. Foundations of Cyclopean Perception. The MIT Press, 2006.
[24] Robert T. Held and Martin S. Banks. Misperceptions in stereoscopic displays: A vision science perspective. In Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization, APGV ’08, pages 23–32, 2008.
[25] David M. Hoffman, Ahna R. Girshick, Kurt Akeley, and Martin S. Banks. Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. Journal of Vision, 8(3):1–30, 2008.
[26] M.T.M. Lambooij, W.A. IJsselsteijn, and M.F. Fortuin. Visual discomfort and visual fatigue of stereoscopic displays: A review. Journal of Imaging Science and Technology, 53:1–14, 2009.
[27] Takashi Shibata, Joohwan Kim, David M. Hoffman, and Martin S. Banks. The zone of comfort: Predicting visual discomfort with stereo displays. Journal of Vision, 11(6), 2011.
[28] F. Zilly, J. Kluger, and P. Kauff. Production rules for stereo acquisition. Proceedings of the IEEE, 99(4):590–606, 2011.
[29] Yong Ju Jung, Seong-il Lee, Hosik Sohn, Hyun Wook Park, and Yong Man Ro. Visual comfort assessment metric based on salient object motion information in stereoscopic video. Journal of Electronic Imaging, 21(1):011008–1–011008–16, 2012.
[30] B. Pollock, M. Burton, J. W. Kelly, S. Gilbert, and E. Winer. The right view from the wrong location: Depth perception in stereoscopic multi-user virtual environments. IEEE Transactions on Visualization and Computer Graphics, 18(4):581–588, 2012.
[31] Krzysztof Templin, Piotr Didyk, Tobias Ritschel, Karol Myszkowski, and Hans-Peter Seidel. Highlight microdisparity for improved gloss depiction. ACM Trans. Graph., 31(4):92:1–92:5, 2012.
[32] E. W. Jin, M. E. Miller, S. Endrikhovski, and C. D. Cerosaletti. Creating a comfortable stereoscopic viewing experience: effects of viewing distance and field of view on fusional range. In Proc. SPIE 5664, pages 10–21, 2006.
[33] E. W. Jin, M. E. Miller, and M. R. Bolin. Tolerance of misalignment in stereoscopic systems. In Proc. ICIS, pages 370–373, 2006.
[34] J. Li, M. Barkowsky, and P. Le Callet. Study on visual discomfort induced by stimulus movement at fixed depth on stereoscopic displays using shutter glasses. In Third International Workshop on Quality of Multimedia Experience (QoMEX), pages 1–8, 2011.
[35] J. Li, M. Barkowsky, and P. Le Callet. The influence of relative disparity and planar motion velocity on visual discomfort of stereoscopic videos. In Third International Workshop on Quality of Multimedia Experience (QoMEX), pages 155–160, 2011.
[36] S.-H. Cho and H.-B. Kang. Subjective evaluation of visual discomfort caused from stereoscopic 3D video using perceptual importance map. In TENCON 2012 - 2012 IEEE Region 10 Conference, pages 1–6, 2012.
[37] Xuan Yang, Linling Zhang, Tien-Tsin Wong, and Pheng-Ann Heng. Binocular tone mapping. ACM Trans. Graph., 31(4):93:1–93:10, July 2012.
[38] Martin Banks. Discomfort and fatigue from stereo 3D displays. http://spie.org/newsroom/technical-articles/4831-discomfort-and-fatigue-from-stereo-3d-displays, April 2013.
[39] Wa James Tam, F. Speranza, S. Yano, K. Shimono, and H. Ono. Stereoscopic 3D-TV: Visual comfort. IEEE Transactions on Broadcasting, 57(2):335–346, June 2011.
[40] Ian P. Howard and Brian J. Rogers. Seeing in Depth. Oxford University Press, USA, 2008.
[41] Graham Jones, Delman Lee, Nicolas Holliman, and David Ezra. Controlling perceived depth in stereoscopic images. In Stereoscopic Displays and Virtual Reality Systems VIII, page 200, 2001.
[42] Rene Klein Gunnewiek and Patrick Vandewalle. How to display 3D content realistically. Technical report, Philips Research Laboratories, The Netherlands, 2010.
[43] Kenichiro Masaoka, Atsuo Hanazato, Masaki Emoto, Hirokazu Yamanoue, Yuji Nojiri, and Fumio Okano. Spatial distortion prediction system for stereoscopic images. Journal of Electronic Imaging, 15(1):013002–013002–12, 2006.
[44] Joohwan Kim, David Kane, and Martin S. Banks. Visual discomfort and the temporal properties of the vergence-accommodation conflicts. SPIE Stereoscopic Displays and Virtual Reality Systems, 8288:1–11, 2012.
[45] Hao Pan, Chang Yuan, and Scott Daly. 3D video disparity scaling for preference and prevention of discomfort. In SPIE. Stereoscopic Displays and Applications XXII, volume 7863, pages 786306–1, 2011.
[46] Manuel Lang, Alexander Hornung, Oliver Wang, Steven Poulakos, Aljoscha Smolic, and Markus Gross. Nonlinear disparity mapping for stereoscopic 3D. ACM Trans. Graph., 29(3):10, 2010.
[47] A. Smolic, P. Kauff, S. Knorr, A. Hornung, M. Kunter, M. Muller, and M. Lang. Three-dimensional video postproduction and processing. Proceedings of the IEEE, 99(4):607–625, 2011.
[48] Neil A. Dodgson. Variation and extrema of human interpupillary distance. In Proceedings of SPIE Stereoscopic Displays and Virtual Reality Systems XI, pages 36–46, 2004.
[49] Z. Wartell, L. F. Hodges, and W. Ribarsky. A geometric comparison of algorithms for fusion control in stereoscopic HTDs. IEEE Transactions on Visualization and Computer Graphics, 8(2):129–143, 2002.
[50] Gunter K. von Noorden and Emilio C. Campos. Binocular Vision and Ocular Motility: Theory and Management of Strabismus. Mosby, 6th edition, 2001.
[51] M. Tory and T. Moller. Human factors in visualization research. Visualization and Computer Graphics, IEEE Transactions on, 10(1):72–84, 2004.
[52] Kenneth Moreland. Diverging color maps for scientific visualization. In Proceedings of the 5th International Symposium on Advances in Visual Computing: Part II, ISVC ’09, pages 92–103, 2009.
[53] Wikipedia. Lab color space.
[54] Thomas Oskam, Alexander Hornung, Huw Bowles, Kenny Mitchell, and Markus Gross. OSCAM - Optimized stereoscopic camera control for interactive 3D. ACM Trans. on Graphics (Proc. SIGGRAPH), 30(6):189:1–189:8, 2011.
[55] Clyde Dsouza. Think in 3D: Food For Thought for Directors, Cinematographers and Stereographers. CreateSpace Independent Publishing Platform, 2012.
[56] Roger Kirk. Experimental Design. Brooks/Cole Publishing Company, second edition, 1982.
[57] C Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, and Richard Szeliski. High-quality video view interpolation using a layered representation. ACM Trans. Graph. (ACM SIGGRAPH 2004 papers), 23(3):600, 2004.
[58] A. Criminisi, A. Blake, C. Rother, J. Shotton, and P. H. Torr. Efficient dense stereo with occlusions for new view-synthesis by four-state dynamic programming. Int. J. Comput. Vision, pages 89–110, January 2007.
[59] M. Bleyer, M. Gelautz, C. Rother, and C. Rhemann. A stereo approach that handles the matting problem via image warping. IEEE Conference on Computer Vision and Pattern Recognition, pages 501–508, 2009.
[60] Marko Teittinen. Depth cues in the human visual system. http://www.hitl.washington.edu/scivw/EVE/III.A.1.c.DepthCues.html.
[61] George K. Hung. Models of Oculomotor Control. World Scientific Publishing, 2001.
[62] Marc Lambooij, Wijnand IJsselsteijn, Marten Fortuin, and Ingrid Heynderickx. Vi- sual discomfort and visual fatigue of stereoscopic displays: A review. Journal of Imaging Science and Technology, 53(3):030201, 2009.
[63] David G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision, 60:91–110, November 2004.
[64] Rob Hess. An open-source SIFT library. In Proceedings of the International Conference on Multimedia, MM ’10, pages 1493–1496, New York, NY, USA, 2010. ACM.
[65] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395, June 1981.
[66] Simon Baker and Iain Matthews. Lucas-kanade 20 years on: A unifying framework. Int. J. Comput. Vision, 56:221–255, February 2004.
[67] Stas Goferman, Lihi Zelnik-Manor, and Ayellet Tal. Context-aware saliency detection. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, pages 2376–2383, 2010.
[68] Pedro Felzenszwalb and Daniel Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, pages 167–181, 2004.
[69] Che-Han Chang, Chia-Kai Liang, and Yung-Yu Chuang. Content-aware display adaptation and interactive editing for stereoscopic images. IEEE Transactions on Multimedia, 13(4):589–601, August 2011.
[70] Manuel Lang, Alexander Hornung, Oliver Wang, Steven Poulakos, Aljoscha Smolic, and Markus Gross. Nonlinear disparity mapping for stereoscopic 3D. ACM Trans. Graph. (ACM SIGGRAPH 2010 papers), 29:75:1–75:10, July 2010.
[71] James A. Ferwerda, Sumanta N. Pattanaik, Peter Shirley, and Donald P. Greenberg. A model of visual adaptation for realistic image synthesis. In SIGGRAPH ’96, pages 249–258, 1996.
[72] Amid Amidi. The Art of Pixar: 25th Anniversary: The Complete Color Scripts and Select Art from 25 Years of Animation. Chronicle Books, November 2011.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51203
dc.description.abstract: 隨著立體電影與立體遊戲的普及與流行,愈來愈多的研究被發表了出來。甚至有一段時間,立體(S-3D)研究相當的熱門,在電腦視覺、電腦圖學、視覺心理學或其它領域,都看得到不少的研究被提出來。很多的研究解決了不同面向的問題,包括像是立體顯示(硬體居多)的技術、立體影像的後製處理與方法、人因工程與人體視知覺的實驗設計、或甚至是更加基本而重要的立體度量衡...等。這些研究,有的提供了不同但有啟發性的獨到觀點、有的就一些舊有的問題提出了改善的解法、有的甚至開啟了一些可供未來持續發展下去的研究方向。然而,卻很少研究的觀點始於創作者的一方。更精確一點來說,很少研究是從那些製作出立體電影、立體特效或是立體遊戲的工作室的觀點出發的。
對於一個執行創意發想的電影製作團隊來說,去了解戲院觀眾的感受,是非常重要但同時很困難的一件事。如果把立體感受這部分加進來,那就更加的困難了。雖然困難,但卻是無比的重要。也就是說,這是一個怎麼把觀眾的感受給帶進製作團隊工作流程中的困難挑戰。而且,在整個製作立體電影的過程中,有很多專業人士的投入與合作,他們各自有不同的專業與背景,包括像是導演、製作人、技術總監、立體攝影師、動畫師、技術人員、協調行政人員...等。這樣的團隊組成,造成的結果就是:大量的溝通與討論。因此,在整個電影製作的過程中,制定出一套能有效溝通,或甚至可以拿來當作度量衡的立體單位(或術語),十足的重要。
這一篇研究論文提出了一個度量衡單位,geometric perceived depth percentage(GPDP),它可以用來量化觀眾感受到的立體程度,而且並不需要到最後一刻的渲染(rendering)時才有辦法得知。根據立體場景裏頭的物體遠近以及立體攝影機的參數設定,GPDP 除了可以量化出立體感受,它同時還能把立體投影環境(螢幕的大小,以及觀眾與螢幕之間的距離)也一併考慮了進來。也因此,GPDP 提供了一個有效而簡易的立體度量單位,它可以用來量化或甚至是預先得知觀眾感受到的立體效果。同時,它也可以做為一個一致的溝通術語。
藉由 GPDP 的應用,我們開發了一個被實際拿來使用的立體預覽工具。透過這個工具,立體攝影師可以直接預測觀眾的立體感受,而不需要任何特殊的立體設備或是立體投影環境。這個工具的組成,涵蓋了立體舒適空間(comfort volume)、立體著色系統(shading schemes)、立體指標(depth perception markers)、以及立體直方圖(histogram)...等。這些資訊以不同的型式呈現出來,供立體攝影師使用,協助他們在調整立體參數時更有效而到位。這工具可以非常容易地實作,並且整合進現代的動畫製作流程或是渲染流程裏頭。我們分別實作整合進了 Autodesk Maya [1] 以及 Pixar’s RenderMan [2]。這整套系統與工具,被實際使用在很多的商業立體專案裏頭,貢獻良多。
有時候,我們可能必須在後製的階段,直接調整立體效果,尤其是當時間或資訊不足,或甚至是我們根本沒有原始 3D 場景,無法重製的狀況下。另外,有時候即使我們使用了基於 GPDP 的工具來製作立體電影,最終的結果還是有可能因為別的因素而不佳。針對這個情況,基於影像變形與裁切(image warping and cropping)的演算法,亦在本研究中提出。我們提出五個基本原則,各自用於 1) 消除立體不適感,或 2) 加強立體感受。
最後,由於 GPDP 是一個用來把觀眾所感受到的立體效果給量化並帶進製作團隊的機制與工具,是以,我們也設計並且執行了一些使用者研究。研究的結果顯示,對於製作者來說,GPDP 是一個非常適合的立體效果指標。另一方面,本篇研究所開發出來的工具,也的確能有效地改善立體攝影機參數的調整流程。
zh_TW
dc.description.abstract: With the popularity of stereoscopic movies and games, a growing body of research has been published. Stereoscopy, or S-3D, became a hot topic for a while in computer vision, computer graphics, psychology, and other related fields. Many works address issues in display technologies, signal post-processing, human visual perception, and fundamental quality assessment. These works provide inspiring insights, significant solutions to long-standing issues, and open questions for future research. However, few of them start from the viewpoint of producers, or more specifically, the production studios that make stereoscopic films, visual effects, or games.
It is a necessary but challenging task for creative producers to anticipate, during production, how the target audience will perceive a stereoscopic film in a cinema. That is, it is the problem of bringing target consumers' perception into the producers' workflow. Moreover, throughout an S-3D production pipeline, much communication happens between experts of different professions, including directors, producers, supervisors, layout artists, technical directors, coordinators, and others. Effective communication, and even a shared measurement of the stereoscopic effect, is fundamental through the whole pipeline.
This dissertation proposes a novel metric, geometric perceived depth percentage (GPDP), to quantify and depict the depth perception of a scene before rendering. In addition to the geometric relationship between object depth and focal distance, GPDP also takes the screen width and viewing distance into account. As a result, it provides a more intuitive means for predicting stereoscopy and is universal across different viewing conditions.
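The geometry behind a percentage like this can be sketched numerically. The snippet below is an illustrative reconstruction from the standard model of stereoscopic viewing geometry, not the thesis's definition of GPDP: the function name, the default 65 mm eye separation, and the normalization by viewing distance are all assumptions.

```python
def gpdp_estimate(disp_frac, screen_width_m, viewing_dist_m, eye_sep_m=0.065):
    """Hypothetical GPDP-style value: geometric perceived depth as a
    percentage of viewing distance, computed from on-screen disparity.

    disp_frac: disparity as a fraction of screen width
               (positive = uncrossed disparity, i.e. behind the screen).
    """
    d = disp_frac * screen_width_m             # disparity in metres on screen
    # Similar triangles: perceived depth relative to the screen plane is
    # z = V * d / (e - d), with V the viewing distance and e eye separation.
    z = viewing_dist_m * d / (eye_sep_m - d)
    return 100.0 * z / viewing_dist_m          # express as a percentage
```

With this normalization the same disparity fraction yields different percentages on a cinema screen and a living-room TV, which matches the abstract's point that the metric folds screen width and viewing distance into a single number.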
Furthermore, based on GPDP, a practical tool is designed to visualize stereoscopic perception without the need for any 3D device or special environment. The tool uses the stereoscopic comfort volume, GPDP-based shading schemes, depth perception markers, and GPDP histograms as visual cues so that S-3D layout artists can set stereoscopic parameters more easily. The tool can be integrated into any modern rendering pipeline, including the interactive Autodesk Maya [1] and the off-line Pixar's RenderMan renderer [2]. It has been used in several production projects, including commercial ones.
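The histogram cue can be sketched roughly as follows; the per-pixel representation, comfort range, and bin count here are assumptions for illustration, since the thesis defines its own:

```python
import numpy as np

def gpdp_histogram(gpdp_map, comfort=(-2.0, 5.0), bins=32):
    # gpdp_map: per-pixel GPDP values for a frame (assumed representation).
    counts, edges = np.histogram(gpdp_map, bins=bins)
    # Fraction of the frame outside an assumed comfort range: the kind of
    # at-a-glance number a layout artist could watch while tuning cameras.
    outside = float(np.mean((gpdp_map < comfort[0]) | (gpdp_map > comfort[1])))
    return counts, edges, outside
```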
Sometimes the depth effect must be fine-tuned in post-processing, when time and budget do not allow a pass through the time-consuming in-production pipeline, or when there is no 3D scene to work on (e.g., when the video mixes live footage with CG animation). A 3D experience optimization that fixes visual fatigue and improves depth perception is therefore also proposed, built on five principles that either reduce visual fatigue or enhance depth perception.
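The abstract does not enumerate the five principles, but one common member of the warp-and-crop family it refers to is horizontal image translation: shifting the two views apart adds a constant disparity offset everywhere, moving the whole scene relative to the screen plane, and the edge columns left without valid content in both eyes are cropped. A minimal sketch (function name and sign conventions are my own, not the thesis's):

```python
import numpy as np

def shift_views(left, right, shift_px):
    """Add shift_px of uncrossed disparity by translating the views apart,
    then crop the columns that no longer carry valid content in both eyes
    (the cropping half of a warp-and-crop adjustment)."""
    sl = shift_px // 2
    sr = shift_px - sl
    l = np.roll(left, -sl, axis=1)          # left view moves left
    r = np.roll(right, sr, axis=1)          # right view moves right
    valid = slice(sr, left.shape[1] - sl)   # drop wrapped-in edge columns
    return l[:, valid], r[:, valid]
```

A feature at column x in the left view and x + d in the right ends up with disparity d + shift_px, so the whole scene recedes by a controllable amount at the cost of a slightly narrower frame.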
Finally, several user studies show that GPDP is a proper depth perception indicator and that the proposed tool makes the stereoscopic parameter setting process easier and more efficient.
en
dc.description.provenance: Made available in DSpace on 2021-06-15T13:27:24Z (GMT). No. of bitstreams: 1
ntu-105-D99944013-1.pdf: 42726206 bytes, checksum: a177d1f101af94d63fbce44483bc2941 (MD5)
Previous issue date: 2016
en
dc.description.tableofcontents: Dissertation Cover i
中文摘要 ii
Abstract iv
Contents vi
List of Figures viii
List of Tables xi
1 Introduction 1
2 Related Work 7
3 Geometric Perceived Depth Percentage 10
3.1 Visualization and Manipulation Helper 15
3.2 Histogram 20
3.3 Implementations 22
3.4 User Studies 25
3.5 Effectiveness of GPDP across Different Screen Sizes 25
3.6 Usefulness of the Proposed Tool 29
4 3D Experience Optimization 34
4.1 Visual Fatigue Reduction 35
4.2 Depth Perception Enhancement 36
4.3 Correspondence F 36
4.4 Saliency S 37
4.5 Segmentation G 39
4.6 Cropping 41
4.6.1 Black Frame Generation 41
4.7 Warping 43
4.8 Optimization Result 44
4.8.1 Visual Fatigue Reduction 45
4.8.2 Depth Perception Enhancement 45
4.9 User Studies 47
4.10 Effectiveness of S-3D Experience Optimization 48
5 Conclusion 49
A Supplementary Result Images 51
B Memo 55
B.1 Stereoscopy as a Storytelling Tool 55
B.2 3D Animation Pipeline 56
B.3 Inspiring Research 57
B.4 Color Scripts 57
C RenderMan Imager Shader for GPDP Visualization 59
Bibliography 64
dc.language.iso: en
dc.subject: 立體視覺 (zh_TW)
dc.subject: 3D立體 (zh_TW)
dc.subject: 3D立體製作流程 (zh_TW)
dc.subject: 電腦圖學 (zh_TW)
dc.subject: Stereoscopy (en)
dc.subject: S-3D (en)
dc.subject: 3D (en)
dc.subject: Binocular Vision (en)
dc.subject: Stereoscopic Vision (en)
dc.subject: Computer Graphics (en)
dc.subject: Production Pipeline (en)
dc.title: 3-D立體影像的動畫製作流程與優化 (zh_TW)
dc.title: Production-Oriented Stereoscopic Pipeline and Optimization (en)
dc.type: Thesis
dc.date.schoolyear: 104-1
dc.description.degree: 博士
dc.contributor.oralexamcommittee: 歐陽明(Ming Ouhyoung), 陳炳宇(Bing-Yu Chen), 賴祐吉(Yu-Chi Lai), 姚智原(Chih-Yuan Yao), 吳健榕(Jian-Rong Wu)
dc.subject.keyword: 立體視覺, 3D立體, 3D立體製作流程, 電腦圖學 (zh_TW)
dc.subject.keyword: Computer Graphics, Stereoscopy, S-3D, 3D, Binocular Vision, Stereoscopic Vision, Production Pipeline (en)
dc.relation.page: 71
dc.rights.note: 有償授權
dc.date.accepted: 2016-02-16
dc.contributor.author-college: 電機資訊學院 (zh_TW)
dc.contributor.author-dept: 資訊網路與多媒體研究所 (zh_TW)
Appears in Collections:資訊網路與多媒體研究所

Files in This Item:
File: ntu-105-1.pdf (Restricted Access), 41.72 MB, Adobe PDF

