Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62908
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳良基(Liang-Gee Chen) | |
dc.contributor.author | Chung-Te Li | en |
dc.contributor.author | 李宗德 | zh_TW |
dc.date.accessioned | 2021-06-16T16:14:31Z | - |
dc.date.available | 2018-02-21 | |
dc.date.copyright | 2013-02-21 | |
dc.date.issued | 2013 | |
dc.date.submitted | 2013-02-06 | |
dc.identifier.citation | [1] H. Murata, T. Okino, T. Iinuma, S. Yamashita, S. Tanase, K. Terada, and K. Kanatani, “Conversion of Two-Dimensional Images to Three Dimensions,” International Symposium, Seminar, and Exhibition of Society for Information Display, pp. 859-862, 1995.
[2] H. Murata, X. Mori, S. Yamashita, A. Maenaka, S. Okada, K. Oyamada, and S. Kishimoto, “A Real-Time 2-D to 3-D Image Conversion Technique Using Computed Image Depth,” International Symposium, Seminar, and Exhibition of Society for Information Display, pp. 919-922, 1998.
[3] T. Iinuma, H. Murata, S. Yamashita, and K. Oyamada, “Natural stereo depth creation methodology for a real-time 2D-to-3D image conversion,” International Symposium, Seminar, and Exhibition of Society for Information Display, vol. 43, pp. 1212-1215, May 14-19, 2000.
[4] H. Rosas, “Perception and Reality in Stereo Vision: Technological Applications,” Advances in Stereo Vision, 2011.
[5] I. P. Howard, S. Palmisano, R. S. Allison, and X. Fang, “Effects of Noisy Binocular Disparity in Stereoscopic Virtual-Reality Systems,” Final Report for Contract W7711-7-7393, Defence and Civil Institute of Environmental Medicine, July 1999.
[6] O. Hilliges, D. Kim, S. Izadi, M. Weiss, and A. D. Wilson, “HoloDesk: Direct 3D Interactions with a Situated See-Through Display,” in Proc. ACM SIGCHI ’12, pp. 2412-2430, 2012.
[7] H. Benko, R. Jota, and A. D. Wilson, “MirageTable: Freehand Interaction on a Projected Augmented Reality Tabletop,” in Proc. ACM SIGCHI ’12, pp. 199-208, 2012.
[8] D. Marr, Vision, Freeman, San Francisco, 1982.
[9] P. Boher, T. Leroux, V. Collomb-Patton, T. Bignon, and D. Glinel, “Imaging polarization for characterization of polarization based stereoscopic 3D displays,” in Proc. SPIE: Stereoscopic Displays and Applications XXI, vol. 7524, pp. 75241K-1-75241K-11, January 2010.
[10] E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 99-106, February 1992.
[11] G. Deco and E. T. Rolls, “Attention, short-term memory, and action selection: A unifying theory,” Progress in Neurobiology, vol. 76, no. 4, pp. 236-256, July 2005.
[12] L. Fang and S. Grossberg, “From stereogram to surface: how the brain sees the world in depth,” Spatial Vision, vol. 22, no. 1, pp. 45-82, 2009.
[13] K. Tsutsui, H. Sakata, T. Naganuma, and M. Taira, “Neural correlates for perception of 3D surface orientation from texture gradient,” Science, vol. 298, no. 5592, pp. 409-412, October 11, 2002.
[14] K. Tsutsui, M. Taira, and H. Sakata, “Neural mechanisms of three-dimensional vision,” Neuroscience Research, vol. 51, no. 3, pp. 221-229, March 2005.
[15] M. E. Sereno, T. Trinath, M. Augath, and N. K. Logothetis, “Three-dimensional shape representation in monkey cortex,” Neuron, vol. 33, pp. 635-652, February 14, 2002.
[16] T. Uka and G. C. DeAngelis, “Contribution of middle temporal area to coarse depth discrimination: comparison of neuronal and psychophysical sensitivity,” Journal of Neuroscience, vol. 23, no. 8, pp. 3515-3530, April 15, 2003.
[17] A. J. Parker, “Binocular depth perception and the cerebral cortex,” Nature Reviews Neuroscience, vol. 8, pp. 379-391, May 2007.
[18] L. T. Maloney and M. S. Landy, “A statistical framework for robust fusion of depth information,” in Proc. SPIE: Visual Communications and Image Processing IV, vol. 1199, pp. 1154-1163, 1989.
[19] M. S. Landy, L. T. Maloney, E. B. Johnston, and M. Young, “Measurement and modeling of depth cue combination: in defense of weak fusion,” Vision Research, vol. 35, pp. 389-412, 1995.
[20] D. Vishwanath, A. R. Girshick, and M. S. Banks, “Why pictures look right when viewed from the wrong place,” Nature Neuroscience, vol. 8, no. 10, pp. 1401-1410, 2005.
[21] D. C. Knill and A. Pouget, “The Bayesian brain: the role of uncertainty in neural coding and computation,” Trends in Neurosciences, vol. 27, pp. 712-719, 2004.
[22] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget, “Bayesian inference with probabilistic population codes,” Nature Neuroscience, vol. 9, no. 11, pp. 1432-1438, 2006.
[23] M. O. Ernst and M. S. Banks, “Humans integrate visual and haptic information in a statistically optimal fashion,” Nature, vol. 415, pp. 429-433, 2002.
[24] Royal Philips Electronics, “Whitepaper WOWvx BlueBox” [Online]. Available: http://www.business-sites.philips.com/global/en/gmm/images/3d/3dcontentceationproducts/downloads/BlueBox_white_paper.pdf.
[25] C. Wu, G. Er, X. Xie, T. Li, X. Cao, and Q. Dai, “A novel method for semi-automatic 2D to 3D video conversion,” 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, pp. 65-68, May 28-30, 2008.
[26] H. M. Wang, C. H. Huang, and J. F. Yang, “Depth maps interpolation from existing pairs of keyframes and depth maps for 3D video generation,” in Proc. IEEE Int. Symp. Circuits Syst., pp. 3248-3251, May 2010.
[27] H. M. Wang, C. H. Huang, and J. F. Yang, “Block-based depth maps interpolation for efficient multiview content generation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 12, pp. 1847-1858, Dec. 2011.
[28] I. A. Ideses, L. P. Yaroslavsky, B. Fishbain, and R. Vistuch, “3D from compressed 2D video,” in Proc. SPIE: Stereoscopic Displays and Applications XIV, vol. 6490, 64901C, 2007.
[29] P. Favaro and S. Soatto, “A geometric approach to shape from defocus,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 3, pp. 406-417, 2005.
[30] J. Park and C. Kim, “Extracting focused object from low depth-of-field image sequences,” in Proc. SPIE: Visual Communications and Image Processing, vol. 6077, pp. 607710-1-607710-8, Jan. 2006.
[31] W. J. Tam, A. S. Yee, J. Ferreira, S. Tariq, and F. Speranza, “Stereoscopic image rendering based on depth maps created from blur and edge information,” in Proc. SPIE: Stereoscopic Displays and Applications XII, vol. 5664, pp. 104-115, 2005.
[32] W. J. Tam, C. Vazquez, and F. Speranza, “Three-dimensional TV: A novel method for generating surrogate depth maps using color information,” in Proc. SPIE: Stereoscopic Displays and Applications XX, vol. 7237, 2009.
[33] C. C. Cheng, C. T. Li, and L. G. Chen, “An ultra-low-cost 2D-to-3D conversion system,” International Symposium, Seminar, and Exhibition of Society for Information Display, pp. 766-769, May 23-28, 2010.
[34] J. Zhang, Y. Yang, and Q. Dai, “A novel 2D-to-3D scheme by visual attention and occlusion analysis,” 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, pp. 1-4, May 16-18, 2011.
[35] Y. Feng, J. Ren, and J. Jiang, “Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications,” IEEE Transactions on Broadcasting, vol. 57, no. 2, pp. 500-509, June 2011.
[36] S. Battiato, S. Curti, M. La Cascia, E. Scordato, and M. Tortora, “Depth Map Generation by Image Classification,” in Proc. SPIE: Electronic Imaging 2004 - Three-Dimensional Image Capture and Applications VI, vol. 5302-13, January 2004.
[37] S. Battiato, A. Capra, S. Curti, and M. La Cascia, “3D Stereoscopic Pairs by Depth-map Image Generation,” 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT ’04), pp. 124-131, September 2004.
[38] C. C. Cheng, C. T. Li, P. S. Huang, T. K. Lin, Y. M. Tsai, and L. G. Chen, “A block-based 2D-to-3D conversion system with bilateral filter,” IEEE International Conference on Consumer Electronics (ICCE ’09), Digest of Technical Papers, pp. 1-2, Jan. 10-14, 2009.
[39] Y. L. Chang, J. Y. Chang, Y. M. Tsai, C. L. Lee, and L. G. Chen, “Priority depth fusion for the 2D-to-3D conversion system,” SPIE 20th Annual Symposium on Electronic Imaging, vol. 6805, 680513, 2008.
[40] C. C. Cheng, C. T. Li, and L. G. Chen, “A 2D-to-3D conversion system using edge information,” IEEE International Conference on Consumer Electronics (ICCE 2010), Digest of Technical Papers, pp. 377-378, Jan. 9-13, 2010.
[41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
[42] M. H. Pinson and S. Wolf, “A new standardized method for objectively measuring video quality,” IEEE Transactions on Broadcasting, vol. 50, no. 3, pp. 312-322, September 2004.
[43] C. T. E. R. Hewage, S. T. Worrall, S. Dogan, S. Villette, and A. M. Kondoz, “Quality evaluation of color plus depth map-based stereoscopic video,” IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 2, pp. 304-318, Apr. 2009.
[44] S. L. P. Yasakethu, D. V. S. X. De Silva, W. A. C. Fernando, and A. Kondoz, “Predicting sensation of depth in 3D video,” Electronics Letters, vol. 46, no. 12, pp. 837-839, June 10, 2010.
[45] C. T. Li, Y. C. Lai, C. Wu, C. C. Cheng, and L. G. Chen, “A quality measurement based on object formation for 3D contents,” International Symposium, Seminar, and Exhibition of Society for Information Display, pp. 1265-1268, May 16-20, 2011.
[46] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” Sixth International Conference on Computer Vision, pp. 839-846, Jan. 4-7, 1998.
[47] DSP/IC LAB, National Taiwan University, “Demo sequences for brain-inspired framework for fusion of multiple depth cues” [Online]. Available: http://video.ee.ntu.edu.tw/~ztdjbdy/3D_demos/.
[48] Royal Philips Electronics, “Philips 3D Solutions” [Online]. Available: http://www.business-sites.philips.com/3dsolutions/.
[49] F. Banterle, A. Artusi, T. Aydin, P. Didyk, E. Eisemann, D. Gutierrez, R. Mantiuk, and K. Myszkowski, “Multidimensional image retargeting” [Online]. Available: http://vcg.isti.cnr.it/publications/2011/baadegmm11.
[50] A. Said and B. Culbertson, “Analysis and management of geometric distortions on multi-view displays with only horizontal parallax,” HP Labs Technical Report HPL-2012-2.
[51] S. J. Koppala, C. L. Zitnick, M. F. Cohen, S. B. Kang, B. Ressler, and A. Colburn, “A viewer centric editor for 3D movies,” IEEE Computer Graphics and Applications, vol. 31, pp. 20-35, 2011.
[52] F. Devernay and P. Beardsley, “Stereoscopic Cinema,” in Image and Geometry Processing for 3-D Cinematography, Rémi Ronfard and Gabriel Taubin, Eds., Springer-Verlag, 2010.
[53] F. Devernay and S. Duchene, “New view synthesis for stereo cinema by hybrid disparity remapping,” in IEEE International Conference on Image Processing, pp. 5-8, September 2010.
[54] M. Tanimoto, “Overview of FTV (free-viewpoint television),” IEEE International Conference on Multimedia and Expo, pp. 1552-1553, June 28-July 3, 2009.
[55] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2004.
[56] S.-F. Tsai, P.-K. Tsung, K.-Y. Chen, C.-T. Li, and L.-G. Chen, “iSense3D: A Real-Time Viewpoint-Aware 3D Video Synthesis System,” in IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, U.S.A., Jan. 2012.
[57] P.-K. Tsung, P.-C. Lin, K.-Y. Chen, T.-D. Chuang, H.-J. Yang, S.-Y. Chien, L.-F. Ding, W.-Y. Chen, C.-C. Cheng, T.-C. Chen, and L.-G. Chen, “A 216fps 4096×2160p 3DTV Set-Top Box SoC for Free-Viewpoint 3DTV Applications,” in IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, U.S.A., Feb. 2011.
[58] G. Fanelli, J. Gall, and L. Van Gool, “Real time head pose estimation with random regression forests,” Computer Vision and Pattern Recognition (CVPR), pp. 617-624, June 2011. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62908 | - |
dc.description.abstract | This dissertation describes a brain-inspired and visual-cortex-aware interactive 3D video processing system. Compared with current smart 3DTVs, the proposed system emphasizes improved 3D visual quality and natural, smart 3D interaction; we call this system an ultra-smart 3DTV. The system enhances 3D visual quality by analyzing perception in the human visual system. According to psychological studies, humans build stereoscopic vision from various monocular and binocular depth cues, and conflicts among these cues while watching 3D video make viewers feel uncomfortable or find the scene unnatural. At the same time, the shortage of 3D content is the most fundamental problem of existing 3DTV systems. We therefore propose a depth-cue-conflict-free 2D-to-3D conversion that produces 3D video with better visual quality: we estimate the depth of each object in an image by mimicking how the brain analyzes depth, and, by imitating the brain's mechanisms, avoid the quality degradation that conflicting depth cues cause during conversion. Since scientists have found that the brain's processing of depth cues can be explained by Bayesian probabilistic models, we cast 2D-to-3D conversion as Bayesian inference over the depth cues, obtaining optimized depth estimates and better stereoscopic quality. Compared with conventional conversion methods, our approach performs better both subjectively and objectively.
In addition, the proposed natural and smart 3D interaction is realized by (1) detecting the viewer's intention and (2) the viewer's 3D perception while watching the display, and by considering the interplay between the two. Expressing intention with the hands and body is the most natural way for users, so the proposed interaction lets users express their intentions with their hands. However, while the hands interact with objects shown on the display, the 3D perception must not be distorted; this is a necessary condition for a good 3D interaction system. To avoid such distortion, we study the early vision stages of the visual cortex and model them with image processing algorithms. Notably, the retinal images play a crucial role in early vision. We therefore propose a 3D image compensation method based on early vision in the visual cortex and on the retinal images. Because the retinal images depend strongly on the viewing angle, viewing distance, and so on, our interactive system compensates the stereoscopic images according to the geometric relationship between the viewer and the display. We also combine the smart 3D interaction with the brain-inspired 2D-to-3D conversion in two demonstration systems: a 3D viewpoint-interactive video and a 3D interactive window. Our experiments show that, compared with current smart 3DTVs, the proposed interactive 3D demonstrations give users a better experience. In summary, the proposed brain-inspired and visual-cortex-aware 3D video processing system provides more high-quality 3D content through brain-inspired 2D-to-3D conversion, and its analysis of the visual cortex lets the natural and smart 3D interaction system deliver a better viewing experience. | zh_TW |
dc.description.abstract | This dissertation describes a brain-inspired and visual-cortex-aware interactive 3D video processing system. Compared with current smart 3DTVs, the proposed system focuses on 1) the enhancement of 3D visual quality and 2) natural and smart 3D interaction. We enhance the 3D visual quality by analyzing perception in the human visual system. Psychologists have found that human beings perceive 3D effects through various monocular and binocular depth cues. While watching a 3D video, conflicts between those depth cues make the viewer feel the scene is unnatural or uncomfortable. At the same time, the lack of 3D content is a well-known fundamental problem for current 3DTV systems. Therefore, we propose a depth-cue-conflict-free 2D-to-3D conversion to generate 3D videos with higher visual quality. To eliminate potential conflicts between the depth cues in the converted 3D videos, we compute depth from conventional 2D videos by mimicking how the brain analyzes depth. Since neuroscientists have found that depth perception arises from combining all the depth cues in a Bayesian way, the so-called “Bayesian brain,” we convert 2D videos to 3D videos by solving a Bayesian inference problem over the depth cues. We call the proposed method “brain-inspired 2D-to-3D conversion.” Subjectively, the brain-inspired 2D-to-3D conversion outperforms earlier conversion methods by preserving more reliable depth cues. Objectively, it improves the perceptual quality of the videos by 0.70-3.14 dB in terms of the modified peak signal-to-noise ratio and by 0.0059-0.1517 in terms of the disparity distortion measure.
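The Bayesian cue combination described above can be illustrated with a minimal sketch. Under the “weak fusion” model of Maloney and Landy [18][19], with independent Gaussian cue noise, the maximum a posteriori depth is the average of the per-cue depth estimates weighted by their reliabilities (inverse variances). The function name and toy numbers below are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np

def fuse_depth_cues(estimates, variances):
    """Reliability-weighted (Bayesian) fusion of per-pixel depth maps.

    estimates: list of HxW depth maps, one per depth cue
    variances: list of HxW noise-variance maps; reliability = 1/variance
    Returns the MAP depth under independent Gaussian cue noise.
    """
    est = np.stack(estimates)           # shape (n_cues, H, W)
    rel = 1.0 / np.stack(variances)     # per-cue reliabilities
    return (rel * est).sum(axis=0) / rel.sum(axis=0)

# Toy example: two cues disagree; the more reliable cue dominates.
defocus = np.full((2, 2), 10.0)         # e.g. depth from defocus blur
texture = np.full((2, 2), 20.0)         # e.g. depth from texture gradient
fused = fuse_depth_cues(
    [defocus, texture],
    [np.full((2, 2), 1.0),              # low variance -> trusted
     np.full((2, 2), 4.0)])             # high variance -> discounted
print(fused[0, 0])                      # (10/1 + 20/4) / (1 + 1/4) = 12.0
```

A per-cue reliability map is the kind of input that a reliability analysis of depth cues would plausibly feed into such a fusion step.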
In addition, the natural and smart 3D interaction links the viewer's intention with the corresponding 3D perception. In our proposed system, we assume that viewers express their intentions with their hands, one of the most natural modes of interaction. However, the 3D perception must not be distorted during the interaction; this is a fundamental and necessary condition for a smart 3D interaction system. In the proposed system, we model the early vision stages of the visual cortex to ensure that the 3D perception is not distorted. Psychologists have also found that the images on the retinas are the inputs to the visual cortex; we therefore propose an interactive 3D video retargeting method based on estimating the retinal images and the responses of early vision in the visual cortex, called “visual-cortex-aware interactive 3D retargeting.” Notably, scientists have found that, while watching television, the retinal images are pre-processed in early vision according to the viewing angle. Our visual-cortex-aware interactive 3D retargeting therefore models this pre-processing of the retinal images to preserve a strong 3D perception. Several demonstrations of 3D interaction, including a 3D viewpoint-interactive video and a 3D interactive window, are also designed in this dissertation. Subjectively, the proposed interactive 3D demonstrations are more immersive and preferred over current non-interactive 3D because the perceptual distortion is greatly reduced. In summary, this dissertation describes a brain-inspired and visual-cortex-aware interactive 3D video processing system: the proposed brain-inspired 2D-to-3D conversion provides more 3D content with enhanced visual quality, and the proposed visual-cortex-aware interactive 3D retargeting lets viewers enjoy their 3D experience during natural and smart 3D interaction. | en |
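The viewing-geometry compensation described above can be illustrated with standard stereoscopic display geometry rather than the dissertation's visual-cortex model: for eye separation e, viewing distance d, and uncrossed screen parallax p, a point is perceived at depth z = d·p/(e − p) behind the screen. The sketch below, with hypothetical function names and numbers, keeps z constant when the viewer changes distance:

```python
def perceived_depth(parallax_m, view_dist_m, eye_sep_m=0.065):
    """Depth behind the screen implied by an uncrossed screen parallax.

    Standard pinhole stereoscopy: z = d * p / (e - p).
    """
    return view_dist_m * parallax_m / (eye_sep_m - parallax_m)

def compensated_parallax(parallax_m, old_dist_m, new_dist_m, eye_sep_m=0.065):
    """Parallax that keeps perceived depth constant at a new viewing distance.

    Solve z = d1 * p1 / (e - p1) for p1, giving p1 = e * z / (d1 + z).
    """
    z = perceived_depth(parallax_m, old_dist_m, eye_sep_m)
    return eye_sep_m * z / (new_dist_m + z)

# A point authored for 2 m viewing; the viewer moves back to 3 m.
p0 = 0.01                      # 1 cm uncrossed parallax on screen
z = perceived_depth(p0, 2.0)   # about 0.364 m behind the screen
p1 = compensated_parallax(p0, 2.0, 3.0)
assert abs(perceived_depth(p1, 3.0) - z) < 1e-12
```

A full retargeting would also account for viewing angle and a per-pixel disparity map; this sketch captures only the viewing-distance term of the geometry.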
dc.description.provenance | Made available in DSpace on 2021-06-16T16:14:31Z (GMT). No. of bitstreams: 1 ntu-102-F95943008-1.pdf: 12901991 bytes, checksum: 5b2e2b4f052123ba12a16f4ca2fca6f7 (MD5) Previous issue date: 2013 | en |
dc.description.tableofcontents | Thesis Committee Approval Certificate #
Acknowledgements i
Chinese Abstract iii
ABSTRACT v
CONTENTS vii
LIST OF FIGURES x
LIST OF TABLES xv
Chapter 1 Trends of Future 3D Video Processing 1
1.1 Former Expectation - Protrusion 2
1.2 3D Visual Quality 3
1.3 Natural and Smart 3D Interaction 4
1.4 The Key Differences between This Work and Current Smart 3DTV 6
Chapter 2 Introduction of the Current Capture and Display System of 3D Videos 8
2.1 Stereopsis 8
2.2 Stereoscopic and Autostereoscopic Displays 10
2.3 Stereoscopic Cameras and Camera Arrays 15
2.4 Holography and Light Field 16
2.5 Holographic Display 17
2.6 Light Field Camera 18
2.7 Summaries 18
Chapter 3 3D Perception in Human Visual System 20
3.1 Visual Cortex and Stereopsis 20
3.2 Bayesian Brain 23
3.3 Shape Constancy, Retinal Images and Their Pre-Processing 27
Chapter 4 Conventional 2D-to-3D Conversions and Related Quality Metrics 32
4.1 2D-to-3D Conversions - Problem Definition and Previous Works 32
4.2 Related Depth Quality Metrics - For Image / Video Compression 36
4.3 Related Quality Metrics - Depth Distortion Measure 37
4.4 Related Quality Metrics - Modified Depth PSNR 41
Chapter 5 Brain-Inspired 2D-to-3D Conversion 44
5.1 Depth Cues in Human Brain 44
5.2 Proposed Framework 47
5.3 Initial Estimation for Each Depth Cue 49
5.4 Measurement and Conditional Probability of Depth Cue Measurements 54
5.5 Noise Suppression for Depth Cues 61
5.6 Reliability Analysis for Fusion of Depth Cues 62
5.7 Bayesian Depth Inference - Reliability-Based Fusion of Multiple Depth Cues 64
5.8 Reliability Analysis for Fusion of Depth Cues 65
5.9 Summary 85
Chapter 6 Shape Constancy in 3D Interaction 87
6.1 Related 3D Interactive Systems - MirageTable and HoloDesk 90
6.2 3D Touch - Between Hands and 3D Virtual Objects 91
6.3 Shape Constancy - Problem Definition and Existing Solutions 93
6.4 Depth Perception before Lateral Geniculate Nucleus in the Early Vision When Viewing Stereoscopic Display 100
6.5 Proposed 3D Image Compensation System 105
6.6 Interactive Selection of User-Preferred 3D Effects 107
6.7 Retinal Image Pre-Processing Based 3D Image Compensation 112
6.8 Experimental Results 115
6.9 Summary 117
Chapter 7 Demonstration of Prototype of Proposed 3D Video Processing 119
7.1 Demonstration of 3D Viewpoint-Interactive Video - Viewpoint Selection 122
7.2 Demonstration of 3D Interactive Window - Viewpoint Selection 124
7.3 Implementation of Brain-Inspired 2D-to-3D Conversion and Depth Image Based Rendering 127
7.4 Hand Tracking and Viewing Location Detection 128
7.5 Visual-Cortex-Aware Interactive 3D Retargeting 128
7.6 Summary 129
Chapter 8 Conclusion 130
8.1 Brain-Inspired 2D-to-3D Conversion 131
8.2 Visual-Cortex-Aware Interactive 3D Retargeting 133
8.3 Future Directions 135
REFERENCES 137 | |
dc.language.iso | en | |
dc.title | 一大腦啟發和視覺皮層感知之互動立體視訊處理系統 | zh_TW |
dc.title | A Brain-Inspired and Visual-Cortex-Aware Interactive 3D Video Processing System | en |
dc.type | Thesis | |
dc.date.schoolyear | 101-1 | |
dc.description.degree | Ph.D. | |
dc.contributor.oralexamcommittee | 簡韶逸(Shao-Yi Chien),賴永康,楊家輝,吳安宇,葉素玲 | |
dc.subject.keyword | Brain-Inspired, Visual-Cortex-Aware, Interaction, 3D Video Processing | zh_TW |
dc.subject.keyword | Brain-Inspired, Visual-Cortex-Aware, Interactive 3D Video | en |
dc.relation.page | 143 | |
dc.rights.note | Authorized for a fee | |
dc.date.accepted | 2013-02-07 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-102-1.pdf (currently not authorized for public access) | 12.6 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their respective license terms.