Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/18944

Full metadata record (DC field: value, language):
dc.contributor.advisor: 陳宏銘
dc.contributor.author: Tai-Hsiang Huang (en)
dc.contributor.author: 黃泰翔 (zh_TW)
dc.date.accessioned: 2021-06-08T01:40:24Z
dc.date.copyright: 2016-08-26
dc.date.issued: 2016
dc.date.submitted: 2016-08-21
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/18944
dc.description.abstract: With advances in projection hardware, handheld projection devices are becoming increasingly common. However, the projection surfaces found in everyday environments are not necessarily ideal white, nor is their reflectance ideal. In this dissertation, I propose an image color compensation system for projection surfaces. According to the color and texture of the surface, the system modifies the image before projection so that the projected result approaches what would be obtained on an ideal surface.
The characteristics of the human visual system are used extensively in this compensation system. By applying the anchoring theory of color perception, the system more precisely accounts for the eye's adaptation to ambient color and thus reproduces the intended colors of an image more faithfully on colored and textured projection surfaces. By constructing and applying a visibility model, the system effectively predicts which image details become invisible on non-ideal projection surfaces and enhances those details accordingly. In subjective tests, more than 95% of the participants judged the images compensated by the proposed system to have better color and detail than those produced by existing systems. These results not only demonstrate the superiority of the system but also show that visual models and theories can effectively improve projection quality on non-ideal projection surfaces. (zh_TW)
dc.description.abstract: Ubiquitous projection, that is, the ability to project an image anywhere, is no longer fiction thanks to the miniaturization of projectors. However, the flat surfaces in our living environment that serve as replacement projection screens for pico or handheld projectors are not necessarily white. To improve image quality for ubiquitous projection, this dissertation presents a perceptual radiometric compensation system that counteracts the effect of colored projection surfaces on image appearance.
In the first part of the dissertation, a perceptual radiometric compensation method is presented to counteract the effect of colored projection surfaces on image appearance. Based on the anchoring property of the human visual system, it reduces color clipping while preserving the hue and brightness of images. In addition, it accounts for the effect of chromatic adaptation on perceptual image quality and corrects the color distortion caused by non-white projection surfaces by shifting the color of image pixels toward the complementary color of the projection surface. User ratings show that the method outperforms existing methods in 974 out of 1020 subjective tests.
In the second part of the dissertation, a visibility enhancement method is presented to counteract the loss of visibility caused by brightness scaling during radiometric compensation. Based on JND theory and the HVS response model, the method effectively enhances the visibility of image details in dark regions without affecting the perceptual contrast of bright regions. The method also applies countershading to eliminate the halo effect while enhancing the perceptual contrast of the brightness-scaled image. Experimental results demonstrating the performance of the method are provided.
In the last part of the dissertation, a simplified CIECAM02 model is presented to reduce the computational burden of the nonlinear color transforms of CIECAM02. Experimental results show that the simplified model reduces computation time by 50% with only 2.3% approximation error. This speedup is important for practical applications. (en)
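The core radiometric-compensation step summarized in the abstract (dividing out the surface reflectance, then dealing with the resulting color clipping) can be sketched as follows. This is a minimal illustration under assumed simplifications: the function name, the purely multiplicative per-channel surface model, and the hard clipping are hypothetical and are not the dissertation's actual anchoring-based algorithm, which is specifically designed to reduce this clipping.

```python
import numpy as np

def compensate(target, surface, max_output=1.0):
    """Naive per-pixel radiometric compensation (illustrative sketch).

    target  : H x W x 3 float array, desired appearance in [0, 1]
    surface : H x W x 3 float array, per-channel reflectance of the
              projection surface in (0, 1]
    Returns a projector input that, after modulation by the surface,
    approximates the target where the projector's range allows it.
    """
    # Assumed model: reflected light ~ projector input * surface
    # reflectance, so the naive compensation divides the target by
    # the reflectance (guarded against division by zero).
    naive = target / np.clip(surface, 1e-3, 1.0)
    # The projector cannot exceed its maximum output, so saturated
    # channels are clipped; this is the "color clipping" artifact the
    # perceptual method in the dissertation aims to reduce.
    return np.clip(naive, 0.0, max_output)
```

For example, a mid-gray target on a reddish surface (high R reflectance, low G/B) yields a compensation image that boosts the green and blue channels until they saturate, which motivates the perceptual handling of the clipped residual.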
dc.description.provenance: Made available in DSpace on 2021-06-08T01:40:24Z (GMT). No. of bitstreams: 1. ntu-105-D97942020-1.pdf: 3430222 bytes, checksum: 8563d67e69861cd95e79cdfb7e6293f0 (MD5). Previous issue date: 2016 (en)
dc.description.tableofcontents: 摘要 (Chinese abstract) x
ABSTRACT xi
CHAPTER 1 INTRODUCTION 1
1.1 Dissertation Overview 6
1.1.1 Dissertation Roadmap 6
1.1.2 Chapter Descriptions 7
CHAPTER 2 RADIOMETRIC COMPENSATION TECHNIQUES 9
2.1 Radiometric Models 9
2.2 Model Estimation 12
2.3 Image Compensation 14
2.4 Device Limitations 14
2.4.1 Gamut of the Projection Surface 15
2.4.2 Resolution of the Projector and Camera 17
2.4.3 Dynamic Range and Gamut of the Camera 19
2.5 Summary 19
CHAPTER 3 RADIOMETRIC COMPENSATION FRAMEWORK 21
3.1 Radiometry Modeling 21
3.2 Calibration of the Radiometric Model 23
3.2.1 Color Mixing between the Projector's and Camera's Channels 23
3.2.2 Nonlinear Responses of the Projector 25
3.3 Radiometric Compensation 26
3.4 Limitations of a Procam System 29
3.5 Luminance Reallocation 31
3.6 Summary 31
CHAPTER 4 COLOR COMPENSATION 33
4.1 Review 34
4.1.1 Anchoring Theory 34
4.1.2 Color Appearance Model 35
4.1.3 Color Clipping Artifact 37
4.2 Proposed Method 39
4.2.1 Brightness Scaling 43
4.2.2 Hue Adjustment 44
4.2.3 Quality Optimization 46
4.2.4 Off-Line Subjective Experiment 47
4.3 Simplification 49
4.3.1 Procedure Simplification 50
4.4 Experiments 51
4.4.1 Time Complexity of the Simplification 52
4.4.2 Impact of Hue Adjustment on Image Brightness 53
4.4.3 Brightness Benefit 57
4.4.4 Performance Comparison 57
4.5 Summary 65
CHAPTER 5 SIMPLIFICATION OF CIECAM 67
5.1 Forward Transformation 68
5.2 Backward Transformation 70
5.3 Performance Verification 72
5.4 Summary 74
CHAPTER 6 VISIBILITY ENHANCEMENT 76
6.1 Background 77
6.1.1 Just Noticeable Difference 78
6.1.2 Human Visual Response Model 79
6.1.3 Effects of Brightness Scaling on Images 80
6.2 Key Ideas 82
6.3 Proposed Algorithm 84
6.3.1 Prediction of the Detail Loss Effect 84
6.3.2 Enhancement of Invisible Pixels 87
6.4 Visualization of Brightness Scaled Images 95
6.5 Performance Evaluations 97
6.5.1 Subjective Evaluation 99
6.5.2 Objective Evaluation 100
6.6 Sensitivity of the Proposed Method 104
6.7 Discussion 105
6.8 Summary 107
CHAPTER 7 VISIBILITY MODELING 108
7.1 Contrast Sensitivity and Contrast Masking 108
7.2 Visibility Model 110
7.2.1 Display Model 110
7.2.2 Optical and Retinal Pathway 111
7.2.3 Contrast Sensitivity Function and Contrast Masking 112
7.2.4 Visibility Metric 113
7.3 Experiment Setup 114
7.4 Results and Discussion 116
7.5 Summary 121
CHAPTER 8 CONCLUSION 123
8.1 Chromatic Adaptation 123
8.2 Visibility Enhancement 124
8.3 Computational Complexity 125
BIBLIOGRAPHY 126
dc.language.iso: en
dc.subject: 人眼視覺系統 (human visual system) (zh_TW)
dc.subject: 針對投影幕的影像顏色補償 (image color compensation for projection screens) (zh_TW)
dc.subject: 具有相機的投影系統 (projector-camera system) (zh_TW)
dc.subject: 定錨理論 (anchoring theory) (zh_TW)
dc.subject: 人眼對顏色的自動校正 (chromatic adaptation) (zh_TW)
dc.subject: CIE 色彩模型 (CIE color model) (zh_TW)
dc.subject: 可視性模型 (visibility model) (zh_TW)
dc.subject: Vasarely illusion (en)
dc.subject: Radiometric compensation (en)
dc.subject: procam (en)
dc.subject: chromatic adaptation (en)
dc.subject: visibility model (en)
dc.subject: Human visual system (en)
dc.subject: CIECAM02 (en)
dc.title: 奠基於視覺模型之光雕投影 (zh_TW)
dc.title: Radiometric Compensation for Ubiquitous Projection (en)
dc.type: Thesis
dc.date.schoolyear: 104-2
dc.description.degree: 博士 (doctoral)
dc.contributor.oralexamcommittee: 卜令楷, 簡韶逸, 葉素玲, 林嘉文, 董蘭榮
dc.subject.keyword: 針對投影幕的影像顏色補償, 具有相機的投影系統, 定錨理論, 人眼對顏色的自動校正, CIE 色彩模型, 人眼視覺系統, 可視性模型 (zh_TW)
dc.subject.keyword: Radiometric compensation, procam, Vasarely illusion, chromatic adaptation, CIECAM02, Human visual system, visibility model (en)
dc.relation.page: 139
dc.identifier.doi: 10.6342/NTU201603328
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2016-08-21
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) (zh_TW)
Appears in collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
ntu-105-1.pdf | 3.35 MB | Adobe PDF | not authorized for public access


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
