Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48998
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 周瑞仁 (Jui Jen Chou)
dc.contributor.author: Po-Tong Wang (en)
dc.contributor.author: 王柏東 (zh_TW)
dc.date.accessioned: 2021-06-15T11:13:18Z
dc.date.available: 2021-02-26
dc.date.copyright: 2021-02-26
dc.date.issued: 2021
dc.date.submitted: 2021-02-19
dc.identifier.citation: 江宗錫。2000。利用倒傳遞類神經網路作數位相機色彩非線性演繹模式之建立。碩士論文。台灣桃園:元智大學資訊工程研究所。
李天任、徐明景、羅明、羅梅君、歐陽盟、陳鴻興、孫沛立、蕭之昀、李姍、郭重志、陳昱達、與闕家彬。2000。出自「前瞻性數位典藏技術之開發與系統建構研究成果報告(完整版)」。台北:行政院國家科學委員會專題研究計畫成果報告/中國文化大學資訊傳播學系(所)。
羅梅君。1991。印刷色度學。印刷科技出版社。
渡辺辰巳與小嶋章夫。2001。Color image processing method and color image processing apparatus. Japan Patent No. JP4491988B2.
Cao, Congjun, and Sun Jing. 2008. Study on color space conversion between CMYK and CIE L*a*b* based on generalized regression neural network. 2008 International Conference on Computer Science and Software Engineering. IEEE 6: 275-277.
Chen, Minmin, Jeffrey Pennington, and Samuel S. Schoenholz. 2018. Dynamical isometry and a mean field theory of RNNs: Gating enables signal propagation in recurrent neural networks. arXiv preprint arXiv:1806.05394.
Çiçek, Özgün, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, and Olaf Ronneberger. 2016. 3D U-Net: learning dense volumetric segmentation from sparse annotation. International Conference on Medical Image Computing and Computer-assisted Intervention. Springer, Cham: 424-432.
Costa, Pedro, Adrian Galdran, Maria Ines Meyer, Meindert Niemeijer, Michael Abràmoff, Ana Maria Mendonça, and Aurélio Campilho. 2017. End-to-end adversarial retinal image synthesis. IEEE Transactions on Medical Imaging 37.3: 781-791.
Dahl, George E., Tara N. Sainath, and Geoffrey E. Hinton. 2013. Improving deep neural networks for LVCSR using rectified linear units and dropout. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE: 8609-8613.
Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE: 248-255.
Du, Simon, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. 2019. Gradient descent finds global minima of deep neural networks. International Conference on Machine Learning. PMLR 97: 1675-1685.
Easwart, V. Venka. 2005. Polynomial based multi-level screening. US Patent No. US 6940619B1.
Finlayson, Graham D. 2012. Color correction of images. US Patent No. US20130342557A1.
Fukushima, Tadashi, Yoshiki Kobayashi, Takeshi Katoh, and Seiji Kashioka. 1987. Image signal processor. US Patent No. US4665556A.
Glorot, Xavier, and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics: 249-256.
Gunturk, B. K., Yucel Altunbasak, and Russell M. Mersereau. 2002. Color plane interpolation using alternating projections. IEEE Transactions on Image Processing 11.9: 997-1013.
Harrington, J. Steven. 1996. System for correcting color images using tetrahedral interpolation over a hexagonal lattice. European Patent Office No. EP19950305945.
Hinton, Geoffrey, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29.6: 82-97.
Hinton, Geoffrey, Nitish Srivastava, and Kevin Swersky. 2012. Lecture 6a: Overview of mini-batch gradient descent. Coursera lecture slides. Available at: https://class.coursera.org/neuralnets-2012-001/lecture. Accessed 27 May, 2020.
Hong, Guowei, M. Ronnier Luo, and Peter A. Rhodes. 2001. A study of digital camera colorimetric characterization based on polynomial modeling. Color Research & Application 26.1: 76-84.
Hung, Po-Chieh. 1992. Color processing method and apparatus with a color patch. US Patent No. US5121196A.
Hung, Po-Chieh. 1993. Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations. Journal of Electronic Imaging 2.1: 53-61.
IT8. 1994. IT8 is a set of American National Standards Institute (ANSI) standards for color communications and control specifications. Available at: https://en.wikipedia.org/wiki/IT8. Accessed 27 May, 2020.
Kang, H. R. and P. G. Anderson. 1992. Neural network applications to the color scanner and printer calibrations. Journal of Electronic Imaging 1.2: 125-136.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2017. ImageNet classification with deep convolutional neural networks. Communications of the ACM 60.6: 84-90.
LeCun, Yann, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86.11: 2278-2324.
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521.7553: 436-444.
Lin, Min, Qiang Chen, and Shuicheng Yan. 2013. Network in network. arXiv preprint arXiv:1312.4400.
Mahy, Marc, Luc Van Eycken, and André Oosterlinck. 1994. Evaluation of uniform color spaces developed after the adoption of CIELAB and CIELUV. Color Research & Application 19.2: 105-121.
Marcu, Gabriel, and Kansei Iwata. 1993. RGB-YMCK color conversion by application of the neural networks. Color and Imaging Conference. Society for Imaging Science and Technology 1993.1: 27-32.
McClanahan, J. Craig. 2002. Computer-implemented neural network color matching formulation applications. US Patent No. US6804390B2.
Medioni, Gerard R., Monti R. Wilson, Timothy F. Prohaska, and Lynn R. Poreta. 1989. Method and apparatus for registering color separation film. US Patent No. US4849914A.
Naka, Motohiko, Takehisa Naka, Tanaka Takehiko Shida, Mie Saitoh, and Kunio Yoshida. 1989. Color data correction apparatus utilizing neural network. US Patent No. US5162899A.
Papamarkos, Nikos. 1999. Color reduction using local features and a Kohonen self-organized feature map neural network. International Journal of Imaging Systems and Technology 10.5: 404-409.
Poole, Ben, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. 2016. Exponential expressivity in deep neural networks through transient chaos. arXiv preprint arXiv:1606.05340.
Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. 2015. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-assisted Intervention. Springer, Cham: 234-241.
Sakamoto, Takashi, and Akira Itooka. 1981. Linear interpolator for color correction. US Patent No. US4275413A.
Sakamoto, Takashi. 1985. Linear interpolating method and color conversion apparatus using this method. US Patent No. US4511989A.
Sharma, Gaurav, and Raja Bala, eds. 2017. Digital color imaging handbook. CRC Press. ISBN 0-8493-0900-X: 31.
Shen, Hui-Liang, and John H. Xin. 2004. Colorimetric and spectral characterization of a color scanner using local statistics. Journal of Imaging Science and Technology 48.4: 342-346.
Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel and Demis Hassabis. 2017. Mastering the game of go without human knowledge. Nature 550.7676: 354-359.
Snell, Foster Dee, and Cornelia T. Snell. 1959. Colorimetric methods of analysis. D. Van Nostrand, Inc., New York, Vol. 2.
Srivastava, Nitish, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15.1: 1929-1958.
Srivastava, Rupesh K., Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. arXiv preprint arXiv:1507.06228.
Sun, Yi, Xiaogang Wang, and Xiaoou Tang. 2015. Deeply learned face representations are sparse, selective, and robust. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 2892-2900.
Tominaga, Shoji. 1993. Color notation conversion by neural networks. Color Research & Application 18.4: 253-259.
Tominaga, Shoji. 1998. Color conversion using neural networks. Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts III. International Society for Optics and Photonics 3300: 66-75.
Usui, Shiro, Yoshifumi Arai, and Shigeki Nakauchi. 2000. Neural networks for device-independent digital color imaging. Information Sciences 123.1-2: 115-125.
Vogl, T. P., A. K. Rigler, and B. R. Canty. 1971. Asymmetric lens design using bicubic splines: application to the color TV lighthouse. Applied Optics 10.11: 2513-2516.
Vondran, Gary L., Jr. 2000. Common pruned radial and pruned tetrahedral interpolation hardware implementation. US Patent No. US6028683A.
Wang, Po-Tong, and Ching-Han Chen. 2003. Hybrid neural networks for color identification. US Patent No. US6571228B1.
Wang, Po-Tong, Jui-Jen Chou, and Chiu-Wang Tseng. 2019. Colorimetric characterization of color image sensors based on convolutional neural network modeling. Sens. Mater 31: 1513-1522.
Wang, Po-Tong, Jui-Jen Chou, and Chiu-Wang Tseng. 2020. Pixel-wise colorimetric characterization based on U-Net convolutional network. Journal of Imaging Science and Technology 64.4: 40405-1.
X-rite. 2020. Complete guide to color management. Available at: https://xritephoto.com/ documents/literature/EN/L11-144_CompleteGuideToColorManagement_EN.pdf. Accessed 27 May, 2020.
Xu, Haisong, and Yong Wang. 2007. Colorimetric characterization for scanner based on polynomial regression models. Acta Optica Sinica, 6.
Yin, Ye, Yin-Fan Chou, and Anuj Bhatnagar. 2014. Imaging pipeline for spectro-colorimeters. US Patent No. US20140300753A1.
Zhang, Jing, Ying Ping Yang, and Jin Min Zhang. 2016. A MEC-BP-Adaboost neural network-based color correction algorithm for color image acquisition equipments. Optik 127.2: 776-780.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48998
dc.description.abstract: 本研究以卷積神經網路 (convolutional neural network, CNN) 與 U 型卷積網路 (U-Net convolutional network, U-Net) 演算法為核心技術,並根據 CIE(法語:Commission internationale de l'éclairage,國際照明委員會)所推薦基於人眼色彩視覺為基礎的色度測定標準,進行色彩特性化建模 (colorimetric characterization modeling),以實現彩色影像感測器 (color image sensor) 之高精度色彩特性化。
影像感測器之色彩特性化是一項艱鉅的任務。首先彩色影像感測器所感測的 RGB 訊號不能當作色彩度量 (colorimetry),因為相同的圖像以不同的影像感測裝置量測所產生的 RGB 訊號差異很大,同樣的 RGB 感測訊號可能代表不同的顏色,因此 RGB 訊號不是 CIE 所規範色彩度量的標準 ( 例如 CIELAB 或 CIE XYZ)。所謂影像感測器之色彩特性化係透過演算法進行 RGB 與 CIELAB/CIE XYZ 的色彩空間轉換。過去的研究主要採用包含對照表內插模式 (LUT-interpolation model)、迴歸模式 (regression model) 與類神經網路模式 (artificial neural network model) 等方法,到目前為止,色彩特性化的技術經過評測結果:還無法達到接近分光光譜儀的測色水準,其中主要原因為色彩特性化係一非線性的複雜關係,因此,色彩特性化的演算法還有很大的進步空間。
基於 CNN 卷積神經網路為基礎,本研究試圖突破傳統 (3 x N) 多項式迴歸建模的度量精度。對於 CNN 色彩特性化技術的研究,我們透過影像感測器自動擷取 IT8.7/4 色彩導表,將 (3 x 8 x 8 ) 像素 ( 3 為 RGB 三顏色, (8 x 8) 為像素大小 ) 輸入 CNN 卷積神經網路,再映射由分光光譜儀量測所輸出的 CIELAB (3 x 1 x 1) 像素 ( 3 為 LAB 三顏色 , (1 x 1) 為像素 ) 數據,經過 5 次迭代的卷積神經網路學習,到第 5 次迭代卷積層已擴增為 8 幅 (3 x 8 x 32) 特徵圖 (feature map),最後平面化 (flatten) 生成 6,144 筆色彩特徵向量輸入至倒傳遞神經網路(back-propagation neural network, BP NN)。 在色彩特性化平均色差值的評比:CNN 建模的 ΔE*ab 為 0.48 優於傳統 (3 x 11) 多項式迴歸建模的 ΔE*ab 為 3.03。
CNN 色彩特性化建模面臨的挑戰:CNN 訓練所需的電腦運算量龐大、訓練時間長、訓練的色彩數據不足、與可驗證的色彩數據過少等。為了克服上述瓶頸, 本研究藉由 U-Net 突破 CNN 訓練運算時間的問題:U-Net 只花了 1,000 波期 (epoch) 的學習週期而 CNN 需要耗費 100,000 波期的學習週期。透過 U-Net 學習可以解決 IT8.7/4 色彩導表數據不足的問題:U-Net 僅從 ISO 12640 (CIELAB/ SCID) 的六幅影像,再利用資料擴增 (data augmentation) 技術標註 32,027,200 色面:U-Net 驗證 ISO 12640 (CIELAB/SCID) 的兩幅 CIELAB 影像中 9,338,456 像素與 1,626,192 顏色;而 CNN 從 IT8.7/4 色彩導表中驗證 39,488 像素與 317 顏色。
本研究利用 CNN 與 U-Net 卷積網路所建構之創新色彩特性化方法,相較於傳統 (3 × N) 多項式迴歸建模的性能表現更勝一籌,經由研究結果驗證 CNN 卷積神經網路的平均色差值 ΔE*ab 為 0.48,而 U-Net 的平均色差值 ΔE*ab 為 0.52,二者皆優於傳統 (3 x 11) 多項式迴歸模式的 ΔE*ab 為 3.03。雖然 U-Net 建模的平均色差值的精準度略遜於 CNN 模型,但是 U-Net 建模的運算效率比 CNN 建模快約六倍,實驗透過配備 Nvidia GPU GTX 1080 Ti 的 PC,驗證一張 ISO 12640 (CIELAB/ SCID) 彩色影像之特性化模型,CNN 模型運算平均需要 5 秒,而 U-Net 模型運算平均需要 0.8 秒。本研究證實藉由 CNN 與 U-Net 所產出的色彩特性化建模演算法技術,可提升影像感測器裝置之色彩特性化更高的精度。
zh_TW
dc.description.abstract: In this study, the colorimetric characterization of a color image sensor was developed and modeled using a convolutional neural network (CNN) and a U-Net convolutional network (U-Net), in accordance with the colorimetric measurement standards recommended by the International Commission on Illumination (CIE), which are based on human color vision. Color image sensors can be incorporated into compact devices to detect the color of objects under a wide range of light sources and brightness levels. They must be colorimetrically characterized to realize high-precision color measurement with this innovative sensor technology.
However, the colorimetric characterization of image sensors is a difficult task. First, the red, green, and blue (RGB) signals captured by a color image sensor cannot serve directly as colorimetry: they are device-dependent, meaning that different sensing devices generate very different RGB signals for the same scene, and the same RGB signal may represent different colors. Therefore, raw RGB signals do not adhere to the colorimetry standards regulated by the CIE, such as CIELAB and CIE XYZ. The colorimetric characterization of an image sensor applies algorithms to transform the RGB color space to the CIELAB/CIE XYZ color space. The methods previously used for this purpose mainly include the look-up-table (LUT) interpolation model, the regression model, and the artificial neural network model. The colorimetric characterization technologies evaluated thus far have not reached a measurement level close to that of a spectrophotometer, mainly because the characterization mapping is nonlinear and complex; thus, these algorithms still have considerable room for improvement.
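The traditional 3 × N polynomial regression approach mentioned above can be made concrete with a minimal NumPy sketch on synthetic data. The 11-term basis shown (constant, linear, cross, and square terms plus RGB) is an assumption chosen for illustration, not the thesis's exact term set:

```python
import numpy as np

def poly_features(rgb):
    """Expand RGB into an N-term polynomial basis (here N = 11, a common
    choice: 1, R, G, B, RG, RB, GB, R^2, G^2, B^2, RGB)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2, r * g * b], axis=1)

def fit_characterization(rgb, lab):
    """Fit the (N x 3) coefficient matrix mapping device RGB to CIELAB
    by ordinary least squares."""
    coeffs, *_ = np.linalg.lstsq(poly_features(rgb), lab, rcond=None)
    return coeffs

def apply_characterization(coeffs, rgb):
    return poly_features(rgb) @ coeffs

# Synthetic demo: a toy *linear* device model, which the basis recovers exactly.
rng = np.random.default_rng(0)
rgb = rng.random((500, 3))
M = np.array([[100.0, 0.0, 0.0], [0.0, 120.0, -60.0], [20.0, -20.0, 80.0]])
lab = rgb @ M                       # stand-in for spectrophotometer readings
C = fit_characterization(rgb, lab)
pred = apply_characterization(C, rgb)
print(np.abs(pred - lab).max())     # near zero on this toy data
```

In practice the fit would use measured Lab values of IT8.7/4 patches rather than synthetic data; the strong nonlinearity of real sensors is what limits such polynomial models.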
To address this issue, this study first applied a CNN colorimetric characterization algorithm that outperforms traditional 3 × N polynomial regression modeling. Here, each color patch of the IT8.7/4 target is automatically captured by the image sensor and converted into (3 × 8 × 8) pixels, where 3 represents the RGB channels and 8 × 8 is the patch size input to the CNN. Each patch is then mapped to a CIELAB (3 × 1 × 1) pixel, where 3 indicates the L*, a*, and b* channels and 1 × 1 is the pixel size, representing the value measured by the spectrophotometer. After five stages of CNN learning, the convolutional layers expand to eight (3 × 8 × 32) feature maps, which are finally flattened into 6,144 color feature values and input to a back-propagation neural network (BP NN). The average color difference (ΔE*ab) for the CNN model is 0.48, which is better than the 3.03 obtained by traditional 3 × 11 polynomial regression modeling.
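A toy forward pass makes these tensor shapes concrete. Only the shapes are taken from the description above; the random stand-in for the convolutional stages and the hidden-layer width of 64 are hypothetical, and nothing here is the thesis's trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

# One IT8.7/4 patch crop: 3 RGB channels x 8 x 8 pixels.
patch = rng.random((3, 8, 8))

# Stand-in for the five convolutional stages: the final stage is described
# as eight (3 x 8 x 32) feature maps, emulated here by a random linear map
# purely to make the tensor shapes concrete.
W_conv = rng.standard_normal((8 * 3 * 8 * 32, patch.size)) * 0.01
features = (W_conv @ patch.ravel()).reshape(8, 3, 8, 32)

# Flatten: 8 * 3 * 8 * 32 = 6,144 color feature values.
flat = features.ravel()

# BP-NN head (one hidden ReLU layer of hypothetical width 64) mapping the
# 6,144 features to a single CIELAB triplet (L*, a*, b*).
W1, b1 = rng.standard_normal((64, 6144)) * 0.01, np.zeros(64)
W2, b2 = rng.standard_normal((3, 64)) * 0.1, np.zeros(3)
hidden = np.maximum(0.0, W1 @ flat + b1)   # ReLU activation
lab = W2 @ hidden + b2
print(flat.size, lab.shape)   # 6144 (3,)
```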
CNN colorimetric characterization modeling faces the following challenges: CNN training requires considerable computation and a long training time, and both the training color data and the verifiable color data are insufficient. To overcome these issues, this study used U-Net. First, U-Net requires only 1,000 epochs of learning, whereas the CNN requires 100,000 epochs; this resolves the excessive training time required for CNN computation. Second, U-Net can overcome the insufficiency of data in the IT8.7/4 target: it uses data augmentation to label 32,027,200 color patches over 256 × 256 pixels, resulting in 2,098 billion pixels covering 6,885,222 colors from only six ISO 12640 (CIELAB/SCID) images; by contrast, the CNN can only randomly select 1,000 colors from the 1,617 color patches in the IT8.7/4 target, which amounts to only 1,000 patch colors over 64,000 pixels. Moreover, U-Net validates 9,338,456 pixels and 1,626,192 colors in two CIELAB images of ISO 12640 (CIELAB/SCID), whereas the CNN validates 39,488 pixels and 617 colors from the IT8.7/4 target.
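As a sketch of how augmentation multiplies labeled training data, the example below applies simple geometric transforms to a 256 × 256 image. The eight-variant rotation/flip scheme is an assumed illustration, since the thesis's exact augmentation pipeline is not detailed in this abstract:

```python
import numpy as np

def augment(image):
    """Yield geometric variants of an (H, W, C) image: the four 90-degree
    rotations and a horizontal flip of each (8 variants in total)."""
    for k in range(4):
        rot = np.rot90(image, k, axes=(0, 1))
        yield rot
        yield rot[:, ::-1, :]

# Each augmented 256 x 256 image contributes 65,536 labeled RGB-to-Lab
# pixel pairs for pixel-wise training.
image = np.zeros((256, 256, 3))
variants = list(augment(image))
print(len(variants))                # 8
print(len(variants) * 256 * 256)    # 524288 labeled pixels per source image
```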
The innovative colorimetric characterization methods constructed in this study with the CNN and U-Net are superior to traditional 3 × N polynomial regression modeling. The study results validate this performance: the average ΔE*ab values of the CNN and U-Net models are 0.48 and 0.52, respectively, both of which are superior to the 3.03 achieved by the traditional 3 × 11 polynomial regression model. Although the ΔE*ab accuracy of the U-Net model is slightly inferior to that of the CNN model, the computation of U-Net is approximately six times faster: validated on a PC equipped with an Nvidia GTX 1080 Ti GPU, characterizing one ISO 12640 (CIELAB/SCID) standard color image requires an average of 5 s with the CNN model and 0.8 s with the U-Net model. Thus, the colorimetric characterization modeling implemented using the CNN and U-Net improves the colorimetric characterization accuracy of image sensor devices.
en
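The ΔE*ab figures quoted in the abstract can be computed from predicted and measured Lab triplets. The sketch below uses the CIE76 definition (Euclidean distance in CIELAB), which is an assumption here, since the abstract does not name the ΔE formula variant:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB.
    A ΔE*ab around 1 is roughly a just-noticeable difference."""
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# One predicted vs. measured patch:
print(delta_e_ab([50.0, 10.0, -10.0], [50.3, 10.4, -10.0]))   # ~0.5

# Mean ΔE*ab over a batch, the summary statistic reported for the models:
pred = np.array([[50.0, 0.0, 0.0], [70.0, 20.0, 10.0]])
meas = np.array([[50.5, 0.0, 0.0], [70.0, 19.0, 10.0]])
print(delta_e_ab(pred, meas).mean())                           # ~0.75
```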
dc.description.provenance: Made available in DSpace on 2021-06-15T11:13:18Z (GMT). No. of bitstreams: 1
U0001-0702202106081600.pdf: 13169922 bytes, checksum: 3d73e0641ada6d0fcd5f60e3df99253b (MD5)
Previous issue date: 2021
en
dc.description.tableofcontents: Table of Contents
Oral Defense Committee Certification i
Acknowledgments ii
Abstract (Chinese) iv
Abstract (English) vi
Table of Contents ix
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
Chapter 2 Literature Review 3
2.1 Introduction to Colorimetric Characterization 3
2.1.1 Equipment and Materials for Image Sensor Colorimetric Characterization 3
2.1.2 Colorimetric Characterization Algorithms 5
2.2 Artificial Neural Network Algorithms 7
2.3 Deep Learning Algorithms 11
2.3.1 The CNN Algorithm 11
2.3.2 The U-Net Algorithm 13
Chapter 3 Materials and Methods 19
3.1 Colorimetric Characterization Modeling with Traditional Polynomial Regression 19
3.2 CNN Colorimetric Characterization Modeling 20
3.2.1 Materials for CNN Colorimetric Characterization 20
3.2.2 Training and Validation Data for CNN Colorimetric Characterization 22
3.2.3 Methods for CNN Colorimetric Characterization 23
3.3 U-Net Colorimetric Characterization Modeling 27
3.3.1 Materials for U-Net Colorimetric Characterization 28
3.3.2 Training and Validation Data for U-Net Colorimetric Characterization 29
3.3.3 Methods for U-Net Colorimetric Characterization 32
Chapter 4 Results and Discussion 40
4.1 CNN Colorimetric Characterization Results 40
4.2 U-Net Colorimetric Characterization Results 45
4.3 Discussion 50
Chapter 5 Conclusions 52
References 54
List of Figures
Figure 2-1 The IT8.7/4 color target 5
Figure 2-2 Architecture of the unsupervised self-organizing map neural network 8
Figure 2-3 Architecture of the supervised back-propagation neural network 9
Figure 2-4 Architecture of the hybrid neural network 10
Figure 2-5 Architecture of the LeNet-5 model 12
Figure 2-6 The U-Net architecture 13
Figure 2-7 Data augmentation by image deformation 14
Figure 2-8 Max pooling with a (2 × 2) kernel 15
Figure 2-9 The U-Net encoder-decoder architecture 18
Figure 2-10 Operation of U-Net encoder-decoder fusion 18
Figure 3-1 Flowchart of colorimetric characterization by polynomial regression 19
Figure 3-2 Point-to-point color space conversion in polynomial regression 20
Figure 3-3 Flowchart of colorimetric characterization 21
Figure 3-4 Patch-image-to-point color space conversion in the CNN 22
Figure 3-5 The CNN colorimetric characterization architecture 23
Figure 3-6 The Sigmoid, Tanh, and ReLU activation functions 25
Figure 3-7 The ReLU activation function makes the network sparser 26
Figure 3-8 Dropout removing connected neurons 27
Figure 3-9 Image-to-image pixel-wise color space conversion with U-Net 28
Figure 3-10 ISO 12640 (CIELAB/SCID) color images for prepress digital data exchange 28
Figure 3-11 Flowchart of U-Net colorimetric characterization 29
Figure 3-12 Images N2-N7 of the ISO 12640 (CIELAB/SCID) standard set, used for training 30
Figure 3-13 Flowchart of the input RGB training images 31
Figure 3-14 Images N1 and N8 of the ISO 12640 (CIELAB/SCID) standard set, used for validation 31
Figure 3-15 The U-Net colorimetric characterization architecture 32
Figure 3-16 The autoencoder architecture 33
Figure 3-17 U-Net's added concatenations improving the autoencoder architecture 34
Figure 3-18 Operation of U-Net concatenation 34
Figure 3-19 3D illustration of the operation of U-Net's (1 × 1) convolution kernel 35
Figure 3-20 The U-Net contracting path and its convolution kernels 36
Figure 3-21 U-Net's (1 × 1) convolution kernel 37
Figure 3-22 The U-Net expanding path and its convolution kernels 38
Figure 4-1 Color difference histogram of the CNN colorimetric characterization model 42
Figure 4-2 Color difference histogram of the (3 × 11) polynomial regression model 42
Figure 4-3 Comparison of the four colors with ΔE*ab > 6 in the CNN model 43
Figure 4-4 ISO 12640 (CIELAB/SCID) standard image N8_LAB 44
Figure 4-5 Image N8CNN_LAB inferred by the CNN model 44
Figure 4-6 Image N8_3×11_LAB inferred by the (3 × 11) polynomial regression model 45
Figure 4-7 MSE and val_MSE learning curves of the U-Net model 46
Figure 4-8 MAE and val_MAE learning curves of the U-Net model 46
Figure 4-9 Convergence of the U-Net model by epoch 148 47
Figure 4-10 Visual comparison of U-Net-modeled image N1_LAB 47
Figure 4-11 Color differences of U-Net-modeled image N1_LAB: ΔE*ab ≤ 1 for 99.79% of pixels 48
Figure 4-12 Visual comparison of U-Net-modeled image N8_LAB 48
Figure 4-13 Color differences of U-Net-modeled image N8_LAB: ΔE*ab ≤ 1 for 93.24% of pixels 49
List of Tables
Table 4-1 The four colors with ΔE*ab > 6 in the CNN colorimetric characterization model 43
dc.language.iso: zh-TW
dc.title: 基於卷積神經網路與 U 型卷積網路之影像感測器的色彩特性化建模 (zh_TW)
dc.title: Colorimetric Characterization of Image Sensors Based on CNN and U-Net Modeling (en)
dc.type: Thesis
dc.date.schoolyear: 109-1
dc.description.degree: 博士 (doctoral)
dc.contributor.author-orcid: 0000-0002-0979-3823
dc.contributor.oralexamcommittee: 陳倩瑜 (Chien-Yu Chen), 顏炳郎 (Ping-Lang Yen), 孫沛立 (Pei-Li Sun), 羅梅君 (Mei-Chun Lo)
dc.subject.keyword: 彩色影像感測器, 卷積神經網路 (CNN), U 型卷積網路 (U-Net), 色彩特性化, 逐像素迴歸, 資料擴增 (zh_TW)
dc.subject.keyword: color image sensor, convolutional neural network (CNN), U-Net convolutional network (U-Net), pixel-wise regression, colorimetric characterization, data augmentation (en)
dc.relation.page: 58
dc.identifier.doi: 10.6342/NTU202100640
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2021-02-19
dc.contributor.author-college: 生物資源暨農學院 (College of Bioresources and Agriculture)
dc.contributor.author-dept: 生物機電工程學系 (Department of Bio-Industrial Mechatronics Engineering)
Appears in collections: 生物機電工程學系 (Department of Bio-Industrial Mechatronics Engineering)

Files in this item:
U0001-0702202106081600.pdf (12.86 MB, Adobe PDF; access currently restricted)