NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59210
Full metadata record (DC field: value [language])
dc.contributor.advisor: 洪一平 (Yi-Ping Hung)
dc.contributor.author: Chih-Tsung Shen [en]
dc.contributor.author: 沈志聰 [zh_TW]
dc.date.accessioned: 2021-06-16T09:17:55Z
dc.date.available: 2017-08-25
dc.date.copyright: 2017-08-25
dc.date.issued: 2017
dc.date.submitted: 2017-07-11
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59210
dc.description.abstract [zh_TW]: 近期,圖像增強的技術被廣泛應用在高解析面板上,其中以超解析圖像技術及高動態圖像合成技術最被重視。雖然有相當數量的研究工作對此議題做探討,然而在邊緣保留、紋理細節、人眼視覺感受,仍然具有非常大的挑戰性。在此論文中,我們不單單是有效地解決邊緣、紋理及人眼感受的議題,我們更進一步對計算機視覺和訊號處理領域中的圖像空疏性質之最佳化進行研究。
第一個議題是超解析圖像技術。超解析技術在公元 1984 年提出,最開始是由多張圖像融合成一張圖像,由於多張圖像之間存在位移量,可以用來提升融合後的圖像畫素。在公元 2000 年前後,圖像生成模型被導入在超解析技術的架構中,從此超解析技術跟去模糊演算法及最佳化求解就息息相關了。不同於傳統的超解析技術是利用多張圖及消去單一模糊核心,我們提出具有空間變動性的單張圖像超解析技術,可以針對單一圖像估測多個模糊核心去完成超解析圖像的重建。更進一步,我們將空間變動性結合人眼視覺感受,使得圖像進行放大的同時,能保持觀測者的視覺感受,在超解析技術的研究進程上,算是一個突破。同時,我們也對圖像空疏性的最佳化求解,提出了一個整合性的架構。
第二個議題是高動態圖像模擬合成技術。高動態圖像合成技術在公元 1996 年提出,利用不同的曝光量,捕捉不同動態區間的圖像,再合成為一張高動態的圖像。同一時期,美國航太總署利用圖像濾波器對單張圖像進行圖像分離,再進而調整整體亮度以及細節對比,模擬出一張相似高動態合成的圖像。由於使用多張圖像融合技術,會帶來模糊、鬼影以及非自然等圖像瑕疵;於是,我們推演美國航太總署的模擬方法,將圖像分離後再增強亮度跟對比的方法。我們避免一般圖像分離會造成的亮度反轉問題,並且我們也導入空疏性、空間變動性及人眼感受等看法。
由高動態圖像合成技術,我們延伸到第三個議題。第三個議題是利用圖像分離技術進行圖像增強,讓圖像在螢幕面板為低背光源的省電模式下,內容依然清晰可視。我們推導了背光源跟圖像的光學關係式,並使用了空疏性質進行圖像分離。此外,我們更進一步提出了一個強化式參數學習的方法,讓系統能夠自主決定使用的圖像分離及圖像增強之參數。
對未來的研究期許,除了讓演算法能在學術領域中謀求突破,我們也希望將所提出的演算法逐一在物聯網及嵌入式系統上實現,用以增加國家在電子、光電及數位內容的科技競爭力;透過軟硬整合、訊號處理及人工智慧,做出產品與服務,對國家的產業經濟,有所裨益。
dc.description.abstract [en]: In recent years, image enhancement technologies have been widely applied to high-resolution displays. Among these techniques, super-resolution and HDR-like synthesis are the most important topics. Although many research works have addressed these two topics, it remains difficult to process the regions around edges and the regions rich in texture, and it is also hard to connect these techniques with human visual perception. In this dissertation, we not only deal effectively with edges, textured regions, and human visual perception, but also study the sparsity-based optimization methods that are currently mainstream in computer vision and signal processing.
The first topic is super-resolution. Super-resolution was first proposed in 1984. The original idea is to combine the information from several input images, register them at the subpixel level, and fuse them into a high-resolution output. Around 2000, researchers brought the image formation model into the super-resolution framework; since then, super-resolution has been tightly integrated with image deblurring methods and optimization schemes. Unlike conventional super-resolution methods, which handle multiple input images and a single blur kernel, we propose a spatially-varying super-resolution method that reconstructs a high-resolution image from a single low-resolution input while handling multiple blur kernels. Moreover, we extend this spatially-varying super-resolution to a viewing-distance-aware scheme: when an image is enlarged, the viewer's perceptual constancy is maintained. To our knowledge, we are the first to propose such a perceptual super-resolution scheme. We also propose an integrated framework for image reconstruction that solves the optimization problem with image sparsity.
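For orientation, the kind of energy minimized in single-image super-resolution with spatially-varying blur can be sketched as follows; the notation (I_L, I_H, k_r, Omega_r, lambda) is mine, and this is a simplified stand-in rather than the exact L1L2TTV objective with saliency weighting developed in the thesis:

```latex
\[
\hat{I}_H = \arg\min_{I_H} \sum_{r} \bigl\| \mathbf{D}\,(k_r * I_H) - I_L \bigr\|_{1,\Omega_r}
          + \lambda\, \mathrm{TV}(I_H)
\]
```

Here I_L is the observed low-resolution image, D is the downsampling operator, k_r is the blur kernel estimated for region Omega_r, the first term is an L1 data fidelity restricted to that region, and TV is a total-variation regularizer encoding the sparse-gradient prior.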
The second topic is high-dynamic-range-like (HDR-like) image synthesis. HDR imaging was proposed in 1996; it captures input images of different dynamic ranges under different exposures and merges them into a single high-dynamic-range image. However, fusing several images into an HDR image may suffer from blur, ghosting, and unnatural artifacts. Around the same time, researchers at NASA adopted a set of filters to decompose a single input image, then adjusted the global base layer and boosted the detail layer to synthesize an HDR-like output. To avoid the drawbacks and artifacts of the multiple-image scheme, we adopt a single-image decomposition scheme and enhance both the base layer and the detail layer to render an HDR-like output. Our HDR-like syntheses not only avoid these degradations but also take image sparsity and the human visual system into account.
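As a rough illustration of the base/detail idea, here is a minimal sketch assuming a float luminance image in [0, 1]; the Gaussian smoother, function name, and parameter values are placeholders of mine, whereas the thesis uses retinex-style and sparsity-based decompositions:

```python
# Minimal base/detail sketch: smooth in the log domain, compress the base
# (illumination-like) layer and boost the detail (reflectance-like) layer.
# The Gaussian smoother is a stand-in, not the thesis's estimator.
import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_like(lum, sigma=8.0, base_gamma=0.7, detail_gain=1.8):
    """lum: float luminance in [0, 1]; returns an HDR-like enhanced luminance."""
    eps = 1e-4
    log_l = np.log(lum + eps)             # retinex-style methods work in the log domain
    base = gaussian_filter(log_l, sigma)  # base layer (global illumination estimate)
    detail = log_l - base                 # detail layer (local contrast)
    out = base_gamma * base + detail_gain * detail
    return np.clip(np.exp(out) - eps, 0.0, 1.0)
```

Compressing the base layer brightens dark regions while boosting the detail layer preserves local contrast, which is the general effect the single-image HDR-like schemes aim for.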
The HDR-like synthesis extends to our third topic: image enhancement via sparsity-based decomposition for low-backlight displays. We derive the optical relationship between the backlight level and the input image, and we approximate the appearance of the enhanced image on a low-backlight display to that of the original image on a full-backlight display, so as to save power for the whole system. Here we adopt a content-aware image decomposition based on sparsity-driven optimization. Moreover, we propose a reinforcement parameter learning scheme that enables the system to choose the decomposition and enhancement parameters automatically.
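A minimal sketch of the underlying backlight relation, under the common assumption that displayed luminance is roughly the backlight level times the gamma-mapped pixel value; the function names, gamma value, and backlight level b are illustrative, and the thesis adds sparsity-based decomposition, detail enhancement, and learned parameters on top of this:

```python
import numpy as np

def compensate(img, b=0.6, gamma=2.2):
    """Pre-compensate img (float, in [0, 1]) so that, shown at backlight level b < 1,
    it approximates the original at full backlight: b * (c * I)**gamma ~= I**gamma."""
    # bright pixels clip here; a detail layer must restore the lost local contrast
    return np.clip(img * b ** (-1.0 / gamma), 0.0, 1.0)

def simulate_dimmed(img, b=0.6, gamma=2.2):
    """Approximate the perceived luminance on a display dimmed to backlight level b."""
    return b * img ** gamma  # ignores panel light leakage and ambient illumination
```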
In the future, we will implement the aforementioned algorithms on IoT and embedded systems. We not only pursue breakthroughs in the academic fields of vision, multimedia, signal processing, and artificial intelligence, but also hope to strengthen national competitiveness in the electronics, display, and digital content industries. Moreover, we hope that our 'AI+IoT' embedded integrations and services can benefit the national economy.
dc.description.provenance [en]: Made available in DSpace on 2021-06-16T09:17:55Z (GMT). No. of bitstreams: 1. ntu-106-D99944010-1.pdf: 29683103 bytes, checksum: 4be0009ebe818ee96bcf47f9ce0a1dd8 (MD5). Previous issue date: 2017.
dc.description.tableofcontents:
1 Introduction 1
1.1 The Challenges 1
1.2 Thesis Overview 2
2 Spatially-Varying Super-Resolution 6
2.1 Introduction 6
2.2 Our Proposed System 9
2.2.1 Overview 9
2.2.2 Spatially-Varying Blur Identification 10
2.2.3 L1L2TTV Deblurring with Saliency Weighting 11
2.2.4 Pixel Selection 14
2.3 Experimental Results 15
2.4 Conclusion 16
3 Viewing-Distance Aware Super-Resolution 19
3.1 Introduction 19
3.2 Viewer Perception and Display 21
3.2.1 Scaling factor, image size and viewing distance 22
3.2.2 Relationship between the viewing distance, perceptual blur radius and image blur levels 24
3.3 Proposed Super-Resolution Algorithm 28
3.3.1 Image Formation Model for Super-Resolution 28
3.3.2 Estimation of Spatially-Varying Image Blur 29
3.3.3 L1L2TTV Deblurring 30
3.3.4 Pixel Selection 34
3.4 Experimental Results 34
3.4.1 Experimental Settings 35
3.4.2 Quantitative and Visual Results on Super-Resolution 37
3.4.3 Subjective Examinations 40
3.4.4 Limitation 42
3.5 Concluding Remarks 43
4 HDR-Like Synthesis using Retinex Enhancement 51
4.1 Introduction 51
4.2 Background 54
4.3 Our proposed HDR-Like Retinex Enhancement 55
4.3.1 Multi-Scale Illumination Estimator with Spatially-Adaptive Prior 56
4.3.2 Synthesize HDR by Illumination/Reflectance Tuning 58
4.3.3 Color Saturation Boosting 60
4.4 Experimental Results 61
4.5 Discussion on Processing Time 63
4.6 Conclusion 67
5 HDR-Like Synthesis using Sparsity-Based Image Decomposition 68
5.1 Introduction 68
5.2 Image Decomposition with Sparse Gradient Priors 70
5.2.1 Solvers for v(x, y) and w(x, y) 72
5.2.2 Solvers for B(x, y) 73
5.3 Correction and Enhancement 75
5.3.1 Texture-Aware Detail Enhancement 75
5.3.2 Color Enhancement in Lab Color Space 77
5.4 Experimental Results 78
5.5 Conclusions 83
6 Visual Enhancement using Sparsity-Based Image Decomposition for Low-Backlighted Displays 84
6.1 Introduction 84
6.2 Backlight Compensation and Simulation 87
6.3 Image Decomposition with Sparse Gradient Priors 88
6.3.1 Solvers for v(x, y) and w(x, y) 90
6.3.2 Solvers for S(x, y) 91
6.4 Base Layer Compensation and Texture-Aware Detail Enhancement 91
6.4.1 Texture-Aware Detail Enhancement 91
6.4.2 Base Layer Compensation 92
6.4.3 Simulation on Displays with Full Backlight 93
6.5 Experimental Results and Discussion 94
6.6 Conclusion 94
7 Visual Enhancement via Reinforcement Parameter Learning for Low-Backlighted Display 96
7.1 Introduction 96
7.2 The Proposed Method 97
7.2.1 Image Enhancer 98
7.2.2 Reinforcement Parameter Learning 101
7.3 Conclusions 107
8 Conclusions and Future Work 109
8.1 Conclusions 109
8.2 Future Work 110
Bibliography 111
dc.language.iso: en
dc.title: 利用空疏型最佳化完成內容相關性之影像增強 [zh_TW]
dc.title: Content-Adaptive Image Enhancement using Sparsity-Based Optimization [en]
dc.type: Thesis
dc.date.schoolyear: 105-2
dc.description.degree: 博士 (doctoral)
dc.contributor.coadvisor: 貝蘇章 (Soo-Chang Pei)
dc.contributor.oralexamcommittee: 傅立成 (Li-Chen Fu), 莊永裕 (Yung-Yu Chuang), 黃文良 (Wen-Liang Hwang), 陳祝嵩 (Chu-Song Chen), 鍾國亮 (Kuo-Liang Chung)
dc.subject.keyword: 內容相關性, 空疏性, 最佳化, 超解析重建, 高動態圖像合成, 高解析面板 [zh_TW]
dc.subject.keyword: Content-aware, sparsity, optimization, super-resolution reconstruction, HDR-like synthesis, high-definition display [en]
dc.relation.page: 121
dc.identifier.doi: 10.6342/NTU201701399
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2017-07-11
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) [zh_TW]
Appears in collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in this item:
ntu-106-1.pdf, 28.99 MB, Adobe PDF (currently not authorized for public access)