Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/55549
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 簡韶逸(Shao-Yi Chien) | |
dc.contributor.author | Yi-Nung Liu | en |
dc.contributor.author | 劉逸穠 | zh_TW |
dc.date.accessioned | 2021-06-16T04:08:48Z | - |
dc.date.available | 2019-09-05 | |
dc.date.copyright | 2014-09-05 | |
dc.date.issued | 2014 | |
dc.date.submitted | 2014-08-22 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/55549 | - |
dc.description.abstract | 本篇論文中,我們提出了一套自我驗證架構來處理不同的內插問題,包含影像感測器的彩色濾光片內插、電視訊號的去交錯,以及影像的超解析放大。這些內插問題在生活中每天都會遇到,而這一類的問題基於取樣的不足,註定無法還原成完美的高解析度影像。學者提出了許多不同的假設並利用這些假設來內插,問題是這些假設如果不成立,最後輸出的影像品質就會變得無法預測。我們提出的架構是利用重複內插的收斂性來選出最佳的內插解:架構本身不提供輸出值的預測,而是在幾個候選演算法中,檢查其中一種方法重複內插後的收斂性來判斷內插是否正確。利用此方法,我們可以把不具收斂性的內插法排除,避免產生錯誤的結果。根據我們的客觀與主觀實驗結果,自我驗證內插架構比候選演算法提供更好的結果,且品質差距是顯著的。
利用此架構,我們也設計了一套電視用的超解析引擎,能夠將低解析度的電視訊號放大成高解析度的影像。影像訊號可分為結構與材質:我們利用先前提出的自我驗證架構來重建影像的結構並避免錯誤的結果,再利用電腦圖學中的材質合成來生成材質的細紋,讓觀看者得到高解析度的感受。我們也分析了對應的硬體架構,利用區塊內遞迴的架構可以大幅降低所需的頻寬與記憶體用量。在論文的最後一部分,我們利用人類視覺特性,設計實驗找出人眼對平面顯示器動態模糊的感知極限,並據此設計出移動估計與補償的畫面內插法來產生高更新率的影片,有效降低人眼感受到的動態模糊。綜觀整篇論文,我們的核心觀念是利用人眼視覺特性找出影像處理上的原則並排除冗餘的運算。人類對視覺的認知還只停留在粗淺的表面,還有很多未知的領域有待學者分析,影像處理上也還有非常多的未知數,未來的研究者必能將成果繼續往前推進。 | zh_TW
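The abstract above mentions refining a reconstructed high-resolution image with back projection by gradient descent. The following is a minimal, non-authoritative sketch of that general idea, not the thesis's actual implementation: it assumes a simple box-filter downsampling observation model, and the names `downsample` and `back_project` are hypothetical.

```python
import numpy as np

def downsample(img, s=2):
    """Assumed observation model: s-by-s box-filter averaging."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def back_project(lr, hr_init, steps=10, rate=1.0, s=2):
    """Gradient-descent back projection: nudge the high-resolution estimate
    until downsampling it reproduces the low-resolution input."""
    hr = hr_init.astype(float).copy()
    for _ in range(steps):
        residual = lr - downsample(hr, s)                 # low-resolution error
        hr += rate * np.kron(residual, np.ones((s, s)))   # spread error back up
    return hr
```

With this box-filter model the iteration drives the low-resolution residual to zero; a real system would substitute the camera or scaler's actual point-spread function for the box filter.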
dc.description.abstract | Pixel interpolation is well known as an ill-posed problem in video restoration. In this thesis, a self-validation framework is proposed to solve several kinds of interpolation problems, and the corresponding hardware architectures are analyzed. In the proposed framework, multiple algorithms under different assumptions are run to generate multiple candidate results; the final estimate of a missing pixel sample is then decided adaptively by evaluating the local consistency of each algorithm with a process called double interpolation. By combining the results of different algorithms, color artifacts are reduced. The proposed framework is applied to three interpolation problems: CFA demosaicking, image up-scaling, and de-interlacing. Experimental results demonstrate that the proposed framework improves image and video quality in both subjective and objective assessments. We also implemented spatial up-sampling hardware for a TV scaler using the proposed framework and analyzed the corresponding hardware architecture design. The proposed tile-based approach eliminates most of the memory bandwidth and makes the design practical on low-cost hardware; as a result, a tile-based low-cost super-resolution engine was implemented on an FPGA. For spatio-temporal interpolation, a real-time hardware-based perception-aware motion-compensated frame interpolation algorithm is proposed. We conducted a psychophysical experiment to find the limits of motion-blur perception in the human visual system, and these perceptual limits are used to reduce the computational cost of frame interpolation without affecting visual quality. The experimental results show that the proposed low-cost algorithm maintains the visual quality of the interpolation results. Finally, to optimize the trade-off between memory bandwidth and hardware cost, a dedicated hardware architecture design is also proposed.
The major contributions of this thesis are threefold: first, applying knowledge of the human visual system to image processing to provide high-visual-quality results without heavy computation; second, proposing a unified framework for pixel interpolation problems with solid simulation results; and finally, optimizing the trade-off between picture quality and hardware cost to derive a practical solution for real applications. | en
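To make the double-interpolation idea in the abstract concrete, here is a minimal one-dimensional sketch under stated assumptions, not the thesis code; `interp_linear`, `interp_nearest`, and `self_validate` are hypothetical names. Each candidate interpolator fills the missing samples, is applied again to re-estimate the known samples, and the candidate whose second pass best reproduces the originals is judged locally consistent.

```python
import numpy as np

def interp_linear(known):
    # each missing sample = average of its two known neighbours
    return (known[:-1] + known[1:]) / 2.0

def interp_nearest(known):
    # each missing sample = copy of its left known neighbour
    return known[:-1].copy()

def self_validate(samples, candidates):
    """Double interpolation as a consistency check: interpolate once to fill
    the missing grid, interpolate again from that result, and compare the
    re-estimated interior samples with the original known ones."""
    errors = {}
    for name, f in candidates.items():
        first_pass = f(samples)      # estimate the missing in-between samples
        second_pass = f(first_pass)  # re-estimate the interior known samples
        errors[name] = float(np.mean((second_pass - samples[1:-1]) ** 2))
    best = min(errors, key=errors.get)
    return best, errors
```

On a smooth ramp the linear candidate re-creates the original samples exactly and is selected, while the nearest-neighbour candidate accumulates a consistent offset; this mirrors how the framework rejects candidates whose assumptions break down.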
dc.description.provenance | Made available in DSpace on 2021-06-16T04:08:48Z (GMT). No. of bitstreams: 1 ntu-103-F94943078-1.pdf: 38804030 bytes, checksum: 14a11e43f7eced69cc3e0317adb72ff6 (MD5) Previous issue date: 2014 | en |
dc.description.tableofcontents | Contents
Abstract
1 Introduction
  1.1 Visual Data in Multimedia
    1.1.1 Pixels in digital imaging
    1.1.2 Interpolation in digital imaging
    1.1.3 Types of interpolation methods
    1.1.4 Interpolation cases
  1.2 Human Vision System
    1.2.1 Pathway of visual signal
  1.3 Validation in Statistics
  1.4 Summary
  1.5 Thesis Organization
2 Self Validation Framework
  2.1 Self-Validation Framework
    2.1.1 Double Interpolation as Performance Measurement
    2.1.2 Self-Validation Framework as Mode Decision
  2.2 CFA Demosaicking
    2.2.1 Implementation
    2.2.2 Experimental Results
  2.3 Spatial Up-sampling
    2.3.1 Implementation
    2.3.2 Experimental Results
  2.4 Deinterlacing
    2.4.1 Implementation
    2.4.2 Experimental Results
  2.5 Summary
3 Spatial Upsampling
  3.1 Introduction
  3.2 Related Works
    3.2.1 Interpolation-Based Algorithms
    3.2.2 Super Resolution Algorithms
    3.2.3 Summary
  3.3 Proposed Algorithm Based on Database-Free Texture Synthesis
    3.3.1 Texture Synthesis
    3.3.2 Back Projection using Gradient Descent
    3.3.3 Algorithm Summary and Complexity Comparison
  3.4 Proposed Hardware Architecture
    3.4.1 Specification and Design Challenges
    3.4.2 Tile-Based Architecture
    3.4.3 Texture Synthesis Architecture
    3.4.4 Back Projection by Gradient Descent Architecture
    3.4.5 Summary
  3.5 Experimental Results
    3.5.1 Synthesis Results
    3.5.2 FPGA Demo System
    3.5.3 Image/Video Results
  3.6 Summary
4 Temporal Upsampling
  4.1 Introduction to Temporal Interpolation
  4.2 Introduction of Motion Blur on LCD
    4.2.1 The Cause of Motion Blur
    4.2.2 Motion Blur Reduction Methods
    4.2.3 Motion Blur Evaluation
  4.3 Motivation and Design Challenge of Perception-Aware Frame Rate Up-conversion
  4.4 Overview of Motion-Compensated Frame Interpolation
    4.4.1 Motion Estimation Methods for MCFI
    4.4.2 MV Processing Methods
    4.4.3 Motion-Compensated Interpolation
    4.4.4 Special Coarse-to-Fine MCFI Scheme
  4.5 Psychophysical Experiment on Human Visual Perception
    4.5.1 Design of Psychophysical Experiment
    4.5.2 Results of Psychophysical Experiment
    4.5.3 Summary of Psychophysical Experiment
  4.6 Proposed Perception-Aware Motion-Compensated Frame Interpolation Algorithm
    4.6.1 Overview of Algorithm
    4.6.2 Forward MV Processing
    4.6.3 Motion-Trajectory-Based MV Prediction
    4.6.4 Bidirectional MV Processing
    4.6.5 Perception-Aware MC
    4.6.6 Software Simulation Results
  4.7 Proposed Hardware Architecture
    4.7.1 Bandwidth Analysis and Design Challenge
    4.7.2 Proposed Hardware Architecture
    4.7.3 Hardware Design for MV Processing Stage
    4.7.4 Hardware Design for Bilateral ME
    4.7.5 Hardware Design for MC
    4.7.6 Implementation Results
  4.8 Summary
5 Conclusion
  5.1 Contribution
  5.2 Future Directions
Reference | |
dc.language.iso | en | |
dc.title | 自我驗證像素內插框架及其相關演算法與硬體架構設計 | zh_TW |
dc.title | Self-Validation Pixel Up-Sampling Framework and its Related Algorithm and Architecture Design | en |
dc.type | Thesis | |
dc.date.schoolyear | 102-2 | |
dc.description.degree | 博士 (Ph.D.) | |
dc.contributor.oralexamcommittee | 陳良基,李國君,張添烜,陳宏銘,郭峻因 | |
dc.subject.keyword | 自我驗證, 像素, 去交錯, 材質, 內插, 超解析 | zh_TW |
dc.subject.keyword | Double interpolation, Super resolution, Texture synthesis, Tile-based, Self-validation | en |
dc.relation.page | 148 | |
dc.rights.note | 有償授權 (licensed for a fee) | |
dc.date.accepted | 2014-08-22 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電子工程學研究所 | zh_TW |
Appears in Collections: | 電子工程學研究所 (Graduate Institute of Electronics Engineering)
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-103-1.pdf (currently not authorized for public access) | 37.89 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.