NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Electronics Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101195
Full metadata record
DC Field / Value / Language
dc.contributor.advisor盧奕璋zh_TW
dc.contributor.advisorYi-Chang Luen
dc.contributor.author林奕憲zh_TW
dc.contributor.authorYi-Hsien Linen
dc.date.accessioned2025-12-31T16:16:52Z-
dc.date.available2026-01-01-
dc.date.copyright2025-12-31-
dc.date.issued2025-
dc.date.submitted2025-12-01-
dc.identifier.citation[1] Y. Lin and Y. Lu. “Low-light enhancement using a plug-and-play Retinex model with shrinkage mapping for illumination estimation”. In: IEEE Trans. Image Process. 31 (2022), pp. 4897–4908.
[2] J. Xu et al. “STAR: A structure and texture aware retinex model”. In: IEEE Trans. Image Process. 29 (2020), pp. 5022–5037.
[3] C. Guo et al. “Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement”. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2020, pp. 1780–1789.
[4] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Prentice Hall, 2002.
[5] H. Ibrahim and N. S. P. Kong. “Brightness preserving dynamic histogram equalization for image contrast enhancement”. In: IEEE Trans. Consum. Electron. 53.4 (2007), pp. 1752–1758.
[6] E. D. Pisano et al. “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms”. In: J. Digit. Imag. 11.4 (1998), pp. 193–200.
[7] T. Celik and T. Tjahjadi. “Contextual and variational contrast enhancement”. In: IEEE Trans. Image Process. 20.12 (2011), pp. 3431–3441.
[8] T. Mertens, J. Kautz, and F. Van Reeth. “Exposure fusion: A simple and practical alternative to high dynamic range photography”. In: Comput. Graph. Forum. Vol. 28. Wiley Online Library. 2009, pp. 161–171.
[9] Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang. “A new image contrast enhancement algorithm using exposure fusion framework”. In: Proc. Int. Conf. Comput. Anal. Images Patterns. Springer. 2017, pp. 36–46.
[10] Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang. “A new low-light image enhancement algorithm using camera response model”. In: Proc. IEEE Int. Conf. Comput. Vis. Workshop. 2017, pp. 3015–3022.
[11] Q.-C. Tian and L. D. Cohen. “A variational-based fusion model for non-uniform illumination image enhancement via contrast optimization and color correction”. In: Signal Process. 153 (2018), pp. 210–220.
[12] Q. Zhang, Y. Nie, and W.-S. Zheng. “Dual illumination estimation for robust exposure correction”. In: Comput. Graph. Forum. Vol. 38. Wiley Online Library. 2019, pp. 243–252.
[13] E. H. Land. “The retinex theory of color vision”. In: Sci. Amer. 237.6 (1977), pp. 108–129.
[14] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell. “Properties and performance of a center/surround retinex”. In: IEEE Trans. Image Process. 6.3 (1997), pp. 451–462.
[15] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell. “A multiscale retinex for bridging the gap between color images and the human observation of scenes”. In: IEEE Trans. Image Process. 6.7 (1997), pp. 965–976.
[16] B. K. P. Horn. “Determining lightness from an image”. In: Comput. Graph. Image Process. 3.4 (1974), pp. 277–299.
[17] J. M. Morel, A. B. Petro, and C. Sbert. “A PDE formalization of Retinex theory”. In: IEEE Trans. Image Process. 19.11 (2010), pp. 2825–2837.
[18] X. Fu, Y. Sun, M. LiWang, Y. Huang, X.-P. Zhang, and X. Ding. “A novel retinex based approach for image enhancement with illumination adjustment”. In: Proc. IEEE Int. Conf. Acoust. Speech Signal Process. 2014, pp. 1190–1194.
[19] X. Guo, Y. Li, and H. Ling. “LIME: Low-light image enhancement via illumination map estimation”. In: IEEE Trans. Image Process. 26.2 (2016), pp. 982–993.
[20] X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding. “A weighted variational model for simultaneous reflectance and illumination estimation”. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2016, pp. 2782–2790.
[21] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley. “A fusion-based enhancing method for weakly illuminated images”. In: Signal Process. 129 (2016), pp. 82–96.
[22] B. Cai, X. Xu, K. Guo, K. Jia, B. Hu, and D. Tao. “A joint intrinsic-extrinsic prior model for retinex”. In: Proc. IEEE Int. Conf. Comput. Vis. 2017, pp. 4000–4009.
[23] X. Dong et al. “Fast efficient algorithm for enhancement of low lighting video”. In: Proc. IEEE Int. Conf. Multimedia Expo. 2011, pp. 1–6.
[24] A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío. “On the duality between retinex and image dehazing”. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2018, pp. 8212–8221.
[25] L. Li, R. Wang, W. Wang, and W. Gao. “A low-light image enhancement method for both denoising and contrast enlarging”. In: Proc. IEEE Int. Conf. Image Process. 2015, pp. 3730–3734.
[26] K. G. Lore, A. Akintayo, and S. Sarkar. “LLNet: A deep autoencoder approach to natural low-light image enhancement”. In: Pattern Recognit. 61 (2017), pp. 650–662.
[27] J. Park, J.-Y. Lee, D. Yoo, and I. So Kweon. “Distort-and-recover: Color enhancement using deep reinforcement learning”. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2018, pp. 5928–5936.
[28] Y. Jiang et al. “EnlightenGAN: Deep light enhancement without paired supervision”. In: IEEE Trans. Image Process. 30 (2021), pp. 2340–2349.
[29] C. Wei, W. Wang, W. Yang, and J. Liu. “Deep retinex decomposition for low-light enhancement”. In: Proc. British Mach. Vis. Conf. 2018, pp. 127–136.
[30] R. Wang, Q. Zhang, C.-W. Fu, X. Shen, W.-S. Zheng, and J. Jia. “Underexposed photo enhancement using deep illumination estimation”. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2019, pp. 6849–6857.
[31] R. Chartrand. “Shrinkage mappings and their induced penalty functions”. In: Proc. IEEE Int. Conf. Acoust. Speech Signal Process. 2014, pp. 1026–1029.
[32] K. He, J. Sun, and X. Tang. “Guided image filtering”. In: IEEE Trans. Pattern Anal. Mach. Intell. 35.6 (2013), pp. 1397–1409.
[33] S. H. Chan, X. Wang, and O. A. Elgendy. “Plug-and-Play ADMM for image restoration: Fixed-point convergence and applications”. In: IEEE Trans. Comput. Imag. 3.1 (2016), pp. 84–98.
[34] V. Bychkovsky, S. Paris, E. Chan, and F. Durand. “Learning photographic global tonal adjustment with a database of input/output image pairs”. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2011, pp. 97–104.
[35] H.-C. Lee. Introduction to color imaging science. Cambridge University Press, 2005.
[36] M. Ebner. “Color constancy”. In: Computer Vision: A Reference Guide. Springer, 2021, pp. 168–175.
[37] N. Banić and S. Lončarić. “Light random sprays Retinex: exploiting the noisy illumination estimation”. In: IEEE Signal Process. Lett. 20.12 (2013), pp. 1240–1243.
[38] N. Banić and S. Lončarić. “Smart light random memory sprays Retinex: a fast Retinex implementation for high-quality brightness adjustment and color correction”. In: JOSA A 32.11 (2015), pp. 2136–2147.
[39] Sensation & Perception. Sinauer, Sunderland, MA.
[40] M. K. Ng and W. Wang. “A total variation model for retinex”. In: SIAM J. Imag. Sci. 4.1 (2011), pp. 345–365.
[41] S. Hao, X. Han, Y. Guo, X. Xu, and M. Wang. “Low-light image enhancement with semi-decoupled decomposition”. In: IEEE Trans. Multimedia 22.12 (2020), pp. 3025–3038.
[42] G. Fu, L. Duan, and C. Xiao. “A Hybrid L2-LP Variational Model For Single Low-Light Image Enhancement With Bright Channel Prior”. In: Proc. IEEE Int. Conf. Image Process. 2019, pp. 1925–1929.
[43] K. Kurihara, H. Yoshida, and Y. Iiguni. “Low-Light Image Enhancement via Adaptive Shape and Texture Prior”. In: Proc. IEEE Int. Conf. Signal-Image Technol. Internet-Based Syst. IEEE. 2019, pp. 74–81.
[44] S. P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. “Distributed optimization and statistical learning via the alternating direction method of multipliers”. In: Found. Trends Mach. Learn. 3.1 (2011), pp. 1–122.
[45] Y. Wang, W. Yin, and J. Zeng. “Global convergence of ADMM in nonconvex nonsmooth optimization”. In: J. Sci. Comput. 78 (2019), pp. 29–63.
[46] A. Chambolle and T. Pock. “A first-order primal-dual algorithm for convex problems with applications to imaging”. In: J. Math. Imag. Vis. 40.1 (2011), pp. 120–145.
[47] E. Y. Sidky, R. Chartrand, J. M. Boone, and X. Pan. “Constrained TpV Minimization for Enhanced Exploitation of Gradient Sparsity: Application to CT Image Reconstruction”. In: IEEE J. Transl. Eng. Health Med. 2 (2014), pp. 1–18.
[48] H. Zhang, L. Wang, B. Yan, L. Li, A. Cai, and G. Hu. “Constrained total generalized Lp-variation minimization for few-view X-ray computed tomography image reconstruction”. In: PLOS ONE 11.2 (2016).
[49] X. Liu, Y. Chen, Z. Peng, J. Wu, and Z. Wang. “Infrared image super-resolution reconstruction based on quaternion fractional order total variation with Lp quasinorm”. In: Appl. Sci. 8.10 (2018), p. 1864.
[50] E. J. Candes, M. B. Wakin, and S. P. Boyd. “Enhancing sparsity by reweighted L1 minimization”. In: J. Fourier Anal. Appl. 14 (2008), pp. 877–905.
[51] S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman. “Plug-and-play priors for bright field electron tomography and sparse interpolation”. In: IEEE Trans. Comput. Imag. 2.4 (2016), pp. 408–423.
[52] C. Tomasi and R. Manduchi. “Bilateral filtering for gray and color images”. In: Proc. IEEE Int. Conf. Comput. Vis. IEEE. 1998, pp. 839–846.
[53] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski. “Edge-preserving decompositions for multi-scale tone and detail manipulation”. In: ACM Trans. Graph. 27.3 (2008), pp. 1–10.
[54] L. Xu, C. Lu, Y. Xu, and J. Jia. “Image smoothing via L0 gradient minimization”. In: ACM Trans. Graph. 30.6 (2011), pp. 1–12.
[55] Q. Zhang, X. Shen, L. Xu, and J. Jia. “Rolling guidance filter”. In: Proc. Eur. Conf. Comput. Vis. Springer. 2014, pp. 815–830.
[56] L. Xu, Q. Yan, Y. Xia, and J. Jia. “Structure extraction from texture via relative total variation”. In: ACM Trans. Graph. 31.6 (2012), pp. 1–10.
[57] V. Vonikakis. https://sites.google.com/site/vonikakis/datasets. 2017.
[58] NASA. https://dragon.larc.nasa.gov/retinex/pao/news. 2001.
[59] L. Shi. “Re-processed version of the Gehler color constancy dataset of 568 images”. http://www.cs.sfu.ca/~colour/data/. 2000.
[60] E. F. Kaasschieter. “Preconditioned conjugate gradients for solving singular systems”. In: J. Comput. Appl. Math. 24.1-2 (1988), pp. 265–275.
[61] S. Wang, J. Zheng, H.-M. Hu, and B. Li. “Naturalness preserved enhancement algorithm for non-uniform illumination images”. In: IEEE Trans. Image Process. 22.9 (2013), pp. 3538–3548.
[62] K. Ma, K. Zeng, and Z. Wang. “Perceptual quality assessment for multi-exposure image fusion”. In: IEEE Trans. Image Process. 24.11 (2015), pp. 3345–3356.
[63] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo. “Structure-revealing low-light image enhancement via robust retinex model”. In: IEEE Trans. Image Process. 27.6 (2018), pp. 2828–2841.
[64] A. Mittal, R. Soundararajan, and A. C. Bovik. “Making a ‘completely blind’ image quality analyzer”. In: IEEE Signal Process. Lett. 20.3 (2013), pp. 209–212.
[65] K. Gu, D. Tao, J.-F. Qiao, and W. Lin. “Learning a no-reference quality assessment model of enhanced images with big data”. In: IEEE Trans. Neural Netw. Learn. Syst. 29.4 (2017), pp. 1301–1313.
[66] K. Gu, G. Zhai, X. Yang, and W. Zhang. “Using free energy principle for blind image quality assessment”. In: IEEE Trans. Multimedia 17.1 (2014), pp. 50–63.
[67] W. Yang, W. Wang, H. Huang, S. Wang, and J. Liu. “Sparse gradient regularized deep retinex network for robust low-light image enhancement”. In: IEEE Trans. Image Process. 30 (2021), pp. 2072–2086.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101195-
dc.description.abstractLow-light shooting conditions degrade image quality, causing loss of detail, reduced contrast, and related problems. This dissertation proposes a new Retinex-based low-light image enhancement method that accurately decomposes the input image into a reflectance layer and an illumination layer, so that the low-light problem can be handled more effectively. Adjusting the brightness and contrast of the illumination layer then markedly improves the visual quality of the image. Because the decomposition is an optimization problem, we impose appropriate constraints within the framework to make the results more reliable.
To achieve an ideal Retinex decomposition, we introduce a nonconvex $L_p$ norm and apply shrinkage mapping to the illumination layer, further improving accuracy and stability. In addition, we use the plug-and-play technique to introduce edge-preserving filters that refine the illumination layer. For the reflectance layer, we adopt weights based on variance and image gradients to effectively suppress noise while preserving more detail.
To improve computational efficiency, we adopt the alternating direction method of multipliers (ADMM) to solve the optimization problem. Experimental results show that the method achieves excellent performance on several challenging low-light image datasets. Compared with state-of-the-art methods, our approach enhances image brightness more effectively and performs well in both subjective and objective image quality assessments, providing a new and efficient solution for low-light image enhancement.
zh_TW
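The abstract describes recovering a reflectance layer and an illumination layer and then brightening the adjusted illumination before recomposition. A minimal sketch of that recomposition step, assuming a simple gamma-style adjustment; the names `enhance`, `img`, `illum`, and the default `gamma` are illustrative, not the thesis's exact scheme:

```python
import numpy as np

def enhance(img, illum, gamma=2.2, eps=1e-3):
    """Recompose an enhanced image from a Retinex decomposition.

    img, illum: float arrays in [0, 1]; illum is the estimated
    illumination layer. Reflectance is recovered as R = I / L, and the
    illumination is brightened with a gamma curve before recomposition.
    """
    L = np.maximum(illum, eps)        # avoid division by zero in dark regions
    R = img / L                       # reflectance layer
    L_adj = L ** (1.0 / gamma)        # brightened illumination
    return np.clip(R * L_adj, 0.0, 1.0)
```

Because $L \le 1$, raising it to $1/\gamma < 1$ lifts dark regions strongly while leaving well-lit regions nearly unchanged.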
dc.description.abstractLow-light photography conditions degrade image quality. This study proposes a novel Retinex-based low-light enhancement method that correctly decomposes an input image into reflectance and illumination. We can then improve the viewing experience by adjusting the illumination with intensity and contrast enhancement. Because image decomposition is a highly ill-posed problem, constraints must be imposed appropriately on the optimization framework. To meet the criteria of ideal Retinex decomposition, we design a nonconvex $L_p$ norm and apply shrinkage mapping to the illumination layer. In addition, edge-preserving filters are introduced via the plug-and-play technique to improve the illumination. Pixel-wise weights based on variance and image gradients are adopted to suppress noise and preserve details in the reflectance layer. We choose the alternating direction method of multipliers (ADMM) to solve the problem efficiently. Experimental results on several challenging low-light datasets show that the proposed method enhances image brightness more effectively than state-of-the-art methods. Beyond subjective observations, it also achieves competitive performance in objective image quality assessments.en
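The shrinkage mapping named in the abstract can be illustrated with a Chartrand-style p-shrinkage operator, which generalizes soft thresholding (the p = 1 case) to nonconvex $L_p$ penalties. The function name and parameterization below are illustrative, not the thesis's exact operator:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Chartrand-style p-shrinkage: a proximal-type mapping for a
    nonconvex Lp penalty. With p = 1 it reduces to soft thresholding."""
    x = np.asarray(x, dtype=float)
    absx = np.abs(x)
    # guard 0 ** (p - 1), which diverges for p < 1; zero entries shrink to 0 anyway
    safe = np.where(absx > 0, absx, 1.0)
    mag = np.maximum(absx - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
    return np.sign(x) * mag
```

With p < 1, large inputs are shrunk less than under soft thresholding, which is why such mappings can preserve strong illumination edges while suppressing weak variations.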
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-12-31T16:16:52Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2025-12-31T16:16:52Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsOral Examination Committee Certification iii
Acknowledgements v
Abstract (Chinese) vii
Abstract ix
Contents xi
List of Figures xv
List of Tables xix
1 Introduction 1
1.1 Low-light Enhancement and Related Works . . . . . . . . . . . . . . . . 3
1.1.1 Histogram-equalization-based methods . . . . . . . . . . . . . . 3
1.1.2 Fusion methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Retinex-based methods . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 Dehazing-based methods . . . . . . . . . . . . . . . . . . . . . . 5
1.1.5 Learning-based methods . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Main Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Background 11
2.1 Human vision and Retinex theory . . . . . . . . . . . . . . . . . . . . . 11
2.1.1 Lambertian reflection model . . . . . . . . . . . . . . . . . . . . 12
2.1.2 Retinex theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.3 White patch assumption . . . . . . . . . . . . . . . . . . . . . . 15
2.1.4 Gray world assumption . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.5 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Variational Retinex Model . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.1 Lp regularizer and Retinex decomposition . . . . . . . . . . . . . 21
2.3 ADMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.1 Proximal mapping . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.2 ADMM on Retinex model . . . . . . . . . . . . . . . . . . . . . 26
3 Algorithm 29
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Illumination Regularizers . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.1 Reweighted L2 norm . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.2 Lp Shrinkage Mapping . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.3 Plug-and-play ADMM . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.4 A Comparative Study for Edge-preserving Filters . . . . . . . . . 36
3.2.5 Proposed illumination regularizers . . . . . . . . . . . . . . . . . 36
3.3 Weight Matrix for Reflectance . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 Proposed Retinex Model . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5 Solving Subproblems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5.1 Solving the Subproblem of R . . . . . . . . . . . . . . . . . . . . 44
3.5.2 Solving the Subproblem of L . . . . . . . . . . . . . . . . . . . . 45
3.5.3 Solving the Subproblem of T . . . . . . . . . . . . . . . . . . . . 46
3.5.4 Solving the Subproblem of S . . . . . . . . . . . . . . . . . . . . 47
3.5.5 Updating Z, Y, and μ . . . . . . . . . . . . . . . . . . . . . . . . 47
3.6 Image Filtering for Illumination . . . . . . . . . . . . . . . . . . . . . . 48
3.7 Time Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4 Experiment 53
4.1 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2 Visual Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3 Quantitative Assessments . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.3.1 Common Image Dataset . . . . . . . . . . . . . . . . . . . . . . 59
4.3.2 Analysis on each metric . . . . . . . . . . . . . . . . . . . . . . 60
4.3.3 MIT–Adobe Dataset . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.4 More comparison with DeepUPE . . . . . . . . . . . . . . . . . 64
4.4 User Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.5 Extremely low light datasets . . . . . . . . . . . . . . . . . . . . . . . . 68
5 Conclusion 75
5.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
References 79
-
dc.language.isoen-
dc.subjectLow-light image enhancement-
dc.subjectRetinex model-
dc.subjectNonconvex optimization-
dc.subjectLp shrinkage mapping-
dc.subjectPlug-and-play ADMM-
dc.subjectLow-light image enhancement-
dc.subjectRetinex model-
dc.subjectNonconvex optimization-
dc.subjectLp shrinkage-
dc.subjectPlug-and-play ADMM-
dc.titlePlug-and-Play Low-Light Image Enhancement Using Robust Retinex Decompositionzh_TW
dc.titlePlug-and-Play Low-Light Enhancement with Robust Retinex Decompositionen
dc.typeThesis-
dc.date.schoolyear114-1-
dc.description.degreeDoctoral-
dc.contributor.oralexamcommittee楊家驤;蔡佩芸;丁建均;吳沛遠zh_TW
dc.contributor.oralexamcommitteeChia-Hsiang Yang;Pei-Yun Tsai;Jian-Jiun Ding;Pei-Yuan Wuen
dc.subject.keywordLow-light image enhancement, Retinex model, Nonconvex optimization, Lp shrinkage mapping, Plug-and-play ADMMzh_TW
dc.subject.keywordLow-light image enhancement, Retinex model, Nonconvex optimization, Lp shrinkage, Plug-and-play ADMMen
dc.relation.page86-
dc.identifier.doi10.6342/NTU202504739-
dc.rights.noteAuthorized for release (open access worldwide)-
dc.date.accepted2025-12-02-
dc.contributor.author-college電機資訊學院-
dc.contributor.author-dept電子工程學研究所-
dc.date.embargo-lift2026-01-01-
Appears in Collections: Graduate Institute of Electronics Engineering

Files in This Item:
File / Size / Format
ntu-114-1.pdf / 53.39 MB / Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.

Contact Information
No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei 10617, Taiwan (R.O.C.)
Tel: (02)33662353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved