DSpace

NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Photonics and Optoelectronics
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90984
Full metadata record

DC field: value (language)
dc.contributor.advisor: 黃升龍 (zh_TW)
dc.contributor.advisor: Sheng-Lung Huang (en)
dc.contributor.author: 劉智皓 (zh_TW)
dc.contributor.author: Chih-Hao Liu (en)
dc.date.accessioned: 2023-10-24T16:36:39Z
dc.date.available: 2025-08-01
dc.date.copyright: 2023-10-24
dc.date.issued: 2023
dc.date.submitted: 2023-07-26
dc.identifier.citation:
Bancroft, J. D., & Gamble, M. (2008). Theory and practice of histological techniques. Elsevier Health Sciences.
Huang, D., Swanson, E. A., Lin, C. P., Schuman, J. S., Stinson, W. G., Chang, W., Hee, M. R., Flotte, T., Gregory, K., & Puliafito, C. A. (1991). Optical coherence tomography. Science (New York, N.Y.), 254(5035), 1178–1181.
Drexler, W., Morgner, U., Ghanta, R. K., Kärtner, F. X., Schuman, J. S., & Fujimoto, J. G. (2001). Ultrahigh-resolution ophthalmic optical coherence tomography. Nature Medicine, 7(4), 502–507.
Spaide, R. F., Fujimoto, J. G., Waheed, N. K., Sadda, S. R., & Staurenghi, G. (2018). Optical coherence tomography angiography. Progress in Retinal and Eye Research, 64, 1–55.
Mogensen, M., Joergensen, T. M., Nürnberg, B. M., Morsy, H. A., Thomsen, J. B., Thrane, L., & Jemec, G. B. E. (2009). Assessment of optical coherence tomography imaging in the diagnosis of non-melanoma skin cancer and benign lesions versus normal skin: observer-blinded evaluation by dermatologists and pathologists. Dermatologic Surgery, 35(6), 965–972.
Brezinski, M. E., Pitris, C., & Fujimoto, J. G. (1998). Optical coherence tomography for neurosurgical imaging of human intracortical melanoma. Neurosurgery, 43(4).
Strasswimmer, J., Pierce, M. C., Park, B. H., Neel, V., & De Boer, J. F. (2004). Polarization-sensitive optical coherence tomography of invasive basal cell carcinoma. Journal of Biomedical Optics, 9(2), 292–298.
Gambichler, T., Orlikov, A., Vasa, R., Moussa, G., Hoffmann, K., Stücker, M., Altmeyer, P., & Bechara, F. G. (2007). In vivo optical coherence tomography of basal cell carcinoma. Journal of dermatological science, 45(3), 167–173.
Ogien, J., Levecq, O., Azimani, H., & Dubois, A. (2020). Dual-mode line-field confocal optical coherence tomography for ultrahigh-resolution vertical and horizontal section imaging of human skin in vivo. Biomedical Optics Express, 11(3), 1327–1335.
LeCun, Y., & Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10), 1995.
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273–297.
Breiman, L. (2001). Random forests. Machine Learning, 45, 5–32.
Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11), 559–572.
Lloyd, S. (1982). Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2), 129–137.
Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (pp. 291-316).
Reynolds, D.A. (2009). Gaussian mixture models. Encyclopedia of Biometrics.
Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
Torgerson, W. S. (1952). Multidimensional scaling: I. Theory and method. Psychometrika, 17(4), 401–419.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Werbos, P. J. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550–1560.
Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009). Curriculum learning. Proceedings of the 26th Annual International Conference on Machine Learning, 41–48.
Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. Proceedings of the Eleventh Annual Conference on Computational Learning Theory, 92–100.
Grandvalet, Y., & Bengio, Y. (2004). Semi-supervised learning by entropy minimization. Advances in Neural Information Processing Systems, 17.
Xie, Q., Luong, M.-T., Hovy, E., & Le, Q. V. (2020). Self-training with noisy student improves ImageNet classification. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10687–10698.
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020, November). A simple framework for contrastive learning of visual representations. In International conference on machine learning (pp. 1597-1607). PMLR.
Finn, C., Abbeel, P., & Levine, S. (2017, July). Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning (pp. 1126-1135). PMLR.
Nichol, A., Achiam, J., & Schulman, J. (2018). On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.
Elkan, C. (2001). The foundations of cost-sensitive learning. International Joint Conference on Artificial Intelligence, 17, 973–978. Lawrence Erlbaum Associates Ltd.
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2818–2826.
Merchant, F., & Castleman, K. (2022). Microscope image processing. Academic press.
Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1), 62–66.
Beucher, S., & Meyer, F. (2018). The morphological approach to segmentation: the watershed transformation. In Mathematical morphology in image processing (pp. 433–481). CRC Press.
Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4), 321–331.
Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 (pp. 234-241). Springer International Publishing.
Falk, T., Mai, D., Bensch, R., Çiçek, Ö., Abdulkadir, A., Marrakchi, Y., Böhm, A., Deubner, J., Jäckel, Z., Seiwald, K., Dovzhenko, A., Tietz, O., Dal Bosco, C., Walsh, S., Saltukoglu, D., Tay, T. L., Prinz, M., Palme, K., Simons, M., Diester, I., Brox, T., & Ronneberger, O. (2019). U-Net: deep learning for cell counting, detection, and morphometry. Nature methods, 16(1), 67–70.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2017). Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834–848.
Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2117–2125.
Zhao, H., Shi, J., Qi, X., Wang, X., & Jia, J. (2017). Pyramid scene parsing network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2881–2890.
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., & Ronneberger, O. (2016). 3D U-Net: learning dense volumetric segmentation from sparse annotation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19 (pp. 424-432). Springer International Publishing.
Chen, Tao, Ma, K.-K., & Chen, L.-H. (1999). Tri-state median filter for image denoising. IEEE Transactions on Image Processing, 8(12), 1834–1838.
Buades, A., Coll, B., & Morel, J. M. (2011). Non-local means denoising. Image Processing On Line, 1, 208-212.
Donoho, D. L., & Johnstone, I. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3), 425–455.
Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A. (2008, July). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning (pp. 1096-1103).
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems (p. 2672-2680).
Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Dinh, L., Krueger, D., & Bengio, Y. (2014). NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516.
Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851.
Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2921-2929).
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision (pp. 618-626).
Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018, March). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV) (pp. 839-847). IEEE.
Vabre, L., Dubois, A., & Boccara, A. C. (2002). Thermal-light full-field optical coherence tomography. Optics Letters, 27(7), 530–532.
Dubois, A., Grieve, K., Moneron, G., Lecaque, R., Vabre, L., & Boccara, C. (2004). Ultrahigh-resolution full-field optical coherence tomography. Applied Optics, 43(14), 2874–2883.
Malitson, I. H. (1965). Interspecimen comparison of the refractive index of fused silica. JOSA, 55(10), 1205–1209.
Gabel, V.-P., Kampik, A., & Burkhardt, J. (1987). Analysis of intraocularly applied silicone oils of various origins. Graefe’s Archive for Clinical and Experimental Ophthalmology, 225, 160–162.
Ding, H., Lu, J. Q., Wooden, W. A., Kragel, P. J., & Hu, X.-H. (2006). Refractive indices of human skin tissues at eight wavelengths and estimated dispersion relations between 300 and 1600 nm. Physics in Medicine & Biology, 51(6), 1479.
Tsai, C.-C., Chang, C.-K., Hsu, K.-Y., Ho, T.-S., Lin, M.-Y., Tjiu, J.-W., & Huang, S.-L. (2014). Full-depth epidermis tomography using a Mirau-based full-field optical coherence tomography. Biomedical Optics Express, 5(9), 3001–3010.
Dubois, A., Vabre, L., Boccara, A.-C., & Beaurepaire, E. (2002). High-resolution full-field optical coherence tomography with a Linnik microscope. Applied Optics, 41(4), 805–812.
Hamilton, D. S., Gayen, S. K., Pogatshnik, G. J., Ghen, R. D., & Miniscalco, W. J. (1989). Optical-absorption and photoionization measurements from the excited states of Ce3+: Y3Al5O12. Physical Review B, 39(13), 8807.
Tsai, C. C., Chen, T. H., Lin, Y. S., Wang, Y. T., Chang, W., Hsu, K. Y., Chang, Y. H., Hsu, P. K., Jheng, D. Y., Huang, K. Y., Sun, E., & Huang, S. L. (2010). Ce3+:YAG double-clad crystal-fiber-based optical coherence tomography on fish cornea. Optics letters, 35(6).
Hsu, K.-Y., Jheng, D.-Y., Liao, Y.-H., Ho, T.-S., Lai, C.-C., & Huang, S.-L. (2012). Diode-laser-pumped glass-clad Ti:sapphire crystal-fiber-based broadband light source. IEEE Photonics Technology Letters, 24(10), 854–856.
Wang, S.-C. (2016). Development and Applications of Glass-clad Ti:Al2O3 Crystal Fiber. (Doctoral dissertation, National Taiwan University).
Ebling, F. J. G., & Montagna, W. (n.d.). Human skin.
Ramadon, D., McCrudden, M. T. C., Courtenay, A. J., & Donnelly, R. F. (2021). Enhancement strategies for transdermal drug delivery systems: Current trends and applications. Drug Delivery and Translational Research, 1–34.
Kojima, H., Seidle, T., & Spielmann, H. (2019). Alternatives to Animal Testing: Proceedings of Asian Congress 2016 (p. 130). Springer Nature.
Emanuel, P., & Cheng, H. (n.d.-a). Eczema pathology.
Katz, S. I. (2006). National Institute of Arthritis and Musculoskeletal and Skin Diseases. Strategic Plan for Reducing Health Disparities.
Tollefson, M. M., Bruckner, A. L., & Section On Dermatology (2014). Atopic dermatitis: skin-directed management. Pediatrics, 134(6), e1735–e1744.
Grey, K., & Maguiness, S. (2016). Atopic dermatitis: update for pediatricians. Pediatric Annals, 45(8), e280–e286.
Soter, N. A. (1989). Morphology of atopic eczema. Allergy, 44, 16–19.
Mihm, M. C., Jr, Soter, N. A., Dvorak, H. F., & Austen, K. F. (1976). The structure of normal skin and the morphology of atopic eczema. Journal of Investigative Dermatology, 67(3), 305–312.
Kazlouskaya, V., & Collins, M.-K. (n.d.). Psoriasis.
Menter, A., Gottlieb, A., Feldman, S. R., Van Voorhees, A. S., Leonardi, C. L., Gordon, K. B., Lebwohl, M., Koo, J. Y., Elmets, C. A., Korman, N. J., Beutner, K. R., & Bhushan, R. (2008). Guidelines of care for the management of psoriasis and psoriatic arthritis: Section 1. Overview of psoriasis and guidelines of care for the treatment of psoriasis with biologics. Journal of the American Academy of Dermatology, 58(5), 826–850.
Palfreeman, A. C., McNamee, K. E., & McCann, F. E. (2013). New developments in the management of psoriasis and psoriatic arthritis: a focus on apremilast. Drug Design, Development and Therapy, 201–210.
Weigle, N., & McBane, S. (2013). Psoriasis. American Family Physician, 87(9), 626.
Raychaudhuri, S. K., Maverakis, E., & Raychaudhuri, S. P. (2014). Diagnosis and classification of psoriasis. Autoimmunity Reviews, 13(4–5), 490–495.
Kimmel, G. W., & Lebwohl, M. (2018). Psoriasis: overview and diagnosis. Evidence-Based Psoriasis: Diagnosis and Treatment, 1-16.
Chan, B. (n.d.). Solar lentigo.
Emanuel, P., & Cheng, H. (n.d.-b). Solar lentigo pathology.
Rafal, E. S., Griffiths, C. E. M., Ditre, C. M., Finkel, L. J., Hamilton, T. A., Ellis, C. N., & Voorhees, J. J. (1992). Topical tretinoin (retinoic acid) treatment for liver spots associated with photodamage. New England Journal of Medicine, 326(6), 368–374.
Andersen, W. K., Labadie, R. R., & Bhawan, J. (1997). Histopathology of solar lentigines of the face: a quantitative study. Journal of the American Academy of Dermatology, 36(3), 444–447.
Cardinali, G., Kovacs, D., & Picardo, M. (2012, December). Mechanisms underlying post-inflammatory hyperpigmentation: lessons from solar lentigo. In Annales de Dermatologie et de Vénéréologie (Vol. 139, pp. S148-S152). Elsevier Masson.
Sanchez, N. P., Pathak, M. A., Sato, S., Fitzpatrick, T. B., Sanchez, J. L., & Mihm, M. C., Jr. (1981). Melasma: a clinical, light microscopic, ultrastructural, and immunofluorescence study. Journal of the American Academy of Dermatology, 4(6), 698–710.
Achar, A., & Rathi, S. K. (2011). Melasma: a clinico-epidemiological study of 312 cases. Indian Journal of Dermatology, 56(4), 380.
Bagherani, N., Gianfaldoni, S., & Smoller, B. (2015). An overview on melasma. Pigmentary Disorders, 2(10), 218.
Sarvjot, V., Sharma, S., Mishra, S., & Singh, A. (2009). Melasma: a clinicopathological study of 43 cases. Indian journal of pathology & microbiology, 52(3), 357–359.
Goldsmith, L. A., Katz, S. I., Gilchrest, B. A., Paller, A. S., Leffell, D. J., & Wolff, K. (2012). Fitzpatrick’s Dermatology in General Medicine, 8e. McGraw-Hill Medical, 2421-2429.
Alikhan, A., Felsten, L. M., Daly, M., & Petronic-Rosic, V. (2011). Vitiligo: a comprehensive overview: part I. Introduction, epidemiology, quality of life, diagnosis, differential diagnosis, associations, histopathology, etiology, and work-up. Journal of the American Academy of Dermatology, 65(3), 473–491.
Grimes, P. E. (2017). Vitiligo: Management and prognosis. UpToDate. Tsao H.(Ed).
Happle, R. (1995). What is a nevus. Dermatology, 191(1), 1–5.
Sober, A. J., & Burstein, J. M. (1995). Precursors to skin cancer. Cancer, 75(S2), 645–650.
Emanuel, P., & Cheng, H. (n.d.-c). Melanocytic nevus pathology.
Kincannon, J., & Boutzale, C. (1999). The physiology of pigmented nevi. Pediatrics, 104(Supplement_5), 1042-1045.
Hoang, M. P., & Mihm Jr, M. C. (2014). Dysplastic (Atypical) Nevi. In Melanocytic Lesions: A Case Based Approach (pp. 205-221). New York, NY: Springer New York.
Hafner, C., & Vogt, T. (2008). Seborrheic keratosis. JDDG: Journal Der Deutschen Dermatologischen Gesellschaft, 6(8), 664–677.
Foster, C., & Tallon, B. (n.d.). Seborrhoeic keratosis pathology.
Minagawa, A. (2017). Dermoscopy--pathology relationship in seborrheic keratosis. The Journal of Dermatology, 44(5), 518–524.
Takahashi, K., Mulliken, J. B., Kozakewich, H. P., Rogers, R. A., Folkman, J., & Ezekowitz, R. A. (1994). Cellular markers that distinguish the phases of hemangioma during infancy and childhood. The Journal of clinical investigation, 93(6), 2357–2364.
Léauté-Labrèze, C., Harper, J. I., & Hoeger, P. H. (2017). Infantile haemangioma. The Lancet, 390(10089), 85–94.
Chundriger, Q., & Ud Din, N. (n.d.). Hemangioma & variants.
Prajapati, V., & Barankin, B. (2008). Dermacase. Actinic keratosis. Canadian Family Physician Medecin de Famille Canadien, 54(5), 691–699.
Fernández-Figueras, M.-T., & Muñoz, N. P. (n.d.). Actinic keratosis.
Weedon, D. (2009). Weedon's skin pathology e-book: expert consult-online and print. Elsevier Health Sciences.
Cockerell, C. J. (2000). Histopathology of incipient intraepidermal squamous cell carcinoma (“actinic keratosis”). Journal of the American Academy of Dermatology, 42(1), S11–S17.
Bath-Hextall, F. J., Matin, R. N., Wilkinson, D., & Leonardi-Bee, J. (2013). Interventions for cutaneous Bowen’s disease. Cochrane Database of Systematic Reviews, (6).
Cox, N. H. (1994). Body site distribution of Bowen’s disease. British Journal of Dermatology, 130(6), 714–716.
Hale, C. S. (n.d.). Squamous cell carcinoma in situ / Bowen disease.
Yeh, S., How, S. W., & Lin, C. S. (1968). Arsenical cancer of skin: Histologic study with special reference to Bowen’s disease. Cancer, 21(2), 312–339.
Shain, A. H., & Bastian, B. C. (2016). From melanocytes to melanomas. Nature Reviews Cancer, 16(6), 345–358.
Titus-Ernstoff, L., Perry, A. E., Spencer, S. K., Gibson, J. J., Cole, B. F., & Ernstoff, M. S. (2005). Pigmentary characteristics and moles in relation to melanoma risk. International Journal of Cancer, 116(1), 144–149.
Emanuel, P., & Cheng, H. (n.d.-d). Melanoma pathology.
Barnhill, R. L., Piepkorn, M., & Busam, K. J. (2004). Pathology of melanocytic nevi and malignant melanoma. Springer Science & Business Media.
Elder, D. E. (2006). Pathology of melanoma. Clinical Cancer Research, 12(7), 2308s–2311s.
Rubin, A. I., Chen, E. H., & Ratner, D. (2005). Basal-cell carcinoma. New England Journal of Medicine, 353(21), 2262–2269.
Kalmykova, A. (n.d.). Basal cell carcinoma.
Farmer, E. R., & Helwig, E. B. (1980). Metastatic basal cell carcinoma: a clinicopathologic study of seventeen cases. Cancer, 46(4), 748–757.
Cameron, M. C., Lee, E., Hibler, B. P., Barker, C. A., Mori, S., Cordova, M., Nehal, K. S., & Rossi, A. M. (2019). Basal cell carcinoma: Epidemiology; pathophysiology; clinical and histological subtypes; and disease associations. Journal of the American Academy of Dermatology, 80(2), 303–317.
Alam, M., & Ratner, D. (2001). Cutaneous squamous-cell carcinoma. New England Journal of Medicine, 344(13), 975–983.
Bernstein, S. C., Lim, K. K., Brodland, D. G., & Heidelberg, K. A. (1996). The many faces of squamous cell carcinoma. Dermatologic Surgery, 22(3), 243–254.
Guenthner, S. T., Hurwitz, R. M., Buckel, L. J., & Gray, H. R. (1999). Cutaneous squamous cell carcinomas consistently show histologic evidence of in situ changes: a clinicopathologic correlation. Journal of the American Academy of Dermatology, 41(3), 443–448.
Thompson, L. D. R. (2003). Squamous cell carcinoma variants of the head and neck. Current Diagnostic Pathology, 9(6), 384–396.
Turnbull, N., & Emanual, P. (n.d.). Squamous cell carcinoma pathology.
Menon, A. K., Jayasumana, S., Rawat, A. S., Jain, H., Veit, A., & Kumar, S. (2020). Long-tail learning via logit adjustment. arXiv preprint arXiv:2007.07314.
Lin, Z., Sun, J., Davis, A., & Snavely, N. (2020). Visual chirality. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12295–12303.
Futrega, M., Milesi, A., Marcinkiewicz, M., & Ribalta, P. (2022, July). Optimized U-Net for brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 7th International Workshop, BrainLes 2021, Held in Conjunction with MICCAI 2021, Virtual Event, September 27, 2021, Revised Selected Papers, Part II (pp. 15-29). Cham: Springer International Publishing.
Zhu, Q., Du, B., Turkbey, B., Choyke, P. L., & Yan, P. (2017, May). Deeply-supervised CNN for prostate segmentation. In 2017 international joint conference on neural networks (IJCNN) (pp. 178-184). IEEE.
Milletari, F., Navab, N., & Ahmadi, S. A. (2016, October). V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 fourth international conference on 3D vision (3DV) (pp. 565-571). IEEE.
Rukundo, O., & Cao, H. (2012). Nearest neighbor value interpolation. arXiv preprint arXiv:1211.1768.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Wang, Y. J., Huang, Y. K., Wang, J. Y., & Wu, Y. H. (2019). In vivo characterization of large cell acanthoma by cellular resolution optical coherent tomography. Photodiagnosis and photodynamic therapy, 26, 199-202.
Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ADE20K dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 633-641).
Everingham, M., Eslami, S. A., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2015). The pascal visual object classes challenge: A retrospective. International journal of computer vision, 111, 98-136.
Gondara, L. (2016, December). Medical image denoising using convolutional denoising autoencoders. In 2016 IEEE 16th international conference on data mining workshops (ICDMW) (pp. 241-246). IEEE.
Yi, X., & Babyn, P. (2018). Sharpness-aware low-dose CT denoising using conditional generative adversarial network. Journal of digital imaging, 31, 655-669.
Zhou, L., Schaefferkoetter, J. D., Tham, I. W., Huang, G., & Yan, J. (2020). Supervised learning with CycleGAN for low-dose FDG PET image denoising. Medical image analysis, 65, 101770.
Winetraub, Y., Yuan, E., Terem, I., Yu, C., Mao, M., Megan, M., Yu, J., Sarin, K., Aasi, S., Chan, W., Hua, V., Lautman, Z., Shevidi, S., Blankenberg, E., Diep, A., Do, H., Chu, S., de la Zerda, A., & Rieger, K. (2021, March). Non-invasive virtual biopsy using optical coherence tomography. In Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XXV (Vol. 11630, p. 116300X). SPIE.
Terem, I., Winetraub, Y., Yuan, E., Yu, C., Mao, M., Hong, M., Yu, J, Sarin, K., Aasi, S., Rieger, K., Chan, W., Hua, V., Lautman, Z., Shevidi, S., Blankenberg, E., Diep, A., Do, H., Chu, S., & de la Zerda, A. (2021, March). High resolution slice to volume alignment of 2D histopathology to 3D optical coherence tomography (OCT) images. In Optical Biopsy XIX: Toward Real-Time Spectroscopic Imaging and Diagnosis (Vol. 11636, p. 1163609). SPIE.
Li, J., Garfinkel, J., Zhang, X., Wu, D., Zhang, Y., De Haan, K., Wang, H., Liu, T., Bai, B., Rivenson, Y., Rubinstein, G., O Scumpia, P., & Ozcan, A. (2021). Biopsy-free in vivo virtual histology of skin using deep learning. Light: Science & Applications, 10(1), 233.
Tsai, S. T., Chan, C. C., Chen, H. H., Tjiu, J. W., & Huang, S. L. (2020, April). Segmentation based OCT Image to H&E-like image conversion. In Microscopy Histopathology and Analytics (pp. MM3A-5). Optica Publishing Group.
Chu, C., Zhmoginov, A., & Sandler, M. (2017). CycleGAN, a master of steganography. arXiv preprint arXiv:1712.02950.
Bashkirova, D., Usman, B., & Saenko, K. (2019). Adversarial self-defense for cycle-consistent GANs. Advances in Neural Information Processing Systems, 32.
Kim, T., Cha, M., Kim, H., Lee, J. K., & Kim, J. (2017, July). Learning to discover cross-domain relations with generative adversarial networks. In International conference on machine learning (pp. 1857-1865). PMLR.
Yi, Z., Zhang, H., Tan, P., & Gong, M. (2017). DualGAN: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE international conference on computer vision (pp. 2849-2857).
Liu, M. Y., Breuel, T., & Kautz, J. (2017). Unsupervised image-to-image translation networks. Advances in neural information processing systems, 30.
Huang, X., Liu, M. Y., Belongie, S., & Kautz, J. (2018). Multimodal unsupervised image-to-image translation. In Proceedings of the European conference on computer vision (ECCV) (pp. 172-189).
Lee, H. Y., Tseng, H. Y., Huang, J. B., Singh, M., & Yang, M. H. (2018). Diverse image-to-image translation via disentangled representations. In Proceedings of the European conference on computer vision (ECCV) (pp. 35-51).
Kim, J., Kim, M., Kang, H., & Lee, K. (2019). U-GAT-IT: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. arXiv preprint arXiv:1907.10830.
Park, T., Efros, A. A., Zhang, R., & Zhu, J. Y. (2020). Contrastive learning for unpaired image-to-image translation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX 16 (pp. 319-345). Springer International Publishing.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30.
Bińkowski, M., Sutherland, D. J., Arbel, M., & Gretton, A. (2018). Demystifying MMD GANs. arXiv preprint arXiv:1801.01401.
Chen, Tianqi, & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, 785–794.
Weigert, M., Schmidt, U., Haase, R., Sugawara, K., & Myers, G. (2020). Star-convex polyhedra for 3D object detection and segmentation in microscopy. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 3666–3673).
Schmidt, U., Weigert, M., Broaddus, C., & Myers, G. (2018). Cell detection with star-convex polygons. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II 11 (pp. 265–273).
Fischman, S., Pérez-Anker, J., Tognetti, L., Di Naro, A., Suppa, M., Cinotti, E., Viel, T., Monnier, J., Rubegni, P., Del Marmol, V., Malvehy, J., Puig, S., Dubois, A., & Perrot, J. L. (2022). Non-invasive scoring of cellular atypia in keratinocyte cancers in 3D LC-OCT images using deep learning. Scientific reports, 12(1), 481.
Mittal, A., Moorthy, A. K., & Bovik, A. C. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on image processing, 21(12), 4695-4708.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144).
Arjovsky, M., Chintala, S., & Bottou, L. (2017, July). Wasserstein generative adversarial networks. In International conference on machine learning (pp. 214-223). PMLR.
Larsen, A. B. L., Sønderby, S. K., Larochelle, H., & Winther, O. (2016, June). Autoencoding beyond pixels using a learned similarity metric. In International conference on machine learning (pp. 1558-1566). PMLR.
He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9729-9738).
Ho, T. S., Tsai, M. R., Lu, C. W., Chang, H. S., & Huang, S. L. (2021). Mirau-type full-field optical coherence tomography with switchable partially spatially coherent illumination modes. Biomedical optics express, 12(5), 2670–2683.
Sadiq, S., & Indulska, M. (2017). Open data: Quality over quantity. International journal of information management, 37(3), 150-154.
Cai, L., & Zhu, Y. (2015). The challenges of data quality and data quality assessment in the big data era. Data science journal, 14, 2-2.
Van Den Oord, A., & Vinyals, O. (2017). Neural discrete representation learning. Advances in neural information processing systems, 30.
Apelian, C., Harms, F., Thouvenin, O., & Boccara, A. C. (2016). Dynamic full field optical coherence tomography: subcellular metabolic contrast revealed in tissues by interferometric signals temporal analysis. Biomedical optics express, 7(4), 1511-1524.
Yasuno, Y., Madjarova, V. D., Makita, S., Akiba, M., Morosawa, A., Chong, C., Sakai, T., Chan, K. P., Itoh, M., & Yatagai, T. (2005). Three-dimensional and high-speed swept-source optical coherence tomography for in vivo investigation of human anterior eye segments. Optics express, 13(26), 10652–10664.
Settles, B. (2011, April). From theories to queries: Active learning in practice. In Active learning and experimental design workshop in conjunction with AISTATS 2010 (pp. 1-18). JMLR Workshop and Conference Proceedings.
Crowther, J. M., Sieg, A., Blenkiron, P., Marcott, C., Matts, P. J., Kaczvinsky, J. R., & Rawlings, A. V. (2008). Measuring the effects of topical moisturizers on changes in stratum corneum thickness, water gradients and hydration in vivo. The British journal of dermatology, 159(3), 567–577.
Sandby-Møller, J., Poulsen, T., & Wulf, H. C. (2003). Epidermal thickness at different body sites: relationship to age, gender, pigmentation, blood content, skin type and smoking habits. Acta dermato-venereologica, 83(6), 410–413.
Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE transactions on image processing, 26(7), 3142-3155.
Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., & Ng, A. Y. (2011). Multimodal deep learning. In Proceedings of the 28th international conference on machine learning (ICML-11) (pp. 689-696).
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., & Sutskever, I. (2021). Learning transferable visual models from natural language supervision. International Conference on Machine Learning.
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125.
OpenAI (2023). GPT-4 technical report. arXiv:2303.08774.
Wang, L., Jacques, S., & Zheng, L. (1995). MCML—Monte Carlo modeling of light transport in multi-layered tissues. Computer methods and programs in biomedicine, 47(2), 131–146.
Toublanc, D. (1996). Henyey–Greenstein and Mie phase functions in Monte Carlo radiative transfer computations. Applied optics, 35(18), 3270–3274.
Frolov, S., Potlov, A., Petrov, D., & Proskurin, S. (2017). Monte-Carlo simulation of OCT structural images of human skin using experimental B-scans and voxel based approach to optical properties distribution. In Saratov Fall Meeting 2016: Optical Technologies in Biophysics and Medicine XVIII (pp. 237–244).
Zhao, S. (2016). Advanced Monte Carlo simulation and machine learning for frequency domain optical coherence tomography. (Doctoral dissertation, California Institute of Technology).
Chang, C. K., & Huang, S. L. (2018, April). Nucleus and Cytoplasm Segmentation using Full-Field Optical Coherence Tomography. In Microscopy Histopathology and Analytics (pp. MF3A-2). Optica Publishing Group.
Chang, C. K., (2018). Quantitative analyses of human skin using full-field optical coherence tomography. (Doctoral dissertation, National Taiwan University).
Ulrich, M., Themstrup, L., de Carvalho, N., Manfredi, M., Grana, C., Ciardo, S., Kästle, R., Holmes, J., Whitehead, R., Jemec, G. B., Pellacani, G., & Welzel, J. (2016). Dynamic Optical Coherence Tomography in Dermatology. Dermatology (Basel, Switzerland), 232(3), 298–311.
Park, S., Nguyen, T., Benoit, E., Sackett, D. L., Garmendia-Cedillos, M., Pursley, R., Boccara, C., & Gandjbakhche, A. (2021). Quantitative evaluation of the dynamic activity of HeLa cells in different viability states using dynamic full-field optical coherence microscopy. Biomedical optics express, 12(10), 6431–6441.
Scholler, J., Groux, K., Goureau, O., Sahel, J. A., Fink, M., Reichman, S., Boccara, C., & Grieve, K. (2020). Dynamic full-field optical coherence tomography: 3D live-imaging of retinal organoids. Light, science & applications, 9, 140.
Deng, K., Yang, G., Ramanan, D., & Zhu, J. Y. (2023). 3d-aware conditional image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4434-4445).
Balakrishnan, G., Zhao, A., Sabuncu, M. R., Guttag, J., & Dalca, A. V. (2019). VoxelMorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging, 38(8), 1788-1800.
Ramos-Vara, J. A. (2005). Technical aspects of immunohistochemistry. Veterinary pathology, 42(4), 405-426.
O'Brien, T. P., Feder, N., & McCully, M. E. (1964). Polychromatic staining of plant cell walls by toluidine blue O. Protoplasma, 59, 368-373.
Choi, Y., Choi, M., Kim, M., Ha, J. W., Kim, S., & Choo, J. (2018). Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8789-8797).
Haghighi, F., Taher, M. R. H., Gotway, M. B., & Liang, J. (2022). DiRA: Discriminative, restorative, and adversarial learning for self-supervised medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20824-20834).
Maron, O., & Lozano-Pérez, T. (1997). A framework for multiple-instance learning. Advances in neural information processing systems, 10.
Li, B., Li, Y., & Eliceiri, K. W. (2021). Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14318-14328).
Ilse, M., Tomczak, J., & Welling, M. (2018, July). Attention-based deep multiple instance learning. In International conference on machine learning (pp. 2127-2136). PMLR.
Yang, L., Zhang, Y., Chen, J., Zhang, S., & Chen, D. Z. (2017). Suggestive annotation: A deep active learning framework for biomedical image segmentation. In Medical Image Computing and Computer Assisted Intervention− MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part III 20 (pp. 399-407). Springer International Publishing.
McCloskey, M., & Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation (Vol. 24, pp. 109-165). Academic Press.
French, R. M. (1999). Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4), 128-135.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., & Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114(13), 3521–3526.
Pascanu, R., & Bengio, Y. (2013). Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584.
Van Rossum, G., & Drake, F. L. (2009). Python 3 Reference Manual. Scotts Valley, CA: CreateSpace.
Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S., Rueden, C., Saalfeld, S., Schmid, B., Tinevez, J. Y., White, D. J., Hartenstein, V., Eliceiri, K., Tomancak, P., & Cardona, A. (2012). Fiji: an open-source platform for biological-image analysis. Nature methods, 9(7), 676–682.
Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N. J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M. H., Brett, M., Haldane, A., Del Río, J. F., Wiebe, M., Peterson, P., Gérard-Marchant, P., … Oliphant, T. E. (2020). Array programming with NumPy. Nature, 585(7825), 357–362.
McKinney, W., & others. (2010). Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference (Vol. 445, pp. 51–56).
Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., … SciPy 1.0 Contributors (2020). SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature methods, 17(3), 261–272.
Hunter, J. D. (2007). Matplotlib: A 2D graphics environment. Computing in science & engineering, 9(03), 90-95.
van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., Gouillart, E., Yu, T., & scikit-image contributors (2014). scikit-image: image processing in Python. PeerJ, 2, e453.
Bradski, G. (2000). The openCV library. Dr. Dobb's Journal: Software Tools for the Professional Programmer, 25(11), 120-123.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830.
Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., & Lee, S. I. (2020). From local explanations to global understanding with explainable AI for trees. Nature machine intelligence, 2(1), 56-67.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., & Chintala, S. (2019). Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
Milesial (2020) U-Net: Semantic segmentation with PyTorch [Source code]. https://github.com/milesial/Pytorch-UNet.
Zhu, J. Y. (2017) CycleGAN and pix2pix in PyTorch [Source code]. https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
Linder-Norén, E. (2018) PyTorch-GAN [Source code]. https://github.com/eriklindernoren/PyTorch-GAN.
Lee, H. Y. (2018) DRIT++: Diverse Image-to-Image Translation via Disentangled Representations [Source code]. https://github.com/HsinYingLee/DRIT.
Nvidia (2018) MUNIT: Multimodal UNsupervised Image-to-image Translation [Source code]. https://github.com/NVlabs/MUNIT.
Kang, H. (2019) U-GAT-IT — Official PyTorch Implementation [Source code]. https://github.com/znxlwm/UGATIT-pytorch.
Park, T. and Zhu, J. Y. (2020) Contrastive Unpaired Translation [Source code]. https://github.com/taesungp/contrastive-unpaired-translation.
Obukhov, A., Seitzer, M., Wu, P. W., Zhydenko, S., Kyl, J., & Lin, Y. J. (2020). High-fidelity performance metrics for generative models in PyTorch [Source code]. https://github.com/toshas/torch-fidelity
Silva, T. (2020) PyTorch SimCLR: A Simple Framework for Contrastive Learning of Visual Representations [Source code]. https://github.com/sthalles/SimCLR
Huang, G. (2018) Reptile pytorch [Source code]. https://github.com/gabrielhuang/reptile-pytorch
Gildenblat, J. (2021) Advanced AI explainability for PyTorch [Source code]. https://github.com/jacobgil/pytorch-grad-cam
TorchVision (2016) ImageNet pre-trained ResNet-18. https://pytorch.org/vision/main/models/generated/torchvision.models.resnet18.html#torchvision.models.resnet18
Distributed Machine Learning Community (2019) eXtreme Gradient Boosting [Source code]. https://github.com/dmlc/xgboost
TorchVision (2016) ImageNet pre-trained Inception V3. https://pytorch.org/vision/main/models/generated/torchvision.models.inception_v3.html#torchvision.models.inception_v3
Wolny, A (2018) PyTorch 3D U-Net [Source code]. https://github.com/wolny/pytorch-3dunet.
Hu, B. (2019) DICE loss for PyTorch [Source code]. https://github.com/hubutui/DiceLoss-PyTorch
Jayasumana, S. (2021) PyTorch implementation of the paper: Long-tail Learning via Logit Adjustment [Source code]. https://github.com/Chumsy0725/logit-adj-pytorch
Bukalapak (2018) Python implementation of BRISQUE Image Quality Assessment [Source code]. https://github.com/bukalapak/pybrisque
Tsoumakas, G., & Katakis, I. (2007). Multi-label classification: An overview. International Journal of Data Warehousing and Mining (IJDWM), 3(3), 1-13.
Zhang, M. L., & Zhou, Z. H. (2013). A review on multi-label learning algorithms. IEEE transactions on knowledge and data engineering, 26(8), 1819-1837.
Lee, P. H., Chan, C. C., Huang, S. L., Chen, A., & Chen, H. H. (2018). Extracting blood vessels from full-field OCT data of human skin by short-time RPCA. IEEE transactions on medical imaging, 37(8), 1899-1909.
Shimoda, W., & Yanai, K. (2019). Self-supervised difference detection for weakly-supervised semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 5208-5217).
Bertinetto, L., Valmadre, J., Henriques, J. F., Vedaldi, A., & Torr, P. H. (2016). Fully-convolutional siamese networks for object tracking. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part II 14 (pp. 850-865). Springer International Publishing.
Gessert, N., Nielsen, M., Shaikh, M., Werner, R., & Schlaefer, A. (2020). Skin lesion classification using ensembles of multi-resolution EfficientNets with meta data. MethodsX, 7, 100864.
Settles, B. (2009). Active learning literature survey (Computer Sciences Technical Report 1648). University of Wisconsin–Madison.
Dögnitz, N., & Wagnières, G. (1998). Determination of tissue optical properties by steady-state spatial frequency-domain reflectometry. Lasers in medical science, 13, 55-65.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90984-
dc.description.abstract醫學影像分析不論是在醫學、工業還是學術界一向為一個錯綜複雜的問題。以定量和定性地解釋醫學影像需要長期的經驗和專業知識的累積。以光學同調斷層掃描(Optical coherence tomography, OCT)影像而言,其具備非侵入性和高速診斷的優勢,甚至近幾年已達到細胞級解析度的水準。然而在OCT影像中,對人體皮膚結構和病變的研究仍然有限,所以本論文的目標是通過開發一系列深度學習演算法和模型架構,來探索微米解析度等級的人體皮膚OCT影像。

為了能夠更系統性的分析和理解,本論文總共劃分成四個大項。第一大項為「定量理解」。一開始將開發二維影像分割模型,分割人類皮膚層和細胞核,並探討二維模型的侷限性,之後則進一步開發非監督式影像去雜訊模型先改善三維影像的品質,再發展半監督式三維影像分割模型解決二維模型的限制,並分析人類角質細胞核的大小。第二大項為「定性理解」。此項目中將透過開發生成式模型轉換OCT和醫學中常見的蘇木精與伊紅(Hematoxylin and eosin, H&E)染色切片影像,來理解人類皮膚組織中兩種影像的對應性,而這個部分會先以非監督式學習的方法來初步了解轉換上的效果和限制,之後將進一步利用影像標註來增加轉換的準確性。

基於前兩大項中的知識背景,接下來將探討具有疾病的人類皮膚OCT影像。而第三大項為「定性解釋」。此項目將開發一系列影像辨識模型來區別不同的疾病,並透過可解釋性人工智慧框架Grad-CAM用以辨別各個疾病的病兆,其中將先以自監督式學習開發常見皮膚疾病的辨識模型,接著以元學習開發數據量非常少的罕見疾病辨識模型,最後將針對長尾資料分布情境,開發一套可辨識健康狀態和13種皮膚疾病的模型。第四大項為「定量解釋」。在第三大項中,由於神經網絡可解釋性上的限制,無法進一步解釋疾病的嚴重程度和不同情境定量上的判斷依據,所以此項目將先以多任務學習強化二維影像分割模型在疾病影像的適應性,以及賦予模型自動辨識出黑色素分布的能力,並以此分割結果製作一系列的指標和特徵來開發可定量解釋的機器學習模型。

對於本論文的主要貢獻不僅在OCT影像上分析人體皮膚提供定量和定性的視角,並更進一步奠定了可信賴人工智慧和精準醫療診斷的基礎。
zh_TW
dc.description.abstractMedical image analysis is a complex problem in medicine, industry, and academia alike. Interpreting medical images quantitatively and qualitatively requires long-term experience and accumulated expertise. Although progress in cellular-resolution optical coherence tomography (OCT) offers the opportunity for non-invasive, high-speed diagnosis with histopathology-level information, little research has explored human skin structures and lesions in such images. Advances in deep learning pave the way for systematic, automatic analysis of medical images. The objective of this thesis is therefore to explore cellular-resolution OCT images of human skin by developing a series of deep learning approaches.

To facilitate a holistic analysis, this thesis is divided into four major components. The first component is "Quantitative Comprehension." Initially, a 2D image segmentation model is developed to segment healthy human skin layers and cell nuclei, and the limitations of the two-dimensional model are investigated. Subsequently, an unsupervised image denoising model is developed to improve the quality of 3D volumes, followed by a semi-supervised 3D image segmentation model that addresses the limitations of the 2D model and enables analysis of keratinocyte nucleus size. The second component is "Qualitative Comprehension." Here, a generative model is developed to translate OCT images into the appearance of the hematoxylin and eosin (H&E) stained slides widely used in histopathology, in order to understand the correspondence between the two modalities in human skin tissue. This part first employs unsupervised learning to gain preliminary insight into the effectiveness and limitations of the translation; image annotations are then utilized to enhance its accuracy.
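The layer and nucleus segmentation described above is conventionally scored with the Dice similarity coefficient. The sketch below (NumPy, using a hypothetical toy label map rather than the thesis data or code) illustrates how the metric behaves per class:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical 4x3 label map: class 1 = stratum corneum, class 2 = viable epidermis
label = np.array([[1, 1, 1],
                  [1, 1, 1],
                  [2, 2, 2],
                  [2, 2, 2]])

pred = label.copy()
pred[1, 2] = 2  # one mis-labelled pixel at the layer boundary

print(round(dice(pred == 1, label == 1), 3))  # → 0.909
print(dice(pred == 2, label == 2) > 0.9)      # → True
```

A single boundary pixel already drops the Dice score of the thinner layer noticeably, which is why the thesis's later discussion of metric sensitivity and labeling inconsistency (Section 7.1.8 in the table of contents) matters for thin strata.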

Building on the previous two components, the subsequent exploration focuses on unhealthy human skin. The third component is "Qualitative Interpretation." A series of image recognition models is developed to differentiate various diseases, and the explainable artificial intelligence framework Grad-CAM is employed to identify disease-specific signs. Initially, a self-supervised learning approach is used to build a recognition model for common skin diseases. Meta-learning techniques are then utilized to develop recognition models for rare diseases with very limited data. Finally, a model designed for long-tail data distributions forms a comprehensive system capable of recognizing healthy skin and 13 skin diseases. The fourth component is "Quantitative Interpretation." Because of the limited interpretability of neural networks, disease severity and the quantitative basis for context-dependent judgments cannot be assessed directly. This component therefore begins with multi-task learning to enhance the adaptability of the segmentation model on disease images and to identify melanin distribution. From the segmentation results, a set of indicators and features is derived to develop an interpretable machine learning model.
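The post-hoc logit adjustment used for the long-tail setting amounts to subtracting the scaled log class priors from the raw logits, so that rare (tail) classes are no longer dominated by the head class. A minimal NumPy sketch, with hypothetical priors and scores (not the thesis's actual values):

```python
import numpy as np

def logit_adjust(logits: np.ndarray, class_priors, tau: float = 1.0) -> np.ndarray:
    """Post-hoc logit adjustment: subtract tau * log(prior) per class."""
    return logits - tau * np.log(np.asarray(class_priors, dtype=float))

# Hypothetical 3-class long-tail priors: the head class dominates the data
priors = [0.90, 0.08, 0.02]
logits = np.array([2.0, 1.8, 1.5])  # raw scores favour the head class

adjusted = logit_adjust(logits, priors)
print(int(np.argmax(logits)), int(np.argmax(adjusted)))  # → 0 2
```

With the priors removed, the prediction flips from the head class to the tail class; tau trades off between the unadjusted classifier (tau = 0) and full prior correction (tau = 1).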

The main contribution of this thesis is not only to provide quantitative and qualitative perspectives for analyzing human skin in OCT images, but also to lay a foundation for trustworthy artificial intelligence and accurate medical diagnosis.
en
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-10-24T16:36:39Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2023-10-24T16:36:39Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsAcknowledgement i
中文摘要 ii
Abstract iii
Content iv
List of Figures xv
List of Tables xxxiv
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 1
Chapter 2 Optical Coherence Tomography 4
2.1 Michelson interferometer 4
2.2 Time-domain OCT system 5
2.3 Full-field OCT system 8
2.4 Mirau-based FF-OCT system 9
2.4.1 Ce3+:YAG light source 11
2.4.2 Ti:sapphire light source 12
Chapter 3 Human Skin Tissue and Lesions 14
3.1 Human skin tissue structure 14
3.1.1 Epidermis 15
3.1.1.1 Stratum corneum 15
3.1.1.2 Stratum lucidum 16
3.1.1.3 Stratum granulosum 16
3.1.1.4 Stratum spinosum 16
3.1.1.5 Stratum basale 17
3.1.2 Dermis 17
3.1.2.1 Papillary layer 17
3.1.2.2 Reticular layer 18
3.1.3 Hypodermis 18
3.2 Hematoxylin and eosin stains 18
3.2.1 H&E stains for human skin 18
3.2.2 Preparation procedure 19
3.3 Human skin lesions 22
3.3.1 Inflammatory lesions 22
3.3.1.1 Eczema 22
3.3.1.2 Psoriasis 23
3.3.2 Pigmented lesions 24
3.3.2.1 Solar lentigo 24
3.3.2.2 Melasma 25
3.3.2.3 Vitiligo 26
3.3.3 Benign tumor 26
3.3.3.1 Nevus 27
3.3.3.2 Seborrhoeic keratosis 28
3.3.3.3 Hemangioma 28
3.3.4 Malignant tumor 29
3.3.4.1 Actinic keratosis 29
3.3.4.2 Bowen’s disease 30
3.3.4.3 Melanoma 31
3.3.4.4 Basal cell carcinoma 32
3.3.4.5 Squamous cell carcinoma 33
Chapter 4 Machine Learning 34
4.1 Supervised learning 34
4.2 Unsupervised learning 35
4.3 Deep learning 36
4.3.1 Deep neural network 37
4.3.2 Backpropagation 38
4.4 Transfer learning 39
4.5 Curriculum learning 40
4.6 Multi-task learning 41
4.7 Semi-supervised learning 41
4.7.1 Co-training 42
4.7.2 Self-training 42
4.8 Self-supervised learning 44
4.8.1 Generative method 44
4.8.2 Contrastive method 44
4.9 Meta-learning 46
4.10 Long-tail learning 48
4.10.1 Cost-sensitive learning 50
4.10.2 Post-hoc logit adjustment 50
4.11 Explainable machine learning 51
4.11.1 Local interpretable model-agnostic explanations 52
4.11.2 Shapley additive explanations 53
Chapter 5 Computer Vision using Deep Learning 55
5.1 Convolutional neural networks 55
5.2 Image classification 56
5.2.1 ResNet 56
5.2.2 Inception V3 57
5.3 Image segmentation 58
5.3.1 U-Net 58
5.3.2 3D U-Net 59
5.3.3 V-Net 60
5.3.4 StarDist 61
5.4 Image denoising 62
5.4.1 Denoising autoencoder 62
5.5 Image generation 63
5.5.1 Variational autoencoder 64
5.5.2 Generative adversarial network 66
5.6 Image translation 69
5.6.1 Supervised image-to-image translation 69
5.6.2 Unsupervised image-to-image translation 71
5.7 Visual explanations 79
5.7.1 Class activation map 79
5.7.2 Grad-CAM 80
5.7.3 Grad-CAM++ 81
Chapter 6 Skin Image Dataset 83
6.1 Laboratory in-vivo FF-OCT dataset 83
6.1.1 Data collection 83
6.1.2 Data pre-processing 84
6.1.3 Data annotation 85
6.1.4 Exploratory data analysis 85
6.2 Clinical H&E dataset 87
6.2.1 Data collection 87
6.2.2 Data pre-processing 87
6.2.3 Data annotation 88
6.2.4 Exploratory data analysis 88
6.3 Clinical in-vivo FF-OCT dataset 90
6.3.1 Data collection 90
6.3.2 Data pre-processing 91
6.3.3 Data annotation 92
6.3.4 Exploratory data analysis 96
Chapter 7 Healthy Human Skin: 2D and 3D Segmentation 102
7.1 2D skin layers and keratinocytes segmentation 102
7.1.1 Dataset 102
7.1.2 Image pre-processing 103
7.1.3 Model architecture 104
7.1.4 Metrics and loss function 106
7.1.5 Post-processing 107
7.1.6 Experiments 108
7.1.7 Results 109
7.1.7.1 Quantitative segmentation results 109
7.1.7.2 Qualitative segmentation results 110
7.1.7.3 Post-processing results 111
7.1.7.4 Skin layer thickness analysis 112
7.1.7.5 Cell nuclei area analysis 113
7.1.8 Labeling criteria and inconsistency 114
7.1.8.1 Metrics sensitivity 115
7.1.8.2 Dataset diversity 117
7.1.8.3 Labeling inconsistency 118
7.1.9 Summary 119
7.2 3D cross-sectional image denoising 120
7.2.1 Dataset 120
7.2.2 Image pre-processing 120
7.2.3 Method 122
7.2.4 Experiment 123
7.2.4.1 Data pipeline 123
7.2.4.2 Training details 124
7.2.5 Results 125
7.2.5.1 Qualitative results 125
7.2.5.2 Quantitative results 126
7.2.6 Summary 127
7.3 3D cell nuclei segmentation 128
7.3.1 Dataset 128
7.3.2 Image pre-processing 128
7.3.3 Method 129
7.3.3.1 Cross-sectional profile denoising 130
7.3.3.2 3D cell nuclei pseudo label generation 130
7.3.3.3 3D segmentation model 132
7.3.4 Experiment 133
7.3.5 Results 134
7.3.5.1 2D self-training segmentation results 134
7.3.5.2 3D U-Net segmentation results 135
7.3.5.3 Comparison with annotation 138
7.3.5.4 3D volume statistical analysis 139
7.3.6 Limitation 140
7.3.7 Summary 141
7.4 Discussion for quantitative comprehension 142
Chapter 8 Healthy Human Skin: H&E Translation 143
8.1 Unsupervised OCT and H&E image translation 143
8.1.1 Dataset 144
8.1.2 Method 145
8.1.2.1 Blur the excessive features 146
8.1.2.2 Defend the adversarial attack 146
8.1.2.3 Stabilize the inverse translation 147
8.1.2.4 Mitigate the speckle noise 148
8.1.2.5 Full objective function 148
8.1.3 Experiment 148
8.1.3.1 Baselines 148
8.1.3.2 Metrics 149
8.1.4 Results 149
8.1.4.1 Qualitative results 149
8.1.4.2 Quantitative results 151
8.1.4.3 Transfer learning results 151
8.1.5 Analysis 153
8.1.5.1 Effect of the noise 153
8.1.5.2 Weight of the noise 155
8.1.6 Limitation 155
8.1.7 Summary 157
8.2 Label-assisted OCT and H&E image translation 158
8.2.1 Dataset 158
8.2.2 Method 158
8.2.2.1 OCT-to-H&E direction 159
8.2.2.2 H&E-to-OCT direction 160
8.2.2.3 Adversarial function 161
8.2.2.4 Integrated with SNRGAN 162
8.2.3 Experiment 163
8.2.3.1 Baselines 163
8.2.3.2 Metrics 164
8.2.4 Results 164
8.2.4.1 Qualitative results 164
8.2.4.2 Quantitative results 166
8.2.5 Ablation studies 167
8.2.5.1 Assisted networks 167
8.2.5.2 Model architectures 171
8.2.5.3 Noise effect 175
8.2.6 Limitations 176
8.2.7 Summary 176
8.3 Discussion for qualitative comprehension 178
Chapter 9 Unhealthy Human Skin: Lesions Classification 179
9.1 General skin lesions classification 179
9.1.1 Dataset 180
9.1.2 Method 180
9.1.2.1 Self-supervised learning 180
9.1.2.2 Curriculum learning 181
9.1.2.3 Overall model learning scheme 182
9.1.3 Experiment 182
9.1.3.1 Model training 182
9.1.3.2 Model evaluation 183
9.1.4 Results 183
9.1.4.1 Self-supervised fine-tuning results 183
9.1.4.2 Easy task fine-tuning results 188
9.1.4.3 Hard task fine-tuning results 188
9.1.5 Analysis 190
9.1.6 Summary 192
9.2 Scarce skin lesions classification 193
9.2.1 Dataset 193
9.2.2 Method 194
9.2.2.1 Self-supervised pre-training 194
9.2.2.2 Meta learning 194
9.2.3 Experiments 195
9.2.3.1 Meta training 195
9.2.3.2 Meta testing 196
9.2.4 Results 196
9.2.5 Analysis 199
9.2.6 Summary 201
9.3 Long-tail distribution classification 202
9.3.1 Dataset 203
9.3.2 Method 203
9.3.2.1 Cost-sensitive learning 203
9.3.2.2 Post-hoc logit adjustment 204
9.3.3 Experiment 204
9.3.4 Results 204
9.3.4.1 Compare with general skin lesion model 205
9.3.4.2 Compare with scarce skin lesion model 209
9.3.5 Analysis 213
9.3.6 Limitations 215
9.3.6.1 Cross-sectional view 215
9.3.6.2 Unexplainable inference 216
9.3.7 Summary 217
9.4 Discussion for qualitative interpretation 218
Chapter 10 Unhealthy Human Skin: Explainable AI 219
10.1 Melanin localization 219
10.1.1 Dataset 219
10.1.2 Method 221
10.1.3 Experiment 221
10.1.4 Results 222
10.1.4.1 Qualitative results 222
10.1.4.2 Quantitative results 226
10.1.5 Analysis 226
10.1.5.1 Normal skin evaluation 226
10.1.5.2 Inflammatory lesions evaluation 227
10.1.5.3 Pigmented lesions evaluation 229
10.1.5.4 Benign tumor evaluation 231
10.1.5.5 Malignant tumor evaluation 233
10.1.6 Statistical analysis for general skin lesions 236
10.1.6.1 Eczema statistical analysis 236
10.1.6.2 Psoriasis statistical analysis 238
10.1.6.3 Solar lentigo statistical analysis 240
10.1.6.4 Vitiligo statistical analysis 242
10.1.6.5 Nevus statistical analysis 244
10.1.6.6 Seborrhoeic keratosis statistical analysis 246
10.1.6.7 Quantitative comparison 248
10.1.7 Limitations 249
10.1.8 Summary 250
10.2 Explainable skin lesions classification 251
10.2.1 Dataset 251
10.2.2 Method 252
10.2.2.1 Feature engineering 252
10.2.2.2 Interpretable machine learning model 254
10.2.3 Experiment 255
10.2.3.1 Model training 255
10.2.3.2 Shapley additive explanations 255
10.2.4 Results 256
10.2.4.1 Eczema case 256
10.2.4.2 Psoriasis case 258
10.2.4.3 Solar lentigo case 260
10.2.4.4 Vitiligo case 263
10.2.4.5 Nevus case 265
10.2.4.6 Seborrhoeic keratosis case 267
10.2.5 Analysis 269
10.2.6 Limitations 271
10.2.6.1 Feature selection 271
10.2.6.2 Feature representative 272
10.2.7 Summary 272
10.3 Discussion for quantitative interpretation 273
Chapter 11 Overall Discussion 274
11.1 Optical system perspective 274
11.2 Clinical diagnosis perspective 275
11.3 Computer vision perspective 276
11.4 Social ethics perspectives 277
11.5 Big data and good data 279
Chapter 12 Conclusion 280
12.1 Contributions 280
12.2 Future works 283
12.2.1 Short-term objectives 283
12.2.2 Long-term objectives 285
12.2.3 Summary 286
Reference 288
Appendix 301
A.1 List of figures in appendix 301
A.2 List of tables in appendix 305
A.3 Monte-Carlo simulation pre-trained model 306
A.3.1 Dataset 306
A.3.2 Method 306
A.3.2.1 Tissue model building 307
A.3.2.2 Multi-layers Monte-Carlo simulation 308
A.3.2.3 Optical coherence tomography simulation 311
A.3.2.4 Biased Henyey-Greenstein function 311
A.3.2.5 Model pre-training and fine-tuning 313
A.3.3 Experiments 313
A.3.3.1 Monte-Carlo OCT simulation 313
A.3.3.2 Model pre-training 314
A.3.3.3 Model fine-tuning 314
A.3.3.4 Model comparison 314
A.3.4 Results 314
A.3.5 Summary 315
A.4 Cell segmentation with morphological filtering 316
A.4.1 Dataset 316
A.4.2 Method 316
A.4.2.1 Image pre-processing 316
A.4.2.2 Intensity thresholding 317
A.4.2.3 Morphological filtering 318
A.4.2.4 Morphology refining with deep learning 319
A.4.3 Experiments 319
A.4.4 Results 319
A.4.5 Summary 320
A.5 Dynamic FF-OCT image denoising 321
A.5.1 Dataset 321
A.5.2 Method 322
A.5.3 Experiments 323
A.5.4 Results 324
A.5.4.1 Cropped image denoising results 324
A.5.4.2 Whole image denoising results 324
A.5.5 Limitations 325
A.5.6 Summary 326
A.6 3D H&E volume translation 327
A.6.1 Method 327
A.6.2 Results 327
A.6.2.1 Cross-sectional slices translation results 327
A.6.2.2 En-face translation results 328
A.6.3 Limitations 329
A.6.4 Summary 330
A.7 Melanin translation with domain adaptation 331
A.7.1 Dataset 332
A.7.2 Method 332
A.7.3 Experiment 333
A.7.4 Results 333
A.7.4.1 Ce3+:YAG to Ti:sapphire image translation 333
A.7.4.2 Ti:sapphire to Ce3+:YAG image translation 334
A.7.5 Melanin translation 335
A.7.6 Limitations 335
A.7.7 Summary 336
A.8 3D segmentation model for skin lesions: concept 337
A.8.1 Method 337
A.8.1.1 2D semi-supervised learning scheme 337
A.8.1.2 3D semi-supervised learning scheme 337
A.8.2 Anticipated challenges 339
A.9 3D C-scan and 2D B-scan registration: concept 340
A.9.1 Method 340
A.9.1.1 VoxelMorph 340
A.9.1.2 OCT image registration with VoxelMorph 342
A.9.2 Anticipated challenges 343
A.10 Weakly supervised blood vessel extraction: concept 344
A.10.1 Method 344
A.10.2 Anticipated challenges 344
A.11 Multi-domains skin images translation: concept 345
A.11.1 Method 345
A.11.1.1 StarGAN 345
A.11.1.2 OCT, H&E, T-Blue, and IHC translation 347
A.11.2 Anticipated challenges 348
A.12 Multi-input lesions classification: concept 350
A.12.1 Method 350
A.12.1.1 DiRA 351
A.12.1.2 Multi-inputs with OCT and dermoscopy data 352
A.12.2 Anticipated challenges 353
A.13 Multiple-instance lesions classification: concept 354
A.13.1 Method 354
A.13.1.1 Dual-stream multiple-instance learning 354
A.13.1.2 Lesions identification with multiple-instance 356
A.13.2 Anticipated challenges 356
A.14 Multi-label lesions classification: concept 357
A.14.1 Method 357
A.14.2 Anticipated challenges 357
A.15 Active learning for OCT images annotation: concept 358
A.15.1 Method 358
A.15.1.1 Deep active learning 358
A.15.1.2 OCT data annotation with active learning 359
A.15.2 Anticipated challenges 359
A.16 Life-long learning for optimizing models: concept 360
A.16.1 Method 360
A.16.1.1 Elastic weight consolidation 360
A.16.1.2 Application on multi-tasks learning 361
A.16.2 Anticipated challenges 361
A.17 Open source software, modules, and code 362
A.17.1 Open source tools 362
A.17.2 Open source Python modules 362
A.17.3 Open source repositories 363
A.18 References in appendix 365
-
dc.language.isoen-
dc.subject蘇木精與伊紅染色(H&E)zh_TW
dc.subject影像分割zh_TW
dc.subject影像去雜訊zh_TW
dc.subject光學同調斷層掃描(OCT)zh_TW
dc.subject影像辨識zh_TW
dc.subject可解釋性人工智慧zh_TW
dc.subject影像轉換zh_TW
dc.subjectexplainable AIen
dc.subjectoptical coherence tomography (OCT)en
dc.subjecthematoxylin and eosin (H&E) stainen
dc.subjectimage segmentationen
dc.subjectimage denoisingen
dc.subjectimage translationen
dc.subjectimage classificationen
dc.title應用深度學習定量和定性分析細胞解析度光學同調斷層掃描影像中的人體皮膚結構和病變zh_TW
dc.titleQuantitative and Qualitative Analysis of Human Skin Structures and Lesions in Cellular-Resolution OCT Images by Deep Learningen
dc.typeThesis-
dc.date.schoolyear111-2-
dc.description.degree碩士-
dc.contributor.oralexamcommittee吳育弘;陳宏銘;李翔傑zh_TW
dc.contributor.oralexamcommitteeYu-Hung Wu;Homer H. Chen;Hsiang-Chieh Leeen
dc.subject.keyword光學同調斷層掃描(OCT),蘇木精與伊紅染色(H&E),影像分割,影像去雜訊,影像轉換,影像辨識,可解釋性人工智慧,zh_TW
dc.subject.keywordoptical coherence tomography (OCT),hematoxylin and eosin (H&E) stain,image segmentation,image denoising,image translation,image classification,explainable AI,en
dc.relation.page369-
dc.identifier.doi10.6342/NTU202301921-
dc.rights.note同意授權(全球公開)-
dc.date.accepted2023-07-28-
dc.contributor.author-college電機資訊學院-
dc.contributor.author-dept光電工程學研究所-
dc.date.embargo-lift2025-08-01-
Appears in Collections: Graduate Institute of Photonics and Optoelectronics

Files in This Item:
ntu-111-2.pdf (32.49 MB, Adobe PDF)

