Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20099
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 丁建均(Jian-Jiun Ding) | |
dc.contributor.author | Yih-Cherng Lee | en |
dc.contributor.author | 李奕承 | zh_TW |
dc.date.accessioned | 2021-06-08T02:40:04Z | - |
dc.date.copyright | 2020-11-13 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-10-16 | |
dc.identifier.citation | [Chapter 1-3] [1] Jason Brownlee, “Photograph of Three Zebra Each Detected with the YOLOv3 Model and Localized with Bounding Boxes,” May 2019, https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/. [2] T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117-2125, 2017. [3] Adrian Rosebrock, “Non-Maximum Suppression for Object Detection in Python,” November 2014, https://www.pyimagesearch.com/2014/11/17/non-maximum-suppression-object-detection-python/. [4] Jeremy Jordan, “An overview of semantic image segmentation,” May 2018, https://www.jeremyjordan.me/semantic-segmentation/. [5] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015. [6] “Probability for Computer Scientists,” class notes for CS109, Stanford University, 2016. [7] Sunil Ray, “Easy Steps to Learn Naive Bayes Algorithm with codes in Python and R,” September 2017, https://www.analyticsvidhya.com/blog/2017/09/naive-bayes-explained/. [8] H. Zhang, “The optimality of Naive Bayes,” in Proc. FLAIRS, 2004. [9] Andrew Ng, “Machine Learning,” lecture notes, Part IX: The EM algorithm, CS229, Stanford University, 2020. [10] C. B. Do and S. Batzoglou, “What is the expectation maximization algorithm?” Nature Biotechnology, vol. 26, no. 8, 2008. [11] Ryan Tibshirani, “Convex Optimization,” lecture notes of Introduction to Convex Optimization: Fall 2015, Carnegie Mellon University, 2015. [12] Ryan Tibshirani, “Convex Optimization,” lecture notes of Gradient descent, Carnegie Mellon University, 2019.
[13] Ryan Tibshirani, “Convex Optimization,” lecture notes of Convexity I: Sets and functions, Carnegie Mellon University, 2019. [14] Ryan Tibshirani, “Convex Optimization,” lecture notes of Proximal gradient descent, Carnegie Mellon University, 2019. [15] Ryan Tibshirani, “Convex Optimization,” lecture notes of Subgradients, Carnegie Mellon University, 2019. [16] Ryan Tibshirani, “Convex Optimization,” lecture notes of Duality in general programs, Carnegie Mellon University, 2019. [17] Mathispower4u, “Ex 1: Determine a Dual Problem Given a Standard Minimization Problem,” June 2014, https://www.youtube.com/watch?v=TKxm-d9P5sQ. [18] Ryan Tibshirani, “Convex Optimization,” lecture notes of KKT, Carnegie Mellon University, 2019. [19] S. Boyd and L. Vandenberghe, “Convex Optimization,” Chapter 5: Duality, 2004. [20] Mark Chang, “Optimization Method -- Newton's Method for Optimization,” 2015, http://cpmarkchang.logdown.com/posts/436316-optimization-method-newton. [Chapter 4] [21] Z. Zhang, Z. He, G. Cao, and W. Cao, “Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification,” IEEE Trans. Multimedia, vol. 18, no. 10, pp. 2079-2092, 2016. [22] C. Veibäck, G. Hendeby, and F. Gustafsson, “Tracking of dolphins in a basin using a constrained motion model,” in Int. Conf. Information Fusion, pp. 1330-1337, 2015. [23] J. Karnowski, E. Hutchins, and C. Johnson, “Dolphin detection and tracking,” in Winter Conference on Applications of Computer Vision Workshops, pp. 51-56, 2015. [24] J. Li, Y. Wei, X. Liang, J. Dong, T. Xu, J. Feng, and S. Yan, “Attentive contexts for object detection,” IEEE Trans. Multimedia, vol. 19, no. 5, pp. 944-954, 2016. [25] S. Zhang, Y. Xie, J. Wan, H. Xia, S. Z. Li, and G. Guo, “WiderPerson: A diverse dataset for dense pedestrian detection in the wild,” IEEE Trans. Multimedia, vol. 20, no. 4, pp. 2079-2092, 2018. [26] J. Redmon, S. Divvala, R. Girshick, and A.
Farhadi, “You only look once: Unified, real-time object detection,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 779-788, 2016. [27] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 7263-7271, 2017. [28] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv:1804.02767, 2018. [29] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in European Conf. Computer Vision, pp. 21-37, 2016. [30] J. Li, X. Liang, S. Shen, T. Xu, J. Feng, and S. Yan, “Scale-aware fast R-CNN for pedestrian detection,” IEEE Trans. Multimedia, vol. 20, no. 4, pp. 985-996, 2017. [31] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 580-587, 2014. [32] R. Girshick, “Fast R-CNN,” in IEEE Int. Conf. Computer Vision, pp. 1440-1448, 2015. [33] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137-1149, 2015. [34] M. Gao, R. Yu, A. Li, V. I. Morariu, and L. S. Davis, “Dynamic zoom-in network for fast object detection in large images,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 6926-6935, 2018. [35] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) results,” available at http://www.pascal-network.org/challenges/VOC/voc2011/workshop/index.html, 2011. [36] T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in European Conf. Computer Vision, pp. 740-755, 2014. [37] P. Dollár, C. Wojek, B. Schiele, and P.
Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 4, pp. 743-761, 2012. [38] S. Kalkowski, C. Schulze, A. Dengel, and D. Borth, “Real-time analysis and visualization of the YFCC100M dataset,” in Proceedings of the Workshop on Community-Organized Multimodal Mining: Opportunities for Novel Solutions, ACM, pp. 25-30, 2015. [39] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Int. Conf. Computer Vision, pp. 2961-2969, 2017. [40] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 3431-3440, 2015. [41] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Int. Conf. Computer Vision, pp. 1520-1528, 2015. [42] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, 2015. [43] G. Sharma, F. Jurie, and C. Schmid, “Discriminative spatial saliency for image classification,” in Computer Vision and Pattern Recognition, pp. 3506-3513, 2012. [44] F. Murabito, C. Spampinato, S. Palazzo, D. Giordano, K. Pogorelov, and M. Riegler, “Top-down saliency detection driven by visual classification,” Computer Vision and Image Understanding, vol. 172, pp. 67-76, 2018. [45] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012. [46] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014. [47] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 770-778, 2016. [48] K. He, X. Zhang, S. Ren, and J.
Sun, “Identity mappings in deep residual networks,” in European Conf. Computer Vision, pp. 630-645, Oct. 2016. [49] R. M. Haralick, S. R. Sternberg, and X. Zhuang, “Image analysis using mathematical morphology,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 9, no. 4, pp. 532-550, 1987. [50] S. Mahamud, L. R. Williams, K. K. Thornber, and K. Xu, “Segmentation of multiple salient closed contours from real images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 4, pp. 433-444, 2003. [51] A. Jalil, I. M. Qureshi, A. Manzar, and R. A. Zahoor, “Rotation-invariant features for texture image classification,” in Int. Conf. Engineering of Intelligent Systems, pp. 1-4, 2006. [52] R. D. C. da Silva, G. A. P. Thé, and F. N. S. de Medeiros, “Rotation-invariant image description from independent component analysis for classification purposes,” in Int. Conf. Informatics in Control, Automation and Robotics, pp. 210-216, 2015. [53] B. Oktavianto and T. W. Purboyo, “A study of histogram equalization techniques for image enhancement,” Int. J. Applied Engineering Research, vol. 13, no. 2, pp. 1165-1170, 2018. [54] S. Bouma, M. D. M. Pawley, K. Hupman, and A. Gilman, “Individual common dolphin identification via metric embedding learning,” in Int. Conf. Image and Vision Computing New Zealand, pp. 1-6, 2018. [55] A. Gilman, K. Hupman, K. A. Stockin, and M. D. M. Pawley, “Computer-assisted recognition of dolphin pigmentations,” in Image and Vision Computing New Zealand, pp. 1-6, 2017. [56] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv:1312.6034, 2014. [57] S. G. Barco, W. M. Swingle, W. A. McLellan, R. N. Harris, and D. A. Pabst, “Local abundance and distribution of bottlenose dolphin (Tursiops truncatus) in the nearshore waters of Virginia Beach, Virginia,” Marine Mammal Science, vol. 15, no. 2, pp. 394-408, 2006. [58] J. G. Norton and S. J.
Crooke, “Occasional availability of dolphin, Coryphaena hippurus, to Southern California Commercial Passenger Fishing Vessel Anglers: Observations and hypotheses,” California Cooperative Oceanic Fishery Investigation Report, no. 35, pp. 230-239, 1994. [Chapter 5] [59] T. E. de Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (OCTA),” Int. J. Retina and Vitreous, vol. 1, no. 1, pp. 5, 2015. [60] R. K. Meleppat, E. B. Miller, S. K. Manna, P. Zhang, E. N. Pugh Jr, and R. J. Zawadzki, “Multiscale Hessian filtering for enhancement of OCT angiography images,” Ophthalmic Technologies XXIX, International Society for Optics and Photonics, vol. 10858, pp. 1-7, 2015. [61] Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. National Academy of Sciences, vol. 112, no. 18, pp. 2395-2402, 2015. [62] A. Y. Kim, Z. Chu, A. Shahidzadeh, R. K. Wang, C. A. Puliafito, and A. H. Kashani, “Quantifying microvascular density and morphology in diabetic retinopathy using spectral-domain optical coherence tomography angiography,” Investigative Ophthalmology and Visual Science, vol. 57, no. 9, pp. 362-370, 2016. [63] T. Walter, J. C. Klein, P. Massin, and A. Erginay, “A contribution of image processing to the diagnosis of diabetic retinopathy-detection of exudates in color fundus images of the human retina,” IEEE Trans. Medical Imaging, vol. 21, no. 10, pp. 1236-1243, 2002. [64] K. Gopinath, J. Sivaswamy, and T. Mansoori, “Automatic glaucoma assessment from angio-OCT images,” in IEEE Int. Symp. Biomedical Imaging, pp. 193-196, 2016. [65] D. R. Matsunaga, J. Y. Jack, L. O. de Koo, H. Ameri, C. A. Puliafito, and A. H.
Kashani, “Optical coherence tomography angiography of diabetic retinopathy in human subjects,” Ophthalmic Surgery, Lasers and Imaging Retina, vol. 46, no. 8, pp. 796-805, 2015. [66] G. Holló and F. Naghizadeh, “Influence of a new software version of the RTVue-100 optical coherence tomograph on the detection of glaucomatous structural progression,” European Journal of Ophthalmology, vol. 25, no. 5, pp. 410-415, 2015. [67] S. A. Agemy, N. K. Scripsema, C. M. Shah, T. Chui, P. M. Garcia, J. G. Lee, R. C. Gentile, Y. S. Hsiao, Q. Zhou, T. Ko, and R. B. Rosen, “Retinal vascular perfusion density mapping using optical coherence tomography angiography in normals and diabetic retinopathy patients,” Retina, vol. 35, no. 11, pp. 2353-2363, 2015. [68] M. C. Savastano, B. Lumbroso, and M. Rispoli, “In vivo characterization of retinal vascularization morphology using optical coherence tomography angiography,” Retina, vol. 35, no. 11, pp. 2196-2203, 2015. [69] N. Phansalkar, S. More, A. Sabale, and M. Joshi, “Adaptive local thresholding for detection of nuclei in diversity stained cytology images,” in Int. Conf. Communications and Signal Processing, pp. 218-220, 2011. [70] A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. R. Sadda, “Multiple en face image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmologica, vol. 96, no. 7, pp. 820-827, 2018. [71] E. D. Cole, E. M. Moult, S. Dang, W. Choi, S. B. Ploner, B. Lee, R. Louzada, E. Novais, J. Schottenhamml, L. Husvogt, A. Maier, J. G. Fujimoto, N. K. Waheed, and J. S. Duker, “The definition, rationale, and effects of thresholding in OCT angiography,” Ophthalmology Retina, vol. 1, no. 5, pp. 435-447, 2017. [72] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Medical Imaging, vol. 25, no. 9, pp. 1214-1222, 2006. [73] A. F. Frangi, W. J.
Niessen, K. L. Vincken, and M. A. Viergever, “Multiscale vessel enhancement filtering,” in Int. Conf. Medical Image Computing and Computer-Assisted Intervention, pp. 130-137, 1998. [74] R. Annunziata and E. Trucco, “Accelerating convolutional sparse coding for curvilinear structures segmentation by refining SCIRD-TS filter banks,” IEEE Trans. Medical Imaging, vol. 35, no. 11, pp. 2381-2392, 2016. [75] M. W. K. Law and A. C. S. Chung, “Three dimensional curvilinear structure detection using optimally oriented flux,” in Eur. Conf. Comput. Vision, pp. 368-382, 2008. [76] P. Prentašić, M. Heisler, Z. Mammo, S. Lee, A. Merkur, E. Navajas, M. F. Beg, M. Šarunic, and S. Lončarić, “Segmentation of the foveal microvasculature using deep learning networks,” J. Biomedical Optics, vol. 21, no. 7, pp. 075008, 2016. [77] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Int. Conf. Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, 2015. [78] Y. Giarratano, E. Bianchi, C. Gray, A. Morris, T. MacGillivray, B. Dhillon, and M. O. Bernabeu, “Automated and network structure preserving segmentation of optical coherence tomography angiograms,” arXiv:1912.09978, 2019. [79] S. C. Pei and J. J. Ding, “Improved Harris' algorithm for corner and edge detections,” in IEEE Int. Conf. Image Processing, vol. 3, pp. 57-60, 2007. [80] S. C. Pei and J. J. Ding, “New corner detection algorithm by tangent and vertical axes and case table,” in IEEE Int. Conf. Image Processing, vol. 1, pp. 365-368, 2005. [81] A. Y. Kim, Z. Chu, A. Shahidzadeh, R. K. Wang, C. A. Puliafito, and A. H. Kashani, “Quantifying microvascular density and morphology in diabetic retinopathy using spectral-domain optical coherence tomography angiography,” Investigative Ophthalmology and Visual Science, vol. 57, no. 9, pp. 362-370, 2016. [82] S. Suzuki and K.
Abe, “Topological structural analysis of digitized binary images by border following,” Computer Vision, Graphics, and Image Processing, vol. 30, no. 1, pp. 32-46, 1985. [83] Y. Guo, T. T. Hormel, H. Xiong, B. Wang, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography,” Biomed. Opt. Express, vol. 10, no. 7, pp. 3257-3268, 2019. [84] D. Nagasato, H. Tabuchi, H. Masumoto, H. Enno, N. Ishitobi, M. Kameoka, M. Niki, and Y. Mitamura, “Automated detection of a nonperfusion area caused by retinal vein occlusion in optical coherence tomography angiography images using deep learning,” PLoS One, vol. 14, no. 11, pp. 223965, 2019. [85] Y. X. Zhao, Y. M. Zhang, M. Song, and C. L. Liu, “Multi-view semi-supervised 3D whole brain segmentation with a self-ensemble network,” in Int. Conf. Medical Image Computing and Computer-Assisted Intervention, pp. 256-265, 2019. [86] L. Husvogt, S. Ploner, E. M. Moult, A. Y. Alibhai, J. Schottenhamml, J. S. Duker, N. K. Waheed, J. G. Fujimoto, and A. K. Maier, “Using medical image reconstruction methods for denoising of OCTA data,” Investigative Ophthalmology and Visual Science, vol. 60, no. 9, pp. 3096-3096, 2019. [87] N. Eladawi, M. Elmogy, O. Helmy, A. Aboelfetouh, A. Raid, H. Sandhu, S. Schaal, and A. El-Baz, “Automatic blood vessels segmentation based on different retinal maps from OCTA scans,” Computers in Biology and Medicine, vol. 89, pp. 150-161, 2017. [88] Y. Guo, A. Camino, M. Zhang, J. Wang, D. Huang, T. Hwang, and Y. Jia, “Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography,” Biomedical Optics Express, vol. 9, no. 9, pp. 4429-4442, 2018. [Chapter 6] [89] V. Jha, G. Garcia-Garcia, K. Iseki, et al. “Chronic kidney disease: global dimension and perspectives,” Lancet, vol. 382, no. 9888, pp. 260-272, 2013. [90] D.
Drobnjak, I. C. Munch, C. Glumer, et al. “Retinal Vessel Diameters and Their Relationship with Cardiovascular Risk and All-Cause Mortality in the Inter99 Eye Study: A 15-Year Follow-Up,” J Ophthalmol, pp. 6138659, 2016. [91] P. De Boever, T. Louwies, E. Provost, et al. “Fundus photography as a convenient tool to study microvascular responses to cardiovascular disease risk factors in epidemiological studies,” J Vis Exp, vol. 92, pp. e51904, 2014. [92] B. Gopinath, J. Chiha, A. J. Plant, et al. “Associations between retinal microvascular structure and the severity and extent of coronary artery disease,” Atherosclerosis, vol. 236, no. 1, pp. 25-30, 2014. [93] Q. L. Ooi, F. K. Tow, R. Deva, et al. “The microvasculature in chronic kidney disease,” Clin J Am Soc Nephrol, vol. 6, no. 8, pp. 1872-1878, 2011. [94] C. Sabanayagam, A. Shankar, D. Koh, et al. “Retinal microvascular caliber and chronic kidney disease in an Asian population,” Am J Epidemiol, vol. 169, no. 5, pp. 625-632, 2009. [95] L. S. Lim, C. Y. Cheung, C. Sabanayagam, et al. “Structural changes in the retinal microvasculature and renal function,” Invest Ophthalmol Vis Sci, vol. 54, no. 4, pp. 2970-2976, 2013. [96] L. Yeung, I. W. Wu, C. C. Sun, et al. “Early retinal microvascular abnormalities in patients with chronic kidney disease,” Microcirculation, vol. 26, no. 7, pp. e12555, 2019. [97] C. W. Wong, T. Y. Wong, C. Y. Cheng, and C. Sabanayagam, “Kidney and eye diseases: common risk factors, etiological mechanisms, and pathways,” Kidney Int, vol. 85, no. 6, pp. 1290-1302, 2014. [98] R. G. Kalaitzidis and M. S. Elisaf, “Treatment of Hypertension in Chronic Kidney Disease,” Curr Hypertens Rep, vol. 20, no. 8, pp. 64, 2018. [99] W. H. Lee, J. H. Park, Y. Won, et al. “Retinal Microvascular Change in Hypertension as measured by Optical Coherence Tomography Angiography,” Sci Rep, vol. 9, no. 1, pp. 156, 2019. [100] D. Hua, Y. Xu, X. Zeng, et al.
“Use of optical coherence tomography angiography for assessment of microvascular changes in the macula and optic nerve head in hypertensive patients without hypertensive retinopathy,” Microvasc Res, vol. 129, pp. 103969, 2019. [101] J. Chua, C. W. L. Chin, J. Hong, et al. “Impact of hypertension on retinal capillary microvasculature using optical coherence tomographic angiography,” J Hypertens, vol. 37, no. 3, pp. 572, 2018. [102] A. Bosch, J. B. Scheppach, J. M. Harazny, et al. “Retinal capillary and arteriolar changes in patients with chronic kidney disease,” Microvasc Res, vol. 118, pp. 121-127, 2018. [103] M. Vadala, M. Castellucci, G. Guarrasi, et al. “Retinal and choroidal vasculature changes associated with chronic kidney disease,” Graefes Arch Clin Exp Ophthalmol, vol. 257, no. 8, pp. 1687-1698, 2019. [104] E. Ciloglu, N. T. Okcu, and N. C. Dogan, “Optical coherence tomography angiography findings in preeclampsia,” Eye (Lond), vol. 33, no. 12, pp. 1946-1951, 2019. [105] A. K. van Koeverden, Z. He, C. T. O. Nguyen, et al. “Systemic hypertension is not protective against chronic intraocular pressure elevation in a rodent model,” Sci Rep, vol. 8, no. 1, pp. 7107, 2018. [106] D. Y. Yu, S. J. Cringle, V. A. Alder, and E. N. Su, “Intraretinal oxygen distribution in rats as a function of systemic blood pressure,” Am J Physiol, vol. 267, no. 6 Pt 2, pp. H2498-507, 1994. [107] A. Jumar, J. M. Harazny, C. Ott, et al. “Improvement in Retinal Capillary Rarefaction After Valsartan Treatment in Hypertensive Patients,” J Clin Hypertens (Greenwich), vol. 18, no. 11, pp. 1112-1118, 2016. [108] G. Chan, C. Balaratnasingam, P. K. Yu, et al. “Quantitative morphometry of perifoveal capillary networks in the human retina,” Invest Ophthalmol Vis Sci, vol. 53, no. 9, pp. 5502-5514, 2012. [109] P. L. Nesper and A. A. Fawzi, “Human Parafoveal Capillary Vascular Anatomy and Connectivity Revealed by Optical Coherence Tomography Angiography,” Invest Ophthalmol Vis Sci, vol. 59, no. 10, pp.
3858-3867, 2018. [110] T. E. Kornfield and E. A. Newman, “Regulation of blood flow in the retinal trilaminar vascular network,” J Neurosci, vol. 34, no. 34, pp. 11504-11513, 2014. [111] A. M. Hagag, A. D. Pechauer, L. Liu, et al. “OCT Angiography Changes in the 3 Parafoveal Retinal Plexuses in Response to Hyperoxia,” Ophthalmol Retina, vol. 2, no. 4, pp. 329-336, 2018. [112] S. Bonnin, V. Mane, A. Couturier, et al. “New insight into the macular deep vascular plexus imaged by optical coherence tomography angiography,” Retina, vol. 35, no. 11, pp. 2347-2352, 2015. [113] H. Leung, J. J. Wang, E. Rochtchina, et al. “Impact of current and past blood pressure on retinal arteriolar diameter in an older population,” J Hypertens, vol. 22, no. 8, pp. 1543-1549, 2004. [114] T. K. Wong, R. Klein, B. E. Klein, et al. “Retinal vessel diameters and their associations with age and blood pressure,” Invest Ophthalmol Vis Sci, vol. 44, no. 11, pp. 4644-4650, 2003. [115] D. Schmidl, G. Garhofer, and L. Schmetterer, “The complex interaction between ocular perfusion pressure and ocular blood flow - relevance for glaucoma,” Exp Eye Res, vol. 93, no. 2, pp. 141-155, 2011. [116] O. de Montgolfier, P. Pouliot, M. A. Gillis, et al. “Systolic hypertension-induced neurovascular unit disruption magnifies vascular cognitive impairment in middle-age atherosclerotic LDLr(-/-):hApoB(+/+) mice,” Geroscience, vol. 41, no. 5, pp. 511-532, 2019. [117] O. de Montgolfier, A. Pincon, P. Pouliot, et al. “High Systolic Blood Pressure Induces Cerebral Microvascular Endothelial Dysfunction, Neurovascular Unit Damage, and Cognitive Decline in Mice,” Hypertension, vol. 73, no. 1, pp. 217-228, 2019. [118] P. E. Stevens and A. Levin, “Evaluation and management of chronic kidney disease: synopsis of the kidney disease: improving global outcomes 2012 clinical practice guideline,” Ann Intern Med, vol. 158, no. 11, pp. 825-830, 2013. [119] A. Earley, D. Miskulin, E. J. Lamb, et al.
“Estimating equations for glomerular filtration rate in the era of creatinine standardization: a systematic review,” Ann Intern Med, vol. 156, no. 11, pp. 785-795, 2012. [120] J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Transactions on Image Processing, vol. 9, no. 5, pp. 889-896, 2000. [121] H. Jiang, Y. Wei, Y. Shi, et al. “Altered Macular Microvasculature in Mild Cognitive Impairment and Alzheimer Disease,” J Neuroophthalmol, vol. 38, no. 3, pp. 292-298, 2018. [122] S. A. Agemy, N. K. Scripsema, C. M. Shah, et al. “Retinal vascular perfusion density mapping using optical coherence tomography angiography in normals and diabetic retinopathy patients,” Retina, vol. 35, no. 11, pp. 2353-2363, 2015. [123] C. Y. Cheung, J. Li, N. Yuan, et al. “Quantitative retinal microvasculature in children using swept-source optical coherence tomography: the Hong Kong Children Eye Study,” Br J Ophthalmol, vol. 103, no. 5, pp. 672-679, 2018. [124] T. Y. Zhang and C. Y. Suen, “A fast parallel algorithm for thinning digital patterns,” Communications of the ACM, vol. 27, no. 3, pp. 236-239, 1984. [125] M. R. Maire, “Contour Detection and Image Segmentation,” Ph.D. dissertation, University of California, Berkeley, 2009. [Chapter 7] [126] A. Shahlaee, M. Pefkianaki, J. Hsu, and A. C. Ho, “Measurement of foveal avascular zone dimensions and its reliability in healthy eyes using optical coherence tomography angiography,” American Journal of Ophthalmology, vol. 161, pp. 50-55, 2016. [127] R. Linderman, A. E. Salmon, M. Strampe, M. Russillo, J. Khan, and J. Carroll, “Assessing the accuracy of foveal avascular zone measurements using optical coherence tomography angiography: segmentation and scaling,” Translational Vision Science and Technology, vol. 6, no. 3, pp. 16-16, 2017. [128] N. Hussain and A. Hussain,
“Diametric measurement of foveal avascular zone in healthy young adults using optical coherence tomography angiography,” International Journal of Retina and Vitreous, vol. 2, no. 1, pp. 27, 2016. [129] A. Agarwal, J. J. Balaji, and V. Lakshminarayanan, “A new technique for estimating the foveal avascular zone dimensions,” in Ophthalmic Technologies XXX, International Society for Optics and Photonics, vol. 11218, pp. 112181R, 2020. [130] A. Agarwal, R. Raman, and V. Lakshminarayanan, “The Foveal Avascular Zone Image Database (FAZID),” in Proc. SPIE, vol. 11510, pp. 1151027-1, 2020. [131] J. J. Balaji, A. Agarwal, R. Raman, and V. Lakshminarayanan, “Comparison of foveal avascular zone in diabetic retinopathy, high myopia, and normal fundus images,” in Ophthalmic Technologies XXX, International Society for Optics and Photonics, vol. 11218, pp. 112181O, 2020. [132] E. R. Dougherty, “An Introduction to Morphological Image Processing,” SPIE, 1992. [133] Quang-nnguyen, “A toolbox for Foveal Avascular Zone (FAZ) Segmentation,” available at https://github.com/quang-nnguyen/FAZSEG. [134] M. Díaz, et al. “Automatic segmentation of the foveal avascular zone in ophthalmological OCT-A images,” PLoS One, vol. 14, no. 2, 2019. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20099 | - |
dc.description.abstract | 隨著硬體技術的躍進以及大數據時代的來臨,近五年影像處理技術也因深度學習方法的突破而有了跨時代的重大改變。因此在各個影像處理領域(包含影像去雜訊、影像去模糊、影像偵測辨識、影像前處理和影像後處理)中,許多先進的改善方法如火如荼地被提出。雖然深度學習在電腦視覺和影像處理已成為熱門的研究方向,但其中仍有改善的空間。例如深度學習可視為一種資料學習的方式,而正常來說我們無法預期訓練資料完美無缺,進而造成模型結果不準確的問題。相較之下,傳統機器學習的方法保留了人對影像物體結構特性的直覺性判斷與處理。基於此原則,我們可以充分應用影像中的區域特徵來開發演算法。本論文提出三個獨立方法,分別改善影像切割、物體偵測和物體辨識三個領域的研究成果。 在影像切割上,主要提出如何處理雜訊影響下的眼睛血管分割問題。大部分的分割問題都是針對日常物體,且具有相對明顯的「影像語意」;然而光學共軛斷層血管掃描(OCTA)影像為非常細微且模糊的視網膜血管圖像,並沒有有效的預訓練模型以及龐大的資料,難以在資料不足且細微血管受干擾的情況下協助醫生完成分割。有鑒於此,我們與長庚醫院合作設計了一個全新的資料庫,並提出一套結合深度學習與機器學習優點的方法,讓我們的成果可以達到實際應用。 在物體偵測上,我們提出如何改善小物體在大場景下的追蹤問題。在諸多物體偵測方法中,最有名的無外乎為單階段偵測(YOLO 系列)跟兩階段偵測(Faster R-CNN),這些方法是建立在乾淨且特徵明顯的物體上以達到高效率的成果;然而在處理物體小且背景雜訊干擾的影像時,其效果會有所侷限。因此我們與鯨豚實驗室合作,提出一套系統性的方案,解決實際人員整理大量遠距照片的切割與辨識問題。 最後在物體辨識上,我們研究並提出如何使用多重分類器,更有效率地合併多個模型,以突破傳統單一 CNN 在辨識資料庫上的限制。CNN 架構是非常好的分類器,然而單獨使用單一模型時,其成效會侷限於僅以成功率以及損失來抉擇最好的模型;相較之下,我們提出的方法可以非常有效率地提高原先系統的成果。 這三個問題都是影像處理上不可避免的問題。本論文針對這些問題在各個真實場景的應用,藉由改善模型的主要架構來達到更有效率且準確的結果。內容包含探討資料學習方法的缺點、訓練模型的缺點,以及如何改善這些缺失並融合傳統機器學習的方法,使結果更貼近人類視覺並提升演算法效能。 | zh_TW
dc.description.abstract | Along with upgraded hardware and the era of big data, image processing algorithms have improved significantly in the past five years owing to the application of deep learning. In every image processing field, such as noise reduction, image deblurring, pre-processing, and post-processing, deep learning has been explored in full swing. Although deep learning is an inevitable trend in computer vision and image processing, there is still room for improvement, especially in exactitude and accuracy. Deep learning can be regarded as data learning, since its results depend heavily on the completeness of the database. However, a comprehensive database is usually unattainable when handling real-life images, because human labeling takes considerable resources and time. In contrast, traditional handcrafted methods, which analyze and observe the statistics of object information, retain image features that correspond to human experience. Therefore, this thesis focuses on extracting the local patterns of objects and develops novel algorithms to compensate for the weaknesses of deep learning in three areas: segmentation, detection, and classification. The first is segmentation, where handling noise interference and highlighting object features are our main contributions. Most related work on semantic segmentation depends on a useful pre-trained model. However, our optical coherence tomography angiography (OCTA) medical images contain complicated, small, and blurred vessel topology in a limited dataset, and no applicable pre-trained model exists. Therefore, we collaborated with Chang Gung Memorial Hospital (CGMH) to develop a new database and to combine the advantages of traditional machine learning and deep learning toward real-life clinical application. The second direction is detection.
This work proposes a systematic solution for detecting small objects in a massive scene. Although there are many mature, state-of-the-art detectors such as Faster R-CNN and YOLO, they are insufficient for remote-shooting images with resolution limitations, small targets, camera shaking, and intense light pollution. A system that acquires significant detection and classification features from remote-shooting images of wild scenes is therefore invaluable. We collaborated with the Cetacean Lab at NTU, which has collected and labeled far-shooting photos of wild dolphins for more than ten years and provides a vast and valuable database. Last but not least, for classification, we investigated and proposed an ensemble algorithm for facial emotion recognition that boosts the performance of a traditional single CNN by combining several classification models, addressing how to choose the best model among many accuracy, loss, and objective criteria. These contributions are significant and pioneering, as they address inevitable and general image processing problems, and our achievements have been adopted by other teams, such as doctors and biologists. The proposed architecture effectively integrates the merits of various image processing and machine learning methods. | en
dc.description.provenance | Made available in DSpace on 2021-06-08T02:40:04Z (GMT). No. of bitstreams: 1 U0001-1510202012330900.pdf: 6091595 bytes, checksum: b3e18f7a2f1b1f1e31cd6ee9ee0e76b1 (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | 誌謝 i 中文摘要 ii ABSTRACT iv CONTENT vi LIST OF FIGURES x LIST OF TABLES xiv Chapter 1 Introduction 1 1.1 Background 1 1.2 Motivation 4 1.3 Main contribution 5 1.4 Organization 7 Chapter 2 Fundamentals of machine learning 8 2.1 Decision tree 8 2.2 Naïve Bayes rule 10 2.3 EM algorithm 11 Chapter 3 Fundamentals of optimization 17 3.1 Definition of optimization problem forms 17 3.2 Convex first-order methods 17 3.2.1 Definition of convex set and examples 18 3.2.2 Gradient descent 20 3.2.3 Subgradients and example 22 3.2.4 Stochastic Gradient Descent 23 3.3 Dual problem and examples 24 3.4 KKT 28 3.5 Newton methods and examples 30 3.6 Alternating Direction Method of Multipliers 31 Chapter 4 Backbone Alignment and Cascade Tiny Object Detecting Techniques for Dolphin Detection and Classification 36 4.1 Introduction 36 4.2 Challenges in analysis 38 4.3 Related work 40 4.4 Proposed algorithm 42 4.4.1 Cascade Small Object Detection (CSOD) 44 4.4.2 Segmentation System 45 4.4.3 Visualization to backbone-based classification (V2BC) 46 4.5 Experiments 51 4.5.1 Database 51 4.5.2 Evaluation 52 4.6 Discussion 57 4.7 Summary 58 Chapter 5 Segmentation boosting with compensation methods in optical coherence tomography angiography images 59 5.1 Introduction 59 5.2 OCTA Images and challenges in analysis 62 5.3 Proposed algorithm 65 5.3.1 Patch U-Net 65 5.3.2 Compensation methods for noise reduction 68 5.3.3 Vessel compensation 71 5.4 Experiments 72 5.4.1 Database 72 5.4.2 Evaluation 73 5.5 Summary 77 Chapter 6 Impact of Blood Pressure Control on Retinal Microvasculature in Patients with Chronic Kidney Disease 78 6.1 Introduction 78 6.2 Proposed methods 79 6.3 Experiment 81 6.4 Evaluation 83 6.5 Summary 90 Chapter 7 Foveal avascular zone on Retinal images 91 7.1 Introduction 91 7.2 Proposed methods 92 7.3 Experiment 93 7.4 Validation of FAZ segmentation 94 7.5 Summary 96 Chapter 8 Conclusions 97 REFERENCE 99 PUBLICATIONS 117 | |
dc.language.iso | zh-TW | |
dc.title | 先進深度演算法在醫學、生物以及一般影像處理上的應用 | zh_TW |
dc.title | Advanced Deep Learning Methods for Medical, Biological, and General Image Processing | en |
dc.type | Thesis | |
dc.date.schoolyear | 109-1 | |
dc.description.degree | 博士 | |
dc.contributor.oralexamcommittee | 郭景明(Jing-Ming Guo),簡鳳村(Feng-Tsun Chien),張佑榕(Ronald Y Chang),余執彰(Chih-Chang Yu) | |
dc.subject.keyword | 物體偵測,語意切割,神經網路,影像視覺,保育,去雜訊,光學共軛斷層血管圖,視網膜微血管, | zh_TW |
dc.subject.keyword | object detection,semantic segmentation,convolutional neural networks,computer vision,conservation,noise reduction,optical coherence tomography,retinal vasculature, | en
dc.relation.page | 121 | |
dc.identifier.doi | 10.6342/NTU202004274 | |
dc.rights.note | 未授權 | |
dc.date.accepted | 2020-10-16 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
Appears in Collections: | 電信工程學研究所 |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-1510202012330900.pdf (currently not authorized for public access) | 5.95 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.