Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69373
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 丁建均 | |
dc.contributor.author | Alexandre Constantin | en |
dc.contributor.author | 康亞力 | zh_TW |
dc.date.accessioned | 2021-06-17T03:14:04Z | - |
dc.date.available | 2018-07-19 | |
dc.date.copyright | 2018-07-19 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-07-11 | |
dc.identifier.citation | Chapter 1. Introduction
[1] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, Nov. 1973, ISSN: 0018-9472. DOI: 10.1109/TSMC.1973.4309314.
[2] M. Barzohar and D. B. Cooper, “Automatic finding of main roads in aerial images by using geometric-stochastic models and estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 7, pp. 707–721, Jul. 1996, ISSN: 0162-8828. DOI: 10.1109/34.506793.
[3] D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. B, vol. 207, no. 1167, pp. 187–217, 1980.
[4] D. Marr and T. Poggio, “A computational theory of human stereo vision,” Proc. R. Soc. Lond. B, vol. 204, no. 1156, pp. 301–328, 1979.
[5] D. Cortes, G. Calderón, A. Arista, K. Toscano, and M. Nakano, “Aerial image classification using texture and color-based descriptors,” in 2016 IEEE 1er Congreso Nacional de Ciencias Geoespaciales (CNCG), Dec. 2016, pp. 1–4. DOI: 10.1109/CNCG.2016.7985077.
[6] R. Peteri, “Extraction of street networks in urban areas from very high spatial resolution satellite images,” Theses, École Nationale Supérieure des Mines de Paris, Dec. 2003. [Online]. Available: https://pastel.archives-ouvertes.fr/pastel-00000508.
[7] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[8] V. Poulain, “Fusion d’images optique et radar à haute résolution pour la mise à jour de bases de données cartographiques,” Oct. 2010. [Online]. Available: http://oatao.univ-toulouse.fr/7260/.
[9] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” CoRR, vol. abs/1505.04597, 2015. arXiv: 1505.04597. [Online]. Available: http://arxiv.org/abs/1505.04597.
[10] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” arXiv preprint arXiv:1706.05587, 2017.
Chapter 2. The Fundamentals of Machine Learning
[9] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” CoRR, vol. abs/1505.04597, 2015. arXiv: 1505.04597. [Online]. Available: http://arxiv.org/abs/1505.04597.
[10] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” arXiv preprint arXiv:1706.05587, 2017.
[11] T. M. Mitchell, Machine Learning. New York: The McGraw-Hill Companies, 1997.
[12] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[13] T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude,” COURSERA: Neural Networks for Machine Learning, vol. 4, no. 2, pp. 26–31, 2012.
[14] T. Dozat, “Incorporating Nesterov momentum into Adam,” 2016.
[15] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
[16] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural networks,” in Advances in Neural Information Processing Systems, 2017, pp. 972–981.
[17] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
[18] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
[19] S. A. Bigdeli and M. Zwicker, “Image restoration using autoencoding priors,” CoRR, vol. abs/1703.09964, 2017. arXiv: 1703.09964. [Online]. Available: http://arxiv.org/abs/1703.09964.
[20] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio, “The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation,” in Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, IEEE, 2017, pp. 1175–1183.
[21] Y. Wei, Z. Wang, and M. Xu, “Road structure refined CNN for road extraction in aerial image,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 709–713, May 2017, ISSN: 1545-598X. DOI: 10.1109/LGRS.2017.2672734.
[22] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[23] V. Mnih, “Machine learning for aerial image labeling,” PhD thesis, University of Toronto, 2013.
[24] Z. Zhong, J. Li, W. Cui, and H. Jiang, “Fully convolutional networks for building and road extraction: Preliminary results,” in Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International, IEEE, 2016, pp. 1591–1594.
[25] J. D. Wegner, J. A. Montoya-Zegarra, and K. Schindler, “Road networks as collections of minimum cost paths,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 108, pp. 128–137, 2015.
[26] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, Apr. 2018, ISSN: 0162-8828. DOI: 10.1109/TPAMI.2017.2699184.
Chapter 3. Review of Common Techniques for Image Segmentation
[6] R. Peteri, “Extraction of street networks in urban areas from very high spatial resolution satellite images,” Theses, École Nationale Supérieure des Mines de Paris, Dec. 2003. [Online]. Available: https://pastel.archives-ouvertes.fr/pastel-00000508.
[27] S.-C. Pei and J.-J. Ding, “Improved Harris’ algorithm for corner and edge detections,” in Image Processing, 2007. ICIP 2007. IEEE International Conference on, IEEE, vol. 3, 2007, pp. III–57.
[28] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[29] Wikipedia contributors, Support vector machine — Wikipedia, the free encyclopedia, 2018. [Online; accessed 26-June-2018]. Available: https://en.wikipedia.org/wiki/Support_vector_machine.
[30] M. Rossi, S. Benatti, E. Farella, and L. Benini, “Hybrid EMG classifier based on HMM and SVM for hand gesture recognition in prosthetics,” in Proceedings of the IEEE International Conference on Industrial Technology, vol. 2015, Feb. 2015.
[31] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967.
[32] Wikipedia contributors, K-nearest neighbors algorithm — Wikipedia, the free encyclopedia, 2018. [Online; accessed 15-June-2018]. Available: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm#cite_note-MirkesKnn-21.
[33] A. K. Jain, R. P. W. Duin, and J. Mao, “Statistical pattern recognition: A review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4–37, 2000.
[34] B. Tay, J. K. Hyun, and S. Oh, “A machine learning approach for specification of spinal cord injuries using fractional anisotropy values obtained from diffusion tensor images,” vol. 2014, p. 276589, Jan. 2014.
[35] R. R. Larson, “Introduction to information retrieval,” Journal of the American Society for Information Science and Technology, vol. 61, no. 4, pp. 852–853, 2010.
[36] B. G. Batchelor, Pattern Recognition: Ideas in Practice. Springer Science & Business Media, 2012.
[37] R. S. Michalski, R. E. Stepp, and E. Diday, “A recent advance in data analysis: Clustering objects into classes characterized by conjunctive concepts,” in Progress in Pattern Recognition, Elsevier, 1982, pp. 33–56.
[38] M. Pesaresi and J. A. Benediktsson, “A new approach for the morphological segmentation of high-resolution satellite imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 2, pp. 309–320, Feb. 2001, ISSN: 0196-2892. DOI: 10.1109/36.905239.
[39] I. Laptev, H. Mayer, T. Lindeberg, W. Eckstein, C. Steger, and A. Baumgartner, “Automatic extraction of roads from aerial images based on scale space and snakes,” Machine Vision and Applications, vol. 12, no. 1, pp. 23–31, Jul. 2000, ISSN: 1432-1769. DOI: 10.1007/s001380050121. [Online]. Available: https://doi.org/10.1007/s001380050121.
[40] L. Guigues and J.-M. Viglino, “Automatic road extraction through light propagation simulation,” vol. XXXIII, Jan. 2000.
[41] L. C, X. Descombes, J. Zerubia, and N. Baghdadi, “Extraction automatique des réseaux linéiques à partir d’images satellitaires et aériennes par processus markov objets,” vol. 170, pp. 13–22, Jan. 2003.
Chapter 4. Proposed Road Extraction using SVM
[28] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[42] Z. Li, X.-M. Wu, and S.-F. Chang, “Segmentation using superpixels: A bipartite graph partitioning approach,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, IEEE, 2012, pp. 789–796.
[43] Wikipedia contributors, K-means clustering — Wikipedia, the free encyclopedia, 2018. [Online; accessed 17-June-2018]. Available: https://en.wikipedia.org/wiki/K-means_clustering.
Chapter 5. Proposed Road Extraction based on Deep Learning
[9] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” CoRR, vol. abs/1505.04597, 2015. arXiv: 1505.04597. [Online]. Available: http://arxiv.org/abs/1505.04597.
[10] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” arXiv preprint arXiv:1706.05587, 2017.
[18] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
[21] Y. Wei, Z. Wang, and M. Xu, “Road structure refined CNN for road extraction in aerial image,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 709–713, May 2017, ISSN: 1545-598X. DOI: 10.1109/LGRS.2017.2672734.
[44] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” arXiv preprint arXiv:1603.07285, 2016.
[45] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, “Deconvolutional networks,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE, 2010, pp. 2528–2535.
[46] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[47] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[48] D. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” CoRR, vol. abs/1511.07289, 2015. arXiv: 1511.07289. [Online]. Available: http://arxiv.org/abs/1511.07289.
[49] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” CoRR, vol. abs/1412.6980, 2014. arXiv: 1412.6980. [Online]. Available: http://arxiv.org/abs/1412.6980.
Chapter 6. Simulation Results
[21] Y. Wei, Z. Wang, and M. Xu, “Road structure refined CNN for road extraction in aerial image,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 709–713, May 2017, ISSN: 1545-598X. DOI: 10.1109/LGRS.2017.2672734.
[23] V. Mnih, “Machine learning for aerial image labeling,” PhD thesis, University of Toronto, 2013.
[24] Z. Zhong, J. Li, W. Cui, and H. Jiang, “Fully convolutional networks for building and road extraction: Preliminary results,” in Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International, IEEE, 2016, pp. 1591–1594.
[25] J. D. Wegner, J. A. Montoya-Zegarra, and K. Schindler, “Road networks as collections of minimum cost paths,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 108, pp. 128–137, 2015. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69373 | - |
dc.description.abstract | 在計算機視覺中,圖像分類由於其廣泛應用在許多領域上,如圖像壓縮、目標跟踪、圖像分析和地形開發追蹤,一直佔有重要地位。在這個特定的區域,本文我們將探討圖像分類在衛星圖像中提取道路網絡的應用上。其中包括給每個像素做標籤,區分它是否屬於或不屬於道路。這個應用在地圖繪製或監控安全上一直被非常廣泛使用。
為了準確地提取道路,我們提出使用一種基於機器學習的技術,並附加圖像分類算法作為後處理。我們的程序描述如下:首先,將輸入的圖像切割成小方塊,可以更好地適用於簡化後的神經網絡,以便減少計算時間和提供更詳細的信息。除此之外,我們還對顏色進行預處理來增加輸入數據域並且利用顏色通道和附加的梯度通道作為神經網絡的入口。最後通過圖像分類算法對檢測到的道路進行驗證,也更進一步地利用後處理完善缺失的道路,建立最終的衛星道路圖。 測試結果表明,我們提出的方法能夠檢測出大部分道路,並優於目前最先進的方法。 | zh_TW |
dc.description.abstract | In computer vision, image classification has always held an important place due to its widespread applications, such as image compression, object tracking, image analysis, and monitoring ground-use evolution. In this particular area, we use classification to extract road networks from satellite images: each pixel is given a label according to whether or not it belongs to a road. The applications are very wide, from network mapping to territory monitoring.
To extract the roads accurately, we propose a machine-learning-based technique with an additional classification algorithm as post-processing. Our procedure is as follows. First, the input image is cut into small squares, which fit better into a reduced neural network, decrease computation time, and provide more detailed information; we also increase the range of input data using a color pre-processing algorithm. Then we feed the color channels and additional gradient channels into a neural network trained to classify road and non-road pixels. Finally, we verify the detected roads with an additional classification algorithm and, with post-processing, complete the missing road parts to construct the final road map. Simulations show that our proposed method detects most of the road pixels and outperforms state-of-the-art methods. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T03:14:04Z (GMT). No. of bitstreams: 1 ntu-107-R06942123-1.pdf: 13443512 bytes, checksum: e6004410c2feed8ac884f82dfb6253eb (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | Oral Defense Committee Certification (口試委員會審定書)
Acknowledgements
Abstract (Chinese, 摘要)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Motivation
  1.2 Contribution
  1.3 Organization of the Thesis
Chapter 2 The Fundamentals of Machine Learning
  2.1 Introduction
  2.2 Training / Inference
  2.3 Artificial Neural Network
  2.4 Convolutional Neural Network
    2.4.1 CNN Single Element Structure
    2.4.2 Atrous Convolution Layer
  2.5 Autoencoder
  2.6 Segmentation Related Works
    2.6.1 CNN Related Work
    2.6.2 Atrous Convolution Related Work
  2.7 Summary
Chapter 3 Review of Common Techniques for Image Segmentation
  3.1 Introduction
  3.2 Color Transformation
    3.2.1 Brightness
    3.2.2 Luminosity - Lighten and Darken
    3.2.3 Contrast
  3.3 Edge Detection
    3.3.1 Convolution Techniques
    3.3.2 Ridge Detection
  3.4 Support Vector Machine
    3.4.1 Linear Classification
    3.4.2 General Case
  3.5 k-Nearest Neighbor
    3.5.1 Algorithm
    3.5.2 Classifying Using the Distance
  3.6 Summary
Chapter 4 Proposed Road Extraction using SVM
  4.1 An Overview of our Algorithm
  4.2 Support Vector Machine - A Color Based Classifier
    4.2.1 Road Data - Color Distribution
    4.2.2 The Algorithm
  4.3 k-Means - A Histogram Based Segmentation
    4.3.1 Introduction
    4.3.2 The Algorithm
  4.4 Post-Processing
  4.5 Summary
Chapter 5 Proposed Road Extraction based on Deep Learning
  5.1 An Overview of our Algorithm
  5.2 CNN for Road Extraction - A Modified U-net Architecture
    5.2.1 U-Net Architecture
    5.2.2 Atrous Spatial Pyramid Pooling
    5.2.3 Residual Cell Unit
    5.2.4 Proposed Network
  5.3 Training Procedure
    5.3.1 Proposed Loss Function
    5.3.2 Data Augmentation
    5.3.3 Other Training Characteristics
    5.3.4 From U-net to Our Proposed Network - Training Comparison
  5.4 Image Transformation
    5.4.1 Input of our Network
    5.4.2 Additional Input Transformation
  5.5 Post-Processing
  5.6 Summary
Chapter 6 Simulation Results
  6.1 Database
    6.1.1 Massachusetts Dataset
    6.1.2 Other Satellite Images
  6.2 Performance Measurement
  6.3 SVM Visual Results and Analysis
  6.4 Neural Network Performance
    6.4.1 Visual Results
    6.4.2 Comparison with State-of-the-Art
    6.4.3 Additional Comparison
  6.5 Final Road Extraction based on Neural Network
    6.5.1 The Influence of Image Transformation
    6.5.2 The Influence of Adding Neighbors
    6.5.3 Comparison with Neural Network Performance - Summary
    6.5.4 Visual Results
Chapter 7 Conclusion and Future Work
  7.1 Conclusion
  7.2 Future Work
REFERENCES | |
dc.language.iso | en | |
dc.title | 機器學習在衛星圖片道路擷取上的應用 | zh_TW |
dc.title | Road Extraction from Satellite Images Using Machine Learning Techniques | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master (碩士) | |
dc.contributor.oralexamcommittee | 郭景明,王鈺強,簡鳳村 | |
dc.subject.keyword | 衛星圖,圖像分類,計算機視覺,道路提取,機器學習,神經網絡 | zh_TW |
dc.subject.keyword | Satellite Images, Image Classification, Computer Vision, Road Extraction, Machine Learning, Neural Network | en |
dc.relation.page | 85 | |
dc.identifier.doi | 10.6342/NTU201801173 | |
dc.rights.note | Paid authorization (有償授權) | |
dc.date.accepted | 2018-07-11 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
Appears in Collections: | Graduate Institute of Communication Engineering
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 13.13 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
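
The input pipeline described in the abstract (cutting the image into small squares and feeding color channels plus additional gradient channels into the network) can be sketched minimally as below. This is an illustrative sketch only: the tile size, the gradient operator, and the function names are assumptions, not the thesis's exact settings.

```python
# Illustrative sketch of the abstract's input pipeline: tile a satellite
# image into small squares, then append a gradient-magnitude channel to
# the RGB channels. Tile size 64 and np.gradient are assumed for
# illustration; the thesis's actual parameters may differ.
import numpy as np

def make_tiles(image, tile=64):
    """Cut an H x W x C image into non-overlapping tile x tile squares."""
    h, w, c = image.shape
    h, w = h - h % tile, w - w % tile          # drop ragged borders
    image = image[:h, :w]
    return (image
            .reshape(h // tile, tile, w // tile, tile, c)
            .swapaxes(1, 2)                    # group tiles along one axis
            .reshape(-1, tile, tile, c))       # N x tile x tile x C

def add_gradient_channels(tile_batch):
    """Append the gray-level gradient magnitude as an extra channel."""
    gray = tile_batch.mean(axis=-1)                    # N x T x T
    gy, gx = np.gradient(gray, axis=(1, 2))            # per-pixel derivatives
    mag = np.sqrt(gx ** 2 + gy ** 2)[..., None]        # N x T x T x 1
    return np.concatenate([tile_batch, mag], axis=-1)  # N x T x T x (C+1)

img = np.random.rand(300, 400, 3)       # stand-in for a satellite image
tiles = make_tiles(img, tile=64)        # shape (24, 64, 64, 3)
tiles = add_gradient_channels(tiles)    # shape (24, 64, 64, 4)
```

Each 4-channel tile would then be a single training/inference input for the pixel-wise road classifier.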