Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73891

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林達德(Ta-Te Lin) | |
| dc.contributor.author | Yi-Hsuan Huang | en |
| dc.contributor.author | 黃怡瑄 | zh_TW |
| dc.date.accessioned | 2021-06-17T08:12:58Z | - |
| dc.date.available | 2019-08-20 | |
| dc.date.copyright | 2019-08-20 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-08-14 | |
| dc.identifier.citation | Aich, S., & Stavness, I. 2017. Leaf counting with deep convolutional and deconvolutional networks. In “Proc. of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)”, pp. 22-29. Venice, Italy.
Andriyenko, A., Roth, S., & Schindler, K. 2011. An analytical formulation of global occlusion reasoning for multi-target tracking. In “2011 IEEE International Conference on Computer Vision Workshops”, pp. 1839-1846. IEEE.
Bargoti, S., & Underwood, J. 2016. Deep fruit detection in orchards. arXiv preprint arXiv:1610.03677.
Bordes, A., Chopra, S., & Weston, J. 2014. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676.
Breitenstein, M. D., Reichlin, F., Leibe, B., Koller-Meier, E., & Van Gool, L. 2009. Robust tracking-by-detection using a detector confidence particle filter. In “Proc. IEEE Int. Conf. Comput. Vis.”, pp. 1515-1522.
Cakir, Y., Kırcı, M., Güneş, E. O., & Üstündağ, B. B. 2013. Detection of oranges in outdoor conditions. In “2013 Second International Conference on Agro-Geoinformatics”, pp. 500-503. IEEE.
Chen, B., He, C., Ma, Y., & Bai, Y. 2011. 3D image monitoring and modeling for corn plants growth in field condition. Transactions of the Chinese Society of Agricultural Engineering, 27(1), 366-372.
Chen, D., Neumann, K., Friedel, S., Kilian, B., Chen, M., Altmann, T., & Klukas, C. 2014. Dissecting the phenotypic components of crop plant growth and drought responses based on high-throughput image analysis. The Plant Cell, tpc-114.
Chen, S. W., Shivakumar, S. S., Dcunha, S., Das, J., Okon, E., Qu, C., & Kumar, V. 2017. Counting apples and oranges with deep learning: a data-driven approach. IEEE Robotics and Automation Letters, 2(2), 781-788.
Chene, Y., Rousseau, D., Lucidarme, P., Bertheloot, J., Caffier, V., Morel, P., & Chapeau-Blondeau, F. 2012. On the use of depth camera for 3D phenotyping of entire plants. Computers and Electronics in Agriculture, 82, 122-127.
Choi, W., & Savarese, S. 2012. A unified framework for multi-target tracking and collective activity recognition. In “Proc. Eur. Conf. Comput. Vis.”, pp. 215-230.
Ciresan, D., Meier, U., Masci, J., & Schmidhuber, J. 2012. Multi-column deep neural network for traffic sign classification. Neural Networks, 32, 333-338.
Cobb, J. N., DeClerck, G., Greenberg, A., Clark, R., & McCouch, S. 2013. Next-generation phenotyping: requirements and strategies for enhancing our understanding of genotype–phenotype relationships and its relevance to crop improvement. Theoretical and Applied Genetics, 126(4), 867-887.
Corke, P. I., & Hager, G. D. 1998. Vision-based robot control. Control Problems in Robotics and Automation, 177-192.
Das, J., Cross, G., Qu, C., Makineni, A., Tokekar, P., Mulgaonkar, Y., & Kumar, V. 2015. Devices, systems, and methods for automated monitoring enabling precision agriculture. In “2015 IEEE International Conference on Automation Science and Engineering”, pp. 462-469.
Dornbusch, T., Lorrain, S., Kuznetsov, D., Fortier, A., Liechti, R., Xenarios, I., & Fankhauser, C. 2012. Measuring the diurnal pattern of leaf hyponasty and growth in Arabidopsis–a novel phenotyping approach using laser scanning. Functional Plant Biology, 39(11), 860-869.
Farabet, C., Couprie, C., Najman, L., & LeCun, Y. 2012. Scene parsing with multiscale feature learning, purity trees, and optimal covers. arXiv preprint arXiv:1202.2160.
Farabet, C., Couprie, C., Najman, L., & LeCun, Y. 2013. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1915-1929.
Fernandez, R., Salinas, H., Sarria, J., & Armada, M. 2013. Validation of a multisensory system for fruit harvesting robots in lab conditions. In “First Iberian Robotics Conference”, Advances in Intelligent Systems and Computing, 252, 495-504.
Food Security Information Network. 2018. 2018 Global Report on Food Crises. World Food Programme.
Fragkiadaki, K., Zhang, W., Zhang, G., & Shi, J. 2012. Two-granularity tracking: Mediating trajectory and detection graphs for tracking under occlusions. In “Proc. Eur. Conf. Comput. Vis.”, pp. 552-565.
Ghosal, S., Blystone, D., Singh, A. K., Ganapathysubramanian, B., Singh, A., & Sarkar, S. 2018. An explainable deep machine vision framework for plant stress phenotyping. Proceedings of the National Academy of Sciences, 115(18), 4613-4618.
Girshick, R. 2015. Fast R-CNN. arXiv preprint arXiv:1504.08083.
Girshick, R., Donahue, J., Darrell, T., & Malik, J. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In “Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)”.
Golzarian, M. R., Frick, R. A., Rajendran, K., Berger, B., Roy, S., Tester, M., & Lun, D. S. 2011. Accurate inference of shoot biomass from high-throughput images of cereal plants. Plant Methods, 7(1), 2.
Gong, A., Yu, J., He, Y., & Qiu, Z. 2013. Citrus yield estimation based on images processed by an Android mobile phone. Biosystems Engineering, 115(2), 162-170.
Gongal, A., Amatya, S., Karkee, M., Zhang, Q., & Lewis, K. 2015. Sensors and systems for fruit detection and localization: A review. Computers and Electronics in Agriculture, 116, 8-19.
Großkinsky, D. K., Svensgaard, J., Christensen, S., & Roitsch, T. 2015. Plant phenomics and the need for physiological phenotyping across scales to narrow the genotype-to-phenotype knowledge gap. Journal of Experimental Botany, 66(18), 5429-5440.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A. R., Jaitly, N., ... & Kingsbury, B. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6), 82-97.
Hoffmeister, D., Waldhoff, G., Korres, W., Curdt, C., & Bareth, G. 2016. Crop height variability detection in a single field by multi-temporal terrestrial laser scanning. Precision Agriculture, 17(3), 296-312.
Hou, L., Wu, Q., Sun, Q., Yang, H., & Li, P. 2016. Fruit recognition based on convolution neural network. In “Natural Computation, Fuzzy Systems and Knowledge Discovery, 2016 12th International Conference”, pp. 18-22. IEEE.
Hu, W., Li, X., Luo, W., Zhang, X., Maybank, S., & Zhang, Z. 2012. Single and multiple object tracking using log-Euclidean Riemannian subspace and block-division appearance model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(12), 2420-2440.
Jay, S., Rabatel, G., Hadoux, X., Moura, D., & Gorretta, N. 2015. In-field crop row phenotyping from 3D modeling performed using structure from motion. Computers and Electronics in Agriculture, 110, 70-77.
Jean, S., Cho, K., Memisevic, R., & Bengio, Y. 2014. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007.
Jidong, L., De-An, Z., Wei, J., & Shihong, D. 2016. Recognition of apple fruit in natural environment. Optik-International Journal for Light and Electron Optics, 127(3), 1354-1362.
Jin, S., Gao, S., Su, Y., Wu, F., Hu, T., Liu, J., ... & Pang, S. 2018. Deep learning: Individual maize segmentation from terrestrial LiDAR data using Faster R-CNN and regional growth algorithms. Frontiers in Plant Science, 9, 866.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks. In “Advances in Neural Information Processing Systems”, pp. 1097-1105.
Kuwata, K., & Shibasaki, R. 2015. Estimating crop yields with deep learning and remotely sensed data. In “2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)”, pp. 858-861. IEEE.
LeCun, Y., Bengio, Y., & Hinton, G. 2015. Deep learning. Nature, 521(7553), 436-444.
Li, L., Zhang, Q., & Huang, D. 2014. A review of imaging techniques for plant phenotyping. Sensors, 14(11), 20078-20111.
Li, P., Lee, S. H., & Hsu, H. Y. 2011. Review on fruit harvesting method for potential use of automatic fruit harvesting systems. Procedia Engineering, 23, 351-366.
Liu, S., Whitty, M., & Cossell, S. 2015. Automatic grape bunch detection in vineyards for precise yield estimation. In “Machine Vision Applications (MVA), 2015 14th IAPR International Conference”, pp. 238-241. IEEE.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. 2016. SSD: Single shot multibox detector. In “European Conference on Computer Vision”, pp. 21-37. Springer, Cham.
Liu, X., Chen, S. W., Aditya, S., Sivakumar, N., Dcunha, S., Qu, C., & Kumar, V. 2018. Robust fruit counting: Combining deep learning, tracking, and structure from motion. In “Proc. IEEE/RSJ Int. Conf. on Intell. Robots Syst.”, pp. 1045-1052.
Lobet, G. 2017. Image analysis in plant sciences: Publish then perish. Trends in Plant Science. http://www.guillaumelobet.be/
Luo, W., Xing, J., Milan, A., Zhang, X., Liu, W., Zhao, X., & Kim, T. K. 2014. Multiple object tracking: A literature review. arXiv preprint arXiv:1409.7618.
Matos, D. A., Cole, B. J., Whitney, I. P., MacKinnon, K. J. M., Kay, S. A., & Hazen, S. P. 2014. Daily changes in temperature, not the circadian clock, regulate growth rate in Brachypodium distachyon. PLoS One, 9(6), e100072.
Mehta, S. S., & Burks, T. F. 2014. Vision-based control of robotic manipulator for citrus harvesting. Computers and Electronics in Agriculture, 102, 146-158.
Mikolov, T., Deoras, A., Povey, D., Burget, L., & Černocký, J. 2011. Strategies for training large scale neural network language models. In “Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop”, pp. 196-201. IEEE.
Mitzel, D., & Leibe, B. 2011. Real-time multi-person tracking with detector assisted structure propagation. In “Proc. IEEE Int. Conf. Comput. Vis. Workshops”, pp. 974-981.
Mitzel, D., Horbert, E., Ess, A., & Leibe, B. 2010. Multi-person tracking with sparse detection and continuous segmentation. In “Proc. Eur. Conf. Comput. Vis.”, pp. 397-410.
Moonrinta, J., Chaivivatrakul, S., Dailey, M. N., & Ekpanyapong, M. 2010. Fruit detection, tracking, and 3D reconstruction for crop mapping and yield estimation. In “IEEE, 11th Int. Conf. on Control Automation Robotics & Vision”, pp. 1181-1186.
Narvaez, F. Y., Reina, G., Torres-Torriti, M., Kantor, G., & Cheein, F. A. 2017. A survey of ranging and imaging techniques for precision agriculture phenotyping. IEEE/ASME Transactions on Mechatronics, 22(6), 2428-2439.
Neilson, E. H., Edwards, A. M., Blomstedt, C. K., Berger, B., Møller, B. L., & Gleadow, R. M. 2015. Utilization of a high-throughput shoot imaging system to examine the dynamic phenotypic responses of a C4 cereal crop plant to nitrogen and water deficiency over time. Journal of Experimental Botany, 66(7), 1817-1832.
Pan, S. J., & Yang, Q. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.
Parrish, E. A., & Goksel, A. K. 1977. Pictorial pattern recognition applied to fruit harvesting. Transactions of the ASAE, 20(5), 822-827.
Pound, M. P., French, A. P., Murchie, E. H., & Pridmore, T. P. 2014. Automated recovery of 3D models of plant shoots from multiple colour images. Plant Physiology, pp-114.
Qiu, R., Wei, S., Zhang, M., Li, H., Sun, H., Liu, G., & Li, M. 2018. Sensors for measuring plant phenotyping: A review. International Journal of Agricultural and Biological Engineering, 11(2), 1-17.
Rahnemoonfar, M., & Sheppard, C. 2017. Deep count: fruit counting based on deep simulated learning. Sensors, 17(4), 905.
Redmon, J., & Farhadi, A. 2017. YOLO9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242.
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. 2016. You only look once: Unified, real-time object detection. In “Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition”, pp. 779-788.
Ren, S., He, K., Girshick, R., & Sun, J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In “Advances in Neural Information Processing Systems”, pp. 91-99.
Rodriguez, M., Sivic, J., Laptev, I., & Audibert, J.-Y. 2011. Data-driven crowd analysis in videos. In “Proc. IEEE Int. Conf. Comput. Vis.”, pp. 1235-1242.
Sa, I., Ge, Z., Dayoub, F., Upcroft, B., Perez, T., & McCool, C. 2016. DeepFruits: A fruit detection system using deep neural networks. Sensors, 16(8), 1222.
Santos, T. T., & Rodrigues, G. C. 2016. Flexible three-dimensional modeling of plants using low-resolution cameras and visual odometry. Machine Vision and Applications, 27(5), 695-707.
Schertz, C. E., & Brown, G. K. 1968. Basic considerations in mechanizing citrus harvest. Transactions of the ASAE, 343-346.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In “Proceedings of the IEEE International Conference on Computer Vision”, pp. 618-626.
Senst, T., Eiselein, V., & Sikora, T. 2012. Robust local optical flow for feature tracking. IEEE Transactions on Circuits and Systems for Video Technology, 22(9), 1377.
Sermanet, P., Kavukcuoglu, K., Chintala, S., & LeCun, Y. 2013. Pedestrian detection with unsupervised multi-stage feature learning. In “Proc. International Conference on Computer Vision and Pattern Recognition”. http://arxiv.org/abs/1212.0142.
Shi, J., & Tomasi, C. 1993. Good features to track. Cornell University.
Shirai, Y., & Inoue, H. 1973. Guiding a robot by visual feedback in assembling tasks. Pattern Recognition, 5, 99-108.
Simonyan, K., & Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Stein, M., Bargoti, S., & Underwood, J. 2016. Image based mango fruit detection, localisation and yield estimation using multiple view geometry. Sensors, 16(11), 1915.
Sugimura, D., Kitani, K. M., Okabe, T., Sato, Y., & Sugimoto, A. 2009. Using individuality to track individuals: Clustering individual trajectories in crowds using local appearance and frequency trait. In “Proc. IEEE Int. Conf. Comput. Vis.”, pp. 1467-1474.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., & Rabinovich, A. 2015. Going deeper with convolutions. In “Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition”, pp. 1-9.
Taigman, Y., Yang, M., Ranzato, M., & Wolf, L. 2014. DeepFace: Closing the gap to human-level performance in face verification. In “Proc. Conference on Computer Vision and Pattern Recognition”, pp. 1701-1708.
Tompson, J., Jain, A., LeCun, Y., & Bregler, C. 2014. Joint training of a convolutional network and a graphical model for human pose estimation. In “Proc. Advances in Neural Information Processing Systems”, pp. 1799-1807.
U.S. Dept. Agric./AMS, Washington, D.C. 1991. United States Standards for Grades of Fresh Tomatoes. http://www.ams.usda.gov/standards/vegfm.htm.
Uijlings, J., Van de Sande, K., Gevers, T., & Smeulders, A. 2013. Selective search for object recognition. International Journal of Computer Vision, 104(2), 154-171.
Underwood, J. P., Rahman, M. M., Robson, A., Walsh, K. B., Koirala, A., & Wang, Z. 2018. Fruit load estimation in mango orchards - a method comparison. In “ICRA 2018 Workshop on Robotic Vision and Action in Agriculture”. Brisbane, Australia.
Van Henten, E. J., Schenk, E. J., & Willigenburg, G. V. 2010. Collision-free inverse kinematics of the redundant seven-link manipulator used in a cucumber picking robot. Biosystems Engineering, 106, 112-124.
Vázquez-Arellano, M., Griepentrog, H., Reiser, D., & Paraforos, D. 2016. 3-D imaging systems for agricultural applications - a review. Sensors, 16(5), 618.
Wang, Q., Nuske, S., Bergerman, M., & Singh, S. 2013. Automated crop yield estimation for apple orchards. In “Experimental Robotics”, pp. 745-758. Springer, Heidelberg.
Yang, M., Lv, F., Xu, W., & Gong, Y. 2009. Detection driven adaptive multi-cue integration for multiple human tracking. In “Proc. IEEE Int. Conf. Comput. Vis.”, pp. 1554-1561.
Yau, W. Y., & Wang, H. 1996. Robust hand-eye coordination. Advanced Robotics, 11(1), 57-73.
Zhang, L., & Grift, T. E. 2012. A LIDAR-based crop height measurement system for Miscanthus giganteus. Computers and Electronics in Agriculture, 85, 70-76.
Zhang, L., Li, Y., & Nevatia, R. 2008. Global data association for multi-object tracking using network flows. In “Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.”, pp. 1-8.
Zhang, Y., Teng, P., Shimizu, Y., Hosoi, F., & Omasa, K. 2016. Estimating 3D leaf and stem shape of nursery paprika plants by a novel multi-camera photography system. Sensors, 16(6), 874. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73891 | - |
| dc.description.abstract | 近年來,全球糧食危機和糧食不安全的問題越來越嚴重,利用高效率的育種技術培育出較高產量的作物是其中一種解決方式,而育種技術仰賴準確的作物表型分析系統。本研究之目的即為建構一高通量之果實偵測、定位及量測系統,利用RGB-D相機拍攝目標果實之影片,並採用電腦視覺及深度類神經網路結構,達到自動化之目的。相較於其他作品利用單一影像進行單一作物之量測,本研究將利用影片提高果實量測之精準度,且方便進行影像之取得,選擇番茄為目標果實。深度學習架構採用YOLOv2,為一即時之物件偵測演算法,作為果實之辨識及定位的物件偵測器,其於靜態影像之最佳偵測命中率為88.86%。接著,開發個別果實追蹤演算法以追蹤影片中數個果實,包括線上追蹤及線下追蹤,線上追蹤利用特徵點偵測和配對、光流法及剛體轉換,搭配有限狀態機進行各顆果實的追蹤,亦針對遮蔽問題,採用閾值設定及去噪。線下追蹤則利用投票法降低由物件偵測器及深度影像造成的假警報。完成追蹤演算法後,將分析得果實形態學參數如個別果實之成熟度、大小、總果實之計數結果,以及二維空間分布圖。本研究共進行三次實驗,前兩次為系統架設及測試組,第三次實驗為驗證組,果實實際計數之最佳平均絕對相對計數誤差為9.91%,前兩次實驗中大部分的果實成熟度為二級和三級,且果實平均截面積大小分別為45.68及42.01平方公分,而驗證實驗之平均絕對相對計數誤差為15.15%,果實成熟度則大部分為三至五級,且果實平均截面積大小為40.11平方公分,由結果之合理一致性證明此實驗及提出之系統具備重複性及再現性。利用此自動化之高通量果實表型分析系統,可即時獲得溫室中果實之分布及各顆果實的生長資訊,提供農民果實的生長過程量化指標,亦可進行自動化之紀錄與分析,提供產量評估及栽培作業改善資訊,以持續優化栽種技術及提高產量。 | zh_TW |
| dc.description.abstract | Food crises and food security issues are becoming increasingly severe. One potential solution is efficient breeding, which requires accurate and detailed plant phenotyping. In this work, a high-throughput technique for fruit detection, localization, and measurement from video streams is proposed, combining computer vision, an RGB-D camera, and deep neural networks. In contrast to other works that develop a separate method for each type of fruit using single images, this work exploits video information for precise detection and analysis of the fruits' morphological parameters, with tomato as the target fruit. A real-time object detection algorithm based on YOLOv2, a deep neural network detector, is used for fruit detection and localization on video frames, achieving a best hit rate of 88.86%. An individual fruit tracking algorithm is then applied throughout the video stream to track multiple fruits. The online tracking algorithm, built on a finite state machine, combines feature detection and matching, optical flow, and rigid transformation, and handles occlusion with techniques such as threshold indices and denoising. The offline tracking algorithm uses a voting method to reduce false alarms caused by the object detector and the depth images. Finally, morphological parameters such as individual fruit ripening stage, fruit size, and 2D spatial distribution maps are obtained. Three experiments were conducted: the first two for system construction and testing, and the third for system validation. The best mean absolute relative fruit count error is 9.91%. In the first two experiments, most fruits are at ripening stages 2 and 3, and the average cross-sectional areas of the tracked fruits are 45.68 cm2 and 42.01 cm2, respectively. In the validation experiment, the mean absolute relative fruit count error is 15.15%, most fruits are at ripening stages 3 to 5, and the average cross-sectional area of the tracked fruits is 40.11 cm2. The consistency of these results shows that the proposed system is repeatable and reproducible. With the proposed automatic high-throughput fruit phenotyping system, fruit distribution and per-fruit growth information in the greenhouse can be obtained in real time. By recording and analyzing these quantitative growth statistics, farmers can estimate yield and improve cultivation practices to increase production. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T08:12:58Z (GMT). No. of bitstreams: 1 ntu-108-R06631001-1.pdf: 7827899 bytes, checksum: 96c7f91a034c0976bcca31be36fcd793 (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | 誌謝 i
摘要 ii
Abstract iii
目錄 v
圖目錄 viii
表目錄 xi
第一章 緒論 1
1.1 前言 1
1.2 研究目的 3
第二章 文獻探討 5
2.1 果實偵測及定位 5
2.1.1 植株表現型 6
2.1.2 產量預估 7
2.1.3 機械採收 8
2.2 RGB-D影像系統 9
2.2.1 RGB-D影像系統原理 9
2.2.2 RGB-D影像系統於作物表型之應用 10
2.3 二維影像物件偵測及定位 11
2.3.1 深度學習 11
2.3.2 卷積神經網路 12
2.3.3 YOLOv2 15
2.3.4 深度學習應用於農業領域 17
2.4 多物件追蹤演算法 18
2.4.1 多物件追蹤演算法組成 19
2.4.2 物件遮蔽處理 20
2.4.3 果實追蹤 21
第三章 研究方法 22
3.1 影像擷取與蒐集之硬體設計 22
3.2 深度相機之選擇 26
3.3 軟體設計及流程架構 27
3.4 蔬果偵測 28
3.4.1 蔬果偵測深度模型之建置 28
3.4.2 蔬果偵測模型之訓練 32
3.4.3 蔬果偵測模型之測試 36
3.5 蔬果追蹤 37
3.5.1 線上追蹤 37
3.5.2 線下追蹤 43
3.6 蔬果追蹤結果分析 44
3.6.1 果實計數 44
3.6.2 果實成熟度及大小 44
3.6.3 果實二維空間分布圖 46
3.7 軟體架構 46
3.7.1 系統主程式 46
3.7.2 資料分析 47
第四章 結果與討論 49
4.1 深度相機之性能測試結果 49
4.2 蔬果偵測模型分析 50
4.3 蔬果追蹤分析 57
4.3.1 深度影像及果實偵測前處理 57
4.3.2 特徵點配對方法分析 58
4.3.3 蔬果追蹤結果 60
4.3.4 蔬果追蹤結果分析 62
4.3.4.1 果實計數 62
4.3.4.2 果實成熟度、大小及2維分佈圖 74
4.4 驗證實驗 79
第五章 結論與建議 85
5.1 結論 85
5.2 建議 87
參考文獻 88 | |
| dc.language.iso | zh-TW | |
| dc.subject | 物件偵測 | zh_TW |
| dc.subject | 物件追蹤 | zh_TW |
| dc.subject | 電腦視覺 | zh_TW |
| dc.subject | 表型分析 | zh_TW |
| dc.subject | 自動化 | zh_TW |
| dc.subject | Phenotyping | en |
| dc.subject | Computer vision | en |
| dc.subject | Object detection | en |
| dc.subject | Automation | en |
| dc.subject | Object tracking | en |
| dc.title | 應用深度學習方法發展高通量溫室番茄果實表型分析系統 | zh_TW |
| dc.title | Development of a High-Throughput Phenotyping System for Greenhouse Tomato Fruits Based on Deep Learning | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 107-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 蔡燿全,郭彥甫 | |
| dc.subject.keyword | 物件偵測, 物件追蹤, 電腦視覺, 表型分析, 自動化 | zh_TW |
| dc.subject.keyword | Automation, Computer vision, Object detection, Object tracking, Phenotyping | en |
| dc.relation.page | 96 | |
| dc.identifier.doi | 10.6342/NTU201903006 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2019-08-15 | |
| dc.contributor.author-college | 生物資源暨農學院 | zh_TW |
| dc.contributor.author-dept | 生物產業機電工程學研究所 | zh_TW |
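The abstracts above quote a mean absolute relative count error (平均絕對相對計數誤差) of 9.91% at best and 15.15% in the validation experiment, but the record itself never defines the metric. A standard formulation consistent with those figures, assuming an estimated count $N^{\mathrm{est}}_k$ and a manually obtained ground-truth count $N^{\mathrm{true}}_k$ for each of $K$ runs, would be:

$$\mathrm{MARE}=\frac{1}{K}\sum_{k=1}^{K}\frac{\left|N^{\mathrm{est}}_{k}-N^{\mathrm{true}}_{k}\right|}{N^{\mathrm{true}}_{k}}\times 100\%$$

For a single run ($K=1$), estimating 33 fruits against a true count of 39 would give $6/39\approx 15.4\%$, the same order as the reported validation error. This is a hedged reconstruction from the reported values, not a definition quoted from the thesis.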
| Appears in Collections: | 生物機電工程學系 | |
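To make the offline voting step and the fruit-size measurement described in the abstract concrete, a minimal Python sketch follows. It is not the thesis code (none is published in this record): the observation format, the vote threshold `min_votes`, the camera intrinsics `FX`/`FY`, and the inscribed-ellipse area model are all assumptions for illustration.

```python
import math
from collections import defaultdict

# Assumed RGB-D intrinsics (focal lengths in pixels); real values would
# come from the camera's calibration, not from this sketch.
FX, FY = 615.0, 615.0

def offline_vote_filter(observations, min_votes=5):
    """Offline voting: keep only tracks observed in at least `min_votes`
    distinct frames; short-lived tracks are treated as false alarms from
    the object detector or the depth image. `observations` is an iterable
    of (frame_idx, track_id) pairs emitted by the online tracker; the
    threshold is a hypothetical tuning value."""
    votes = defaultdict(set)
    for frame_idx, track_id in observations:
        votes[track_id].add(frame_idx)  # one vote per frame per track
    return {tid for tid, frames in votes.items() if len(frames) >= min_votes}

def cross_section_area_cm2(box_wh_px, depth_m):
    """Back-project a detection box (width, height in pixels) at median
    depth `depth_m` (metres) with the pinhole model (X = x * Z / f), then
    take the inscribed ellipse as the fruit's visible cross-section."""
    w_px, h_px = box_wh_px
    width_m = w_px * depth_m / FX
    height_m = h_px * depth_m / FY
    return math.pi / 4.0 * width_m * height_m * 1e4  # m^2 -> cm^2

def abs_relative_count_error(estimated, actual):
    """Scalar form of the count-error metric sketched above."""
    return abs(estimated - actual) / actual

if __name__ == "__main__":
    # Toy data: three candidate tracks; only two persist long enough.
    obs = ([(f, 0) for f in range(2)] +
           [(f, 1) for f in range(8)] +
           [(f, 2) for f in range(12)])
    kept = offline_vote_filter(obs, min_votes=5)
    print("fruits counted:", len(kept))  # -> 2
    print("area: %.2f cm^2" % cross_section_area_cm2((90, 90), 0.5))
    print("count error: %.2f%%" % (100 * abs_relative_count_error(len(kept), 2)))
```

Under these assumed intrinsics, a 90 x 90 px box at 0.5 m back-projects to roughly 42 cm2, the same ballpark as the 40.11 to 45.68 cm2 averages reported in the abstract, so the pinhole reconstruction is at least a plausible reading of how the sizes could be obtained.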
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (restricted access) | 7.64 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.