Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93828

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 李佳翰 | zh_TW |
| dc.contributor.advisor | Jia-Han Li | en |
| dc.contributor.author | 許宏維 | zh_TW |
| dc.contributor.author | Hung-Wei Hsu | en |
| dc.date.accessioned | 2024-08-08T16:26:08Z | - |
| dc.date.available | 2024-08-09 | - |
| dc.date.copyright | 2024-08-08 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-07-23 | - |
| dc.identifier.citation | References
[1] NG, Carl Khee Yew; HEW, Choy Sin. Orchid pseudobulbs – 'false' bulbs with a genuine importance in orchid growth and survival! Scientia Horticulturae, 2000, 83.3-4: 165-172.
[2] ASHRAF, Muhammad Ali; KONDO, Naoshi; SHIIGI, Tomoo. Use of machine vision to sort tomato seedlings for grafting robot. Engineering in Agriculture, Environment and Food, 2011, 4.4: 119-125.
[3] WÄLDCHEN, Jana, et al. Automated plant species identification—Trends and future directions. PLoS Computational Biology, 2018, 14.4: e1005993.
[4] WANG, Chien-Yao; BOCHKOVSKIY, Alexey; LIAO, Hong-Yuan Mark. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. p. 7464-7475.
[5] LI, Mingyong, et al. Vision-based a seedling selective planting control system for vegetable transplanter. Agriculture, 2022, 12.12: 2064.
[6] XIN, Jin, et al. Design and implementation of intelligent transplanting system based on photoelectric sensor and PLC. Future Generation Computer Systems, 2018, 88: 127-139.
[7] LIU, Jun; WANG, Xuewei. Plant diseases and pests detection based on deep learning: a review. Plant Methods, 2021, 17: 1-18.
[8] KC, Kamal, et al. Impacts of background removal on convolutional neural networks for plant disease classification in-situ. Agriculture, 2021, 11.9: 827.
[9] SUN, Yu, et al. Enhancing UAV detection in surveillance camera videos through spatiotemporal information and optical flow. Sensors, 2023, 23.13: 6037.
[10] VARATHARASAN, Vinorth, et al. Improving learning effectiveness for object detection and classification in cluttered backgrounds. In: 2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS). IEEE, 2019. p. 78-85.
[11] 周明燕; 廖玉珠; 張定霖; 陳駿季. 蝴蝶蘭組織培養種苗生產費用結構分析 [Cost structure analysis of Phalaenopsis tissue-culture seedling production]. 種苗改良繁殖場 (Taiwan Seed Improvement and Propagation Station), 2008, (193).
[12] MAYER, Juliana Lischka Sampaio; CARMELLO-GUERREIRO, Sandra Maria; APPEZZATO-DA-GLÓRIA, Beatriz. Anatomical development of the pericarp and seed of Oncidium flexuosum Sims (Orchidaceae). Flora - Morphology, Distribution, Functional Ecology of Plants, 2011, 206.6: 601-609.
[13] CHANG, Chia-Man, et al. The effects of light treatments on growth and flowering characteristics of Oncidesa Gower Ramsey 'Honey Angel' at different growth stages. Agriculture, 2023, 13.10: 1937.
[14] CHIN, Dan-Chu, et al. Prolonged exposure to elevated temperature induces floral transition via up-regulation of cytosolic ascorbate peroxidase 1 and subsequent reduction of the ascorbate redox ratio in Oncidium hybrid orchid. Plant and Cell Physiology, 2014, 55.12: 2164-2176.
[15] XU, Zhibo, et al. A real-time zanthoxylum target detection method for an intelligent picking robot under a complex background, based on an improved YOLOv5s architecture. Sensors, 2022, 22.2: 682.
[16] ZHANG, Chenxi; KANG, Feng; WANG, Yaxiong. An improved apple object detection method based on lightweight YOLOv4 in complex backgrounds. Remote Sensing, 2022, 14.17: 4150.
[17] LIAO, Shengcai, et al. Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, 2010. p. 1301-1306.
[18] RONNEBERGER, Olaf; FISCHER, Philipp; BROX, Thomas. U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III. Springer International Publishing, 2015. p. 234-241.
[19] QIN, Xuebin, et al. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognition, 2020, 106: 107404.
[20] SUNIL, G. C., et al. A study on deep learning algorithm performance on weed and crop species identification under different image background. Artificial Intelligence in Agriculture, 2022, 6: 242-256. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93828 | - |
| dc.description.abstract | 在背景複雜的環境下進行精確的影像辨識是機器視覺領域面臨的一大挑戰。本研究致力於開發一種應用卷積神經網絡架構YOLOv7和U2NET的影像辨識系統,以提高在背景相似的環境中目標辨識的準確性。YOLOv7的實時物件偵測能力與U2NET在去除背景方面的優勢結合,能夠識別並正確標記出背景中的目標物。
本研究首先分析了蘭花需要換盆的時機,並且分析現有園藝產業中的即時辨識模型,發現背景複雜度對現有影像辨識模型的影響,因此透過不同的資料集,讓YOLOv7訓練出了四種模型。資料集總共分為去背、無去背兩類,而模型分為去背模型、無去背模型、不同標籤混和模型以及同標籤混和模型。在整理資料集時,透過U2NET模型去除背景的能力,加速資料去背處理的效率,藉此取代YOLOv7的遮罩(Mask)功能。進一步優化了資料的處理,透過YOLOv7模型,在不犧牲辨識速度的情況下,提高對於複雜背景中物體的辨識率。在實驗階段,使用文心蘭資料集的複雜背景影像進行了廣泛的測試。結果表明,四種模型中,同標籤混和模型的實際辨識成果比其他模型優秀,在辨識準確性上有顯著提升;混和訓練模型相較於一般模型提升了10%的辨識精準度,藉此可以解決背景複雜的問題。透過詳細的實驗分析,展示了模型在各種挑戰性背景下的性能,證明了它在應對真實世界複雜視覺情況中的有效性。 | zh_TW |
| dc.description.abstract | Accurate image recognition in complex environments is a significant challenge in the field of machine vision. This study is dedicated to developing an image recognition system utilizing the convolutional neural network architectures YOLOv7 and U2NET to enhance the accuracy of target recognition in environments with similar backgrounds. The real-time object detection capabilities of YOLOv7, combined with the background removal strengths of U2NET, enable the system to identify and correctly label objects against complex backgrounds.
The research first analyzed the timing for repotting orchids and reviewed the real-time recognition models used in the horticulture industry, finding that background complexity significantly affects current image recognition models. Four models were therefore trained with YOLOv7 on different datasets. The datasets were divided into background-removed and original-background categories, and the resulting models were a background-removed model, a no-background-removal model, a mixed-label model, and a same-label mixed model. During dataset preparation, U2NET's background removal capability was used to speed up background processing, replacing the Mask function of YOLOv7. Data processing was further optimized so that the YOLOv7 model achieved a higher recognition rate for objects in complex backgrounds without sacrificing recognition speed. In the experimental phase, extensive testing was conducted on complex-background images from the Oncidium dataset. The results indicate that, among the four models, the same-label mixed model outperformed the others, showing a significant improvement in recognition accuracy; the mixed training model improved recognition precision by 10% over the general models, thereby addressing the problem of complex backgrounds. Detailed experimental analysis demonstrated the models' performance under various challenging backgrounds, proving their effectiveness in handling complex real-world visual scenarios. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-08T16:26:08Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-08T16:26:08Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Table of Contents
Acknowledgements ii
Research Contributions iii
Chinese Abstract iv
ABSTRACT v
Table of Contents vi
List of Figures viii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Thesis Structure 2
Chapter 2 Literature Review 3
2.1 Characteristics of Orchids 3
2.2 Machine Learning and Automation in Agriculture 5
2.3 Deep Learning Model Enhancement 8
2.4 Complex Backgrounds 10
2.4.1 Background Removal 10
2.4.2 Object Bounding 14
Chapter 3 Research Methods 19
3.1 Orchid Repotting 19
3.1.1 Introduction to the Target Species 19
3.1.2 Differences Among Oncidium of Different Sizes 20
3.2 Data Processing 23
3.2.2 Construction from Different Angles 25
3.3 Image Processing 25
3.3.1 Complex Backgrounds 26
3.3.2 Background Removal Workflow 27
3.4 Model Selection 28
3.4.1 The U2NET Model [19] 29
3.4.2 The YOLOv7 Model [4] 30
Chapter 4 Results and Discussion 34
4.1 Results of U2NET Application 34
4.2 YOLOv7 Training Results 36
4.3 Model Recognition Results 39
4.4 Discussion 42
Chapter 5 Conclusions and Future Outlook 44
5.1 Conclusions 44
5.2 Future Outlook 44
References 45 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 選苗 | zh_TW |
| dc.subject | 影像辨識 | zh_TW |
| dc.subject | 蘭花 | zh_TW |
| dc.subject | YOLOv7 | zh_TW |
| dc.subject | U2NET | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 背景複雜辨識 | zh_TW |
| dc.subject | 換盆 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | Orchids | en |
| dc.subject | Complex background recognition | en |
| dc.subject | Machine Learning | en |
| dc.subject | Deep Learning | en |
| dc.subject | YOLOv7 | en |
| dc.subject | Image Recognition | en |
| dc.subject | U2NET | en |
| dc.subject | Seedling Selection | en |
| dc.subject | Repotting | en |
| dc.title | YOLO V7及U2NET應用於高價植物成長監控與複雜環境背景之分辨 | zh_TW |
| dc.title | Applications of YOLO V7 and U2NET in Monitoring High-Value Plant Growth and Differentiating Complex Environmental Backgrounds | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 黃振康;朱仁佑;陳文平;張恆華 | zh_TW |
| dc.contributor.oralexamcommittee | Chen-Kang Huang;Jen-You Chu;Wen-Ping Chen;Herng-Hua Chang | en |
| dc.subject.keyword | 背景複雜辨識,機器學習,深度學習,YOLOv7,蘭花,影像辨識,U2NET,選苗,換盆 | zh_TW |
| dc.subject.keyword | Complex background recognition,Machine Learning,Deep Learning,YOLOv7,Orchids,Image Recognition,U2NET,Seedling Selection,Repotting | en |
| dc.relation.page | 46 | - |
| dc.identifier.doi | 10.6342/NTU202401945 | - |
| dc.rights.note | Authorized for release (open access worldwide) | - |
| dc.date.accepted | 2024-07-23 | - |
| dc.contributor.author-college | College of Engineering | - |
| dc.contributor.author-dept | Department of Engineering Science and Ocean Engineering | - |
| dc.date.embargo-lift | 2029-07-19 | - |
Appears in Collections: Department of Engineering Science and Ocean Engineering
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (publicly available online after 2029-07-19) | 5.8 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright notice.
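The preprocessing step described in the abstract, in which a saliency mask from a salient-object detector such as U2-Net is used to strip the background from training images before YOLOv7 training, can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual pipeline: the function name `remove_background`, the toy arrays, and the 0.5 threshold are invented for demonstration, and a real workflow would obtain the mask from a trained U2-Net model.

```python
import numpy as np

def remove_background(image, saliency_mask, threshold=0.5):
    """Zero out pixels whose saliency score falls below `threshold`.

    image: (H, W, 3) uint8 array.
    saliency_mask: (H, W) float array in [0, 1], e.g. the probability map
    produced by a salient-object detector such as U2-Net.
    """
    # Broadcast the boolean mask over the colour channels, then blank
    # every non-salient pixel so only the foreground object remains.
    keep = (saliency_mask >= threshold)[..., np.newaxis]  # (H, W, 1)
    return np.where(keep, image, 0).astype(np.uint8)

# Toy example: a uniform grey "image" where only the centre 2x2 block
# is marked salient by the (hypothetical) detector output.
image = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
result = remove_background(image, mask)
print(result[0, 0], result[1, 1])  # prints [0 0 0] [200 200 200]
```

Images processed this way would then be annotated and fed to YOLOv7 alongside (or mixed with) the original-background images, as in the four dataset variants the abstract describes.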
