Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87174

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 周呈霙 | zh_TW |
| dc.contributor.advisor | Cheng-Ying Chou | en |
| dc.contributor.author | 周易昕 | zh_TW |
| dc.contributor.author | Yi-Shin Chou | en |
| dc.date.accessioned | 2023-05-18T16:10:46Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-05-10 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-02-15 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87174 | - |
| dc.description.abstract | 水稻作為台灣最重要的糧食作物之一,在農產預估及災損評估上時常有大範圍辨識的需求,然而台灣現階段對於水稻田的監測與管控作業,主要仰賴農試所專家們透過實地訪查、比對地籍圖資料,以及藉由衛星或航照影像輔助,再以人工的方式逐一辨別水稻的種植分佈,過程中除了要進行水稻田資料標記作業,還要定期更新、維護資料的正確性,使得整體作業流程曠日費時,因此面對即時且大範圍的辨識需求往往欠缺效率。本研究以 ArcGIS Pro 軟體作為開發深度學習模型的工具,開發 Mask R-CNN 實例分割模型辨識和分割航照影像中的水稻田。使用了 2018 年至 2019 年收集的航空影像,涵蓋彰化、雲林、嘉義和台南的水稻種植密集區。模型標記檔創建了四個類別,包括水稻生長、黃熟、收割階段和其他作物。另外研究在影像前處理階段對原始航照影像的波段資訊進行修改,利用不同的植生指標(NDVI、CMFI、DVI、RVI、GRVI)創建不同的訓練資料集,測試不同植生指標對於水稻分期辨識的表現。結果發現,以 ResNet-50 為特徵提取骨架的模型在 mAP 方面的表現整體優於 ResNet-101,其中 RGB + DVI、RGB + NIR 和 RGB + GRVI 影像資料集表現最好,mAP 分別為 74.01%、73.81% 和 73.72%。而在水稻田分期辨識和分割的問題上,水稻生長階段建議使用 RGB + CMFI 影像訓練的模型,水稻黃熟階段建議使用 RGB + NIR 影像訓練的模型,水稻收割階段則建議使用 RGB + GRVI 影像訓練的模型,dice coefficient 分別為 79.59%、89.71% 和 87.94%。模型分期辨識和分割結果可以提供水稻生長狀態的資訊,從而提升水稻生產管理的效率和準確性。該方法也可用於任何農作物的大規模檢測,減輕研究人員的負擔,提高土地利用調查的效率。 | zh_TW |
| dc.description.abstract | Given the significance of rice in Taiwan's agriculture, efficient methods for detecting and mapping paddy fields are critical for the effective management of agricultural production, yield prediction, and damage assessment. Currently, researchers at the Taiwan Agricultural Research Institute use a combination of site surveys, cadastral maps, and satellite or aerial images to manually identify the distribution of rice planting areas. However, maintaining the accuracy of the paddy field data by regularly updating the labels is time-consuming, especially when faced with the need for large-scale and immediate detection. This study aimed to detect and segment paddy fields in aerial images using Mask R-CNN instance segmentation models. The study used aerial images collected from 2018 to 2019 covering rice planting-intensive areas of Changhua, Yunlin, Chiayi, and Tainan in central and southern Taiwan. The labels comprised four categories: rice at the growing, ripening, and harvested stages, plus other crops. In the image pre-processing stage, the band information of the original aerial images was modified using different vegetation indices (NDVI, CMFI, DVI, RVI, and GRVI) to create different image datasets. The study found that the ResNet-50 backbone outperformed ResNet-101 in terms of mAP, with the RGB + DVI, RGB + NIR, and RGB + GRVI image datasets performing best at mAPs of 74.01%, 73.81%, and 73.72%, respectively. Models trained on RGB + CMFI images were recommended for the rice growing stage, RGB + NIR images for the ripening stage, and RGB + GRVI images for the harvested stage, with dice coefficients of 79.59%, 89.71%, and 87.94%, respectively. The detection and segmentation results can improve the efficiency and accuracy of rice production management by providing an understanding of rice growth status. This method can also be applied to large-scale detection of any crop, reducing the burden on researchers and improving the efficiency of land use surveys. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-05-18T16:10:46Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-05-18T16:10:46Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 摘要 i Abstract iii Contents vii List of Figures xi List of Tables xv Chapter 1 Introduction 1 1.1 Background 1 1.2 Purpose 2 1.3 Contribution 3 1.4 Thesis Structure 4 Chapter 2 Literature Review 5 2.1 Rice 5 2.1.1 Paddy field detection 6 2.2 Remote Sensing 8 2.2.1 Types of remote sensing images 9 2.2.2 Detection of remote sensing images in the past 11 2.2.3 Using deep learning methods on remote sensing images 12 2.3 ArcGIS Pro 19 2.4 Deep Neural Network 21 2.4.1 Convolutional neural network 22 2.4.2 Instance segmentation model 24 2.5 Vegetation Index 26 Chapter 3 Materials and Methods 29 3.1 Overview of the Study 29 3.2 Equipment and Training Environment 32 3.3 Dataset 34 3.3.1 Aerial images collection 34 3.3.2 Rice phenological stage labeling 37 3.3.3 Vegetation index calculation 41 3.3.4 Training image chips cropping 42 3.3.5 Data augmentation 43 3.4 Mask R-CNN 48 3.4.1 Paddy field area calculation 53 3.4.2 Evaluation metrics 54 Chapter 4 Results and Discussion 57 4.1 Model Training Results 57 4.1.1 Loss curve 57 4.1.2 Training results of the Mask R-CNN models 58 4.2 Model Detection and Segmentation Results 60 4.2.1 The aerial image of Zhutang, Changhua 61 4.2.2 The aerial image of Xingang, Chiayi 63 4.2.3 The first aerial image of Houbi, Tainan 64 4.2.4 The second aerial image of Houbi, Tainan 66 4.3 Discussion 67 Chapter 5 Conclusion 83 References 87 Appendix A — Paddy field detection and staging 99 | - |
| dc.language.iso | en | - |
| dc.subject | 水稻田 | zh_TW |
| dc.subject | 植生指標 | zh_TW |
| dc.subject | 水稻物候期 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 實例分割 | zh_TW |
| dc.subject | 航照影像 | zh_TW |
| dc.subject | Vegetation index | en |
| dc.subject | Paddy field | en |
| dc.subject | Rice phenological stage | en |
| dc.subject | Instance segmentation | en |
| dc.subject | Deep learning | en |
| dc.subject | Aerial image | en |
| dc.title | 應用深度學習方法於地理圖資系統水稻田坵塊航照圖之辨識與分期 | zh_TW |
| dc.title | Using Deep Learning Method to Conduct Paddy Field Detection and Staging in Aerial Images in Geographic Information System | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-1 | - |
| dc.description.degree | 碩士 (Master) | - |
| dc.contributor.oralexamcommittee | 溫在弘;胡明哲;余化龍 | zh_TW |
| dc.contributor.oralexamcommittee | Tzai-Hung Wen;Ming-Che Hu;Hwa-Lung Yu | en |
| dc.subject.keyword | 深度學習, 實例分割, 航照影像, 水稻田, 水稻物候期, 植生指標 | zh_TW |
| dc.subject.keyword | Deep learning, Instance segmentation, Aerial image, Paddy field, Rice phenological stage, Vegetation index | en |
| dc.relation.page | 108 | - |
| dc.identifier.doi | 10.6342/NTU202210202 | - |
| dc.rights.note | Authorized for release (campus access only) | - |
| dc.date.accepted | 2023-02-15 | - |
| dc.contributor.author-college | 生物資源暨農學院 | - |
| dc.contributor.author-dept | 生物機電工程學系 | - |
| dc.date.embargo-lift | 2028-02-13 | - |
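The vegetation indices named in the abstract (NDVI, DVI, RVI, GRVI) are simple per-pixel band arithmetic on the red, green, and NIR channels of the aerial imagery. A minimal NumPy sketch of those standard formulas follows; the function name and the epsilon guard are illustrative assumptions, not code from the thesis, and CMFI is omitted because its definition is not given in this record:

```python
import numpy as np

def vegetation_indices(red, green, nir):
    """Per-pixel vegetation indices from red, green, and NIR reflectance bands."""
    eps = 1e-8  # guard against division by zero on dark pixels
    return {
        "NDVI": (nir - red) / (nir + red + eps),      # Normalized Difference VI
        "DVI": nir - red,                             # Difference VI
        "RVI": nir / (red + eps),                     # Ratio VI
        "GRVI": (green - red) / (green + red + eps),  # Green-Red VI
    }
```

For a four-band orthoimage loaded as an array, each index yields a single-band raster that could be stacked with the RGB channels to build the RGB + index training datasets described in the abstract.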
| Appears in Collections: | 生物機電工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-111-1.pdf (Restricted Access) | 16.31 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
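The abstract reports per-stage segmentation quality as dice coefficients. As a reference for that metric, here is a minimal sketch (a hypothetical helper for binary masks, not code from the thesis):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

Applied to a predicted paddy-field mask and its ground-truth label, the value ranges from 0 (no overlap) to 1 (identical masks).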
