Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/5503
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 連豊力 | |
dc.contributor.author | Jun-Rong Lin | en |
dc.contributor.author | 林俊榮 | zh_TW |
dc.date.accessioned | 2021-05-15T18:00:55Z | - |
dc.date.available | 2019-09-25 | |
dc.date.available | 2021-05-15T18:00:55Z | - |
dc.date.copyright | 2014-09-25 | |
dc.date.issued | 2014 | |
dc.date.submitted | 2014-09-21 | |
dc.identifier.citation |
[1: Seifert & Kay 1995] R. W. Seifert and M. G. Kay, “Evaluation of AGV routing strategies using hierarchical simulation,” in Proceedings of Winter Simulation Conference, Arlington, USA, pp. 850-856, Dec. 3-6, 1995.
[2: Nishimura et al. 2007] S. Nishimura, H. Takemura, and H. Mizoguchi, “Development of attachable modules for robotizing daily items - person following shopping cart robot -,” in Proceedings of IEEE International Conference on Robotics and Biomimetics, Sanya, China, pp. 1506-1511, Dec. 15-18, 2007.
[3: Chen et al. 2011] C. L. Chen, C. C. Chou, and F. L. Lian, “Trajectory Planning for Human Host Tracking and Following of Slave Mobile Robot on Service-Related Tasks,” in Proceedings of IEEE International Conference on Robotics and Biomimetics, Phuket, Thailand, pp. 2419-2420, Dec. 7-11, 2011.
[4: Okubo et al. 2009] Y. Okubo, C. Ye, and J. Borenstein, “Characterization of the Hokuyo URG-04LX Laser Rangefinder for Mobile Robot Obstacle Negotiation,” in Proceedings of SPIE Conference on Unmanned Robotic and Layered Systems, Orlando, USA, Apr. 30, 2009.
[5: Yagi et al. 2005] Y. Yagi, K. Imai, K. Tsuji, and M. Yachida, “Iconic Memory-Based Omnidirectional Route Panorama Navigation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, pp. 78-87, Jan. 2005.
[6: Gonzalez & Woods 2008] R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” 3rd adapted ed., Editor: S. G. Miaou, Taiwan: Pearson, Jun. 2008.
[7: Jia et al. 2010] S. Jia, H. Yang, X. Li, and W. Fu, “LRF-Based Data Processing Algorithm for Map Building of Mobile Robot,” in Proceedings of IEEE International Conference on Information and Automation, Harbin, China, pp. 1924-1929, Jun. 20-23, 2010.
[8: Rebai et al. 2009] K. Rebai, A. Benabderrahmane, O. Azouaoui, and N. Ouadah, “Moving Obstacles Detection and Tracking with Laser Range Finder,” in Proceedings of International Conference on Advanced Robotics, pp. 1-6, Jun. 22-26, 2009.
[9: Chung et al. 2012] W. Chung, H. Kim, Y. Yoo, C. B. Moon, and J. Park, “The detection and following of human legs through inductive approaches for a mobile robot with a single laser range finder,” IEEE Transactions on Industrial Electronics, Vol. 59, No. 8, pp. 3156-3166, Aug. 2012.
[10: Gander et al. 1994] W. Gander, G. H. Golub, and R. Strebel, “Least-Squares Fitting of Circles and Ellipses,” BIT Numerical Mathematics, Vol. 34, No. 4, pp. 558-578, Dec. 1994.
[11: Grassi & Okamoto 2006] V. Grassi Jr. and J. Okamoto Jr., “Development of an omnidirectional vision system,” Journal of the Brazilian Society of Mechanical Sciences and Engineering, Vol. 28, No. 1, pp. 58-68, Jan.-Mar. 2006.
[12: Ueda et al. 2011] H. Ueda, J. H. Lee, S. Okamoto, B. J. Yi, and S. Yuta, “People tracking method for a mobile robot with laser scanner and omni directional camera,” in Proceedings of International Conference on Ubiquitous Robots and Ambient Intelligence, Incheon, Korea, pp. 503-507, Nov. 23-26, 2011.
[13: Bacca et al. 2013] B. Bacca, X. Cufi, and J. Salvi, “Vertical edge-based mapping using range-augmented omnidirectional vision sensor,” IET Computer Vision, Vol. 7, No. 2, pp. 135-143, Apr. 2013.
[14: Scaramuzza et al. 2006] D. Scaramuzza, A. Martinelli, and R. Siegwart, “A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion,” in Proceedings of IEEE International Conference on Computer Vision Systems, New York, USA, pp. 45-52, Jan. 5-7, 2006.
[15: Wu et al. 2013] C. H. Wu, Y. H. Chen, Y. Y. Lee, and C. H. Tsai, “A Fast Genetic SLAM Approach for Mobile Robots,” in Proceedings of the 14th ACIS International Conference on Artificial Intelligence, Software Engineering, Networking and Parallel/Distributed Computing, Honolulu, USA, pp. 563-568, Jul. 1-3, 2013.
[16: Rusdinar et al. 2010] A. Rusdinar, J. Kim, and S. Kim, “Error Pose Correction of Mobile Robot for SLAM Problem using Laser Range Finder Based on Particle Filter,” in Proceedings of International Conference on Control, Automation and Systems, Gyeonggi-do, Korea, pp. 52-55, Oct. 27-30, 2010.
[17: Zhang 1994] Z. Zhang, “Iterative Point Matching for Registration of Free-Form Curves and Surfaces,” International Journal of Computer Vision, Vol. 13, No. 2, pp. 119-152, 1994.
[18: Lu & Milios 1994] F. Lu and E. E. Milios, “Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, USA, pp. 935-938, Jun. 21-23, 1994.
[19: Tong & Barfoot 2011] C. H. Tong and T. D. Barfoot, “Batch Heterogeneous Outlier Rejection for Feature-Poor SLAM,” in Proceedings of IEEE International Conference on Robotics and Automation, Shanghai, China, pp. 2630-2637, May 9-13, 2011.
[20: Kang et al. 2010] J. G. Kang, W. S. Choi, S. Y. An, and S. Y. Oh, “Augmented EKF Based SLAM Method for Improving the Accuracy of the Feature Map,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 3725-3731, Oct. 18-22, 2010.
[21: Moravec & Elfes 1985] H. Moravec and A. Elfes, “High Resolution Maps from Wide Angle Sonar,” in Proceedings of IEEE International Conference on Robotics and Automation, Missouri, USA, pp. 116-121, Mar. 1985.
[22: Wolf & Sukhatme 2004] D. Wolf and G. Sukhatme, “Online Simultaneous Localization and Mapping in Dynamic Environments,” in Proceedings of IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, pp. 1301-1307, Apr. 26-May 1, 2004.
[23: Thrun et al. 2005] S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics,” Editor: R. Arkin, London: The MIT Press, 2005.
[24: Chang & Lian 2012] F. M. Chang and F. L. Lian, “Polar Grid Based Robust Pedestrian Tracking with Indoor Mobile Robot using Multiple Hypothesis Tracking Algorithm,” in Proceedings of SICE Annual Conference, Akita, Japan, pp. 1558-1563, Aug. 20-23, 2012.
[25: Cox & Hingorani 1996] I. J. Cox and S. L. Hingorani, “An Efficient Implementation of Reid's Multiple Hypothesis Tracking Algorithm and Its Evaluation for the Purpose of Visual Tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 2, pp. 138-150, Feb. 1996.
[26: Sung & Chung 2011] Y. Sung and W. Chung, “Human Tracking of a Mobile Robot with an Onboard LRF (Laser Range Finder) using Human Walking Motion Analysis,” in Proceedings of International Conference on Ubiquitous Robots and Ambient Intelligence, Incheon, Korea, pp. 366-370, Nov. 23-26, 2011.
[27: Carballo et al. 2010] A. Carballo, A. Ohya, and S. Yuta, “People Detection using Range and Intensity Data from Multi-Layered Laser Range Finders,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 5849-5854, Oct. 18-22, 2010.
[28: Lin & Huang 2011] D. T. Lin and K. Y. Huang, “Collaborative Pedestrian Tracking and Data Fusion with Multiple Cameras,” IEEE Transactions on Information Forensics and Security, Vol. 6, No. 4, pp. 1432-1444, Dec. 2011.
[29: Stauffer & Grimson 1999] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Fort Collins, USA, Vol. 2, pp. 246-252, Jun. 23-25, 1999.
[30: Lee et al. 2003] D. S. Lee, J. J. Hull, and B. Erol, “A Bayesian Framework for Gaussian Mixture Background Modeling,” in Proceedings of IEEE International Conference on Image Processing, Vol. 3, pp. 973-976, Sep. 14-17, 2003.
[31: Enzweiler et al. 2008] M. Enzweiler, P. Kanter, and D. M. Gavrila, “Monocular Pedestrian Recognition Using Motion Parallax,” in Proceedings of IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, pp. 792-797, Jun. 4-6, 2008.
[32: Leithy et al. 2010] A. Leithy, M. N. Moustafa, and A. Wahba, “Cascade of Complementary Features for Fast and Accurate Pedestrian Detection,” in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology, Singapore, pp. 343-348, Nov. 14-17, 2010.
[33: Zhao et al. 2008] M. Zhao, D. Sun, and H. He, “Hair-color Modeling and Head Detection,” in Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, Jun. 25-27, 2008.
[34: Xu & Xu 2013] F. Xu and F. Xu, “Pedestrian Detection Based on Motion Compensation and HOG/SVM Classifier,” in Proceedings of the 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, pp. 334-337, Aug. 26-27, 2013.
[35: Dalal & Triggs 2005] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, Vol. 1, pp. 886-893, Jun. 25, 2005.
[36: Ballard 1981] D. H. Ballard, “Generalizing the Hough Transform to Detect Arbitrary Shapes,” Pattern Recognition, Vol. 13, No. 2, pp. 111-122, 1981.
[37: Kristou et al. 2011] M. Kristou, A. Ohya, and S. Yuta, “Target person identification and following based on omnidirectional camera and LRF data fusion,” in Proceedings of IEEE International Symposium on Robot and Human Interactive Communication, Atlanta, GA, USA, pp. 419-424, Jul. 31-Aug. 3, 2011.
[38: Li et al. 2011] H. Li, T. Shen, and X. Huang, “Approximately Global Optimization for Robust Alignment of Generalized Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 6, pp. 1116-1131, Jun. 2011.
[39: Birk & Carpin 2006] A. Birk and S. Carpin, “Merging Occupancy Grid Maps from Multiple Robots,” Proceedings of the IEEE, Vol. 94, No. 7, pp. 1384-1397, Jul. 2006.
[40: Wang et al. 2011] M. Wang, H. Qiao, and B. Zhang, “A New Algorithm for Robust Pedestrian Tracking Based on Manifold Learning and Feature Selection,” IEEE Transactions on Intelligent Transportation Systems, Vol. 12, No. 4, pp. 1195-1208, Dec. 2011.
[41: Kun et al. 2012] Z. Kun, S. Fengchi, and Y. Jing, “An Autonomous Target-Tracking Algorithm Based on Visual Feature,” in Proceedings of the 31st Chinese Control Conference, Hefei, China, pp. 4936-4941, Jul. 25-27, 2012.
[42: Ishikawa et al. 2009] T. Ishikawa, M. Kourogi, T. Okuma, and T. Kurata, “Economic and Synergistic Pedestrian Tracking System for Indoor Environments,” in Proceedings of International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, pp. 522-527, Dec. 4-7, 2009.
[43: Besl & McKay 1992] P. J. Besl and N. D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 239-256, Feb. 1992.
[44: Kennedy & Eberhart 1995] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of IEEE International Conference on Neural Networks, Perth, Australia, Vol. 4, pp. 1942-1948, Nov. 27-Dec. 1, 1995.
[45: Eberhart & Shi 1998] R. C. Eberhart and Y. Shi, “Comparison between genetic algorithms and particle swarm optimization,” in Proceedings of Conference on Evolutionary Programming, San Diego, USA, Vol. 1447, pp. 611-616, Mar. 25-27, 1998.
[46: Bouguet 2013] J. Y. Bouguet, Camera Calibration Toolbox for Matlab (2013, Dec. 2). Retrieved Jul. 8, 2014, from http://www.vision.caltech.edu/bouguetj/calib_doc/
[47: O’Leary & Murray 2004] P. O’Leary and P. Z. Murray, “Direct and Specific Least-Square Fitting of Hyperbolae and Ellipses,” Journal of Electronic Imaging, Vol. 13, No. 3, pp. 492-503, Jul. 2004.
[48: Zhao et al. 2012] M. Zhao, D. H. Sun, Y. Tang, and H. P. He, “Head Detection Based on 21HT and Circle Existence Model,” in Proceedings of the 10th World Congress on Intelligent Control and Automation, Beijing, China, pp. 4875-4880, Jul. 6-8, 2012.
[49: Rahimi et al. 2013] S. Rahimi, A. Aghagolzadeh, and H. Seyedarabi, “Three camera-based human tracking using weighted color and cellular LBP histograms in a particle filter framework,” in Proceedings of Iranian Conference on Electrical Engineering, Mashhad, Iran, pp. 1-6, May 14-16, 2013.
[50: YCbCr from wiki 2014] YCbCr. (2014, Feb. 8). In Wikipedia. Retrieved Jul. 8, 2014, from http://en.wikipedia.org/wiki/YCbCr
[51: Bhattacharyya distance from wiki 2014] Bhattacharyya Distance. (2014, May 31). In Wikipedia. Retrieved Jul. 8, 2014, from http://en.wikipedia.org/wiki/Bhattacharyya_distance
[52: Vu et al. 2007] T. D. Vu, O. Aycard, and N. Appenrodt, “Online Localization and Mapping with Moving Object Tracking in Dynamic Outdoor Environments,” in Proceedings of IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, pp. 190-195, Jun. 13-15, 2007.
[53: Yang & Lian 2012] J. Y. Yang and F. L. Lian, “Omnidirectional Vision-Based Robot Localization using Vertical Line Matching and Detection of Vanishing Point and Floor Region using Edge Orientation Information,” 1st ed., Taipei, Taiwan: NTU, 2012, pp. 47-48.
[54: Kay 1998] S. M. Kay, “Fundamentals of Statistical Signal Processing,” Vol. 2: Detection Theory, US: Prentice Hall, 1998.
[55: Winner et al. 2012] R. Grewe, M. Komar, A. Hohm, S. Lueke, and H. Winner, “Evaluation Method and Results for the Accuracy of an Automotive Occupancy Grid,” in Proceedings of IEEE International Conference on Vehicular Electronics and Safety, Istanbul, Turkey, pp. 19-24, Jul. 24-27, 2012.
| |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/5503 | - |
dc.description.abstract | Human-robot interaction is an important subject in daily life. If robots could interact with people anytime and anywhere, everyday life would become considerably more convenient. Among interaction scenarios, pedestrian tracking is a topic that must be studied. The problem of a robot tracking pedestrians divides into two main parts: self-localization and map construction, and pedestrian detection and tracking. These are the problems encountered when a mobile robot tracks pedestrians.
For self-localization and map construction, obtaining information about a mobile robot's surroundings and its own pose along an unknown path is difficult. This thesis mainly addresses the residual error caused by hardware factors in the mobile robot's encoders. A particle swarm optimization algorithm finds a more accurate robot position, and from this information a map of the surrounding environment can be estimated. Because the laser range finder is a very precise instrument, its distance measurements are highly accurate, as verified in the experiments. Once the robot's position and the surrounding map are obtained, a static-object map and a dynamic-object map can be distinguished, which is very useful information for the mobile robot. However, estimating the dynamic-object map raises some problems, which are described in detail in the thesis. For pedestrian detection, besides the dynamic points obtained from the laser range finder, point clustering, size, and data reliability can also serve as criteria, and adding color information provides a more robust condition. The circular Hough transform on edge shapes, color judgment, and head position are all criteria for detecting pedestrians; with them, pedestrians can be identified more precisely. During mobile-robot tracking, the target pedestrian may be occluded by static obstacles, or pedestrians may suddenly appear nearby, causing the laser range finder data to be misjudged. The scenario is set in an ordinary laboratory or dormitory, where people's clothing usually differs in color and texture distribution, so color information can be used for judgment, together with the spatial continuity of the distance information as a robustness criterion. With these criteria, pedestrians can be robustly identified and the target pedestrian can be tracked. The main contribution of this thesis is using color information to resolve, when a laser range finder is used, both the judgment in pedestrian detection and the tracking errors caused by suddenly appearing people. The experimental results show that in these environments the deficiencies of laser range finder data association can be overcome and the target pedestrian can be tracked. | zh_TW |
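The abstract notes that a particle swarm optimization (PSO) algorithm is used to find a more accurate robot position from noisy odometry. As a rough illustration of how PSO searches for a minimum-error pose, here is a minimal, generic PSO sketch; the cost function, parameter values, and function name are illustrative assumptions, not the thesis's implementation:

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimize `cost` over a `dim`-dimensional box [lo, hi]^dim via basic PSO."""
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Each particle's personal best and the swarm's global best.
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + attraction to personal best + attraction to global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the pose-correction setting sketched here, `cost` would score a candidate robot pose by how badly the current LRF scan aligns with the map at that pose; the swarm then converges toward the lowest-error pose.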
dc.description.abstract | In daily life, a mobile robot that interacts with pedestrians must handle many tasks, with applications in guided vehicles, shopping carts, and office assistance. In this thesis, the tasks include self-localization, mapping, pedestrian detection, and target pedestrian tracking in an unknown indoor environment.
For self-localization and mapping, accurate odometry of the mobile robot is important. However, skidding and slipping can make the odometry differ from the real traveled distance. In this thesis, a particle swarm optimization (PSO) algorithm corrects the odometry in an unknown indoor environment. The combination of self-localization and mapping is referred to as simultaneous localization and mapping (SLAM) [39: Birk & Carpin 2006]. In SLAM, once the odometry of the mobile robot is known, building a map is a task that can be effectively solved at the same time [39: Birk & Carpin 2006]. Moving-object detection is then based on the resulting precise map. After the moving objects are detected, the next steps are pedestrian detection and target pedestrian tracking. In pedestrian detection, the color image is regarded as an additional condition for the judgment based on the laser range finder (LRF) scan. In target pedestrian tracking, because pillars may occlude the target or new pedestrians may appear, the data association between two consecutive LRF scans may be erroneous. This thesis proposes a method based on color distribution and color texture to track the pedestrian in color images. The experiments demonstrate tracking of the target pedestrian when a new pedestrian appears and when pillars occlude the target, although the performance of pedestrian detection from the color image alone is limited by the image resolution. In the future, the detection and tracking of moving objects (DATMO) with LRF scans in Chapter 3 can be combined with the pedestrian detection and target pedestrian tracking with color images proposed in this thesis. | en
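The tracking method described above compares the target's color distribution and texture (local binary patterns) against detected candidates using the Bhattacharyya distance. A minimal sketch of histogram comparison and candidate matching, using the common sqrt(1 - BC) form of the distance; the helper names and the 0.5 threshold are illustrative assumptions, not the thesis's values:

```python
import math

def normalize(hist):
    """Scale a histogram so its bins sum to 1."""
    total = sum(hist)
    return [h / total for h in hist] if total else hist

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms.
    0 means identical distributions; 1 means no overlap at all."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coefficient
    bc = min(bc, 1.0)  # guard against floating-point overshoot
    return math.sqrt(1.0 - bc)

def match_target(reference, candidates, threshold=0.5):
    """Return the index of the candidate histogram closest to the target's
    reference histogram, or None if every candidate exceeds the threshold
    (e.g. the target is occluded by a pillar)."""
    ref = normalize(reference)
    best_i, best_d = None, threshold
    for i, cand in enumerate(candidates):
        d = bhattacharyya_distance(ref, normalize(cand))
        if d < best_d:
            best_i, best_d = i, d
    return best_i
```

In a tracking loop, `match_target` would run once per frame on the histograms of the currently detected pedestrians; a `None` result signals that no candidate resembles the target, so the reference database should not be updated for that frame.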
dc.description.provenance | Made available in DSpace on 2021-05-15T18:00:55Z (GMT). No. of bitstreams: 1 ntu-103-R01921067-1.pdf: 12817885 bytes, checksum: b50fe765a24a4ae158e500024d6281f1 (MD5) Previous issue date: 2014 | en |
dc.description.tableofcontents |
Abstract (in Chinese) i
ABSTRACT iii
Contents v
List of Figures vii
List of Tables xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Formulation 3
1.3 Contribution 7
1.4 Organization of the Thesis 8
Chapter 2 Literature Survey 9
2.1 Simultaneous Localization and Mapping 9
2.2 Pedestrian Detection and Tracking 11
Chapter 3 Simultaneous Localization and Mapping 14
3.1 Laser Range Finder Usage and Limitation 15
3.1.1 Introduction of Laser Range Finder 15
3.1.2 The Limitation of Usage in the Glass Environment 16
3.2 Robot Localization 17
3.3 Map Construction 21
3.3.1 Grid Map Construction 22
3.3.2 Static Map and Dynamic Map 23
Chapter 4 Pedestrian Detection and Target Pedestrian Tracking 26
4.1 The Operation Principle of Omnidirectional Camera 27
4.1.1 Introduction of Omnidirectional Camera 27
4.1.2 The Lightness of Omnidirectional Camera 28
4.2 Sensors Calibration 31
4.2.1 The Description of Calibration 32
4.2.2 Break Point and Angular Point Detection 33
4.2.3 Vertical Line Detection 35
4.2.4 Data Association 41
4.3 Pedestrian Detection 43
4.3.1 Pedestrian Detection Preprocessing with Laser Range Finder 43
4.3.2 Lower Line of Bounding Box Extraction 47
4.3.3 Upper Line of Bounding Box Extraction 52
4.3.4 Pedestrian Points Filtering in Static Map Construction 55
4.4 Target Pedestrian Tracking 57
4.4.1 Color Distribution 57
4.4.2 Local Binary Patterns 58
4.4.3 Bhattacharyya Distance 59
4.4.4 Database Update 61
Chapter 5 Experimental Results and Analysis 62
5.1 Hardware Platform 62
5.2 Accuracy of Sensor Measurement 64
5.2.1 Laser Range Finder Accuracy 64
5.2.2 Sensors Calibration 72
5.3 Static and Dynamic Map 83
5.3.1 The Algorithms of Localization 83
5.3.2 The Map Construction 85
5.3.3 The Dynamic Map 87
5.4 Localization Accuracy 91
5.5 Pedestrian Detection Performance 96
5.5.1 Lower Line of Bounding Box 96
5.5.2 Pedestrian Map 102
5.5.3 The Detection Accuracy 105
5.6 Target Pedestrian Tracking Performance 107
5.6.1 The Tracking Results and Accuracy 107
5.6.2 The Tracking Results and Accuracy 110
Chapter 6 Conclusions and Future Works 113
6.1 Conclusions 113
6.2 Future Works 114
References 116
Appendix 123
A.1 Distance Convert Pixel 123
A.2 Lower Line of Bounding Box Extraction 126
| |
dc.language.iso | en | |
dc.title | 利用色彩資訊及深度資訊之室內行動機器人對行人偵測與追蹤 | zh_TW |
dc.title | Pedestrian Detection and Tracking with Indoor Mobile Robot Using both Color Information and Distance Information | en |
dc.type | Thesis | |
dc.date.schoolyear | 103-1 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 簡忠漢,李後燦,黃正民 | |
dc.subject.keyword | 雷射測距儀,全向攝影機,機器人自我定位,動態物偵測,行人偵測,行人追蹤, | zh_TW |
dc.subject.keyword | laser range finder,omnidirectional camera,robot self-localization,moving objects detection,pedestrian detection,target pedestrian tracking, | en |
dc.relation.page | 129 | |
dc.rights.note | Authorized (open access worldwide) | |
dc.date.accepted | 2014-09-22 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-103-1.pdf | 12.52 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their respective license terms.