Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70245

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 羅仁權 | |
| dc.contributor.author | Yu-Jung Liu | en |
| dc.contributor.author | 劉育榕 | zh_TW |
| dc.date.accessioned | 2021-06-17T04:24:44Z | - |
| dc.date.available | 2023-08-18 | |
| dc.date.copyright | 2018-08-18 | |
| dc.date.issued | 2018 | |
| dc.date.submitted | 2018-08-15 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70245 | - |
| dc.description.abstract | 近年來,世界各地的機器人大多被用來處理工廠自動化之研究,但是,機器人技術的應用不僅限於工廠領域和產值的增加,現今越來越多的機器人研究團隊投入到教育、醫療長照、家庭服務、娛樂等領域發展,他們都朝著模仿人類行為的目標前進,以帶來更多附加價值,為了打破科技給人們較為死板的印象,我們嘗試將藝術融和到機器人當中,並給予機器人適當的創作能力,帶來與以往借由複製藝術品所做的研究有所區隔,試圖創造出獨特的作品。
本研究的目標在於將藝術創作融合機器人領域的相關技術,讓機器人能夠繪畫出具有藝術價值的獨特作品。本研究呈現兩種不同風格及手法的藝術作品,第一種著重在卡通風格的人臉肖像轉換,我們將模仿人類藝術家,運用相機作為藝術家的眼睛,捕捉人臉的五官特徵,有效分解成單個組件,再透過機器學習方法與預先收集的卡通五官資料庫做相似度比較,並選出最相近的取代原本的五官,以此來達到機器人創作的能力,接著再通過影像處理等方法降低色彩複雜度以符合卡通風格的色彩呈現,最後,機器人使用五種基本顏色以及奇異筆繪畫出具有卡通風格的作品。第二種研究我們希望能夠打破了傳統機器人只在畫紙上進行繪畫,而嘗試在不同材料的創作,我們選擇在曲面的木頭上作烙畫,利用影像處理結合機器學習方法,使用線條畫來呈現不同顏色的深淺與立體的視覺感受,最後加上我們原有的彩色繪畫功能,映射在曲面的木頭上成為獨特又新穎的烙畫藝術。 此研究充分展現了人工智慧在創造藝術價值的可能性,我們打破了機器人創造複製作品和使用單一媒介的限制,希望透過科技技術的方法能夠輔助人們學習繪畫的創作以及創新的想法。 | zh_TW |
| dc.description.abstract | In recent years, robots around the world have mostly been applied to factory automation. However, robotics is not limited to the factory floor or to increasing production output: more and more research teams are now working in education, medical and elderly care, household service, and entertainment, all moving toward the goal of imitating human behavior in order to create additional value. To soften the rigid impression that technology often leaves on people, we integrate art into robotics and give the robot an appropriate degree of creativity. This distinguishes our work from previous research that merely reproduces existing artworks: we attempt to create unique pieces.
The goal of this study is to bring artistic creation into the field of robotics, allowing the robot to paint unique artworks of artistic value. We present two kinds of artwork in different styles and techniques. The first focuses on converting a facial portrait into a cartoon style. Imitating a human artist, the robot uses a camera as its eyes to capture the facial features and decompose them into individual components. A machine learning method then compares each component against a pre-collected database of cartoon facial features and selects the most similar one to replace the original, giving the robot a creative capability of its own. Image processing subsequently reduces the color complexity to match a cartoon-style palette, and the robot paints the cartoon-style portrait with sharpie markers in five basic colors. The second work breaks with the tradition of painting only on drawing paper and explores creation on other materials: we chose pyrography on a curved wooden surface. Combining image processing with machine learning, line drawings render different depths of color and convey a three-dimensional visual impression; finally, our existing colorful painting function is mapped onto the curved wood, yielding a unique and novel pyrographic artwork. This research demonstrates the potential of artificial intelligence to create artistic value. We have moved beyond robots that produce copied artworks in a single medium, and we hope these technological methods can help people learn creative and innovative approaches to painting. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T04:24:44Z (GMT). No. of bitstreams: 1 ntu-107-R05921010-1.pdf: 20725740 bytes, checksum: 9ed5e7ad62cd41f35b5f19796eca6454 (MD5) Previous issue date: 2018 | en |
| dc.description.tableofcontents | Thesis Certification
Acknowledgements
Chinese Abstract
Abstract
List of Figures
List of Tables
1 Introduction
1.1 Service Robotics
1.2 Entertainment Robotics
1.3 Art Robotics
1.4 Non-Photorealistic Rendering
1.5 Thesis Structure
2 Manipulator
2.1 Mechanism
2.1.1 D-H Parameters
2.1.2 Transmission and Actuator
2.1.3 Gripper
2.2 Control Architecture
2.3 Online Trajectory Generation
3 Cartoon Style Facial Portrait Painting
3.1 Introduction
3.2 Experimental Setup
3.2.1 Scene
3.2.2 Media
3.2.3 Calibration
3.3 System Procedure
3.4 Image Processing
3.4.1 Face Detection
3.4.2 Facial Decomposition
3.4.3 Color Segment
3.4.4 Contour Extraction
3.4.5 Fusion
3.5 Compare Image Learning
3.6 Trajectory Planning
3.6.1 No Facial Feature Painting
3.6.2 Cartoon Facial Feature Painting
3.7 Experimental Results and Discussions
3.7.1 Experimental Results
3.7.2 Comparison With Other Research and Discussions
4 Colorful Pyrography on Three-Dimensional Curved Surface
4.1 Introduction
4.2 Experimental Setup
4.2.1 Scene
4.2.2 Media
4.3 System Procedure
4.4 Image Processing
4.4.1 Grayscale Image Conversion
4.4.2 Error Diffusion
4.4.3 Nearest Neighbor Search
4.4.4 Distinct Contours
4.5 Conformal Projection
4.5.1 Conformal Mapping
4.5.2 Vertices Unifying Procedure in u-v Plane
4.5.3 2D Image Mapped onto the Flattened Surface
4.5.4 Obtaining the Trajectory of the Shape of the Logo in 3D Space
4.6 Trajectory Planning
4.6.1 Woodburning Pyrography
4.6.2 Colorful Water Painting
4.7 Experimental Results and Discussions
5 Conclusion, Contributions and Future Works
Bibliography
VITA | |
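The table of contents lists an error-diffusion step (Section 4.4.2) for turning a grayscale image into burn/no-burn dots for pyrography. A minimal sketch under the assumption that the standard Floyd–Steinberg kernel is used; the thesis may tune the threshold or kernel differently:

```python
def floyd_steinberg(gray, width, height):
    """Binarize a grayscale image (flat row-major list, values 0-255) with
    Floyd-Steinberg error diffusion: each pixel is thresholded to 0 or 255,
    and its quantization error is pushed onto not-yet-visited neighbors."""
    img = list(gray)  # working copy; diffused errors accumulate here
    out = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = img[i]
            new = 255 if old >= 128 else 0
            out[i] = new
            err = old - new
            # Distribute error: 7/16 right, 3/16 below-left,
            # 5/16 below, 1/16 below-right.
            if x + 1 < width:
                img[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[i + width - 1] += err * 3 / 16
                img[i + width] += err * 5 / 16
                if x + 1 < width:
                    img[i + width + 1] += err * 1 / 16
    return out
```

Uniform black or white inputs pass through unchanged, while mid-gray regions dither into an alternating dot pattern whose density tracks the original intensity.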
| dc.language.iso | zh-TW | |
| dc.subject | 服務型機器人 | zh_TW |
| dc.subject | 影像分析 | zh_TW |
| dc.subject | 人工智慧 | zh_TW |
| dc.subject | 機器人視覺系統 | zh_TW |
| dc.subject | 藝術機器人 | zh_TW |
| dc.subject | 娛樂型機器人 | zh_TW |
| dc.subject | art robotics | en |
| dc.subject | robotic vision system | en |
| dc.subject | image parsing | en |
| dc.subject | service robotics | en |
| dc.subject | artificial intelligence | en |
| dc.subject | entertainment robotics | en |
| dc.title | 俱視覺回授控制之機器人藝術繪畫應用影像分析及機器學習方法 | zh_TW |
| dc.title | Robot Artistic Painting with Visual Feedback Control Using Image Parsing and Machine Learning Methodologies | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 106-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 張帆人,顏炳郎 | |
| dc.subject.keyword | 服務型機器人,娛樂型機器人,藝術機器人,機器人視覺系統,人工智慧,影像分析, | zh_TW |
| dc.subject.keyword | service robotics,entertainment robotics,art robotics,robotic vision system,artificial intelligence,image parsing, | en |
| dc.relation.page | 73 | |
| dc.identifier.doi | 10.6342/NTU201803445 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2018-08-15 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | Department of Electrical Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-107-1.pdf (Restricted Access) | 20.24 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
