Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48929
Full metadata record (DC field / value / language)
dc.contributor.advisor: 傅立成
dc.contributor.author: Shih-Huan Tseng (en)
dc.contributor.author: 曾士桓 (zh_TW)
dc.date.accessioned: 2021-06-15T11:11:44Z
dc.date.available: 2018-08-31
dc.date.copyright: 2016-09-08
dc.date.issued: 2016
dc.date.submitted: 2016-08-22
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48929
dc.description.abstract (zh_TW): 在機器人開始與一群人互動的情況下,需要解決“這群人在哪裡?”和“機器人什麼時候應該接近他們?”這些問題,因此本論文開發了一個新的系統,使機器人在識別當前的社交情境是什麼情況後,確定何時接近上述之人群,並與他們進行互動。該系統主要是融合深度相關的數據來追蹤這群人的位置,從深度相關的數據中提取這些人的社交提示,以及使用動態決策網絡(DDN)模型適時提供服務。主要的挑戰在於機器人需先了解人群的社交提示,進而了解機器人和人群之間的社交情境。社交提示包括人群的人際距離學 (Proxemics) 和交談時的隊形 (F-formation),而社交情況基於社交提示被分成個別交談 (individual-to-individual)、個人對機器人 (individual-to-robot)、機器人對個人 (robot-to-individual)、人群對機器人 (group-to-robot)、機器人對人群 (robot-to-group)、講悄悄話 (confidential discussion)、團體討論 (group discussion)。我們的系統執行如後所述:機器人一旦發現有一群人後,開始提取目標人群的社交提示,適當推斷相應的社交情境,接著機器人基於指定的規則決定是否應該發起互動行為。所進行的實驗結果表明該系統設計的適當性,以及所提出方法在辨識人群表現的社交提示與判斷機器人與人群之社交情境上的有效性。
要讓機器人提供更加自然和智慧的服務,機器人不僅要能夠按照使用者的意圖和執行情況提供理想的服務,同時也要感知使用者的反饋,在滿足使用者期望的前提下調整服務內容和/或執行方式。故在這篇論文中,提出了一項搭配使用者自然語言回饋以及含有推理策略的學習計畫,使得決策網絡和動態貝氏網絡具有適應性,以滿足人類使用者的期望。實驗結果表明,所提出的學習計畫使機器人的任務執行能有效地自主適應使用者的期望。此外,我們證明了此系統具有令人滿意的性能,也就是系統成功地推理出人的意圖,且機器人決定出使用者期望的任務。
dc.description.abstract (en): In a situation where a robot initiates interaction with a group of people, questions such as "Where is the group of people?" and "When should the robot approach them?" must be addressed. This thesis therefore develops a new system that enables the robot to identify the current social situation and then determine when to approach the group and interact with its members. The system mainly fuses depth-related data to track the positions of the group, extracts the social cues of those people from the depth-related data, and uses a dynamic decision network (DDN) model to provide service at the proper time. The main challenge lies in understanding the social cues of the group and the underlying social situation concerning the relation between the robot and the group. The social cues consist of proxemics and F-formations, and the social situations derived from these cues are categorized as individual-to-individual, individual-to-robot, robot-to-individual, group-to-robot, robot-to-group, confidential discussion, and group discussion. Our system proceeds as follows: once a group of people is detected and the social cues of the target group are extracted, the corresponding social situation is inferred, and in turn the robot decides whether it should initiate interaction with the group based on a set of specified rules. The experimental results demonstrate the appropriateness of the system design and the efficacy of the proposed method in recognizing the social cues among individuals of the group as well as the nature of the social situation concerning the group and the robot.
To provide more natural and intelligent service, the robot should not only be able to perform the desired service in accordance with the user's intention and the circumstances, but also perceive user feedback and, if needed, adjust the service content and/or its execution to meet the user's expectation. This thesis therefore proposes a learning scheme with a reasoning strategy that adapts the dynamic decision network. Through this strategy, the robot can analyze a user's natural-speech feedback and adjust its decisions and the inferred social situation to match what the user expects. The experimental results show the effectiveness of the proposed learning scheme, which enables the robot's tasks to adapt autonomously to the user's expectation. (en)
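The abstract above outlines a pipeline: track a group from fused depth data, extract social cues (proxemics and F-formation), infer the social situation with a DDN, and then decide whether to initiate interaction. As a rough illustration of that last cue-to-decision step only, here is a minimal sketch; it is not the thesis's implementation, and the names, distance thresholds, and hand-written rules below are hypothetical stand-ins for the trained dynamic decision network.

```python
# Minimal, illustrative sketch of the cue-to-decision step described in the abstract.
# All names, thresholds, and rules are hypothetical; the thesis uses fused depth
# sensing and DDN inference rather than these hand-written rules.
from dataclasses import dataclass
from math import hypot
from typing import List, Tuple


@dataclass
class Person:
    x: float  # position in metres, in the robot/world frame
    y: float


def group_centre(people: List[Person]) -> Tuple[float, float]:
    """Approximate the o-space centre of an F-formation by the member centroid."""
    return (sum(p.x for p in people) / len(people),
            sum(p.y for p in people) / len(people))


def classify_situation(people: List[Person], robot: Tuple[float, float]) -> str:
    """Coarse rule-based stand-in for DDN inference over a multi-person group.

    The real model distinguishes all seven situations listed in the abstract;
    this toy only separates three of the group-related cases.
    """
    cx, cy = group_centre(people)
    robot_dist = hypot(robot[0] - cx, robot[1] - cy)
    spread = max(hypot(p.x - cx, p.y - cy) for p in people)
    if spread < 0.5:
        # Members packed very tightly are treated as a confidential discussion.
        return "confidential discussion"
    # Hall's social zone (~3.6 m) used as a stand-in proxemics threshold.
    return "group-to-robot" if robot_dist < 3.6 else "group discussion"


def should_initiate(situation: str) -> bool:
    """Hypothetical rule: approach only when the situation already involves the robot."""
    return situation == "group-to-robot"


if __name__ == "__main__":
    group = [Person(1.0, 2.0), Person(2.0, 2.0), Person(1.5, 3.0)]
    situation = classify_situation(group, robot=(2.0, 4.0))
    print(situation, should_initiate(situation))  # group-to-robot True
```

In the actual system, the rule-based `classify_situation` above would be replaced by DDN inference over the tracked cues, and the initiate-or-wait decision would be adapted over time from the user's natural-speech feedback, as described in the second paragraph of the abstract.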
dc.description.provenance (en): Made available in DSpace on 2021-06-15T11:11:44Z (GMT). No. of bitstreams: 1. ntu-105-D96922018-1.pdf: 27762050 bytes, checksum: 2df4fa848d7ba2f959774378a5353d9b (MD5). Previous issue date: 2016
dc.description.tableofcontents:
1 Introduction
1.1 Motivation
1.2 Challenges and Related Work
1.3 Thesis Overview
1.3.1 Objective
1.3.2 Contribution
1.3.3 Organization
2 Preliminaries
2.1 Tracking Methodology
2.2 Decision Making Methodology
2.2.1 Decision Networks
2.2.2 Markov Decision Process
2.3 Machine Learning Methodology
2.3.1 Learning with Hidden Variables: The EM Algorithm
2.3.2 Reinforcement Learning
2.3.3 Socially Guided Supervised Learning
2.4 System Overview
2.4.1 Problem Formulation
2.4.2 Robot Platform
2.4.3 Action Skills of Service Robot
2.4.4 Task Manager
2.4.5 Process Communication
2.4.6 System Architecture
3 People Finding and Tracking through Data Fusion
3.1 People Finding using Different Sensors
3.1.1 Laser Range Finder
3.1.2 Vision Sensor
3.2 Data Fusion using Covariance Intersection
3.3 Tracking Positions using Kalman Filter
3.4 Experiment Results
3.4.1 Performance Analysis of Fusion
3.4.2 Tracking Results
3.5 Discussion
4 Inference-based Decision Making for Initiating Interaction with a Group of People
4.1 Extraction of Social Cues of People Group
4.1.1 People in Robot's Proxemics
4.1.2 F-formation of a Human Group
4.2 Determine When to Initiate Interaction
4.2.1 Defining Social Situations and Tasks
4.2.2 Model Training
4.3 Experiment Results
4.3.1 Performance Analysis of Inference
4.3.2 Results of Inference and Initiating Interaction
4.4 Discussion
5 Interaction Behavior Learning through Human Feedback
5.1 Perceptions of Human Intention
5.1.1 Gesture Recognition
5.1.2 Movement Recognition
5.1.3 Speech Recognition
5.2 Online Learning Scheme with Reasoning Strategies for Social Situations and Tasks
5.2.1 Problem Conditions
5.2.2 Online Learning Scheme with Reasoning Strategies
5.2.3 Experiments and Results
5.2.4 Discussion
5.3 Human Intention Inference with Reasoning Strategies for Recognition Uncertainty
5.3.1 Inferring Human Intention by Error Recovery of Human Action Recognition
5.3.2 Experiments and Results
5.3.3 Discussion
6 Conclusion and Future Work
Reference
dc.language.iso: en
dc.subject: 強化學習 (zh_TW)
dc.subject: 以人為感知的動態馬爾可夫決策模型 (zh_TW)
dc.subject: 任務規劃 (zh_TW)
dc.subject: 使用者意圖 (zh_TW)
dc.subject: 人機互動 (zh_TW)
dc.subject: 初始互動 (zh_TW)
dc.subject: data fusion (en)
dc.subject: human-aware dynamic decision networks (en)
dc.subject: Human-robot interaction (en)
dc.subject: initiate interaction (en)
dc.subject: reinforcement learning (en)
dc.subject: social situations (en)
dc.title: 智慧型機器人於動態社交環境下以人為感知之互動行為 (zh_TW)
dc.title: Human-aware Interaction Behaviors of an Intelligent Robot in a Dynamic Social Environment (en)
dc.type: Thesis
dc.date.schoolyear: 104-2
dc.description.degree: 博士 (Doctoral)
dc.contributor.oralexamcommittee: 李祖添, 李祖聖, 宋開泰, 楊谷洋, 李蔡彥
dc.subject.keyword: 人機互動, 初始互動, 以人為感知的動態馬爾可夫決策模型, 使用者意圖, 任務規劃, 強化學習 (zh_TW)
dc.subject.keyword: Human-robot interaction, initiate interaction, human-aware dynamic decision networks, data fusion, social situations, reinforcement learning (en)
dc.relation.page: 100
dc.identifier.doi: 10.6342/NTU201603478
dc.rights.note: 有償授權 (paid-access authorization)
dc.date.accepted: 2016-08-22
dc.contributor.author-college: 電機資訊學院 (zh_TW)
dc.contributor.author-dept: 資訊工程學研究所 (zh_TW)
Appears in Collections: 資訊工程學系

Files in This Item:
File: ntu-105-1.pdf (27.11 MB, Adobe PDF)
Access: restricted (未授權公開取用)