Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72251
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林沛群(Pei-Chun Lin) | |
dc.contributor.author | Bo-Hsun Chen | en |
dc.contributor.author | 陳柏勳 | zh_TW |
dc.date.accessioned | 2021-06-17T06:31:24Z | - |
dc.date.available | 2023-08-20 | |
dc.date.copyright | 2018-08-20 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-16 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72251 | - |
dc.description.abstract | 在工業4.0中,關節型機器手臂在智慧化產線扮演關鍵角色,而雙手臂機器人具有許多優勢,包含更多的自由度、可以更牢固地抓取大型加工件。要操作雙手機器人,多種量測資訊是必要的,尤其是力回授。然而,過去較少有研究關於結合位置與力回授資訊,也較少有用單獨位置控制之機器手臂組成雙機器手臂系統,因此本研究嘗試用卡曼濾波器混合位置與力誤差回授以估測力資訊;並且由兩台獨立的位置控制機器手臂組成雙手機器人執行實驗,以符合實際工廠的需求。
雙手協同持物移動操作部分,提出了受控體之彈簧-阻尼-慣質模型的線性系統假設與識別方法,並基於此模型發展了可以結合力與位置誤差以估測混合力誤差的卡曼濾波器,以及將夾爪不同的夾持型態視為不同模型參數但為同一個模型的假設。接著透過實驗識別系統參數、檢驗卡曼濾波器、比較卡曼濾波器的優勢,並透過夾持不同物體沿著空間中的8字軌跡移動以驗證控制器架構的可行性。 學習演算法應用方面,在完整地介紹機器學習、強化學習與迭代學習控制的基礎知識後,將迭代學習控制運用於雙手協同持物移動的任務上,透過數學證明與實驗驗證改善。接著介紹PPO強化學習演算法與4個模擬範例,將一般常當作實時控制器的用法轉化成當作半即時輔助軌跡規劃器的用法,並透過單手推彈簧追力軌跡實驗與單手一維以及二維力控制打磨任務,驗證架構的可行性。 | zh_TW |
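The fusion idea summarized in the abstract, combining position error and force measurements through a Kalman filter under a spring-type contact model, can be illustrated with a minimal scalar sketch. This is not the thesis's actual controller: the stiffness `k`, the noise variances `q` and `r`, and the function `fuse_force` itself are hypothetical values and names chosen for illustration only. The spring model predicts a force from the position error, and the force-sensor reading corrects that prediction.

```python
# Minimal illustrative sketch (all parameters hypothetical): a scalar Kalman
# filter that fuses a spring-model force prediction (k * position error) with
# a noisy force-sensor reading.
import numpy as np

def fuse_force(pos_err, forces_meas, k=500.0, q=0.05, r=2.0):
    """Estimate contact force by fusing spring-model predictions with
    direct force measurements.

    pos_err     : array of position errors (m)  -- drives the prediction
    forces_meas : array of force readings (N)   -- measurement input
    k           : assumed contact stiffness (N/m), hypothetical
    q, r        : process / measurement noise variances, hypothetical
    """
    x, p = k * pos_err[0], 1.0            # initial state and covariance
    estimates = []
    for e, z in zip(pos_err, forces_meas):
        # predict: force implied by the spring model at this position error
        x_pred, p_pred = k * e, p + q
        # update: correct the prediction with the sensed force
        kg = p_pred / (p_pred + r)        # Kalman gain in (0, 1)
        x = x_pred + kg * (z - x_pred)
        p = (1.0 - kg) * p_pred
        estimates.append(x)
    return np.array(estimates)
```

Because the gain stays between 0 and 1, each estimate lies between the model-predicted force and the sensed force, weighted by how much each source is trusted.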
dc.description.abstract | In Industry 4.0, articulated robot arms play a key role in intelligent manufacturing, and among them dual-arm systems offer many advantages, such as more degrees of freedom and firmer grasping of large workpieces. Operating a dual-arm robot requires multiple kinds of feedback, force feedback in particular. However, there has been little research on combining position and force feedback, and little on composing a dual-arm system from two independent position-controlled manipulators. In this thesis, a Kalman filter is therefore proposed that fuses position-error and force measurements to estimate the force feedback, and the dual-arm robot used in the experiments is composed of two independent position-controlled manipulators, which better reflects conditions in real manufacturing plants.
In the part on coordinated dual-arm grasp-and-move manipulation, the controlled plant is modeled as a spring-inerter-damper system and a system identification method is proposed. Based on this model, a Kalman filter is developed that fuses the measured force with the position error to estimate a fused force error, and different grasping profiles are treated as different parameter sets of the same model type. Model parameter identification, robustness tests of the Kalman filter, and comparisons demonstrating its advantages were then carried out experimentally, and the dual-arm system grasped and moved different objects along a spatial figure-eight trajectory to verify the controller structure. In the part on learning algorithms, after an introduction to the fundamentals of machine learning, reinforcement learning, and iterative learning control (ILC), ILC is applied to the coordinated grasp-and-move task, and the resulting improvement is demonstrated both theoretically and experimentally. The Proximal Policy Optimization (PPO) algorithm and four simulation cases are then presented, in which the usual role of reinforcement learning as a real-time controller is recast as that of a semi-real-time aided trajectory planner. Single-arm experiments, pushing a spring to track force trajectories and performing 1D and 2D force-controlled pseudo-grinding tasks, verify the proposed structure. | en |
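The iteration-to-iteration improvement that ILC provides can be sketched with a minimal P-type ILC loop on a toy first-order plant. The plant, gains, and reference below are hypothetical stand-ins, not the thesis's dual-arm system: each iteration replays the whole trajectory, then adds a scaled copy of the tracking error to the feedforward input for the next trial.

```python
# Minimal P-type ILC sketch on a hypothetical first-order plant:
#   y[t] = a*y[t-1] + b*u[t]
# Update law: u_{k+1} = u_k + gamma * e_k, with e_k = ref - y_k.
import numpy as np

def run_plant(u, a=0.3, b=1.0):
    """Simulate the toy plant over one trial of the input trajectory u."""
    y = np.zeros_like(u)
    prev = 0.0
    for t, ut in enumerate(u):
        prev = a * prev + b * ut
        y[t] = prev
    return y

def ilc(ref, iters=20, gamma=0.5):
    """Repeat the trial, learning the feedforward input across iterations."""
    u = np.zeros_like(ref)
    errors = []                    # max tracking error per iteration
    for _ in range(iters):
        y = run_plant(u)
        e = ref - y
        errors.append(np.abs(e).max())
        u = u + gamma * e          # P-type update on the whole trajectory
    return u, errors
```

With these (hypothetical) values the learning operator is a contraction, so the peak tracking error shrinks from trial to trial, which is the same qualitative behavior the thesis demonstrates for the dual-arm task.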
dc.description.provenance | Made available in DSpace on 2021-06-17T06:31:24Z (GMT). No. of bitstreams: 1 ntu-107-R05522811-1.pdf: 40710518 bytes, checksum: c592a5f49971829d32f900fb208b0038 (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | Acknowledgements III
Abstract (Chinese) IV
Abstract V
Table of Contents VII
List of Figures XI
List of Tables XV
Chapter 1 Introduction 1
1.1 Foreword 1
1.2 Literature Review 2
1.3 Motivation and Contributions 6
1.4 Thesis Organization 8
Chapter 2 Platform Architecture 9
2.1 Hardware Architecture 9
2.1.1 Robot Arms, Drivers, and SYNTEC Controllers 10
2.1.2 Gripper Mechanism Design 11
2.1.3 Six-Axis Force/Torque Sensor 14
2.1.4 Industrial PC Controller and Personal Computer 14
2.2 Software Architecture 15
2.2.1 TPD Program 15
2.2.2 MACRO Program 18
2.2.2.1 LabVIEW Program 18
2.2.2.2 Communication State-Machine Loops for the Left/Right Arms and the Force Sensor 18
2.2.2.3 Main State-Machine Loops for the Left/Right Arms 19
2.2.2.4 State-Machine Loop for Dual-Arm Coordinated Operation 21
2.3 Summary 22
Chapter 3 Dual-Arm Coordinated Force Control for Object Holding 23
3.1 Inverse Kinematics Derivation 23
3.2 Gravity Compensation 26
3.3 Single-Arm Simple Force-Trajectory Tracking Experiments 28
3.3.1 Experimental Setup 28
3.3.2 Plant Model Selection, System Identification Method, Experiments, and Results 29
3.3.3 Robot Arm Subsystem 30
3.3.4 Step and Frequency Responses of Force Control 30
3.4 Dual-Arm Coordinated Force-Controlled Grasp-and-Move Manipulation 32
3.4.1 Assumptions 32
3.4.2 Controller Architecture 34
3.4.2.1 Slave Robot Arm and Plant 34
3.4.2.2 Kalman Filter 35
3.4.2.3 PID Controller 37
3.4.3 Handling Contact-Interface Models for Different Grasping Postures 38
3.5 Dual-Arm Coordinated Force-Controlled Grasp-and-Move Manipulation: Experimental Setup, Results, and Discussion 39
3.5.1 Parameter Identification Experiments for the Contact-Interface Model 39
3.5.2 Signal-Processing Experiments on How Model-Parameter Variation Affects Kalman Filter Performance 41
3.5.3 Normal-Force Control for Dual-Arm Box Carrying Along a 1D Trajectory 43
3.5.4 Surface Normal-Force Control for Dual-Arm Box Rotation Along a 1D Trajectory 45
3.5.5 Dual-Arm Grasp-and-Move Experiments Along Complex Trajectories 46
3.6 Summary 50
Chapter 4 Learning Algorithms Applied to Force Control 51
4.1 Foreword 51
4.1.1 Overview of Artificial Intelligence and Reinforcement Learning 51
4.1.2 Literature on Learning Algorithms for Robot Arms 55
4.1.3 Literature on Iterative Learning Control for Dual-Arm Coordinated Manipulation 57
4.1.4 Motivation 58
4.2 ILC-Based Dual-Arm Coordinated Force-Controlled Grasp-and-Move Manipulation 59
4.2.1 Theoretical Proof 60
4.2.2 ILC Applied to Dual-Arm Box Carrying Along a 1D Trajectory 61
4.2.3 ILC Applied to Dual-Arm Box Rotation Along a 1D Trajectory 62
4.2.4 ILC Applied to Dual-Arm Grasp-and-Move Along Complex Trajectories 62
4.2.5 Summary 64
4.3 Overview and Applications of Reinforcement Learning Algorithms 64
4.3.1 Overview of the PPO Algorithm 64
4.3.2 Four Simulated Force-Control Examples Using PPO 66
4.3.2.1 Example 1: PPO as a Real-Time Controller for a Spring-Damper-Mass System 66
4.3.2.2 Example 2: PPO as a Semi-Real-Time Trajectory Planner for a Spring-Damper-Mass System 69
4.3.2.3 Example 3: PPO as an Aided Trajectory Planner for a Spring-Damper-Mass System 71
4.3.2.4 Example 4: PPO as an Aided Force-Trajectory Planner in Single-Arm Force-Control Simulation 74
4.3.2.5 Single-Arm Force-Trajectory Tracking Experiments with PPO as an Aided Planner 77
4.3.3 Summary 80
4.4 PPO as an Aided Planner for a Single-Arm 1D Force-Controlled Grinding Task 81
4.4.1 Task Description 81
4.4.2 Control Architecture, Simulation, and Training Results 82
4.4.3 Experimental Results 86
4.5 PPO as an Aided Planner for a Single-Arm 2D Force-Controlled Grinding Task 87
4.5.1 Task Description 87
4.5.2 Control Architecture, Simulation, and Training Results 88
4.5.3 2D Force-Control Experiments and Results 91
4.5.4 Summary 94
4.6 Summary 95
Chapter 5 Conclusions and Future Work 97
5.1 Conclusions 97
5.2 Future Work 99
5.2.1 Extensions of the Kalman Filter 99
5.2.2 Extensions of Dual-Arm Grasp-and-Move Manipulation 100
5.2.3 Diversity of Dual-Arm Force-Control Tasks 101
5.2.4 Others 101
References 103 | |
dc.language.iso | zh-TW | |
dc.title | 基於位置與力複合誤差控制之雙機器手臂協同持物操作與學習演算法之應用 | zh_TW |
dc.title | A Control Strategy for Dual-arm Object Manipulation Based on Fused Force/Position Errors and Learning Algorithms | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 連豊力(Feng-Li Lian),陳中明(Chung-Ming Chen),顏炳郎(Ping-Lang Yen) | |
dc.subject.keyword | 機器手臂,雙手機器人,雙機器手臂,雙手操作,力控制,卡曼濾波器,持物移動,迭代學習控制,強化學習,PPO, | zh_TW |
dc.subject.keyword | dual manipulators, dual arm robot, dual robot arm, dual-arm manipulation, Kalman filter, force control, iterative learning, reinforcement learning, proximal policy optimization | en |
dc.relation.page | 108 | |
dc.identifier.doi | 10.6342/NTU201803765 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2018-08-16 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
Appears in Collections: | Department of Mechanical Engineering
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 39.76 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.