NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92653
Full metadata record
DC Field  Value  Language
dc.contributor.advisor  陳祝嵩  zh_TW
dc.contributor.advisor  Chu-Song Chen  en
dc.contributor.author  洪宗維  zh_TW
dc.contributor.author  Zong-Wei Hong  en
dc.date.accessioned  2024-05-30T16:05:34Z  -
dc.date.available  2024-05-31  -
dc.date.copyright  2024-05-30  -
dc.date.issued  2024  -
dc.date.submitted  2024-03-29  -
dc.identifier.uri  http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92653  -
dc.description.abstract  In this work, we propose a novel method for recovering an object's pose, which comprises rotation and translation and thus six degrees of freedom (6DoF), from a single RGB-D image. Existing methods fall into two main categories: those that regress the object's pose directly with a network, and those that predict sparse keypoints and recover the pose from them. Our method instead tackles this challenging task by predicting dense correspondences, i.e., the object coordinates of every visible pixel. It makes full use of off-the-shelf object detection methods to locate each target object, and it introduces a reprojection mechanism that modifies the camera intrinsic matrix to handle the cropping of objects out of the RGB-D image. In addition, we convert the 3D object coordinates into a residual representation, which shrinks the model's output space and improves training. We conduct extensive experiments to validate the effectiveness of our method for 6DoF pose estimation: it outperforms most prior methods and shows marked improvements over the state of the art, particularly in occluded scenes.  zh_TW
dc.description.abstract  In this work, we present a novel method for determining the 6DoF pose of an object from a single RGB-D image. Unlike existing methods, which either directly regress the object's pose or rely on sparse keypoints for pose recovery, our approach addresses this challenging task through dense correspondence, i.e., it regresses the object coordinates of each visible pixel. Our approach leverages readily available object detection methods, and it introduces a reprojection mechanism that adjusts the camera intrinsic matrix to handle cropping in RGB-D images. Moreover, we transform the 3D object coordinates into a residual representation, which proves effective in reducing the output space and yields superior performance. We conducted extensive experiments to validate the effectiveness of our approach for 6DoF pose estimation: it outperforms most previous methods and demonstrates notable improvements over the state-of-the-art methods, especially in occlusion scenarios.  en
dc.description.provenance  Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-05-30T16:05:34Z. No. of bitstreams: 0  en
dc.description.provenance  Made available in DSpace on 2024-05-30T16:05:34Z (GMT). No. of bitstreams: 0  en
dc.description.tableofcontents  Verification Letter from the Oral Examination Committee i
Acknowledgements ii
Abstract (in Chinese) iii
Abstract iv
Contents v
List of Figures viii
List of Tables xi
Chapter 1 Introduction 1
1.1 RGB-D Object Pose Estimation 1
Chapter 2 Related Work 6
2.1 RGB-based Prediction 6
2.2 RGB-D-based Prediction 7
2.2.1 Direct Pose Prediction 8
2.2.2 Keypoint-based Prediction 9
Chapter 3 Method 10
3.1 Overview 10
3.2 Handling Cropping in RGB-D Images 12
3.3 Residual Representation 13
3.4 RDPN 14
Chapter 4 Experiments 19
4.1 Implementation Details 19
4.2 Benchmark Datasets 19
4.3 BOP Challenge Datasets 22
4.4 Metrics 23
4.4.1 Metrics for Four Benchmark Datasets 23
4.4.2 Metrics for BOP Challenge 24
4.5 Comparison with State-of-the-Art Methods 25
4.5.1 Results on LineMOD & Occlusion LineMOD 25
4.5.2 Results on YCB-V 26
4.5.3 Results on MP6D 26
4.5.4 Results on BOP Challenge 26
4.5.5 Qualitative Results on Occlusion LineMOD 27
4.6 Ablation Study on Occlusion LineMOD 28
4.6.1 Effectiveness of the Residual Representation 28
4.6.2 Effectiveness of the Dense Correspondence Components 28
4.6.3 Effectiveness of Adjusting the Intrinsic K_org 29
4.6.4 Effectiveness of Different Numbers of Anchors 29
Chapter 5 Conclusion 31
References 32
Appendix A — Details 42
A.1 More Implementation Details 42
A.1.1 Network Architecture 42
A.1.2 Training Parameters 42
A.1.3 Training Enhancements 43
A.2 More Visualization Results 43
A.3 Limitation 47
A.4 Ablation Study Table 47
A.5 Quantitative Results on the BOP Challenge 48
A.6 Quantitative Results under the Same Detections on the YCB-V Dataset 48
-
dc.language.iso  en  -
dc.subject  6DoF object pose estimation  zh_TW
dc.subject  Robotic control  zh_TW
dc.subject  RGB and depth modality fusion  zh_TW
dc.subject  Deep learning  zh_TW
dc.subject  Dense point correspondence  zh_TW
dc.subject  Deep Learning  en
dc.subject  6DoF object pose estimation  en
dc.subject  Robotic Manipulation  en
dc.subject  Dense correspondence  en
dc.subject  RGB and Depth Modality Fusion  en
dc.title  Residual-based Dense Point-wise Network for Object Pose Estimation Based on RGB-D Images  zh_TW
dc.title  Residual-based Dense Point-wise Network for 6DoF Object Pose Estimation Based on RGB-D Images  en
dc.type  Thesis  -
dc.date.schoolyear  112-2  -
dc.description.degree  Master's  -
dc.contributor.oralexamcommittee  陳駿丞; 楊惠芳  zh_TW
dc.contributor.oralexamcommittee  Jun-Cheng Chen; Huei-Fang Yang  en
dc.subject.keyword  6DoF object pose estimation, deep learning, RGB and depth modality fusion, dense point correspondence, robotic control  zh_TW
dc.subject.keyword  6DoF object pose estimation, Deep Learning, RGB and Depth Modality Fusion, Dense correspondence, Robotic Manipulation  en
dc.relation.page  51  -
dc.identifier.doi  10.6342/NTU202400663  -
dc.rights.note  Authorization granted (access restricted to campus)  -
dc.date.accepted  2024-03-29  -
dc.contributor.author-college  College of Electrical Engineering and Computer Science  -
dc.contributor.author-dept  Department of Computer Science and Information Engineering  -
dc.date.embargo-lift  2029-03-29  -
Appears in Collections: Department of Computer Science and Information Engineering
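The abstract above describes two geometric ingredients that a short sketch can make concrete: adjusting the camera intrinsic matrix when an object is cropped out of the RGB-D image (Section 3.2 of the thesis) and encoding 3D object coordinates as residuals (Section 3.3). The Python/NumPy code below is illustrative only: the function names, the (x0, y0, s_x, s_y) crop-and-resize parameterization, and the nearest-anchor residual encoding are our assumptions for exposition, not the thesis's exact formulation.

import numpy as np

def crop_adjusted_intrinsics(K_org, x0, y0, s_x, s_y):
    # Intrinsics of a virtual camera that observes the crop directly: a pixel
    # (u, v) of the full image lands at (s_x*(u - x0), s_y*(v - y0)) in the
    # resized crop, so the focal lengths scale and the principal point shifts.
    K = K_org.astype(np.float64)           # astype returns a copy
    K[0, 0] = s_x * K_org[0, 0]            # fx' = s_x * fx
    K[1, 1] = s_y * K_org[1, 1]            # fy' = s_y * fy
    K[0, 2] = s_x * (K_org[0, 2] - x0)     # cx' = s_x * (cx - x0)
    K[1, 2] = s_y * (K_org[1, 2] - y0)     # cy' = s_y * (cy - y0)
    return K

def encode_residual(coords, anchors):
    # coords: (N, 3) per-pixel 3D object coordinates; anchors: (A, 3) points
    # on the model surface (hypothetically chosen, e.g., by farthest point
    # sampling). Each coordinate becomes a nearest-anchor index plus a small
    # offset, so the regression targets span a much smaller range.
    dists = np.linalg.norm(coords[:, None, :] - anchors[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)             # (N,) nearest-anchor index
    return idx, coords - anchors[idx]      # (N,), (N, 3) residual offsets

def decode_residual(idx, residual, anchors):
    # Inverse of encode_residual: recover absolute object coordinates.
    return anchors[idx] + residual

# Example: a 256x256 detection crop taken at (120, 80) and resized to
# 128x128 gives s_x = s_y = 0.5:
#   K_crop = crop_adjusted_intrinsics(K_org, 120, 80, 0.5, 0.5)

With the intrinsics corrected this way, a pixel of the crop back-projects along the same camera-space ray as in the original image, so the depth channel of the cropped patch stays geometrically consistent; the residual encoding keeps the network's outputs small offsets around anchors rather than coordinates spanning the whole model.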

Files in This Item:
File  Size  Format
ntu-112-2.pdf (restricted; not authorized for public access)  20.74 MB  Adobe PDF


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
