  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Department of Computer Science and Information Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88672
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 洪一平 [zh_TW]
dc.contributor.advisor: Yi-Ping Hung [en]
dc.contributor.author: 謝宜儒 [zh_TW]
dc.contributor.author: I-Ju Hsieh [en]
dc.date.accessioned: 2023-08-15T17:18:45Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-15
dc.date.issued: 2023
dc.date.submitted: 2023-08-04
dc.identifier.citation:
H. Wang, S. Sridhar, J. Huang, J. Valentin, S. Song, and L. J. Guibas, “Normalized object coordinate space for category-level 6d object pose and size estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2642–2651.
M. Tian, M. H. Ang, and G. H. Lee, “Shape prior deformation for categorical 6d object pose and size estimation,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16. Springer, 2020, pp. 530–546.
A. Tarvainen and H. Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” Advances in neural information processing systems, vol. 30, 2017.
Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in International conference on machine learning. PMLR, 2015, pp. 1180–1189.
J. Lin, Z. Wei, C. Ding, and K. Jia, “Category-level 6d object pose and size estimation using self-supervised deep prior deformation networks,” in Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX. Springer, 2022, pp. 19–34.
Y. Ze and X. Wang, “Category-level 6d object pose estimation in the wild: A semi-supervised learning approach and a new dataset,” Advances in Neural Information Processing Systems, vol. 35, pp. 27469–27483, 2022.
K. Chen and Q. Dou, “Sgpa: Structure-guided prior adaptation for category-level 6d object pose estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 2773–2782.
G. Du, K. Wang, S. Lian, and K. Zhao, “Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review,” Artificial Intelligence Review, vol. 54, no. 3, pp. 1677–1734, 2021.
A. Zeng, K.-T. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez, and J. Xiao, “Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge,” in 2017 IEEE international conference on robotics and automation (ICRA). IEEE, 2017, pp. 1386–1393.
X. Deng, Y. Xiang, A. Mousavian, C. Eppner, T. Bretl, and D. Fox, “Self-supervised 6d object pose estimation for robot manipulation,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 3665–3671.
Y. Su, J. Rambach, N. Minaskan, P. Lesur, A. Pagani, and D. Stricker, “Deep multi-state object pose estimation for augmented reality assembly,” in 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019, pp. 222–227.
F. Tang, Y. Wu, X. Hou, and H. Ling, “3d mapping and 6d pose computation for real time augmented reality on cylindrical objects,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 9, pp. 2887–2899, 2019.
Y. Nie, X. Han, S. Guo, Y. Zheng, J. Chang, and J. J. Zhang, “Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 55–64.
C. Zhang, Z. Cui, Y. Zhang, B. Zeng, M. Pollefeys, and S. Liu, “Holistic 3d scene understanding from a single image with implicit representation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 8833–8842.
Y. Di, F. Manhardt, G. Wang, X. Ji, N. Navab, and F. Tombari, “So-pose: Exploiting self-occlusion for direct 6d pose estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 12396–12405.
S. Iwase, X. Liu, R. Khirodkar, R. Yokota, and K. M. Kitani, “Repose: Fast 6d object pose refinement via deep texture rendering,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3303–3312.
Y. Su, M. Saleh, T. Fetzer, J. Rambach, N. Navab, B. Busam, D. Stricker, and F. Tombari, “Zebrapose: Coarse to fine surface encoding for 6dof object pose estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 6738–6748.
J. Lin, Z. Wei, Z. Li, S. Xu, K. Jia, and Y. Li, “Dualposenet: Category-level 6d object pose and size estimation using dual pose network with refined learning of pose consistency,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3560–3569.
W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab, “Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 1521–1529.
M. Sundermeyer, Z.-C. Marton, M. Durner, M. Brucker, and R. Triebel, “Implicit 3d orientation learning for 6d object detection from rgb images,” in Proceedings of the european conference on computer vision (ECCV), 2018, pp. 699–715.
J. Tremblay, T. To, B. Sundaralingam, Y. Xiang, D. Fox, and S. Birchfield, “Deep object pose estimation for semantic robotic grasping of household objects,” arXiv preprint arXiv:1809.10790, 2018.
T. Lee, B.-U. Lee, I. Shin, J. Choe, U. Shin, I. S. Kweon, and K.-J. Yoon, “Uda-cope: unsupervised domain adaptation for category-level object pose estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14891–14900.
G. French, M. Mackiewicz, and M. Fisher, “Self-ensembling for visual domain adaptation,” in International Conference on Learning Representations, 2018.
J. Deng, W. Li, Y. Chen, and L. Duan, “Unbiased mean teacher for cross-domain object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 4091–4101.
Y.-J. Li, X. Dai, C.-Y. Ma, Y.-C. Liu, K. Chen, B. Wu, Z. He, K. Kitani, and P. Vajda, “Cross-domain adaptive teacher for object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7581–7590.
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” The journal of machine learning research, vol. 17, no. 1, pp. 2096–2030, 2016.
S. Umeyama, “Least-squares estimation of transformation parameters between two point patterns,” IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 13, no. 4, pp. 376–380, 1991.
M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
J. Wang, K. Chen, and Q. Dou, “Category-level 6d object pose estimation via cascaded relation and recurrent reconstruction networks,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 4807–4814.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in neural information processing systems, vol. 30, 2017.
W. Chen, X. Jia, H. J. Chang, J. Duan, L. Shen, and A. Leonardis, “Fs-net: Fast shape-based network for category-level 6d object pose estimation with decoupled rotation mechanism,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1581–1590.
Y. Di, R. Zhang, Z. Lou, F. Manhardt, X. Ji, N. Navab, and F. Tombari, “Gpv-pose: Category-level object pose estimation via geometry-guided point-wise voting,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 6781–6791.
R. Zhang, Y. Di, Z. Lou, F. Manhardt, F. Tombari, and X. Ji, “Rbp-pose: Residual bounding box projection for category-level pose estimation,” in Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part I. Springer, 2022, pp. 655–672.
R. Zhang, Y. Di, F. Manhardt, F. Tombari, and X. Ji, “Ssp-pose: Symmetry-aware shape prior deformation for direct category-level object pose estimation,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 7452–7459.
F. Manhardt, G. Wang, B. Busam, M. Nickel, S. Meier, L. Minciullo, X. Ji, and N. Navab, “Cps++: Improving class-level 6d pose and shape estimation from monocular images with self-supervised learning,” arXiv preprint arXiv:2003.05848, 2020.
W. Peng, J. Yan, H. Wen, and Y. Sun, “Self-supervised category-level 6d object pose estimation with deep implicit shape representation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 2, 2022, pp. 2082–2090.
J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove, “Deepsdf: Learning continuous signed distance functions for shape representation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 165–174.
M. Zaccaria, F. Manhardt, Y. Di, F. Tombari, J. Aleotti, and M. Giorgini, “Self-supervised category-level 6d object pose estimation with optical flow consistency,” IEEE Robotics and Automation Letters, vol. 8, no. 5, pp. 2510–2517, 2023.
K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Scholkopf, and A. J. Smola, “Integrating structured biological data by kernel maximum mean discrepancy,” Bioinformatics, vol. 22, no. 14, pp. e49–e57, 2006.
M. Long, Y. Cao, J. Wang, and M. Jordan, “Learning transferable features with deep adaptation networks,” in International conference on machine learning. PMLR, 2015, pp. 97–105.
M. Long, H. Zhu, J. Wang, and M. I. Jordan, “Unsupervised domain adaptation with residual transfer networks,” Advances in neural information processing systems, vol. 29, 2016.
A. T. Nguyen, T. Tran, Y. Gal, P. H. S. Torr, and A. G. Baydin, “Kl guided domain adaptation,” International Conference on Learning Representations, 2022.
E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial discriminative domain adaptation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7167–7176.
A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the eleventh annual conference on Computational learning theory, 1998, pp. 92–100.
K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. A. Raffel, E. D. Cubuk, A. Kurakin, and C.-L. Li, “Fixmatch: Simplifying semi-supervised learning with consistency and confidence,” Advances in neural information processing systems, vol. 33, pp. 596–608, 2020.
Y. Ouali, C. Hudelot, and M. Tami, “Semi-supervised semantic segmentation with cross-consistency training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12674–12684.
Y. Liu, Y. Tian, Y. Chen, F. Liu, V. Belagiannis, and G. Carneiro, “Perturbed and strict mean teachers for semi-supervised semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4258–4267.
Y.-C. Liu, C.-Y. Ma, Z. He, C.-W. Kuo, K. Chen, P. Zhang, B. Wu, Z. Kira, and P. Vajda, “Unbiased teacher for semi-supervised object detection,” arXiv preprint arXiv:2102.09480, 2021.
J. Fan, B. Gao, H. Jin, and L. Jiang, “Ucc: Uncertainty guided cross-head co-training for semi-supervised semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 9947–9956.
L. Melas-Kyriazi and A. K. Manrai, “Pixmatch: Unsupervised domain adaptation via pixelwise consistency training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12435–12445.
Q. Zhou, Z. Feng, Q. Gu, J. Pang, G. Cheng, X. Lu, J. Shi, and L. Ma, “Context-aware mixup for domain adaptive semantic segmentation,” IEEE Transactions on Circuits and Systems for Video Technology, 2022.
I. Triguero, S. Garcia, and F. Herrera, “Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study,” Knowledge and Information systems, vol. 42, pp. 245–284, 2015.
G. French, M. Mackiewicz, and M. Fisher, “Self-ensembling for visual domain adaptation,” arXiv preprint arXiv:1706.05208, 2017.
Y. Zou, Z. Yu, X. Liu, B. Kumar, and J. Wang, “Confidence regularized self-training,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5982–5991.
H. Liu, J. Wang, and M. Long, “Cycle self-training for domain adaptation,” Advances in Neural Information Processing Systems, vol. 34, pp. 22968–22981, 2021.
Q. Cai, Y. Pan, C.-W. Ngo, X. Tian, L. Duan, and T. Yao, “Exploring object relation in mean teacher for cross-domain detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11457–11466.
J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2223–2232.
Z. Pei, Z. Cao, M. Long, and J. Wang, “Multi-adversarial domain adaptation,” in Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “Pytorch: An imperative style, high-performance deep learning library,” Advances in neural information processing systems, vol. 32, 2019.
K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961–2969.
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
L. N. Smith, “Cyclical learning rates for training neural networks,” in 2017 IEEE winter conference on applications of computer vision (WACV). IEEE, 2017, pp. 464–472.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88672
dc.description.abstract: 類別層級之物體位姿估計致力於估測未見過之物體的六自由度位姿,而現有的方法大多倚賴像是物體位姿及 CAD 模型的標記。然而,在真實世界中以人工取得這些標記相當費時且容易出錯。因此,我們提出一種方法來解決類別層級之物體位姿估計中無監督領域自適應的問題。我們採用了一個師生共同學習的架構來同時利用有標記的合成資料與無標記的真實世界資料,學生模型與教師模型被訓練成在不同的干擾下做出一致的預測。此外,我們引入了領域對抗訓練來縮短合成資料與真實世界資料之間的差距,為了避免領域間錯誤的特徵對齊,我們使用了多個領域識別器,以在已知類別的情況下進行特徵對齊。實驗結果顯示我們的方法在公開的真實世界資料集上,取得了無監督方法中最好的結果。透過消融研究,我們也證明了我們的方法不受特定網路架構的限制,並可以作為一種於類別層級物體位姿估計的通用無監督領域自適應方法。 [zh_TW]
dc.description.abstract: Category-level object pose estimation aims to predict 6-DoF poses for previously unseen objects. Most existing methods rely on ground-truth labels such as object poses and CAD models; however, manually annotating these labels in real-world scenarios is time-consuming and error-prone. We therefore propose a method for unsupervised domain adaptation (UDA) in category-level object pose estimation. We adopt a teacher-student joint learning framework to exploit both labeled synthetic data and unlabeled real-world data: the student and teacher models are trained to make consistent predictions under different perturbations. Furthermore, we introduce domain adversarial training to bridge the domain gap between synthetic and real-world data. To prevent false feature alignment across domains, we adopt multiple domain discriminators instead of a single one and perform category-aware alignment. Extensive experiments show that our method achieves state-of-the-art results among unsupervised methods on a public real-world dataset. Ablation studies further demonstrate that our method is not tied to a particular network architecture and can serve as a general UDA method for category-level object pose estimation. [en]
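The training recipe the abstract describes (a Mean Teacher updated by an exponential moving average of the student's weights, a consistency loss between predictions under different perturbations, and gradient reversal for domain adversarial training) can be sketched as follows. This is a minimal NumPy illustration with hypothetical names, not the thesis implementation; in particular, the thesis uses multiple category-aware discriminators, which this toy sketch does not model.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.999):
    """Mean Teacher update: theta_t <- alpha * theta_t + (1 - alpha) * theta_s."""
    return {k: alpha * teacher_params[k] + (1.0 - alpha) * student_params[k]
            for k in teacher_params}

def consistency_loss(student_pred, teacher_pred):
    """Penalize disagreement between student and teacher predictions
    made under different input perturbations (mean squared error)."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

def reversed_gradient(discriminator_grad, lam=1.0):
    """Gradient reversal layer, backward pass only: the feature extractor
    receives the domain discriminator's gradient with its sign flipped,
    scaled by lambda, so it learns domain-confusing features."""
    return -lam * discriminator_grad

# Toy usage: one EMA step and one consistency evaluation.
teacher = {"w": np.array([1.0, 2.0])}
student = {"w": np.array([2.0, 4.0])}
teacher = ema_update(teacher, student, alpha=0.9)   # w -> [1.1, 2.2]
loss = consistency_loss(np.array([0.5, 0.5]), np.array([0.0, 1.0]))
```

In practice the EMA update runs after every student optimization step, so the teacher is a slowly moving, more stable average of the student, and its predictions serve as targets for the unlabeled real-world data.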
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-15T17:18:45Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2023-08-15T17:18:45Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
誌謝 (Acknowledgements) i
摘要 (Chinese Abstract) ii
Abstract iii
List of Figures vi
List of Tables viii
1 Introduction 1
2 Related Work 4
2.1 Category-Level Object Pose Estimation 4
2.1.1 Fully-Supervised Methods 4
2.1.2 Unsupervised Methods 6
2.2 Unsupervised Domain Adaptation 6
2.2.1 Feature Alignment 7
2.2.2 Consistency Regularization 7
2.2.3 Self-Training 7
3 Proposed Method 9
3.1 Problem Formulation 9
3.2 Overview 10
3.3 Deep Prior Deformation Network (DPDN) 11
3.4 Teacher and Student 12
3.4.1 Student Pre-training 12
3.4.2 Teacher-student Joint Learning 14
3.5 Domain Adversarial Training 15
3.6 Overall Training and Inference 18
4 Experiments 19
4.1 Datasets 19
4.2 Implementation Details 21
4.3 Evaluation Metrics 21
4.4 Quantitative Comparison with Existing Methods 22
4.5 Ablation Studies 23
4.5.1 Mean Teacher 25
4.5.2 Domain Adversarial Training 25
4.5.3 Unsupervised Loss 25
4.5.4 2D and 3D Augmentation 26
4.5.5 EMA Update 26
4.5.6 Base Network 27
4.6 Qualitative Results 28
5 Conclusion and Future Work 31
References 33
dc.language.iso: en
dc.subject: 類別層級之物體位姿估計 [zh_TW]
dc.subject: 平均教師 [zh_TW]
dc.subject: 無監督領域自適應 [zh_TW]
dc.subject: 領域對抗訓練 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: unsupervised domain adaptation [en]
dc.subject: Mean Teacher [en]
dc.subject: domain adversarial training [en]
dc.subject: category-level object pose estimation [en]
dc.subject: deep learning [en]
dc.title: 域適應式平均教師於類別層級之物體位姿估計 [zh_TW]
dc.title: Domain-Adaptive Mean Teacher for Category-Level Object Pose Estimation [en]
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 陳祝嵩;王鈺強;花凱龍;徐繼聖 [zh_TW]
dc.contributor.oralexamcommittee: Chu-Song Chen;Yu-Chiang Wang;Kai-Lung Hua;Gee-Sern Hsu [en]
dc.subject.keyword: 類別層級之物體位姿估計,無監督領域自適應,平均教師,領域對抗訓練,深度學習 [zh_TW]
dc.subject.keyword: category-level object pose estimation,unsupervised domain adaptation,Mean Teacher,domain adversarial training,deep learning [en]
dc.relation.page: 41
dc.identifier.doi: 10.6342/NTU202301994
dc.rights.note: 未授權
dc.date.accepted: 2023-08-08
dc.contributor.author-college: 電機資訊學院
dc.contributor.author-dept: 資訊工程學系
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-111-2.pdf — 8.06 MB, Adobe PDF (access restricted; not authorized for public access)


Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
