  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Graduate Institute of Networking and Multimedia
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80456
Full metadata record
DC Field | Value | Language
dc.contributor.advisor莊永裕(Yung-Yu Chuang)
dc.contributor.authorChung-Min Tsaien
dc.contributor.author蔡仲閔zh_TW
dc.date.accessioned2022-11-24T03:07:01Z-
dc.date.available2022-01-17
dc.date.available2022-11-24T03:07:01Z-
dc.date.copyright2022-01-17
dc.date.issued2022
dc.date.submitted2022-01-12
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80456-
dc.description.abstractIn this thesis, we focus on the multi-object tracking and segmentation problem. Most leading solutions, such as PointTrack, concentrate on finding better appearance-feature representations. Beyond appearance, however, object motion is an equally important cue for object association. Combining the two, we apply a dynamic object-aware network together with position information predicted by a Kalman filter. Cued by the predicted motion, our dynamic searcher can locate each object in the next frame. Our tracker, SearchTrack, thus fully integrates appearance features with object motion. We propose a joint multi-task tracker that is fast, intuitive, and more accurate than current methods on both 2D multi-object tracking and multi-object tracking and segmentation. Our method achieves 71.2 / 57.6 HOTA on the KITTI MOTS leaderboard for the car and pedestrian classes, respectively. In addition, we achieve 53.1 HOTA on MOT17.zh_TW
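The motion cue described in the abstract can be illustrated with a generic constant-velocity Kalman filter of the kind used in SORT-style trackers: predict an object's position in the next frame, then correct the state with the detected position. This is a minimal sketch under assumed settings, not the thesis's implementation; the class name `ConstantVelocityKF` and all noise parameters are invented for the example.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter over a 2-D object center.

    State vector: [x, y, vx, vy]. Only position is measured.
    """

    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0        # state covariance (assumed init)
        self.F = np.eye(4)               # state-transition matrix
        self.F[0, 2] = self.F[1, 3] = dt # x += vx*dt, y += vy*dt
        self.H = np.eye(2, 4)            # measurement model: position only
        self.Q = np.eye(4) * 0.01        # process noise (assumed)
        self.R = np.eye(2) * 1.0         # measurement noise (assumed)

    def predict(self):
        """Extrapolate the state one frame ahead; return predicted (x, y)."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        """Correct the state with a measured position (zx, zy)."""
        z = np.array([zx, zy])
        y = z - self.H @ self.state                  # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Track an object moving +5 px/frame in x; the predicted center for the
# next frame is the kind of position cue a searcher could be conditioned on.
kf = ConstantVelocityKF(100.0, 50.0)
for t in range(1, 6):
    kf.predict()
    kf.update(100.0 + 5.0 * t, 50.0)
print(np.round(kf.predict(), 1))  # predicted (x, y) for the next frame
```

After a few predict/update cycles the filter learns the object's velocity, so the prediction extrapolates ahead of the last detection rather than lagging behind it.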
dc.description.provenanceMade available in DSpace on 2022-11-24T03:07:01Z (GMT). No. of bitstreams: 1
U0001-0212202110452300.pdf: 4524389 bytes, checksum: 7feeb327940e8661bf70c7269eea8572 (MD5)
Previous issue date: 2022
en
dc.description.tableofcontentsVerification Letter from the Oral Examination Committee i Acknowledgements iii 摘要 (Chinese Abstract) v Abstract vii Contents ix List of Figures xi List of Tables xiii Chapter 1 Introduction 1 Chapter 2 Related Work 3 Chapter 3 Preliminaries 7 Chapter 4 SearchTrack 9 Chapter 5 Experiments and Results 15 Chapter 6 Conclusion 21 Reference 23
dc.language.isoen
dc.subjectMulti-object tracking and segmentation (多目標物件追蹤與分割)zh_TW
dc.subjectObject tracking (物件追蹤)zh_TW
dc.subjectComputer vision (電腦視覺)zh_TW
dc.subjectMultiple Object Trackingen
dc.subjectComputer Visionen
dc.subjectMulti-Object Tracking and Segmentationen
dc.titleA Search-Based Tracker with Position-Aware Motion Model (基於搜索方法且帶有位置感知運動模型的追蹤器)zh_TW
dc.titleA Search-Based Tracker with Position-Aware Motion Modelen
dc.date.schoolyear110-1
dc.description.degreeMaster (碩士)
dc.contributor.oralexamcommittee廖弘源(Hsin-Tsai Liu),林永隆(Chih-Yang Tseng),王建堯
dc.subject.keywordObject tracking, multi-object tracking and segmentation, computer visionzh_TW
dc.subject.keywordMultiple Object Tracking, Multi-Object Tracking and Segmentation, Computer Visionen
dc.relation.page26
dc.identifier.doi10.6342/NTU202104502
dc.rights.noteAuthorized for release (campus-only access)
dc.date.accepted2022-01-12
dc.contributor.author-collegeCollege of Electrical Engineering and Computer Science (電機資訊學院)zh_TW
dc.contributor.author-deptGraduate Institute of Networking and Multimedia (資訊網路與多媒體研究所)zh_TW
Appears in Collections: Graduate Institute of Networking and Multimedia

Files in This Item:
File | Size | Format
U0001-0212202110452300.pdf
Access restricted to NTU campus IP addresses (use the library's VPN service from off campus)
4.42 MB | Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
