NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Communication Engineering
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91398
Title: Swin Transformer for Pedestrian and Occluded Pedestrian Detection
(使用Swin Transformer進行針對遮擋行人的行人偵測任務)
Authors: 梁榮恩
Jung-An Liang
Advisor: 丁建均
Jian-Jiun Ding
Keyword: Deep learning, Computer vision, Object detection, Transformer
Publication Year : 2023
Degree: Master's
Abstract: This study proposes a high-precision pedestrian detection model. Among the road-object recognition tasks required for autonomous vehicles, pedestrian recognition is the most important: collisions with pedestrians cause the most severe casualties, making pedestrians the objects a vehicle must avoid above all others. The proposed model is intended for deployment in in-vehicle systems to perform real-time pedestrian detection. Following the great success of Transformer-based models in natural language processing in recent years, many studies have applied Transformer architectures to computer vision tasks. Although the Vision Transformer (ViT) has achieved results on par with convolutional neural networks (CNNs), its high computational cost and large parameter count remain significant obstacles to deployment on edge devices.
In 2021, Microsoft introduced the Swin Transformer, whose strong performance, leaner architecture compared with the Vision Transformer, and versatility across downstream tasks make it particularly well suited as a feature extractor for object detection models. This research harnesses its ability to capture multi-scale features and spatial relationships in images, which fits the challenging task of pedestrian detection. The backbone is combined with a two-stage detector based on the Faster R-CNN framework, comprising a cascade Region Proposal Network (RPN) and Region of Interest (RoI) head; during RPN training, all anchors are used together with Focal Loss. Experiments on the EuroCity Persons and CityPersons datasets show promising results. In particular, the model performs strongly when detecting heavily occluded pedestrians, demonstrating its ability to handle challenging scenarios with which traditional methods may struggle.
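The abstract states that RPN training uses all anchors together with Focal Loss. The point of that combination is that Focal Loss down-weights the many easy background anchors, so the RPN need not subsample them. The following minimal NumPy sketch (an illustration of the standard binary Focal Loss formulation, not the thesis's actual code) shows this down-weighting effect; the `alpha` and `gamma` defaults are the commonly used values, assumed here rather than taken from the thesis:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p : predicted foreground probabilities for each anchor, in (0, 1)
    y : binary labels (1 = pedestrian anchor, 0 = background anchor)
    Easy examples (p_t near 1) get a near-zero modulating factor
    (1 - p_t)**gamma, so abundant easy background anchors contribute
    little to the total loss.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)            # numerical safety for log()
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confidently correct anchor is penalized far less than a hard one:
easy = focal_loss(np.array([0.9]), np.array([1]))  # well-classified pedestrian
hard = focal_loss(np.array([0.1]), np.array([1]))  # badly missed pedestrian
```

With `gamma=0` and `alpha=0.5` this reduces to (half of) the ordinary cross-entropy, which is what makes the "all anchors" strategy viable: the modulating factor, not subsampling, controls the foreground/background imbalance.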
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91398
DOI: 10.6342/NTU202304551
Fulltext Rights: Authorized (worldwide open access)
Appears in Collections: Graduate Institute of Communication Engineering (電信工程學研究所)

Files in This Item:
File: ntu-112-1.pdf (1.48 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
