NTU Theses and Dissertations Repository › College of Engineering › Department of Engineering Science and Ocean Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90582
Full metadata record
DC Field | Value | Language
dc.contributor.advisor黃心豪zh_TW
dc.contributor.advisorHsin-Haou Huangen
dc.contributor.author蔡承恩zh_TW
dc.contributor.authorCheng-En Tsaien
dc.date.accessioned2023-10-03T16:43:49Z-
dc.date.available2023-11-09-
dc.date.copyright2023-10-03-
dc.date.issued2023-
dc.date.submitted2023-08-04-
dc.identifier.citation[1] J.-S. Chou and W.-T. Tu, "Failure analysis and risk management of a collapsed large wind turbine tower," Engineering Failure Analysis, vol. 18, no. 1, pp. 295-313, 2011.
[2] L. Luo and M. Q. Feng, "Edge‐enhanced matching for gradient‐based computer vision displacement measurement," Computer‐Aided Civil and Infrastructure Engineering, vol. 33, no. 12, pp. 1019-1040, 2018.
[3] C.-Z. Dong and F. N. Catbas, "A review of computer vision–based structural health monitoring at local and global levels," Structural Health Monitoring, vol. 20, no. 2, pp. 692-743, 2021.
[4] C. Harris and M. Stephens, "A combined corner and edge detector," in Alvey Vision Conference, 1988, vol. 15, pp. 147-151.
[5] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International journal of computer vision, vol. 60, no. 2, pp. 91-110, 2004.
[6] H. Bay, T. Tuytelaars, and L. V. Gool, "SURF: Speeded up robust features," in European conference on computer vision, 2006: Springer, pp. 404-417.
[7] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in 2011 International conference on computer vision, 2011: IEEE, pp. 2564-2571.
[8] D. DeTone, T. Malisiewicz, and A. Rabinovich, "SuperPoint: Self-supervised interest point detection and description," in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2018, pp. 224-236.
[9] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in IJCAI'81: 7th International Joint Conference on Artificial Intelligence, 1981, vol. 2.
[10] G. Farnebäck, "Two-frame motion estimation based on polynomial expansion," in Scandinavian conference on Image analysis, 2003: Springer, pp. 363-370.
[11] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, "FlowNet: Learning optical flow with convolutional networks," in Proceedings of the IEEE international conference on computer vision, 2015, pp. 2758-2766.
[12] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[13] D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, "PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8934-8943.
[14] Z. Teed and J. Deng, "RAFT: Recurrent all-pairs field transforms for optical flow," in European conference on computer vision, 2020: Springer, pp. 402-419.
[15] X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng, "Complete solution classification for the perspective-three-point problem," IEEE transactions on pattern analysis and machine intelligence, vol. 25, no. 8, pp. 930-943, 2003.
[16] G. Nakano, "A Simple Direct Solution to the Perspective-Three-Point Problem," in BMVC, 2019, p. 26.
[17] V. Lepetit, F. Moreno-Noguer, and P. Fua, "EPnP: An accurate O(n) solution to the PnP problem," International journal of computer vision, vol. 81, no. 2, pp. 155-166, 2009.
[18] T. Sattler, B. Leibe, and L. Kobbelt, "Fast image-based localization using direct 2d-to-3d matching," in 2011 International Conference on Computer Vision, 2011: IEEE, pp. 667-674.
[19] A. Kendall, M. Grimes, and R. Cipolla, "Posenet: A convolutional network for real-time 6-dof camera relocalization," in Proceedings of the IEEE international conference on computer vision, 2015, pp. 2938-2946.
[20] H. C. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections," Nature, vol. 293, no. 5828, pp. 133-135, 1981.
[21] O. D. Faugeras, "What can be seen in three dimensions with an uncalibrated stereo rig," in European conference on computer vision, 1992: Springer, pp. 563-578.
[22] J. Iglhaut, C. Cabo, S. Puliti, L. Piermattei, J. O’Connor, and J. Rosette, "Structure from motion photogrammetry in forestry: A review," Current Forestry Reports, vol. 5, no. 3, pp. 155-168, 2019.
[23] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE transactions on robotics, vol. 31, no. 5, pp. 1147-1163, 2015.
[24] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE transactions on robotics, vol. 33, no. 5, pp. 1255-1262, 2017.
[25] D. Feng, M. Q. Feng, E. Ozer, and Y. Fukuda, "A vision-based sensor for noncontact structural displacement measurement," Sensors, vol. 15, no. 7, pp. 16557-16575, 2015.
[26] L. Tian and B. Pan, "Remote bridge deflection measurement using an advanced video deflectometer and actively illuminated LED targets," Sensors, vol. 16, no. 9, p. 1344, 2016.
[27] Y. Xu, J. Brownjohn, and D. Kong, "A non‐contact vision‐based system for multipoint displacement monitoring in a cable‐stayed footbridge," Structural Control and Health Monitoring, vol. 25, no. 5, p. e2155, 2018.
[28] J. Won, J.-W. Park, K. Park, H. Yoon, and D.-S. Moon, "Non-target structural displacement measurement using reference frame-based deepflow," Sensors, vol. 19, no. 13, p. 2992, 2019.
[29] G. Deng, Z. Zhou, S. Shao, X. Chu, and C. Jian, "A novel dense full-field displacement monitoring method based on image sequences and optical flow algorithm," Applied Sciences, vol. 10, no. 6, p. 2118, 2020.
[30] H. Yoon, H. Elanwar, H. Choi, M. Golparvar‐Fard, and B. F. Spencer Jr, "Target‐free approach for vision‐based structural system identification using consumer‐grade cameras," Structural Control and Health Monitoring, vol. 23, no. 12, pp. 1405-1416, 2016.
[31] Y. Tian, J. Zhang, and S. Yu, "Vision-based structural scaling factor and flexibility identification through mobile impact testing," Mechanical Systems and Signal Processing, vol. 122, pp. 387-402, 2019.
[32] K. Bharadwaj, A. Sheidaei, A. Afshar, and J. Baqersad, "Full-field strain prediction using mode shapes measured with digital image correlation," Measurement, vol. 139, pp. 326-333, 2019.
[33] D. Feng and M. Q. Feng, "Experimental validation of cost-effective vision-based structural health monitoring," Mechanical Systems and Signal Processing, vol. 88, pp. 199-211, 2017.
[34] Y.-J. Cha, J. G. Chen, and O. Büyüköztürk, "Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters," Engineering Structures, vol. 132, pp. 300-313, 2017.
[35] L. Felipe-Sesé and F. A. Díaz, "Damage methodology approach on a composite panel based on a combination of Fringe Projection and 2D Digital Image Correlation," Mechanical Systems and Signal Processing, vol. 101, pp. 467-479, 2018.
[36] Y. Xu, "Photogrammetry-based structural damage detection by tracking a visible laser line," Structural Health Monitoring, vol. 19, no. 1, pp. 322-336, 2020.
[37] O. Avci, O. Abdeljaber, S. Kiranyaz, M. Hussein, M. Gabbouj, and D. J. Inman, "A review of vibration-based damage detection in civil structures: From traditional methods to Machine Learning and Deep Learning applications," Mechanical systems and signal processing, vol. 147, p. 107077, 2021.
[38] W. Yan, "Application of random forest to aircraft engine fault diagnosis," in The Proceedings of the Multiconference on "Computational Engineering in Systems Applications," 2006, vol. 1: IEEE, pp. 468-475.
[39] Y. Yu, C. Wang, X. Gu, and J. Li, "A novel deep learning-based method for damage identification of smart building structures," Structural Health Monitoring, vol. 18, no. 1, pp. 143-163, 2019.
[40] M. Yuan, Y. Wu, and L. Lin, "Fault diagnosis and remaining useful life estimation of aero engine using LSTM neural network," in 2016 IEEE international conference on aircraft utility systems (AUS), 2016: IEEE, pp. 135-140.
[41] V. Hoskere, J.-W. Park, H. Yoon, and B. F. Spencer Jr, "Vision-based modal survey of civil infrastructure using unmanned aerial vehicles," Journal of Structural Engineering, vol. 145, no. 7, p. 04019062, 2019.
[42] T. Khuc, T. A. Nguyen, H. Dao, and F. N. Catbas, "Swaying displacement measurement for structural monitoring using computer vision and an unmanned aerial vehicle," Measurement, vol. 159, p. 107769, 2020.
[43] D. Ribeiro, R. Santos, R. Cabral, G. Saramago, P. Montenegro, H. Carvalho, J. Correia, and R. Calçada, "Non-contact structural displacement measurement using unmanned aerial vehicles and video-based systems," Mechanical Systems and Signal Processing, vol. 160, p. 107869, 2021.
[44] W. Li, W. Zhao, J. Gu, B. Fan, and Y. Du, "Dynamic characteristics monitoring of large wind turbine blades based on target-free DSST vision algorithm and UAV," Remote Sensing, vol. 14, no. 13, p. 3113, 2022.
[45] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[46] C. Tomasi and T. Kanade, "Detection and tracking of point features," International Journal of Computer Vision, vol. 9, pp. 137-154, 1991.
[47] "3360 Series Dual Column Table Models." [Online].
[48] "Using the Single Camera Calibrator App." [Online].
[49] A. Pandey, M. Biswas, and M. Samman, "Damage detection from changes in curvature mode shapes," Journal of sound and vibration, vol. 145, no. 2, pp. 321-332, 1991.
[50] X. Dong, J. Lian, H. Wang, T. Yu, and Y. Zhao, "Structural vibration monitoring and operational modal analysis of offshore wind turbine structure," Ocean Engineering, vol. 150, pp. 280-297, 2018.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90582-
dc.description.abstractThe greatest limitation of traditional image-based structural vibration measurement is that the camera must remain stationary throughout the measurement. This thesis removes that constraint by proposing an image-based method for measuring structural response with a moving camera: calibrated stationary 3D points are used to back-calculate the camera pose at the instant each frame is captured, compensating for camera shake during mobile measurement and opening a new direction for the problem of measuring structural vibration with a moving camera.
The proposed algorithm consists of three modules. First, optical flow extracts the vibration signals of moving feature points on the structure. Because the drift introduced by a moving camera cannot be neglected, the second module estimates the camera pose for each frame with the perspective-three-point method. Finally, a compensation module removes the shake and refines the compensated result. To verify the pose estimates, this study generated a virtual image dataset rendered from different camera poses as well as real images captured during precision mechanical linear motion; in both cases the camera pose was estimated effectively.
In practical moving-camera measurement, out-of-plane motion is unavoidable. When the camera moves out of plane, the scale factor varies with the depth between camera and object. Obtaining the camera position for every frame with the pose estimation program developed in this study resolves this depth-dependent scale-factor problem. Combining the proposed scale-factor updating method with the motion compensation algorithm successfully eliminates the drift when measuring structural vibration with a moving camera both in plane and out of plane; moreover, the compensated signals show high consistency with accelerometer measurements in the frequency domain. Finally, a moving-camera structural damage localization experiment was conducted: frequency domain decomposition extracted the modes of the structure in healthy and damaged states, and the modal curvature method then effectively localized the damage.
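The depth-dependent scale factor mentioned in the abstract can be illustrated with a pinhole-camera sketch. The NumPy snippet below is a hypothetical illustration, not the thesis implementation; the function name, focal length, and depths are invented for this example.

```python
import numpy as np

def pixels_to_metric(disp_px, depth_m, focal_px):
    # Pinhole model: one pixel spans (depth / focal length) meters on the
    # object plane, so the scale factor changes whenever the depth changes.
    return disp_px * depth_m / focal_px

focal_px = 1200.0
# Out-of-plane camera motion: depth grows from 3.00 m to 3.50 m over 6 frames.
depth_m = np.linspace(3.0, 3.5, 6)
disp_px = np.full(6, 10.0)  # the same 10 px apparent motion in every frame

stale = pixels_to_metric(disp_px, depth_m[0], focal_px)  # frozen scale factor
updated = pixels_to_metric(disp_px, depth_m, focal_px)   # per-frame update

# A frozen scale factor reads 25.0 mm in every frame; updating it with the
# per-frame depth corrects the last frame to about 29.2 mm.
print(stale[-1] * 1000, updated[-1] * 1000)
```

In the thesis workflow the per-frame depth would come from the estimated camera pose; freezing the scale factor at the first frame underestimates the displacement by the full depth ratio.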
zh_TW
dc.description.abstractTraditional image-based vibration measurement is constrained by the need for the camera to remain stationary during acquisition. This research overcomes that limitation by using a moving camera for vibration measurement: calibrated stationary 3D points determine the camera pose in each frame, compensating for camera-induced motion and offering a new perspective on measuring structural vibration with a moving camera.
The algorithm comprises three modules. First, the optical flow method computes the vibration signals of moving feature points on the structure. Because the drift introduced by the moving camera cannot be neglected, the second module applies the perspective-three-point method to estimate the camera pose for each frame. Finally, the compensation module performs motion compensation and optimization. To validate the pose estimates, virtual image datasets with varying camera poses and real images from precision mechanical linear motion are generated; both confirm that the camera pose is estimated accurately.
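The compensation idea behind these modules can be sketched numerically. The snippet below is an illustrative NumPy example, not the thesis code; the function name and all values are hypothetical, and it assumes a simple pinhole model in which a camera translating parallel to the image plane shifts a stationary point at depth Z by t*f/Z pixels.

```python
import numpy as np

def compensate(tracked_px, cam_translation_m, depth_m, focal_px):
    """Remove camera-motion drift from a tracked pixel trajectory.

    tracked_px:        apparent displacement from optical flow [pixels]
    cam_translation_m: estimated in-plane camera translation per frame [m]
    depth_m:           camera-to-structure depth [m]
    focal_px:          focal length [pixels]
    """
    # Image-plane drift predicted from the estimated camera motion.
    drift_px = cam_translation_m * focal_px / depth_m
    return tracked_px - drift_px

# Synthetic check: 2 Hz structural vibration plus slow camera drift.
t = np.linspace(0.0, 5.0, 500)
vibration_px = 3.0 * np.sin(2 * np.pi * 2.0 * t)   # true structural motion
cam_motion_m = 0.02 * t                            # camera drifts 2 cm/s
measured_px = vibration_px + cam_motion_m * 1200.0 / 3.0  # f=1200 px, Z=3 m
recovered_px = compensate(measured_px, cam_motion_m, 3.0, 1200.0)
print(np.allclose(recovered_px, vibration_px))  # True: drift removed
```

Here the camera translation is given; in the thesis it would instead be estimated per frame from the P3P pose, which is what makes the compensation target-free of any fixed camera.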
During practical moving-camera measurement, out-of-plane motion is inevitable, and the scale factor then varies with the depth between the camera and the object. The developed camera pose estimation program provides the camera position for each frame, resolving this depth-dependent scale-factor variation. By combining the proposed motion compensation algorithm with the scale-factor updating method, drifting signals are effectively removed in both in-plane and out-of-plane measurements with the moving camera. Moreover, the compensated signal shows strong consistency with the accelerometer in the frequency domain. Lastly, structural damage localization experiments with a moving camera are conducted: after drift compensation, frequency domain decomposition extracts the mode shapes of the intact and damaged states, and the modal curvature method then locates the structural damage.
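The damage localization step, differencing the modal curvatures of the intact and damaged states, can be sketched as follows. This is a minimal illustration with a synthetic mode shape and an invented perturbation, not the thesis implementation or its data.

```python
import numpy as np

def modal_curvature(phi, h=1.0):
    # Second-order central difference approximates the mode shape curvature
    # at the interior measurement points.
    return (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2

# Synthetic first mode of a cantilever-like structure, sampled at 11 points.
x = np.linspace(0.0, 1.0, 11)
phi_intact = x**2                  # smooth baseline mode shape
phi_damaged = phi_intact.copy()
phi_damaged[5] += 0.02             # local stiffness loss distorts point 5

# Damage index: absolute curvature change between states.
index = np.abs(modal_curvature(phi_damaged) - modal_curvature(phi_intact))
# Curvature is defined at the interior points x[1:-1]; the peak of the
# index flags the damaged measurement point.
print(int(np.argmax(index)) + 1)  # prints 5
```

In the thesis the mode shapes would come from frequency domain decomposition of the compensated displacement signals rather than from a closed-form expression.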
en
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-10-03T16:43:49Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2023-10-03T16:43:49Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsAcknowledgements i
Abstract (Chinese) ii
Abstract iii
Table of Contents iv
List of Tables viii
List of Figures ix
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Background 3
1.3 Research Objectives 4
1.4 Significance and Contributions 4
1.5 Research Workflow 5
1.6 Terminology and Notation 7
1.6.1 English Terms and Chinese Translations 7
1.6.2 List of Symbols 8
Chapter 2 Literature Review 11
2.1 Target Tracking Algorithms 11
2.1.1 Geometric Shape Matching 11
2.1.2 Digital Image Correlation 11
2.1.3 Feature Point Extraction and Matching 12
2.1.4 Optical Flow 13
2.2 Camera Pose Estimation Algorithms 14
2.3 Measuring Structural Response with Computer Vision Algorithms 17
2.3.1 Structural Behavior Analysis 18
2.3.2 Modal Identification 20
2.3.3 Damage Monitoring 21
2.3.4 Mobile Measurement 22
Chapter 3 Methodology 24
3.1 Camera Model 24
3.1.1 Pinhole Imaging Model 24
3.1.2 Distortion Model 27
3.2 Image-Based Displacement Measurement Algorithm 29
3.2.1 Harris Corner Detection 29
3.2.2 KLT Algorithm 31
3.2.3 Geometric Image Transformations 32
3.2.4 Algorithm Workflow 34
3.3 Shake Compensation Algorithm 35
3.3.1 PnP Algorithm 35
3.3.2 Mobile Measurement Workflow 39
3.3.3 High-Pass Filter 40
3.4 Vibration Signal Processing 42
3.4.1 Frequency Domain Decomposition 42
3.4.2 Modal Assurance Criterion 44
3.4.3 Cross-Spectral Density Function 44
3.5 Experimental Equipment 45
3.5.1 Shear Frame Structure 45
3.5.2 Consumer Smartphone 45
3.5.3 DSLR Camera 46
3.5.4 Accelerometer Instrumentation 46
3.5.5 Precision Linear Motion Equipment 46
3.5.6 Structural Input Signal 46
Chapter 4 Camera Pose Estimation Experiments and Validation 48
4.1 Pose Estimation Validation under Multi-Degree-of-Freedom Camera Motion 48
4.1.1 Coordinate System Definition 48
4.1.2 Test Image Sets 48
4.1.3 Camera Pose Estimation Results 50
4.2 Experimental Validation of Pose Estimation under Linear Camera Motion 52
4.2.1 Purpose 52
4.2.2 Smartphone Camera Calibration 53
4.2.3 Experimental Setup 54
4.2.4 Measurement Results 55
Chapter 5 Image-Based Structural Vibration Measurement and Health Monitoring 58
5.1 Hybrid In-Plane Measurement of Shear Frame Vibration 58
5.1.1 Purpose 58
5.1.2 Experimental Setup 59
5.1.3 Moving- and Fixed-Camera Displacement Results 60
5.1.4 Shear Frame System Identification Results 65
5.2 Hybrid Out-of-Plane Measurement of Shear Frame Vibration 69
5.2.1 Purpose 69
5.2.2 Experimental Setup 71
5.2.3 Moving- and Fixed-Camera Measurement Results 71
5.3 Structural Damage Localization with a Moving Camera 77
5.3.1 Purpose 77
5.3.2 Damage Cases 77
5.3.3 Case 1 Damage Localization Results 79
5.3.4 Case 2 Damage Localization Results 81
5.4 Measurement Accuracy Discussion 83
5.4.1 Purpose 83
5.4.2 Test Image Dataset 83
5.4.3 Algorithm Results 84
5.4.4 Practical Application Notes 85
Chapter 6 Conclusions and Future Work 87
6.1 Conclusions 87
6.2 Future Work 88
Appendices 90
Appendix A: D600 Camera Calibration Results 90
Appendix B: Hybrid In-Plane Shear Frame Vibration Measurement Results 91
Appendix B.1: Relative Displacement Measurements (before compensation) 91
Appendix B.2: Absolute Displacement Compensation Results (vision algorithm) 93
Appendix B.3: Absolute Displacement Compensation Results (high-pass filter) 94
Appendix B.4: MAC Values between Accelerometer and Vision Sensor 96
Appendix C: Hybrid Out-of-Plane Shear Frame Vibration Measurement Results 97
Appendix C.1: Relative Displacement Measurements (before compensation) 97
Appendix C.2: Absolute Displacement Compensation Results (vision algorithm) 98
Appendix C.3: Absolute Displacement Compensation Results (high-pass filter) 100
References 102
-
dc.language.isozh_TW-
dc.subject光流法zh_TW
dc.subject三點透視法zh_TW
dc.subject非接觸式量測zh_TW
dc.subject結構損傷定位zh_TW
dc.subject相機飄移補償zh_TW
dc.subject結構系統識別zh_TW
dc.subjectstructural system identificationen
dc.subjectcamera motion compensationen
dc.subjectnon-contact measurementen
dc.subjectperspective three-pointen
dc.subjectoptical flowen
dc.subjectstructural damage localizationen
dc.title結合三點透視相機姿態估算法與光流法於結構振動量測zh_TW
dc.titleImage-based structural vibration measurement using perspective-three-point pose estimation and optical flow methodsen
dc.typeThesis-
dc.date.schoolyear111-2-
dc.description.degree碩士 (Master's)-
dc.contributor.oralexamcommittee宋家驥;張恆華;鍾承憲;黃勝翊zh_TW
dc.contributor.oralexamcommitteeChia-Chi Sung;Heng-Hua Chang;Cheng-Hsien Chung;Hseng-Ji Huangen
dc.subject.keyword光流法,三點透視法,結構損傷定位,相機飄移補償,非接觸式量測,結構系統識別,zh_TW
dc.subject.keywordoptical flow,perspective three-point,structural damage localization,camera motion compensation,non-contact measurement,structural system identification,en
dc.relation.page106-
dc.identifier.doi10.6342/NTU202303004-
dc.rights.note同意授權(限校園內公開) (authorized; campus access only)-
dc.date.accepted2023-08-08-
dc.contributor.author-college工學院 (College of Engineering)-
dc.contributor.author-dept工程科學及海洋工程學系 (Department of Engineering Science and Ocean Engineering)-
dc.date.embargo-lift2028-08-04-
Appears in Collections: Department of Engineering Science and Ocean Engineering

Files in This Item:
File | Access | Size | Format
ntu-111-2.pdf | Restricted (not authorized for public access) | 10.93 MB | Adobe PDF


Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
