Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/57650

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 謝尚賢(Shang-Hsien Hsieh) | |
| dc.contributor.author | Zi-Hao Lin | en |
| dc.contributor.author | 林子皓 | zh_TW |
| dc.date.accessioned | 2021-06-16T06:56:02Z | - |
| dc.date.available | 2020-08-21 | |
| dc.date.copyright | 2020-08-21 | |
| dc.date.issued | 2020 | |
| dc.date.submitted | 2020-08-14 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/57650 | - |
| dc.description.abstract | 對營造業而言,增加利潤、降低成本是首要目標,意味著維持施工順暢極其重要。異常施工活動不僅容易拖延進度,且與人員安全息息相關。因此,本研究提出時序性圖像之分析方法,藉此辨識潛在異常活動,並視覺化呈現施工過程。分析流程共可分為四個模組:物件偵測、物件追蹤、行為辨識、施工活動分析。物件偵測模組採用遷移式學習訓練辨識模型,於影像中辨識工地人員及重機具,再透過物件追蹤模組匹配辨識到的物件。接著,行為辨識模組利用物件追蹤結果辨識出人員及重機具所執行的動作,再根據時間軸繪製成折線圖。施工活動分析模組採用統計理論從所有動作週期中過濾出潛在異常活動,並於折線圖中標註異常區塊。
本研究以實際於工地拍攝之開挖作業影片進行驗證,物件辨識模組達到70.30%之精度,物件追蹤模組達到82.12%之準確度。施工活動分析模組不僅辨識出影片中的潛在異常活動,亦過濾出卡車交班。為達到實際應用效益,該模組統計出施工過程各式資訊,同時標註潛在異常事件之起終時間,方便使用者回顧監視影像。透過折線圖紀錄並分析施工活動,工地主任將能快速回顧整個施工過程,探討異常事件發生原因,進而維持順暢施工,確保人員安全。 | zh_TW |
| dc.description.abstract | To increase profit, keeping a project on schedule is the primary objective of the construction industry. Abnormal activities may reduce productivity or even cause accidents. Therefore, this study adopts an image analytics approach to automatically identify potential abnormal events and to visualize the construction process. The pipeline consists of four modules: object detection, object tracking, action recognition, and operational analysis. First, a Faster Region-based Convolutional Neural Network (Faster R-CNN) is adopted to detect workers and heavy equipment on construction jobsites, and an improved Simple Online and Realtime Tracking (SORT) algorithm associates the detected objects across frames. Afterwards, a hybrid model integrating a CNN and Long Short-Term Memory (LSTM) is employed for action recognition. The results are documented in a line-chart form of the Crew Balance Chart (CBC) to visualize the construction process, in which irregular operations are identified through statistical theory. The approach was validated with videos of an earthmoving operation. The Average Precision (AP) of the trained detector is about 70.31%, and the Multiple Object Tracking Accuracy (MOTA) of the modified SORT is around 82.12%. In the testing videos, not only were potential abnormal activities pre-screened, but truck exchanges were also filtered out. Meanwhile, an activity log was created containing operational information and the starting and ending times of irregular events. Through the line-chart form of the CBC and the activity log provided by the image analytics approach, potential abnormal events can be investigated in depth for further improvement. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T06:56:02Z (GMT). No. of bitstreams: 1 U0001-1707202023351800.pdf: 9579429 bytes, checksum: 9102b1b2eec505160ca948c7e2212801 (MD5) Previous issue date: 2020 | en |
| dc.description.tableofcontents | Acknowledgements i 中文摘要 ii Abstract iii Contents v List of Figures vii List of Tables x Chapter 1 Introduction 1 1.1 Background and Motivation 1 1.2 Objective 2 1.3 Organization of Thesis 3 Chapter 2 Literature Review 4 2.1 Construction Object Detection 4 2.2 Construction Object Tracking 7 2.3 Construction Action Recognition 8 2.4 Image Analytics of Construction Process 10 2.5 Summary 13 Chapter 3 Image Analytics Approach 14 3.1 Object Detection 14 3.2 Object Tracking 16 3.2.1 SORT Algorithm 17 3.2.2 DeepSORT Algorithm 18 3.3 Action Recognition 20 3.4 Operational Analysis 22 3.4.1 Gaussian Distribution Theory 23 3.4.2 Box Plot Theory 24 3.4.3 Line Chart Form of CBC 25 Chapter 4 Implementation, Validation and Application 26 4.1 Heavy Equipment Detection 26 4.1.1 Implementation Details 26 4.1.2 Detection Results 28 4.2 Heavy Equipment Tracking 33 4.2.1 Implementation Details 33 4.2.2 Tracking Results 33 4.3 Operational Analysis 39 4.3.1 Normality Test Results 40 4.3.2 Identification of Potential Abnormal Construction Activities 41 4.3.3 Practical Applications 52 Chapter 5 Conclusion and Recommendations 57 5.1 Conclusion 57 5.2 Recommendations 58 References 60 | |
| dc.language.iso | en | |
| dc.subject | 物件追蹤 | zh_TW |
| dc.subject | 施工活動追蹤 | zh_TW |
| dc.subject | 物件偵測 | zh_TW |
| dc.subject | 生產力分析 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | Activity Tracking | en |
| dc.subject | Productivity | en |
| dc.subject | Deep Learning | en |
| dc.subject | Object Detection | en |
| dc.subject | Object Tracking | en |
| dc.title | 以圖像分析方法辨識潛在異常施工活動 | zh_TW |
| dc.title | Identification of Potential Abnormal Construction Activities Using Image Analytics | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 108-2 | |
| dc.description.degree | 碩士 (Master's) | |
| dc.contributor.coadvisor | 陳柏華(Albert Y. Chen) | |
| dc.contributor.oralexamcommittee | 陳柏翰(Po-Han Chen), 周建成(Chien-Cheng Chou) | |
| dc.subject.keyword | 施工活動追蹤, 生產力分析, 深度學習, 物件偵測, 物件追蹤 | zh_TW |
| dc.subject.keyword | Activity Tracking, Productivity, Deep Learning, Object Detection, Object Tracking | en |
| dc.relation.page | 73 | |
| dc.identifier.doi | 10.6342/NTU202001611 | |
| dc.rights.note | 有償授權 (paid authorization) | |
| dc.date.accepted | 2020-08-14 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 土木工程學研究所 | zh_TW |

Appears in Collections: 土木工程學系 (Civil Engineering)
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-1707202023351800.pdf (Restricted Access) | 9.35 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
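The operational-analysis module described in the abstract (and in Section 3.4.2 of the table of contents, "Box Plot Theory") screens activity cycles for potential abnormal events using statistical theory. The thesis itself is restricted-access, so the following is only a minimal sketch of the general box-plot (Tukey fence) labeling idea, not the author's implementation; the function name, the fence factor `k=1.5`, and the sample cycle durations are all illustrative:

```python
from statistics import quantiles

def tukey_outliers(durations, k=1.5):
    """Label cycle durations outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(durations, n=4)  # first and third quartiles
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [d for d in durations if d < low or d > high]

# Hypothetical excavator loading-cycle durations in seconds;
# the 120 s cycle would be flagged as a potential abnormal event.
cycles = [41, 39, 44, 40, 42, 38, 43, 41, 120, 40]
print(tukey_outliers(cycles))  # → [120]
```

A flagged cycle is only "potentially" abnormal: as the abstract notes, some flagged cycles turn out to be routine events such as truck exchanges, which is why the log records start and end times so a site manager can review the corresponding surveillance footage.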
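The tracking module builds on SORT, which associates detections between frames via bounding-box overlap (IoU), a Kalman-filter motion model, and Hungarian assignment. As a rough illustration of the association step only, here is a stripped-down, stdlib-only sketch that greedily matches tracks to detections by IoU (SORT proper predicts each track's box with a Kalman filter and solves the assignment optimally; box coordinates, the 0.3 threshold, and the track names below are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedily match each track to its highest-IoU unmatched detection."""
    matches, used = {}, set()
    for t_id, t_box in tracks.items():
        best, best_iou = None, threshold
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches[t_id] = best
            used.add(best)
    return matches

# One excavator and one truck tracked into the next frame (coordinates made up).
tracks = {"excavator-1": (10, 10, 50, 50), "truck-2": (100, 20, 160, 80)}
detections = [(102, 22, 161, 79), (12, 11, 52, 51)]
print(associate(tracks, detections))  # → {'excavator-1': 1, 'truck-2': 0}
```

Consistent object identities across frames are what let the downstream action-recognition module assemble per-object image sequences for the CNN-LSTM classifier.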