NTU Theses and Dissertations Repository › College of Engineering › Department of Engineering Science and Ocean Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72338

Full metadata record
DC field / Value / Language
dc.contributor.advisor: 黃乾綱 (Chien-Kang Huang)
dc.contributor.author: Hong-Rui Zhang [en]
dc.contributor.author: 張洪瑞 [zh_TW]
dc.date.accessioned: 2021-06-17T06:36:16Z
dc.date.available: 2021-02-22
dc.date.copyright: 2021-02-22
dc.date.issued: 2020
dc.date.submitted: 2020-09-18
dc.identifier.citation[1] Hartley, R. and A. Zisserman. 'Multiple view geometry in computer vision.' Cambridge university press, 2003.
[2] N. Brandl and E. Jørgensen. 'Determination of live weight of pigs from dimensions measured using image analysis.' Computers and electronics in agriculture 15.1: 57-72, 1996.
[3] Chih-Yao Tang. 'Camera Calibration by Using Trajectory and Modeling Analysis.' M.S. thesis, National Taiwan University, Taipei, Taiwan, 2015.
[4] P. Grossmann. 'Depth from focus.' Pattern recognition letters 5.1: 63-69, 1987.
[5] J. C. Yang, M. Everett, C. Buehler L. McMillan. 'A real-time distributed light field camera.' Rendering Techniques, 77-86, 2002.
[6] Camera, 'D415.' Intel RealSense Depth, 2018.
[7] C. Strecha, W. Von Hansen, L. Van Gool, P. Fua U. Thoennessen. 'On benchmarking camera calibration and multi-view stereo for high resolution imagery.' 2008 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2008.
[8] A. Pezzuolo, M. Guarino, L. Sartori, L. A. González F Marinello. 'On-barn pig weight estimation based on body measurements by a Kinect v1 depth camera.' Computers and Electronics in Agriculture, 148: 29-36, 2018.
[9] J. Wu, R. Tillett, N. McFarlane, X. Ju, J. P. Siebert P Schofield. 'Extracting the three-dimensional shape of live pigs using stereo photogrammetry. ' Computers and Electronics in Agriculture, 44.3: 203-222, 2004.
[10] A. R. Frost, C. P. Schofield, S. A. Beaulah, T. T. Mottram, J. A. Lines C. M. Wathes. 'A review of livestock monitoring and the need for integrated systems.' Computers and electronics in agriculture, 17.2: 139-159, 1997.
[11] O. Faugeras. 'Three-dimensional computer vision: a geometric viewpoint.' MIT press, 1993.
[12] S. Stavrakakis, W. Li, J. H. Guy, G. Morgan, G. Ushaw, G. R. Johnson S. A. Edwards. 'Validity of the Microsoft Kinect sensor for assessment of normal walking patterns in pigs.' Computers and Electronics in Agriculture, 117: 1-7, 2015.
[13] K. Wang, H. Guo, Q. Ma, W. Su, L. Chen, D. Zhu. 'A portable and automatic Xtion-based measurement system for pig body size.' Computers and Electronics in Agriculture, 148, 291-298, 2018.
[14] C. P. Schofield, 'Evaluation of image analysis as a means of estimating the weight of pigs. ' Journal of Agricultural Engineering Research, 47: 287-296, 1990.
[15] K. Wang, D. Zhu, H. Guo, Q. Ma, W. Su, Y. Su. 'Automated calculation of heart girth measurement in pigs using body surface point clouds.' Computers and electronics in agriculture, 156, 565-573, 2019.
[16] M. G. Poxton, and G. T. Goldsworthy. 'The remote estimation of weight and growth in turbot using image analysis. ' IFAC Proceedings Volumes, 20.7: 163-170, 1987.
[17] I.C. Condotta, T. M. Brown-Brandl, S. K. Pitla, J. P. Stinn K. O. Silva-Miranda. 'Evaluation of low-cost depth cameras for agricultural applications.' Computers and Electronics in Agriculture, 173, 105394, 2020.
[18] M. T. Al Muallim, H. Küçük, F. Yılmaz M. Kahraman. 'Development of a dimensions measurement system based on depth camera for logistic applications.' Eleventh International Conference on Machine Vision (ICMV 2018). Vol. 11041. International Society for Optics and Photonics, 2019.
[19] J. Seo, J. Sa, Y. Choi, Y. Chung, D. Park and H. Kim. 'A yolo-based separation of touching-pigs for smart pig farm applications.' 2019 21st International Conference on Advanced Communication Technology ICACT. IEEE, 2019.
[20] H. Minagawa. 'Stereo photogrammetric errors in determining the surface area of a small pig model with non-metric cameras.' Journal of Agricultural Meteorology, 51.4: 335-343, 1995.
[21] E. Hörster and R. Lienhart. 'On the optimal placement of multiple visual sensors.' Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks. 2006.
[22] M. Benjamin, S. YIK. 'Precision livestock farming in swine welfare: a review for swine practitioners.' Animals, 9.4: 133, 2019.
[23] S. Giancola, M. Valenti, and R. Sala. 'A survey on 3D cameras: Metrological comparison of time-of-flight, structured-light and active stereoscopy technologies.' Springer Nature, 2018.
[24] C. Shi, J. Zhang, and G. Teng. 'Mobile measuring system based on LabVIEW for pig body components estimation in a large-scale farm.' Computers and electronics in agriculture 156: 399-405, 2019.
[25] A. Grunnet-Jepsen, J. N. Sweetser, P. Winer, A. Takagi and J. Woodfill. 'Projectors for Intel® RealSense™ Depth Cameras D4xx. ' Intel Support, Interl Corporation: Santa Clara, CA, USA, 2018.
[26] J.G. Fryer, D.C. Brown. 'Lens distortion for close-range photogrammetry.' Photogrammetric engineering and remote sensing, 52.1: 51-58, 1986.
[27] V. Sterzentsenko, A. Karakottas. 'A low-cost, flexible and portable volumetric capturing system.' 2018 14th International Conference on Signal-Image Technology Internet-Based Systems (SITIS). IEEE, 2018.
[28] A. Grunnet-Jepsen, D. Tong. 'Depth Post-Processing for Intel® RealSense™ D400 Depth Cameras.' New Technologies Group, Intel Corporation, 2018.
[29] R. Tsai. 'A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses.' IEEE Journal on Robotics and Automation, 3.4: 323-344, 1987.
[30] Z. Zhang. 'Flexible camera calibration by viewing a plane from unknown orientations.' Proceedings of the seventh ieee international conference on computer vision. Vol. 1. Ieee, 1999.
[31] R.L. Solomon, J.D. Corbit. 'An opponent-process theory of motivation: I. Temporal dynamics of affect. ' Psychological review, 81.2: 119, 1974.
[32] M. Kazhdan, H. Hoppe. 'Screened poisson surface reconstruction.' ACM Transactions on Graphics (ToG), 32.3: 1-13, 2013.
[33] B. Curless, M. Levoy 'A volumetric method for building complex models from range images.' Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. 1996.
[34] A. Savitzky, M.J.E. Golay. 'Smoothing and Differentiation of Data by Simplified Least Squares Procedures,' Analytical Chemistry, vol. 36, pp. 1627-1639, 1964.
[35] T. Möller, B. Trumbore. 'Fast, minimum storage ray-triangle intersection. ' Journal of graphics tools, 2.1: 21-28, 1997.
[36] O. Sorkine. 'Laplacian mesh processing.' Eurographics, 2005.
[37] H. Sarbolandi, D. Lefloch, and A. Kolb. 'Kinect range sensing: Structured-light versus Time-of-Flight Kinect.' Computer vision and image understanding, 139, 1-
20, 2015.
[38] DNA Swine Genetics. 'What a difference weaning TWO more P/S/Y can make.'
http://dnaswinegenetics.com/swine-maternal-lines.html.
[39] T. Sonoda, A. Grunnet-Jepsen. 'Depth image compression by colorization for
Intel® RealSense™ Depth Cameras.'
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72338
dc.description.abstract: This thesis achieves body-shape measurement through non-contact model reconstruction, replacing volume measurements obtained by direct physical contact with the object. The Intel RealSense depth cameras introduced in 2018 provide a convenient interface for converting sensed data into three-dimensional point-cloud models, whose volume can then be estimated by computational geometry. However, depth sensors inherently operate at a limited signal-to-noise ratio (SNR): the noise in the output model grows with observation distance, so high-precision measurements of distant targets are not attainable. Moreover, a single depth camera captures depth from only one viewpoint, leaving blind spots in regions it cannot see (e.g., the back side) and making full reconstruction of the scene difficult. To address these problems, this study replaces the single camera with a multi-camera array to compensate for the limited information in a single projected view, and proposes an automated pipeline that produces real-time body-size estimates for objects moving within the cameras' shared field of view. The pipeline must overcome several challenges, including time synchronization, multi-view coordinate transformation, object detection for background removal, and surface smoothing of the merged point cloud. Experimental results confirm that body measurements derived from the proposed pipeline are accurate not only for stationary objects but also, to a reasonable degree, for non-contact measurement of moving objects. [zh_TW]
dc.description.abstract: This thesis proposes a method to measure body volume indirectly through non-contact point-cloud modeling of the object, providing an alternative to traditional approaches that require laborious configuration and considerable effort. Recent improvements in three-dimensional sensing techniques, such as depth cameras, offer high resolution along with stability. Researchers have found depth cameras well suited to measuring the body volume of livestock, since the animals are not disturbed during measurement. Because the projected view of a single depth camera provides only limited surface information, we design a procedure that quickly and efficiently calibrates and aligns 3D point-cloud models from multiple views, producing a complete 3D model that covers every side of the object for further measurement. The measurement can then be carried out on the synthesized point-cloud model.
In addition, stereoscopic sensors are prone to noise arising from systematic errors, and this noise must be removed to achieve high modeling precision. We therefore conduct a quantitative analysis of the noise and eliminate it with a filter chosen according to that analysis. The process of producing a model suitable for comparison is investigated thoroughly in this thesis.
Once the model is built, we derive the desired body measurements, including body length, body width, heart girth, and body height, through computational geometry. The error rate between our non-contact measurements and traditional methods is under a certain percentage, which demonstrates the practicability of our approach. [en]
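The pipeline summarized in the abstract — mapping each camera's point cloud into a common world frame with calibrated extrinsics, merging the views, and estimating volume by computational geometry — can be sketched minimally. This is an illustrative sketch, not the thesis's implementation: the synthetic cube, the extrinsics `R_b`/`t_b`, and the use of a convex hull for the volume step are all assumptions for demonstration (the thesis works with dense depth data and surface reconstruction).

```python
import numpy as np
from scipy.spatial import ConvexHull

def to_world(points_cam, R, t):
    """Map a camera-frame point cloud into the world frame: p_world = R p_cam + t."""
    return points_cam @ R.T + t

# Synthetic scene: the eight corners of a unit cube, split between two views.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
view_a = cube[cube[:, 2] == 0]        # camera A (assumed at the world origin) sees the front face
view_b_world = cube[cube[:, 2] == 1]  # points camera B sees, expressed in world coordinates

# Hypothetical extrinsics for camera B, as if recovered by calibration.
theta = np.pi
R_b = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                [0.0, 1.0, 0.0],
                [-np.sin(theta), 0.0, np.cos(theta)]])
t_b = np.array([1.0, 0.0, 1.0])

# Simulate camera B's raw output in its own frame, then align it back to the world.
view_b_cam = (view_b_world - t_b) @ R_b  # inverse rigid transform: R^T (p - t)
merged = np.vstack([view_a, to_world(view_b_cam, R_b, t_b)])

# Estimate volume from the merged cloud via its convex hull.
volume = ConvexHull(merged).volume
print(volume)  # ≈ 1.0 for the unit cube
```

For a convex object the hull volume is exact; for an animal body the thesis instead reconstructs a surface mesh, which is why the merged cloud must first be denoised and smoothed.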
dc.description.provenance: Made available in DSpace on 2021-06-17T06:36:16Z (GMT). No. of bitstreams: 1. U0001-2108202010351700.pdf: 10419982 bytes, checksum: 1eebca109707e775b5f2666350f12a58 (MD5). Previous issue date: 2020 [en]
dc.description.tableofcontents:
Acknowledgements i
Chinese Abstract ii
Abstract iii
Contents iv
List of Figures vi
List of Tables viii
Chapter 1. Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
1.3 Contributions 3
1.4 Thesis Organization 3
Chapter 2. Literature Review 4
2.1 Non-Contact Body Measurement 4
2.2 Depth Imaging Principles 5
2.2.1 Time of Flight 6
2.2.2 Structured Light 7
2.2.3 Stereo Sensing 8
2.3 Depth Imaging Errors 10
2.3.1 Random Errors 10
2.3.2 Systematic Errors 11
2.4 Multi-View Coordinate Transformation 12
2.4.1 Perspective Projection 13
2.4.2 Two-View Systems 15
Chapter 3. Modeling Method and Pipeline Design 18
3.1 Measurement Subjects 18
3.2 Modeling Equipment and Purpose 19
3.3 Modeling Pipeline 21
3.3.1 Experimental Setup 21
3.3.2 Depth Image Capture and Multi-View Calibration 24
3.3.3 Modeling 32
Chapter 4. Measurement Methods and Results 43
4.1 Preliminary Experiments 43
4.2 Measurement Methods 44
4.2.1 Body Length Measurement 44
4.2.2 Heart Girth Measurement 46
4.2.3 Body Width Measurement 48
4.3 Regression Model from Body Measurements to Weight 56
Chapter 5. Conclusion 59
Bibliography 60
Appendix A. Multi-View Merged Point Clouds 64
dc.language.iso: zh-TW
dc.subject: 三維重建 (3D Reconstruction) [zh_TW]
dc.subject: 深度攝影機 (Depth Camera) [zh_TW]
dc.subject: 攝影測量 (Photogrammetry) [zh_TW]
dc.subject: 非接觸式 (Non-Contact) [zh_TW]
dc.subject: 點雲 (Point Cloud) [zh_TW]
dc.subject: Non-Contact Measurement [en]
dc.subject: 3D Reconstruction [en]
dc.subject: Multi-View Geometry [en]
dc.title: 多重感測視角系統中非接觸式的模型化體積測量 [zh_TW]
dc.title: Non-Contact Volume Measurements Based on Multiple Cameras System [en]
dc.type: Thesis
dc.date.schoolyear: 109-1
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 林恩仲 (En-Chung Lin), 傅楸善 (Chiou-Shann Fuh), 丁肇隆 (Chao-Lung Ting)
dc.subject.keyword: 非接觸式, 攝影測量, 深度攝影機, 三維重建, 點雲 [zh_TW]
dc.subject.keyword: Non-Contact Measurement, 3D Reconstruction, Multi-View Geometry [en]
dc.relation.page: 76
dc.identifier.doi: 10.6342/NTU202004155
dc.rights.note: 有償授權 (fee-based authorization)
dc.date.accepted: 2020-09-22
dc.contributor.author-college: College of Engineering (工學院) [zh_TW]
dc.contributor.author-dept: Graduate Institute of Engineering Science and Ocean Engineering (工程科學及海洋工程學研究所) [zh_TW]
Appears in Collections: Department of Engineering Science and Ocean Engineering

Files in This Item:
U0001-2108202010351700.pdf — 10.18 MB, Adobe PDF — restricted (not publicly available)

