Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90022
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 楊馥菱 | zh_TW |
dc.contributor.advisor | Fu-Ling Yang | en |
dc.contributor.author | 宋雲揚 | zh_TW |
dc.contributor.author | Yun-Yang Sung | en |
dc.date.accessioned | 2023-09-22T17:05:29Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-09-22 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-10 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90022 | - |
dc.description.abstract | 光彈技術是一種非接觸式的應力測量方法,其因雙折射效應所產生之光彈性條紋的光強度場可用來量測二維顆粒流實驗中的應力場。但現今將光強度場對應至應力之校準方法不僅計算需求大,且準確度的不確定性高。本研究採用快速發展之深度學習方法建構出新型的光彈顆粒受力量測方法,我們以實驗光彈影像與電腦產生之模擬光彈影像訓練出兩種可對顆粒尺度進行受力量測的卷積神經網路模型,其可直接從光強度場預測顆粒尺度下的總受力與各分力的量值及受力角度,最終將此快速且高效之模型應用至剪切流的瞬時影像之分析。由預測施加在邊界的正向壓力之結果顯示深度學習模型的架構還需進行修正,但已嶄露深度學習模型輔助顆粒尺度下的應力量測的潛力。 | zh_TW |
dc.description.abstract | To experimentally investigate the stress field of a 2D granular system, the photoelastic technique is a promising non-intrusive measurement method that links the stress magnitude to the light intensity field of photoelastic fringes. Unfortunately, the existing intensity-to-stress calibration is case-dependent, computationally demanding, and of uncertain accuracy. This work adopts the fast-growing deep learning strategy to establish a novel method for measuring the forces acting on photoelastic disks. Using both experimental and computer-simulated photoelastic images, we trained two convolutional neural network models that predict the total force, the individual force components, and the force angles directly from the light intensity field at the particle scale. These fast and efficient models were then applied to analyze instantaneous images of a photoelastic granular flow under simple plane shear. The prediction of the normal pressure exerted on the boundary indicates that the model architecture requires further refinement, but it already demonstrates the promising potential of analyzing the rheology of granular flows via artificial intelligence. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-22T17:05:29Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-09-22T17:05:29Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Abstract i
摘要 ii
Contents iii
List of Figures vi
List of Tables xii
Nomenclature xiv
Chapter 1 Introduction 1
1.1 Deep learning 1
1.2 Internal stress measurement in granular flows 5
1.3 Motivation and Objectives 9
Chapter 2 Methodology 13
2.1 Workflow of supervised learning 13
2.2 Domain knowledge 16
2.2.1 Photoelastic material 16
2.2.2 Coordination number in different flow regimes 18
2.3 Data collection—experiment images 19
2.3.1 Experimental set-up 19
2.3.2 Force data calibration 26
2.3.3 Preprocessing of experimental dataset 29
2.3.4 Labeling experiment images 33
2.4 Data expansion—pseudo images 35
2.4.1 Photoelastic theory 35
2.4.2 Pseudo image dataset construction 41
2.5 Input data pipeline 51
Chapter 3 Deep learning algorithm 56
3.1 Convolution neural network 56
3.1.1 CNN structure 57
3.1.2 Adjustment for efficiency and accuracy 63
3.1.3 EfficientNet 67
3.1.4 EfficientNetV2 70
3.2 Hyperparameter optimization 73
3.2.1 Hyperparameter sampling strategy 73
3.2.2 Learning rate scheduling 74
3.3 Transfer learning 76
3.4 Deep learning model and implementation 76
3.4.1 Dropout rate 77
3.4.2 Optimizer 78
3.4.3 Deep learning environment set up 83
Chapter 4 Results and application 85
4.1 Total force measurement model 85
4.1.1 Pretraining with pseudo images 86
4.1.2 Transfer learning with experimental images 97
4.2 Individual force measurement model 98
4.2.1 Pretraining with pseudo images 100
4.2.2 Transfer learning with experimental images 105
4.3 Application for analyzing simple shear flow 109
Chapter 5 Conclusion 116
Reference 120 | - |
dc.language.iso | en | - |
dc.title | 建構以影像為基底之光彈顆粒受力之深度學習模型及其於剪切流之初步應用 | zh_TW |
dc.title | Developing an image-based deep learning model for force measurement on a granular disk and its application on a simple shear flow | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | 碩士 | - |
dc.contributor.oralexamcommittee | 張書瑋;游濟華;林正釧 | zh_TW |
dc.contributor.oralexamcommittee | Shu-Wei Chang;Chi-Hua Yu;Cheng-Chuan Lin | en |
dc.subject.keyword | 深度學習, 光彈材料, 顆粒流 | zh_TW |
dc.subject.keyword | deep learning, photoelastic material, granular flow | en |
dc.relation.page | 126 | - |
dc.identifier.doi | 10.6342/NTU202302458 | - |
dc.rights.note | 同意授權(全球公開) | - |
dc.date.accepted | 2023-08-12 | - |
dc.contributor.author-college | 工學院 | - |
dc.contributor.author-dept | 機械工程學系 | - |
Appears in Collections: | 機械工程學系
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf | 11.52 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.