Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94578

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 楊馥菱 | zh_TW |
| dc.contributor.advisor | Fu-Ling Yang | en |
| dc.contributor.author | 吳維軒 | zh_TW |
| dc.contributor.author | Wei-Hsuan Wu | en |
| dc.date.accessioned | 2024-08-16T16:50:35Z | - |
| dc.date.available | 2024-08-17 | - |
| dc.date.copyright | 2024-08-16 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-14 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94578 | - |
| dc.description.abstract | 本研究旨在發展深度學習模型以分析斜坡光彈顆粒流中單顆光彈顆粒在斜坡流中的影像來評估其受力和傾倒狀況。由於準確的分析需要高品質的影像,我們首先進行多顆粒影像裁切和分類等前處理工作,以獲得單顆顆粒影像。隨後,我們發展了三大類模型:第一類包括二分類和多分類模型,其中二分類模型用於識別顆粒有無光條紋及判斷是否帶有因傾倒所產生的月牙形殘影,而多分類模型則用來評估顆粒的配位數。第二類模型用於評估單顆顆粒的受力大小和受力角度,以獲得等價於離散元素法計算所得之細部資訊,為光彈顆粒應用之一大突破。第三類模型分析顆粒的傾倒資訊,包括傾倒幅度(月牙寬度)及其傾倒方位(月牙發生位置)。這些模型以自定義或遷移學習方式建置,在自定義模型中,我們通過經驗法則調整超參數及模型架構,以應對簡單的任務;在遷移學習模型中,我們選擇合適的預訓練模型,使用自定義層,並根據需要進行人工調整超參數以進行遷移學習。結果顯示,這些模型在訓練和驗證過程中表現良好,對未來光彈顆粒的影像分析效率有具體的助益。本研究更進一步統整所得之單顆粒資訊以獲得多顆粒斜坡流之流場資訊,包含顆粒中光條紋以及月牙殘影的出現與否、顆粒受力、月牙殘影厚度與發生位置於流場的分布狀況。 | zh_TW |
| dc.description.abstract | This study develops deep learning models to analyze images of single photoelastic disks in inclined chute flows, evaluating their force and tilting conditions. Because accurate analysis requires high-quality input, we first preprocess multi-disk images by cropping and classifying them to obtain single-disk images. We then develop three types of models. The first type comprises binary and multiclass classification models: the binary classification models identify the presence of fringe patterns and the occurrence of a crescent rim caused by tilting, while the multiclass classification model evaluates the disk coordination number. The second type assesses the force magnitude and loading angle on individual disks, providing detailed information equivalent to discrete element method calculations and marking a significant breakthrough in photoelastic disk applications. The third type analyzes disk tilting, including the tilt magnitude (crescent rim thickness) and orientation (crescent rim location). These models are built either as custom designs or through transfer learning. In the custom-built models, we adjust hyperparameters and architectures by empirical rules for simple tasks; in the transfer learning models, we select suitable pre-trained models, append custom layers, and manually tune hyperparameters as needed. The results show that these models perform well in training and validation, improving the efficiency of future image analysis of photoelastic disks. This study further integrates the individual-disk information to derive flow field details in multi-disk chute flows, including the presence of fringes and crescent rims, disk loading forces, and the distribution of crescent rim thickness and occurrence within the flow field. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-16T16:50:35Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-16T16:50:35Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
Chinese Abstract iii
ABSTRACT iv
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xiii
Nomenclature xv
Chapter 1 Introduction 1
1.1 Prior Photoelastic Granular Flow Experiments 3
1.2 Photoelastic Material & Photoelasticity 10
1.3 Deep Learning 15
1.4 Motivation & Objective 20
Chapter 2 Methodology 25
2.1 Supervised Learning 27
2.2 Data Preparation 29
2.2.1 Disk Extraction and Neighbor Elimination 30
2.2.2 Disk Tilt Analysis 38
2.2.3 Computer-Generated Rim Images 45
2.2.4 Pseudo Photoelastic Disk Images 49
2.3 Data Labeling 52
2.3.1 Force Information 52
2.3.2 Fringe Existence 53
2.3.3 Coordination Number 54
2.3.4 Crescent Rims Existence 55
2.3.5 Crescent Rims Thickness 56
2.3.6 Crescent Rims Location 56
Chapter 3 Deep Learning Models and Algorithm 58
3.1 Convolution Neural Networks (CNNs) Model 61
3.1.1 Convolutional Layer 63
3.1.2 Pooling Layer 65
3.1.3 Dropout Layer 66
3.1.4 Flatten Layer 67
3.1.5 Dense Layer 67
3.1.6 Batch Normalization (BN) 69
3.1.7 Hyperparameter 69
3.1.8 Transfer Learning 80
3.2 Preanalysis Models 86
3.2.1 Model Outline 86
3.2.2 Data Preprocessing and Selection 87
3.2.3 Loss Function 90
3.2.4 Evaluation Metric 91
3.3 Force Measurement Models 92
3.3.1 Model Outline 93
3.3.2 Data Preprocessing and Selection 94
3.3.3 Evaluation Metric 95
3.3.4 Hyperparameter Tuning 96
3.4 Crescent Rim Features Models 100
3.4.1 Model Outline 100
3.4.2 Data Preprocessing and Selection 101
3.4.3 L1 and L2 Regularization 102
3.4.4 Cross-Validation 104
3.5 Training Environment Setup 106
3.5.1 Hardware Environment 106
3.5.2 Software Environment 107
Chapter 4 Results 109
4.1 Presence of Fringe and Rim 110
4.1.1 Models Setting 111
4.1.2 Performance and Demonstration 112
4.2 Prediction of the Coordination Number 117
4.2.1 Models Setting 117
4.2.2 Performance and Demonstration 118
4.3 Force Measurement on Disks 120
4.3.1 Hyperparameter Tuning Outputs and Model Setting 121
4.3.2 Performance 126
4.4 Crescent Rim Thickness and Location 131
4.4.1 Hyperparameter Tuning Outputs and Model Setting 132
4.4.2 Performance 138
Chapter 5 Conclusions and Future Works 143
5.1 Conclusions 143
5.2 Future Works 146
Reference 149 | - |
| dc.language.iso | en | - |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 光彈顆粒 | zh_TW |
| dc.subject | 影像處理 | zh_TW |
| dc.subject | 力量測 | zh_TW |
| dc.subject | 傾倒趨勢 | zh_TW |
| dc.subject | 配位數 | zh_TW |
| dc.subject | tilt trend | en |
| dc.subject | image processing | en |
| dc.subject | force measurement | en |
| dc.subject | coordination number | en |
| dc.subject | Deep learning | en |
| dc.subject | photoelastic disk | en |
| dc.title | 為分析斜坡流光彈顆粒受力與傾倒所發展之深度學習模型 | zh_TW |
| dc.title | Development of Deep Learning Models for Forcing and Tilting Analysis of Photoelastic Disks in Inclined Chute flows | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 林正釧;李庚霖 | zh_TW |
| dc.contributor.oralexamcommittee | Cheng-Chuan Lin;Keng-Lin Lee | en |
| dc.subject.keyword | 深度學習,光彈顆粒,影像處理,力量測,傾倒趨勢,配位數, | zh_TW |
| dc.subject.keyword | Deep learning, photoelastic disk, image processing, force measurement, tilt trend, coordination number | en |
| dc.relation.page | 154 | - |
| dc.identifier.doi | 10.6342/NTU202404087 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2024-08-14 | - |
| dc.contributor.author-college | College of Engineering | - |
| dc.contributor.author-dept | Department of Mechanical Engineering | - |
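The preprocessing step described in the abstract, cropping multi-disk photoelastic images down to single-disk images, can be sketched as follows. This is a minimal illustration under stated assumptions: the function name `crop_disks` and its parameters are hypothetical, and in the thesis the disk centres would come from a Hough-transform detection pipeline rather than being supplied by hand.

```python
import numpy as np

def crop_disks(image, centers, radius, margin=2):
    """Crop a square patch around each detected disk centre.

    image   : 2-D grayscale array of the multi-disk frame
    centers : (row, col) disk centres, e.g. from a Hough-circle detector
    radius  : disk radius in pixels
    margin  : extra pixels kept around each disk
    Disks too close to the border are skipped so every patch has the
    same size, ready to feed a fixed-input CNN.
    """
    half = radius + margin
    patches = []
    for r, c in centers:
        if half <= r < image.shape[0] - half and half <= c < image.shape[1] - half:
            patches.append(image[r - half:r + half, c - half:c + half].copy())
    return patches

# Tiny synthetic frame with two bright "disks"; the centre near the
# border is skipped, leaving two uniform 16x16 patches.
img = np.zeros((64, 64))
img[14:19, 14:19] = 1.0
img[44:49, 44:49] = 1.0
crops = crop_disks(img, [(16, 16), (46, 46), (2, 2)], radius=6)
print(len(crops), crops[0].shape)
```

Each uniform patch would then pass through the classification models (fringe presence, crescent rim presence, coordination number) before the regression models estimate force and tilt.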
| Appears in Collections: | Department of Mechanical Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (not authorized for public access) | 7.21 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
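The abstract reports that the models "perform well in training and validation", and the table of contents lists an R-squared evaluation metric for the force-measurement regression models. A minimal sketch of that metric, assuming the standard coefficient-of-determination definition rather than the thesis's exact implementation:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))  # perfect fit: 1.0
```

A value near 1 indicates the predicted force magnitudes track the labels closely; a value at or below 0 indicates the model does no better than always predicting the mean.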
