Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93945

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳良基 | zh_TW |
| dc.contributor.advisor | Liang-Gee Chen | en |
| dc.contributor.author | 艾弗里 | zh_TW |
| dc.contributor.author | Everett Fall | en |
| dc.date.accessioned | 2024-08-09T16:36:12Z | - |
| dc.date.available | 2024-08-10 | - |
| dc.date.copyright | 2024-08-09 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-06 | - |
| dc.identifier.citation | [1] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. P. Singh, "Action-conditional video prediction using deep networks in Atari games," in Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, 2015, pp. 2863–2871.
[2] C. Finn, I. J. Goodfellow, and S. Levine, "Unsupervised learning for physical interaction through video prediction," in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 2016, pp. 64–72.
[3] H. Schwarz, D. Marpe, and T. Wiegand, "Overview of the scalable video coding extension of the H.264/AVC standard," IEEE Trans. Circuits Syst. Video Techn., vol. 17, no. 9, pp. 1103–1120, 2007.
[4] G. J. Sullivan, J. Ohm, W. Han, and T. Wiegand, "Overview of the high efficiency video coding (HEVC) standard," IEEE Trans. Circuits Syst. Video Techn., vol. 22, no. 12, pp. 1649–1668, 2012.
[5] E. Fall, K. Chang, and L. Chen, "Dynamically expanded CNN array for video coding," in ICIGP 2020: 3rd International Conference on Image and Graphics Processing, Singapore, February 2020. ACM, 2020, pp. 85–90.
[6] W. Cui, T. Zhang, S. Zhang, F. Jiang, W. Zuo, Z. Wan, and D. Zhao, "Convolutional neural networks based intra prediction for HEVC," in 2017 Data Compression Conference, DCC 2017, Snowbird, UT, USA, April 4-7, 2017, 2017, p. 436.
[7] J. Li, B. Li, J. Xu, R. Xiong, and W. Gao, "Fully connected network-based intra prediction for image coding," IEEE Trans. Image Processing, vol. 27, no. 7, pp. 3236–3247, 2018.
[8] S. Huo, D. Liu, F. Wu, and H. Li, "Convolutional neural network-based motion compensation refinement for video coding," in IEEE International Symposium on Circuits and Systems, ISCAS 2018, 27-30 May 2018, Florence, Italy, 2018, pp. 1–4.
[9] N. Yan, D. Liu, H. Li, B. Li, L. Li, and F. Wu, "Convolutional neural network-based fractional-pixel motion compensation," IEEE Trans. Circuits Syst. Video Techn., vol. 29, no. 3, pp. 840–853, 2019.
[10] W. Park and M. Kim, "CNN-based in-loop filtering for coding efficiency improvement," in IEEE 12th Image, Video, and Multidimensional Signal Processing Workshop, IVMSP 2016, Bordeaux, France, July 11-12, 2016, 2016, pp. 1–5.
[11] J. Kang, S. Kim, and K. M. Lee, "Multi-modal/multi-scale convolutional neural network based in-loop filter design for next generation video codec," in 2017 IEEE International Conference on Image Processing, ICIP 2017, Beijing, China, September 17-20, 2017, 2017, pp. 26–30.
[12] Y. Zhang, T. Shen, X. Ji, Y. Zhang, R. Xiong, and Q. Dai, "Residual highway convolutional neural networks for in-loop filtering in HEVC," IEEE Trans. Image Processing, vol. 27, no. 8, pp. 3827–3841, 2018.
[13] C. Jia, S. Wang, X. Zhang, S. Wang, J. Liu, S. Pu, and S. Ma, "Content-aware convolutional neural network for in-loop filtering in high efficiency video coding," IEEE Trans. Image Processing, vol. 28, no. 7, pp. 3343–3356, 2019.
[14] C. Li, L. Song, R. Xie, and W. Zhang, "CNN based post-processing to improve HEVC," in 2017 IEEE International Conference on Image Processing, ICIP 2017, Beijing, China, September 17-20, 2017, 2017, pp. 4577–4580.
[15] R. Yang, M. Xu, and Z. Wang, "Decoder-side HEVC quality enhancement with scalable convolutional neural network," in 2017 IEEE International Conference on Multimedia and Expo, ICME 2017, Hong Kong, China, July 10-14, 2017, 2017, pp. 817–822.
[16] T. Wang, M. Chen, and H. Chao, "A novel deep learning-based method of improving coding efficiency from the decoder-end for HEVC," in 2017 Data Compression Conference, DCC 2017, Snowbird, UT, USA, April 4-7, 2017, 2017, pp. 410–419.
[17] C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV, 2014, pp. 184–199.
[18] C. Dong, Y. Deng, C. C. Loy, and X. Tang, "Compression artifacts reduction by a deep convolutional network," in 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, 2015, pp. 576–584.
[19] C. Wu, N. Singhal, and P. Krähenbühl, "Video compression through image interpolation," in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VIII, 2018, pp. 425–440.
[20] G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, "DVC: an end-to-end deep video compression framework," CoRR, vol. abs/1812.00101, 2018.
[21] O. Rippel, S. Nair, C. Lew, S. Branson, A. G. Anderson, and L. D. Bourdev, "Learned video compression," CoRR, vol. abs/1811.06981, 2018.
[22] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?" in Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13, 2014, Montreal, Quebec, Canada, 2014, pp. 3320–3328.
[23] E. Fall, M. Tatsubori, D. J. Agravante, M. Asai, S. Morikuni, D. Kimura, S. Chaudhury, A. Munawar, and L.-G. Chen, "Option discovery with prediction network ensembles from demonstrations in unconstrained state spaces," in ICML Workshop on Multi-Task and Lifelong Reinforcement Learning, June 2019, workshop paper.
[24] N. Mehta, S. Ray, P. Tadepalli, and T. Dietterich, "Automatic discovery and transfer of task hierarchies in reinforcement learning," AI Magazine, vol. 32, no. 1, p. 35, Mar. 2011.
[25] T. D. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum, "Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation," in Advances in Neural Information Processing Systems, 2016, pp. 3675–3683.
[26] C. Tessler, S. Givony, T. Zahavy, D. J. Mankowitz, and S. Mannor, "A deep hierarchical approach to lifelong learning in Minecraft," in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, 2017, pp. 1553–1561.
[27] S. Minton, "Quantitative Results Concerning the Utility of Explanation-Based Learning," Artificial Intelligence, vol. 42, no. 2, pp. 363–391, 1990.
[28] D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, and D. Wilkins, "PDDL - the planning domain definition language," 1998, Technical Report CVC TR98003/DCS TR1165, Yale Center for Computational Vision and Control, New Haven, CT.
[29] M. Helmert, "A Planning Heuristic Based on Causal Graph Analysis," in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2004, pp. 161–170.
[30] M. Helmert and C. Domshlak, "Landmarks, Critical Paths and Abstractions: What's the Difference Anyway?" in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2009.
[31] A. Botea, M. Enzenberger, M. Müller, and J. Schaeffer, "Macro-FF: Improving AI Planning with Automatically Learned Macro-Operators," J. Artif. Intell. Res. (JAIR), vol. 24, pp. 581–621, 2005.
[32] L. Chrpa, "Generation of Macro-Operators via Investigation of Action Dependencies in Plans," Knowledge Engineering Review, vol. 25, no. 3, p. 281, 2010.
[33] L. Chrpa, M. Vallati, and T. L. McCluskey, "MUM: A Technique for Maximising the Utility of Macro-operators by Constrained Generation and Use," in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2014.
[34] A. Coles and A. Smith, "Marvin: Macro Actions from Reduced Versions of the Instance," in Proceedings of the International Planning Competition, 2004, http://www.tzi.de/~edelkamp/ipc-4/IPC-4.pdf.
[35] A. I. Coles and A. J. Smith, "Marvin: A heuristic search planner with online macro-action learning," Journal of Artificial Intelligence Research, vol. 28, pp. 119–156, 2007.
[36] M. Asai and A. Fukunaga, "Solving Large-Scale Planning Problems by Decomposition and Macro Generation," in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), Jerusalem, Israel, June 2015.
[37] M. A. H. Newton, J. Levine, M. Fox, and D. Long, "Learning macro-actions for arbitrary planners and domains," in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2007, pp. 256–263.
[38] C. Hogg, H. Muñoz-Avila, and U. Kuter, "Learning hierarchical task models from input traces," Computational Intelligence, vol. 32, no. 1, pp. 3–48, 2016.
[39] N. Nejati, P. Langley, and T. Könik, "Learning hierarchical task networks by observation," in Proceedings of the International Conference on Machine Learning, 2006, pp. 665–672.
[40] A. McGovern and A. G. Barto, "Automatic discovery of subgoals in reinforcement learning using diverse density," in Proceedings of the International Conference on Machine Learning, C. E. Brodley and A. P. Danyluk, Eds. Morgan Kaufmann, 2001, pp. 361–368.
[41] I. Menache, S. Mannor, and N. Shimkin, "Q-cut - dynamic discovery of sub-goals in reinforcement learning," in Proceedings of the European Conference on Machine Learning, ser. Lecture Notes in Computer Science, T. Elomaa, H. Mannila, and H. Toivonen, Eds., vol. 2430. Springer, 2002, pp. 295–306.
[42] P. Bacon, J. Harb, and D. Precup, "The option-critic architecture," in Proceedings of the AAAI Conference on Artificial Intelligence, S. P. Singh and S. Markovitch, Eds. AAAI Press, 2017, pp. 1726–1734.
[43] R. S. Sutton, D. Precup, and S. Singh, "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning," Artificial Intelligence, vol. 112, no. 1, pp. 181–211, 1999.
[44] A. Newell and J. Deng, "Pixels to graphs by associative embedding," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, 2017, pp. 2168–2177.
[45] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[46] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016.
[47] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., "Language models are unsupervised multitask learners," OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
[48] J. Hestness, S. Narang, N. Ardalani, G. F. Diamos, H. Jun, H. Kianinejad, M. M. A. Patwary, Y. Yang, and Y. Zhou, "Deep learning scaling is predictable, empirically," CoRR, vol. abs/1712.00409, 2017.
[49] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, "Understanding deep learning requires rethinking generalization," Communications of the ACM, vol. 64, Nov. 2016.
[50] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio, "An empirical investigation of catastrophic forgetting in gradient-based neural networks," arXiv preprint arXiv:1312.6211, 2013.
[51] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., "Overcoming catastrophic forgetting in neural networks," Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521–3526, 2017.
[52] R. Kemker, M. McClure, A. Abitino, T. Hayes, and C. Kanan, "Measuring catastrophic forgetting in neural networks," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
[53] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, "Continual lifelong learning with neural networks: A review," Neural Networks, vol. 113, pp. 54–71, 2019.
[54] R. Hadsell, D. Rao, A. A. Rusu, and R. Pascanu, "Embracing change: Continual learning in deep neural networks," Trends in Cognitive Sciences, vol. 24, no. 12, pp. 1028–1040, 2020.
[55] E. Fall, K.-W. Chang, and L.-G. Chen, "Tree-managed network ensembles for video prediction," Machine Vision and Applications, vol. 35, Jul. 2024.
[56] N. C. Oza and S. Russell, Online Ensemble Learning. University of California, Berkeley, 2001.
[57] A. Mohammed and R. Kora, "A comprehensive review on ensemble deep learning: Opportunities and challenges," J. King Saud Univ. Comput. Inf. Sci., vol. 35, no. 2, pp. 757–774, 2023.
[58] M. A. Ganaie, M. Hu, A. K. Malik, M. Tanveer, and P. N. Suganthan, "Ensemble deep learning: A review," Eng. Appl. Artif. Intell., vol. 115, p. 105151, 2022.
[59] L. Rokach, "Ensemble-based classifiers," Artificial Intelligence Review, vol. 33, no. 1-2, pp. 1–39, 2010.
[60] A. S. Vezhnevets, S. Osindero, T. Schaul, N. Heess, M. Jaderberg, D. Silver, and K. Kavukcuoglu, "Feudal networks for hierarchical reinforcement learning," in Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, 2017, pp. 3540–3549.
[61] B. Bzier, "Gym-mupen64plus code repository," 2017.
[62] B. Shaqour, M. Abuabiah, S. Abdel-fattah, A. Juaidi, R. Abdallah, W. Abuzaina, M. Alqarout, B. Verleije, and P. Cos, "Gaining a better understanding of the extrusion process in fused filament fabrication 3D printing: a review," The International Journal of Advanced Manufacturing Technology, vol. 114, May 2021.
[63] R. Côté, V. Demers, N. R. Demarquette, S. Charlon, and J. Soulestin, "A strategy to eliminate interbead defects and improve dimensional accuracy in material extrusion 3D printing of highly filled polymer," Additive Manufacturing, vol. 68, p. 103509, 2023.
[64] J. Butt, R. Bhaskar, and V. Mohaghegh, "Investigating the effects of extrusion temperatures and material extrusion rates on FFF-printed thermoplastics," The International Journal of Advanced Manufacturing Technology, vol. 117, Dec. 2021.
[65] N. Siddique, P. Dhakan, I. Rañó, and K. E. Merrick, "A review of the relationship between novelty, intrinsic motivation and reinforcement learning," Paladyn, vol. 8, no. 1, pp. 58–69, 2017.
[66] J. Achiam and S. Sastry, "Surprise-based intrinsic motivation for deep reinforcement learning," CoRR, vol. abs/1703.01732, 2017.
[67] F. Pardo, V. Levdik, and P. Kormushev, "Goal-oriented trajectories for efficient exploration," CoRR, vol. abs/1807.02078, 2018.
[68] R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. D. Turck, and P. Abbeel, "VIME: variational information maximizing exploration," in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 2016, pp. 1109–1117. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93945 | - |
| dc.description.abstract | 本論文主要探討使用神經網路集成於視訊預測任務。每個章節聚焦於一個獨立的任務或應用,每章包含了該任務或應用的介紹、問題定義,以及結果評估和討論。
在第一章中,我們利用視訊預測來提高視訊編碼的品質,透過小型卷積神經網絡(CNN)的集成來預測因標準編碼與解碼演算法所導致的誤差。該預測誤差隨後可從視訊中被減去以提高視訊品質。我們對神經網路集成進行分群,將集成中不同的神經網絡與參數分配到視訊中的特定區域。實驗結果顯示,當我們的方法應用於 H.265 編碼時,可以在相同位元率的情況下提升圖像品質。 在第二章中,我們研究在虛擬世界中,代理角色(agent)與環境具有互動的視訊預測。我們提出了一種使用 CNN 集成來識別代理角色在執行較大任務過程中完成的子任務的技術。我們證明,通過在集成中納入不同的預測時間範圍,神經網絡可以學會根據子任務開始時的狀態預測子任務完成後環境的狀態。這種預測反過來可以用來預測子任務執行的開始和結束時間,從而提供一種從可能包含許多其他子任務的較長視訊中提取子任務的方法。 在第三章中,我們進一步研究動作條件視訊預測(action-conditional video prediction),在進行視訊預測時因同時考慮代理角色所採取的動作,複雜性因而提高。這種動作條件視訊預測可用於預測代理角色在環境中的移動軌跡。我們評測代理角色位置的長期預測準確性,並展現我們的方法表現優於目前最先進的方法。我們也開發了一種新穎的指標來量化隨機環境中的視訊預測,並證明該指標可以與定性結果更加一致,因此能更好地區分模型表現。 在第四章中,我們應用本論文提出的神經網路集成樹於3D列印中的誤差預測。我們展示了這種技術可用於預測不同幾何形狀的物件表面上的列印瑕疵,即使在訓練資料集中沒有無瑕疵物件的情況下,仍然可以學習預測誤差。我們計算了透過早期檢測錯誤,理論上所節省的時間和材料。最後,我們設計並製作了一種新型的自動校正3D列印機,具自動檢測錯誤功能並自動使用銑削工具進行修正。本論文提出了一個新穎的方法來建構和存取集成中的神經網路以分割問題空間,我們希望論文中研究的案例能作為神經網路集成於理論和實際應用中的範例。 | zh_TW |
| dc.description.abstract | This dissertation explores strategies for applying network ensembles in video prediction tasks. Each chapter focuses on a separate task or application, and therefore includes an introduction and problem formulation in the context of that task or application as well as an evaluation and discussion of results.
We begin in Chapter 1 by using prediction to enhance the quality of video encoding, employing an ensemble of small convolutional neural networks (CNNs) to predict the errors introduced by a standard encoding/decoding algorithm. The predicted error can then be subtracted from the decoded video to improve quality. We partition the ensemble, assigning different networks or groups of parameters to specific regions of the video, and show that for a given bit rate, image quality improves when our method is paired with the H.265 encoding scheme. In Chapter 2 we turn to video prediction as it pertains to the state of an agent taking actions in an environment. We propose a technique that uses an ensemble of CNNs to identify the subtasks an agent completes during the performance of a larger task. We show that by incorporating a range of prediction time horizons within the ensemble, the networks can learn to predict the state of the environment after a subtask has been completed from the state in which it was initiated. This prediction can in turn be used to estimate the initiation and termination times of subtask execution, providing a way to extract a subtask from a longer video that may contain many other subtasks. In Chapter 3 we extend these video prediction methods by conditioning each prediction on the action taken by the agent, which increases the complexity of the task. Action-conditional video prediction can be used to forecast an agent's trajectory within an environment. We evaluate the accuracy of long-term predictions of the agent's location and show improvement over state-of-the-art methods. We also develop a novel high-level metric for quantifying predictions in stochastic environments and show that it aligns better with qualitative results, making it easier to distinguish model performance. Chapter 4 applies our proposed ensemble-tree to predicting errors in 3D printing. We demonstrate that this technique can predict error artifacts on the surfaces of printed parts of varying geometry, and that it can learn to predict errors even when the training data contains no error-free parts. We calculate the theoretical time and material savings achievable through early detection of errors. Finally, we designed and implemented a novel Auto-Correcting Printer that detects errors and uses a milling tool to make corrections. The goal of this dissertation is to present novel techniques for partitioning a problem space and for constructing and accessing the networks in an ensemble. We intend for the cases we have studied to serve as examples of theoretical and practical applications of network ensembles for video prediction. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-09T16:36:12Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-09T16:36:12Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Contents ix
List of Figures xiii
List of Tables xxv
Abstract xxviii
1 Applications in Video Coding 1
1.1 Introduction 2
1.2 Related Work 4
1.2.1 Intra-Prediction 4
1.2.2 Inter-Prediction 4
1.2.3 Post In-loop Filtering 5
1.2.4 End-to-end Video Coding Framework 6
1.3 Coding Problem 7
1.4 Dynamically Expanded CNN Array 8
1.4.1 Hierarchical Parameter Groups 9
1.4.2 Video Partitioning 11
1.5 Experimental Evaluation 14
1.6 Conclusion 18
2 Applications in Subtask Extraction 19
2.1 Introduction 20
2.2 Related Work 24
2.3 Subtask Extraction Problem 27
2.4 Prediction Network Ensemble 30
2.4.1 Network Ensemble 30
2.4.2 Kernel K-Means Clustering 33
2.5 Experimental Results 35
2.5.1 Single Prediction Network 35
2.5.2 Prediction Network Ensemble 37
2.5.3 Similar Subtasks 38
2.5.4 Application Example with Construction World 42
2.6 Conclusion 45
3 Applications in Action-Conditional Video Prediction 47
3.1 Introduction 48
3.2 Related Works 51
3.2.1 Video Prediction 51
3.2.2 Action-conditional Video Prediction 52
3.2.3 Dynamic Neural Networks 53
3.3 Tree-Managed Network Ensembles 55
3.3.1 Problem Statement 55
3.3.2 Architecture Motivation 55
3.3.3 Architecture 56
3.4 Evaluation 64
3.4.1 Time Complexity 65
3.4.2 Division of Labor 66
3.4.3 Prediction Quality Evaluation 68
3.5 Conclusions 76
4 Applications in 3D Printing Video Prediction 79
4.1 Ensemble-Tree for Video Prediction in 3D Printing 80
4.2 Auto-Correcting Printer 83
4.2.1 Mechanical System Design 83
4.2.2 Electronic System Design 85
4.3 Evaluation 91
5 Conclusion 97
5.1 Future Works 98
6 Bibliography 101
A Unconstrained State-Space Complexity 113
A.1 Introduction 113
A.2 Constraints on State Spaces 118
A.2.1 Empirical Categorization Method 121
B Improved Quantitative Evaluation Metric for Action-conditional Predictions 123
B.1 Qualitative Analysis of Extended Recurrent Action-conditional Predictions 125
C Ensemble-tree Model Architectures 127
C.1 Leaf-node Model 127
C.2 Comparison-node Model 128
C.3 Action-conditional Prediction Model 129 | - |
| dc.language.iso | en | - |
| dc.subject | 集成樹 | zh_TW |
| dc.subject | 視頻預測 | zh_TW |
| dc.subject | 動作條件式 | zh_TW |
| dc.subject | 視頻編碼 | zh_TW |
| dc.subject | 子任務提取 | zh_TW |
| dc.subject | 3D打印 | zh_TW |
| dc.subject | 錯誤檢測 | zh_TW |
| dc.subject | video prediction | en |
| dc.subject | error detection | en |
| dc.subject | 3D printing | en |
| dc.subject | subtask extraction | en |
| dc.subject | video coding | en |
| dc.subject | action-conditional | en |
| dc.subject | Ensemble-tree | en |
| dc.title | 動態集成樹於視訊預測及其應用 | zh_TW |
| dc.title | Dynamic Ensemble-Trees for Video Prediction and Applications | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Doctoral | - |
| dc.contributor.oralexamcommittee | 吳安宇;黃俊郎;盧奕璋;劉宗德;陳尚澤;黃聖傑;莊子德 | zh_TW |
| dc.contributor.oralexamcommittee | An-Yeu Wu;Jiun-Lang Huang;Yi-Chang Lu;Tsung-Te Liu;Shang-Tse Chen;Sheng-Chieh Huang;Tzu-Der Chuang | en |
| dc.subject.keyword | 集成樹,視頻預測,動作條件式,視頻編碼,子任務提取,3D打印,錯誤檢測 | zh_TW |
| dc.subject.keyword | Ensemble-tree, video prediction, action-conditional, video coding, subtask extraction, 3D printing, error detection | en |
| dc.relation.page | 130 | - |
| dc.identifier.doi | 10.6342/NTU202403008 | - |
| dc.rights.note | Authorized (access restricted to campus) | - |
| dc.date.accepted | 2024-08-06 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Department of Electrical Engineering | - |
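As a reading aid for this record, the following minimal sketch illustrates the Chapter 1 approach summarized in the abstracts above: an ensemble of small CNNs, each assigned to one region of a decoded frame, predicts the coding error for its region, and the predicted error is subtracted from the frame. This is an illustrative reconstruction, not code from the dissertation; the class names (`SmallErrorCNN`, `RegionEnsemble`), the fixed-grid tiling, and all layer sizes are hypothetical choices.

```python
# Illustrative sketch only (hypothetical names; not the dissertation's code):
# a region-partitioned ensemble of small CNNs that predicts coding error,
# which is then subtracted from the decoded frame to enhance quality.
import torch
import torch.nn as nn


class SmallErrorCNN(nn.Module):
    """A small CNN that predicts the coding error of one video region."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class RegionEnsemble(nn.Module):
    """Assigns one ensemble member to each tile of the decoded frame and
    subtracts that member's predicted error from its tile."""

    def __init__(self, grid: int = 2, channels: int = 3):
        super().__init__()
        self.grid = grid
        self.members = nn.ModuleList(
            SmallErrorCNN(channels) for _ in range(grid * grid)
        )

    def forward(self, decoded: torch.Tensor) -> torch.Tensor:
        _, _, h, w = decoded.shape
        th, tw = h // self.grid, w // self.grid
        enhanced = decoded.clone()
        for i in range(self.grid):
            for j in range(self.grid):
                tile = decoded[:, :, i * th:(i + 1) * th, j * tw:(j + 1) * tw]
                member = self.members[i * self.grid + j]
                # Enhanced tile = decoded tile minus its predicted coding error.
                enhanced[:, :, i * th:(i + 1) * th, j * tw:(j + 1) * tw] = tile - member(tile)
        return enhanced


if __name__ == "__main__":
    frame = torch.rand(1, 3, 64, 64)  # stand-in for a decoded H.265 frame
    print(RegionEnsemble(grid=2)(frame).shape)  # torch.Size([1, 3, 64, 64])
```

Note that the dissertation partitions the video adaptively (hierarchical parameter groups and video partitioning, §1.4 of the table of contents), whereas this sketch uses a fixed grid purely for brevity.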
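Similarly, the Chapter 2 abstract describes detecting subtask boundaries with predictors of different time horizons: a subtask spanning [t, t+h] is flagged when a horizon-h predictor, applied to the state at t, closely matches the observed state at t+h. The toy sketch below follows that reading; `detect_subtasks`, the MSE criterion, and the threshold are hypothetical stand-ins, not the dissertation's actual machinery (which also involves kernel k-means clustering per §2.4.2).

```python
# Toy illustration (hypothetical names) of multi-horizon subtask detection:
# a horizon-h predictor that "recognizes" the span [t, t+h] marks a subtask's
# initiation and termination times.
import numpy as np


def detect_subtasks(states, predictors, threshold=0.05):
    """states: array of shape (T, D); predictors: {horizon h: callable D -> D}.
    Returns (start, end, horizon) triples where the horizon-h predictor,
    applied to the state at `start`, closely matches the state at `end`."""
    T = len(states)
    detections = []
    for t in range(T):
        for h, f in predictors.items():
            if t + h >= T:
                continue
            err = np.mean((f(states[t]) - states[t + h]) ** 2)
            if err < threshold:  # predictor recognizes this span as its subtask
                detections.append((t, t + h, h))
    return detections


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = rng.normal(size=(8, 4))
    states[5] = 2.0 * states[2]          # pretend a subtask ran from t=2 to t=5
    predictors = {3: lambda x: 2.0 * x}  # horizon-3 predictor for that subtask
    print(detect_subtasks(states, predictors))  # expect [(2, 5, 3)]
```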
Appears in Collections: Department of Electrical Engineering
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (access restricted to NTU campus IPs; off-campus users please connect via the NTU VPN service) | 38.07 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
