Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100922

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 管希聖 | zh_TW |
| dc.contributor.advisor | Hsi-Sheng Goan | en |
| dc.contributor.author | 劉宸銉 | zh_TW |
| dc.contributor.author | Chen-Yu Liu | en |
| dc.date.accessioned | 2025-11-26T16:06:22Z | - |
| dc.date.available | 2025-11-27 | - |
| dc.date.copyright | 2025-11-26 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-11-18 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100922 | - |
| dc.description.abstract | 本論文探討量子計算在參數高效學習中的應用,將量子電路重新定位為參數生成器,而非直接用於推論的模型。我們提出Quantum-Train (QT)架構,利用量子神經網路將需訓練的參數數量從 O(M) 降至 polylog(M),同時保持推論完全於古典電腦上執行;並進一步提出Quantum Parameter Adaptation (QPA),將此原理擴展至大型預訓練模型的微調,有效降低微調過程中的參數成本。理論分析涵蓋近似誤差與泛化能力,實驗則驗證了這些方法在影像分類、語言模型微調、洪水預測、強化學習以及颱風路徑預測等任務上的效能。結果顯示,QT與QPA能在大幅壓縮參數的同時維持接近的性能,提供一條以效率與可部署性為核心的量子實用性途徑。透過將量子生成的參數整合進古典機器學習流程,本研究展現了量子與經典協同運作於可擴展且資源高效人工智慧中的潛力。 | zh_TW |
| dc.description.abstract | This thesis explores quantum approaches to parameter-efficient learning, reframing quantum circuits as parameter generators for classical models rather than direct inference engines. We introduce Quantum-Train (QT), which employs quantum neural networks to reduce the number of trainable parameters from O(M) to polylog(M) while keeping inference fully classical, and Quantum Parameter Adaptation (QPA), which extends this principle to fine-tuning large pre-trained models with dramatic reductions in adaptation cost. Theoretical analyses examine approximation error and generalization, while empirical studies validate the frameworks across image classification, language model fine-tuning, flood forecasting, reinforcement learning, and typhoon trajectory prediction. Results show that QT and QPA achieve substantial compression with minimal performance loss, offering a realistic pathway toward quantum utility rooted in efficiency and deployability. By integrating quantum-generated parameters into classical machine learning pipelines, this work highlights the potential of quantum–classical synergy for scalable and resource-efficient AI. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-11-26T16:06:22Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-11-26T16:06:22Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
摘要 ii
Abstract iii
Contents v
List of Figures viii
List of Tables xiv
Chapter 1 Introduction to Quantum Machine Learning 1
1.1 What is Machine Learning? 1
1.2 Quantum Computing as an Element of Machine Learning 3
1.3 Quantum Kernel Method 4
1.4 Parameterized Quantum Circuits as Quantum Neural Networks 6
1.5 Hybrid Quantum–Classical Models 10
Chapter 2 Quantum-Train 11
2.1 Parameter Generation via Quantum Neural Networks 11
2.2 Training Process and Gradient-Based Updates 15
2.3 Theoretical Perspective on Approximation Error 16
2.4 Results on Model Complexity and Efficiency 20
2.5 On Generalization Error 25
2.6 Beyond MLP Mapping Models 28
2.6.1 Impact of MLP Architecture 29
2.6.2 Tensor Network Mapping Model 29
2.7 Comparison with Classical Compression Methods 31
2.7.1 Weight Sharing 32
2.7.2 Pruning 33
2.7.3 Comparison of QT and Other Methods 33
2.7.4 Low-Rank Adaptation 34
2.7.5 Combination of QT and LoRA 35
2.8 Effect of Noise and Finite Measurement Shots 37
2.8.1 Model Accuracy Across Configurations 37
2.8.2 Variance of QNN Gradients 39
2.9 Practicality of Quantum-Train 42
Chapter 3 Quantum Parameter Adaptation 46
3.1 Beyond Quantum-Train 47
3.2 Parameter-Efficient Fine-Tuning (PEFT) Methods 48
3.3 Quantum Parameter Generation for Efficient Adaptation 50
3.3.1 Quantum Circuit–Based Model Parameter Generation 51
3.4 Batched Parameter Generation in Quantum Parameter Adaptation 53
3.4.1 From Model Tuning to General Parameter-Tuning Tasks 55
3.5 Performance of Quantum Parameter Adaptation 56
3.6 Effects of Hyperparameter Settings 61
3.7 Training Hyperparameter Configuration 63
3.8 Effects of Different Circuit Ansatz 65
3.9 Finite Measurement Shots and Noise 67
3.10 Gradient Variance of Quantum Circuit Parameters 70
3.11 On Computational Time 71
Chapter 4 Applications of QT and QPA 74
4.1 Quantum-Train on Flood Prediction Problem 75
4.2 Quantum-Train Reinforcement Learning 77
4.3 QPA for Typhoon Trajectory Forecasting 80
Chapter 5 Conclusion 84
5.1 Summary of Contributions 84
5.2 Key Insights 86
5.3 Future Directions 87
5.3.1 Model Compression in the Inference Stage 88
5.4 Concluding Remarks 88
References 90
Appendix A — Proof of QT Approximation Bound 100 | - |
| dc.language.iso | en | - |
| dc.subject | 量子計算 | - |
| dc.subject | 量子機器學習 | - |
| dc.subject | Quantum Computing | - |
| dc.subject | Quantum Machine Learning | - |
| dc.title | 量子訓練: 從模型壓縮的觀點重新思考混合式量子古典機器學習 | zh_TW |
| dc.title | Quantum-Train: Rethinking Hybrid Quantum-Classical Machine Learning in the Model Compression Perspective | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | 博士 (Doctoral) | - |
| dc.contributor.oralexamcommittee | 張慶瑞;周耀新;歐家和;賴青瑞;江振瑞;蔡芸琤 | zh_TW |
| dc.contributor.oralexamcommittee | Ching-Ray Chang;Yao-Hsin Chou;Chia-Ho Ou;Ching-Jui Lai;Jehn-Ruey Jiang;Yun-Cheng Tsai | en |
| dc.subject.keyword | 量子計算,量子機器學習 | zh_TW |
| dc.subject.keyword | Quantum Computing,Quantum Machine Learning | en |
| dc.relation.page | 104 | - |
| dc.identifier.doi | 10.6342/NTU202504677 | - |
| dc.rights.note | Authorized for release (open access worldwide) | - |
| dc.date.accepted | 2025-11-18 | - |
| dc.contributor.author-college | 理學院 | - |
| dc.contributor.author-dept | 應用物理研究所 | - |
| dc.date.embargo-lift | 2025-11-27 | - |
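The abstract above frames the quantum circuit as a parameter generator for a classical model rather than an inference engine. The sketch below is a minimal toy illustration of that information flow, not the thesis's implementation: it assumes PennyLane's default.qubit simulator with a PyTorch interface, a hypothetical three-layer hardware-efficient ansatz, and an arbitrary small mapping MLP that turns each basis-state bitstring and its measured probability into one classical weight. At this toy scale there is no actual saving; the point is only that gradients reach the circuit angles and the mapper, while the M generated weights drive a purely classical forward pass.

```python
# Toy sketch (assumptions as stated above; not the thesis code) of the Quantum-Train idea:
# a small quantum circuit plus a tiny mapping network generate all M weights of a
# classical model, so only the circuit angles and the mapper are actually trained.
import math
import torch
import pennylane as qml

M = 64                                   # number of classical weights to generate (toy size)
n_qubits = math.ceil(math.log2(M))       # qubit count grows only logarithmically in M
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnn(theta):
    # hypothetical hardware-efficient ansatz: RY rotations plus a ring of CNOTs per layer
    for layer in range(theta.shape[0]):
        for w in range(n_qubits):
            qml.RY(theta[layer, w], wires=w)
        for w in range(n_qubits):
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
    return qml.probs(wires=range(n_qubits))          # 2**n_qubits basis-state probabilities

theta = torch.randn(3, n_qubits, requires_grad=True)  # trainable circuit angles
mapper = torch.nn.Sequential(                          # small trainable mapping model
    torch.nn.Linear(n_qubits + 1, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1)
)

def generate_weights():
    probs = qnn(theta)[:M]                             # one probability per target weight
    bits = torch.tensor([[float(b) for b in format(i, f"0{n_qubits}b")] for i in range(M)])
    feats = torch.cat([bits, probs.unsqueeze(1).float()], dim=1)  # (bitstring, probability)
    return mapper(feats).squeeze(1)                    # M generated classical weights

# The generated weights parameterize a purely classical layer; the backward pass
# reaches only `theta` and `mapper`, which are the parameters that get updated.
w = generate_weights()                                 # shape (M,)
W1, b1 = w[:56].reshape(8, 7), w[56:64]                # toy 7 -> 8 linear layer
x = torch.randn(10, 7)
loss = torch.tanh(x @ W1.T + b1).pow(2).mean()
loss.backward()
```

In the same spirit, Quantum Parameter Adaptation as described in the abstract would generate only a small set of adaptation parameters (for example, LoRA-style low-rank updates) for a frozen pre-trained model rather than the full weight vector; that extension is omitted from this sketch.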
| Appears in Collections: | 應用物理研究所 | |
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-114-1.pdf | 17.71 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.