Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94241
Full metadata record
DC Field / Value / Language
dc.contributor.advisor王勝德zh_TW
dc.contributor.advisorSheng-De Wangen
dc.contributor.author黃允誠zh_TW
dc.contributor.authorYun-Cheng Huangen
dc.date.accessioned2024-08-15T16:23:55Z-
dc.date.available2024-08-16-
dc.date.copyright2024-08-15-
dc.date.issued2024-
dc.date.submitted2024-08-07-
dc.identifier.citation[1] #Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh and Dave Bacon, "Federated Learning: Strategies for Improving Communication Efficiency," 2016, https://doi.org/10.48550/arXiv.1610.05492.
[2] #L. Yuan, Z. Wang, L. Sun, P. S. Yu and C. G. Brinton, "Decentralized Federated Learning: A Survey and Perspective," in IEEE Internet of Things Journal (Early Access), 2024, https://doi.org/10.1109/JIOT.2024.3407584.
[3] #Yansong Gao, Bao Gia Doan, Zhi Zhang et al., "Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review," 2022, https://doi.org/10.48550/arXiv.2007.10760.
[4] #Y. Li, S. Zhang, W. Wang and H. Song, "Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey," in IEEE Open Journal of the Computer Society, vol. 4, pp. 134-146, 2023, https://doi.org/10.1109/OJCS.2023.3267221.
[5] #E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin and V. Shmatikov, "How to backdoor federated learning," in International Conference on Artificial Intelligence and Statistics, pp. 2938-2948, 2020. [Online]. Available: https://github.com/ebagdasa/backdoor_federated_learning
[6] #J. Kang, Z. Xiong, D. Niyato, S. Xie and J. Zhang, "Incentive Mechanism for Reliable Federated Learning: A Joint Optimization Approach to Combining Reputation and Contract Theory," in IEEE Internet of Things Journal, vol. 6, no. 6, pp. 10700-10714, Dec. 2019, https://doi.org/10.1109/JIOT.2019.2940820.
[7] #I. D. Mienye and Y. Sun, "A Survey of Ensemble Learning: Concepts, Algorithms, Applications, and Prospects," in IEEE Access, vol. 10, pp. 99129-99149, 2022, https://doi.org/10.1109/ACCESS.2022.3207287.
[8] #M. Arya and H. S. G, "Ensemble Federated Learning for Classifying IoMT Data Streams," in IEEE 7th International conference for Convergence in Technology, pp. 1-5, 2022, http://doi.org/10.1109/I2CT54291.2022.9824145.
[9] #N. Shi, F. Lai, R. A. Kontar and M. Chowdhury, "Fed-ensemble: Ensemble Models in Federated Learning for Improved Generalization and Uncertainty Quantification," in IEEE Transactions on Automation Science and Engineering (Early Access), 2023, https://doi.org/10.1109/TASE.2023.3269639.
[10] #Alhassan Mabrouk, Rebeca P. Díaz Redondo, Mohamed Abd Elaziz and Mohammed Kayed, "Ensemble Federated Learning: An approach for collaborative pneumonia diagnosis," in Applied Soft Computing, vol. 144, p. 110500, 2023, ISSN 1568-4946, https://doi.org/10.1016/j.asoc.2023.110500.
[11] #F.E. Casado, D. Lema, R. Iglesias et al., "Ensemble and continual federated learning for classification tasks," in Machine Learning, vol. 112, pp. 3413–3453, 2023, https://doi.org/10.1007/s10994-023-06330-z.
[12] #David H. Wolpert, "Stacked generalization," in Neural Networks, vol 5, no. 2, pp. 241-259, 1992, ISSN 0893-6080, https://doi.org/10.1016/S0893-6080(05)80023-1.
[13] #Goertzel, Ben., "Artificial General Intelligence: Concept, State of the Art, and Future Prospects," in Journal of Artificial General Intelligence, vol. 5, no. 1, pp. 1-48, 2014, https://doi.org/10.2478/jagi-2014-0001.
[14] #Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton and Jeff Dean, "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer," in International Conference on Learning Representations, 2017. [Online]. Available: https://doi.org/10.48550/arXiv.1701.06538
[15] #I. Susnea, E. Pecheanu, A. Cocu and S.-M. Susnea, "Superintelligence Revisited in Times of ChatGPT," in Broad Research in Artificial Intelligence and Neuroscience, vol. 15, no. 2, pp. 344-361, Jul. 2024, https://doi.org/10.18662/brain/15.2/579.
[16] #Kontio, E., Salmi, J. (2024). Democracy and Artificial General Intelligence. In: Vesa Salminen (eds) Human Factors, Business Management and Society. AHFE (2024) International Conference. AHFE Open Access, vol 135. AHFE International, USA. http://doi.org/10.54941/ahfe1004960
[17] #Sachin Sharma et al., "The End of History? Envisioning the Economy at Technological Singularity," in Gospodarka Narodowa. The Polish Journal of Economics, vol. 318, no. 2, pp. 53-63, 2024, https://doi.org/10.33119/GN/184316.
[18] #Seth D. Baum, "Assessing the risk of takeover catastrophe from large language models," in Risk Analysis (Early View), 2024, https://doi.org/10.1111/risa.14353
[19] #Alex Krizhevsky, Vinod Nair and Geoffrey Hinton, "The CIFAR-10 dataset," 2014. [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html
[20] #Alex Krizhevsky, "Learning Multiple Layers of Features from Tiny Images," 2009.
[21] #Yann LeCun, "The MNIST Database of Handwritten Digits," 1998. [Online]. Available: https://yann.lecun.com/exdb/mnist/
[22] #Y. Chen, X. Qin, J. Wang, C. Yu and W. Gao, "FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare," in IEEE Intelligent Systems, vol. 35, no. 4, pp. 83-93, 1 July-Aug. 2020, https://doi.org/10.1109/MIS.2020.2988604.
[23] #Resul Das, Muhammad Muhammad Inuwa, "A review on fog computing: Issues, characteristics, challenges, and potential applications," in Telematics and Informatics Reports, vol. 10, p. 100049, 2023, ISSN 2772-5030, https://doi.org/10.1016/j.teler.2023.100049.
[24] #Huy Phan, "PyTorch models trained on CIFAR-10 dataset," 2021. [Online]. Available: https://github.com/huyvnphan/PyTorch_CIFAR10
[25] #D. Silver, A. Huang, C. Maddison et al., "Mastering the game of Go with deep neural networks and tree search," in Nature vol. 529, pp. 484–489, 2016, https://doi.org/10.1038/nature16961
[26] #Open-source of experiment scripts for this research: [Online]. Available: https://github.com/X-Engineer-001/ABCDEFL
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94241-
dc.description.abstractThis research first examines the limitations of the federated learning framework, the contradictions and limitations between conventional decentralized federated learning and decentralized architectures, and the problems and predicaments these cause. It then reviews and analyzes several studies that attempt to introduce ensemble learning techniques into federated learning, identifying how the federated learning framework constrains them and limits their innovation. Finally, a more flexible, fault-tolerant, and scalable architecture is proposed: Architecture Based on Cross-validation and Decentralized Ensemble Federated Learning (ABCDEFL). For ensemble learning in classification tasks, the thesis also proposes a finer-grained ensemble output method: Classwise Weighted Majority Voting (CWMV).
A series of experiments then verifies the conjectured possibilities and advantages of ABCDEFL in various respects, as well as the effect of the CWMV concept. All conjectures in question are confirmed; the possibilities and advantages of ABCDEFL are demonstrated and discussed, and CWMV is shown to outperform conventional ensemble methods in all tested situations. The experiment code is open-sourced, but at the time of publication it lacks readability and requires future revision.
Finally, the thesis proposes several directions for future research and important new concepts, and in the last section extends the ideas, reflections, and discussions arising from this research to views on artificial general intelligence and artificial consciousness: foreseeable obstacles, possible paths, potential threats, and countermeasures.
zh_TW
dc.description.abstractThe research first investigates the limitations of the Federated Learning (FL) framework, the limitations of conventional Decentralized Federated Learning (DFL), the apparent contradictions between the common design of DFL and the strengths of decentralized architectures, and the predicaments all of the above cause. We then review recent studies that attempt to adopt Ensemble Learning (EL) within the narrow FL framework, to identify the key restrictions that constrained their innovation. Finally, an architecture that is more flexible, robust, and scalable is proposed: Architecture Based on Cross-validation and Decentralized Ensemble Federated Learning (ABCDEFL). In the domain of classification problems, a finer-grained output aggregation method for EL is also proposed: Classwise Weighted Majority Voting (CWMV).
A series of experiments is designed to verify the expected strengths of the proposed methods, and the results confirm them. The thesis thus demonstrates and discusses the possibilities and advantages brought by ABCDEFL, and validates the superiority of CWMV. The experiment scripts are open-sourced, but they lack readability at the time of publication and should be revised in the future.
In the final chapter, several important innovative concepts and directions for future research are proposed. In the last section, inspired by this research, we also discuss the expected obstacles on the road toward Artificial General Intelligence or even Artificial Consciousness, a possible path through them, the potential threats awaiting at the destination, and proposed countermeasures.
en
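The abstract above introduces Classwise Weighted Majority Voting (CWMV) as a finer-grained ensemble output method. As a minimal, hypothetical sketch (the thesis's exact formulation is not given here; the per-model, per-class weights are assumed to come from something like classwise precision measured on held-out data), a vote for a class can be counted with the voter's own weight for that class:

```python
import numpy as np

def cwmv(votes, weights):
    """Classwise Weighted Majority Voting, illustrative sketch.

    votes:   votes[m] is the class index predicted by model m.
    weights: weights[m][c] is model m's weight for class c (assumed
             here to be, e.g., classwise precision on validation data;
             this weighting source is an assumption, not the thesis's
             exact method).

    A vote for class c counts with the voter's own class-c weight;
    the class with the largest weighted total wins.
    """
    weights = np.asarray(weights, dtype=float)
    totals = np.zeros(weights.shape[1])
    for m, c in enumerate(votes):
        totals[c] += weights[m, c]
    return int(np.argmax(totals))
```

With votes `[0, 1, 1]` and weights `[[0.9, 0.1], [0.5, 0.3], [0.5, 0.3]]`, plain majority voting would pick class 1, but this scheme picks class 0 (0.9 > 0.3 + 0.3): a single voter that is highly reliable on a class can outweigh an unreliable majority.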
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-15T16:23:55Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2024-08-15T16:23:55Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsThesis Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract (English) iv
Table of Contents v
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Network Architecture Workflow and Concrete Algorithm Examples 7
2.1 Model Output Normalization 8
2.1.1 Using Softmax 8
2.1.2 Using Standardization 8
2.2 Peer Evaluation across the Network 9
2.2.1 Classwise Precision and Recall Scoring 9
2.2.2 Classwise Precision-Tendency and Recall-Tendency Scoring 10
2.2.3 Combined Scoring from Multiple Algorithms 10
2.3 Score Aggregation and Normalization 11
2.3.1 Arithmetic Mean Followed by Classwise Proportional Scaling 11
2.3.2 Arithmetic Mean Followed by Classwise Softmax 11
2.4 Model Output Adjustment 12
2.4.1 Multiplying by Scores after Softmax 12
2.4.2 Multiplying Positive and Negative Parts by Scores after Standardization 13
2.5 Model Output Aggregation 13
2.5.1 Direct Multiplication after Softmax without Adjustment 14
2.5.2 Using the Model Output Adjustment Proposed in 2.4 14
2.5.3 Aggregation with a Meta-Learner 15
Chapter 3 Experiments 17
3.1 Basic Experimental Setup 17
3.2 Notes on Presenting the Experimental Results 17
3.3 Experimental Baselines 18
3.4 Basic Architecture Tests 21
3.4.1 Training for 20 Epochs 21
3.4.2 Training for 80 Epochs 21
3.5 Identical Models on All Nodes 23
3.5.1 Data Evenly Distributed across Nodes 23
3.5.2 Different Data Volumes per Node 24
3.6 Large Classwise Performance Gaps among Node Models 26
3.6.1 Simulated by Uneven Classwise Data Allocation across Nodes 26
3.6.2 Simulated by Randomly Mislabeling Part of the Training Labels to a Designated Class 30
3.6.3 Simulated by Deterministically Mislabeling All Training Labels to a Designated Class 34
3.7 Only Partial Overlap among Nodes' Classifiable Target Classes 36
3.7.1 Identical Models per Node, with Output Layers Restricted to Each Node's Classifiable Targets 36
3.7.2 Handling the Situation of 3.6.3 with the Method of 3.7.1 37
3.8 No Overlap among Nodes' Classifiable Target Classes 39
3.9 Fog Nodes Each Managing CFL, Then Jointly Running the Proposed Architecture 40
3.10 Backdoor Attack Resistance Tests 43
3.10.1 Attacks Focused on Countering Correct Predictions 44
3.10.2 Attacks Focused on Boosting a Target Class's Output 44
3.10.3 Untargeted Sabotage 44
3.11 Applying the Proposed Aggregation Methods to Pure Ensemble Learning 46
Chapter 4 Discussion 49
4.1 Practical Application Issues 49
4.2 Strengths and Weaknesses of Each Method at Each Stage 49
4.3 Possible Objections to the Experimental Design 55
4.4 Combining the Proposed Aggregation Methods with the Model Selection Method of [11] 58
4.5 Advantages and Limitations of Using a Meta-Learner 59
4.6 Suitability of Connecting the Proposed Architecture with CFL over Fog Computing Networks 61
4.7 The Architecture's Resistance to Backdoor Attacks 62
4.8 Necessity and Value of Handling the Situations in Experiments 3.6, 3.7, and 3.8 62
Chapter 5 Conclusions and Future Research Directions 64
5.1 Unifying Available Training Time Instead of Unifying Epochs 64
5.2 Resisting Concept Drift under the Proposed Architecture 64
5.3 FOM/DFOM Attacks and Defenses on Federated Learning Networks 64
5.4 Applying the Proposed Aggregation Methods with State-of-the-Art Models 66
5.5 Possible Generalizations of the Architecture 66
5.6 The Model of Models Concept Derived from Stacking and MoE 66
5.7 AGI-Related Views Derived from This Research 68
References 71
Appendix 75
-
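The chapter-2 outline names a pipeline of softmax normalization (2.1.1), multiplying model outputs by classwise scores (2.4.1), and combining models by direct multiplication (2.5.1). A rough, hypothetical sketch of that combination, reconstructed from the section titles alone rather than from the thesis text (the function names and inputs are assumptions):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())          # shift by max for numerical stability
    return e / e.sum()

def adjusted_product_ensemble(logits, scores):
    """Hypothetical sketch: softmax each model's logits (cf. 2.1.1),
    multiply by that model's classwise scores (cf. 2.4.1), combine the
    models by elementwise product (cf. 2.5.1), and renormalize so the
    result is a probability vector."""
    adjusted = [softmax(l) * np.asarray(s, dtype=float)
                for l, s in zip(logits, scores)]
    combined = np.prod(adjusted, axis=0)
    return combined / combined.sum()
```

With uniform scores the product aggregation alone decides; raising a model's score for one class tilts the ensemble toward that class, which is the intended effect of the classwise adjustment step.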
dc.language.isozh_TW-
dc.subject分散式聯邦學習zh_TW
dc.subject集成學習zh_TW
dc.subject交叉驗證zh_TW
dc.subject霧運算zh_TW
dc.subject後門攻擊zh_TW
dc.subject強人工智慧zh_TW
dc.subject通用人工智慧zh_TW
dc.subject人工超智能zh_TW
dc.subject人工意識zh_TW
dc.subject聯邦學習zh_TW
dc.subjectSuperintelligent AIen
dc.subjectArtificial General Intelligenceen
dc.subjectAGIen
dc.subjectStrong AIen
dc.subjectBackdoor Attacken
dc.subjectFog Computingen
dc.subjectCross-Validationen
dc.subjectEnsemble Learningen
dc.subjectDecentralized Federated Learningen
dc.subjectFederated Learningen
dc.subjectArtificial Consciousnessen
dc.title基於交叉驗證及分散式集成聯邦學習之架構zh_TW
dc.titleArchitecture Based on Cross-validation and Decentralized Ensemble Federated Learningen
dc.typeThesis-
dc.date.schoolyear112-2-
dc.description.degreeMaster-
dc.contributor.oralexamcommittee于天立;陳永昇zh_TW
dc.contributor.oralexamcommitteeTian-Li Yu;Yeong-Sheng Chenen
dc.subject.keyword聯邦學習,分散式聯邦學習,集成學習,交叉驗證,霧運算,後門攻擊,強人工智慧,通用人工智慧,人工超智能,人工意識zh_TW
dc.subject.keywordFederated Learning,Decentralized Federated Learning,Ensemble Learning,Cross-Validation,Fog Computing,Backdoor Attack,Strong AI,AGI,Artificial General Intelligence,Superintelligent AI,Artificial Consciousnessen
dc.relation.page288-
dc.identifier.doi10.6342/NTU202402182-
dc.rights.noteAuthorized (open access worldwide)-
dc.date.accepted2024-08-07-
dc.contributor.author-collegeCollege of Electrical Engineering and Computer Science-
dc.contributor.author-deptDepartment of Electrical Engineering-
dc.date.embargo-lift2025-09-01-
Appears in Collections: Department of Electrical Engineering

Files in This Item:
File | Size | Format
ntu-112-2.pdf | 12.95 MB | Adobe PDF