  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Graduate Institute of Communication Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88568
Full metadata record (DC field [language]: value)
dc.contributor.advisor [zh_TW]: 王鈺強
dc.contributor.advisor [en]: Yu-Chiang Frank Wang
dc.contributor.author [zh_TW]: 許元譯
dc.contributor.author [en]: Yuan-Yi Hsu
dc.date.accessioned: 2023-08-15T16:52:25Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-15
dc.date.issued: 2023
dc.date.submitted: 2023-07-26
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88568
dc.description.abstract [zh_TW]: 在訓練於具有偏見數據集上的學習模型往往會觀察到類別和不良特徵之間的相關性,導致模型性能下降。大多數現有的去偏差化學習模型是為集中式機器學習而設計的,無法直接應用於保護隱私的分散式設置,如在不同客戶端收集數據的聯邦學習。為了應對具有挑戰性的去偏差化聯邦學習任務,我們提出了一種新穎的聯邦學習框架,稱為偏差消除資料增強學習(FedBEAL),該框架學習使用偏差消除資料增強器(BEA)在每個客戶端生成特定於客戶端的偏差衝突樣本。由於事先不知道偏差類型或屬性,我們提出了一種獨特的學習策略,以共同訓練BEA和提出的聯邦學習框架。我們對具有各種偏差類型的數據集進行了廣泛的圖像分類實驗,以證實FedBEAL的有效性和可應用性,在去偏差聯邦學習的性能上表現優於最先進的去偏差化方法和聯邦學習方法。
dc.description.abstract [en]: Learning models trained on biased datasets tend to capture correlations between class categories and undesirable attributes, resulting in degraded performance. Most existing debiased learning models are designed for centralized machine learning and cannot be directly applied to privacy-preserving distributed settings such as federated learning (FL), where data are collected at distinct clients. To tackle the challenging task of debiased federated learning, we present a novel FL framework, Bias-Eliminating Augmentation Learning (FedBEAL), which learns to deploy Bias-Eliminating Augmenters (BEA) to produce client-specific bias-conflicting samples at each client. Since the bias types or attributes are not known in advance, a unique learning strategy is presented to jointly train the BEA with the proposed FL framework. Extensive image classification experiments on datasets with various bias types confirm the effectiveness and applicability of FedBEAL, which performs favorably against state-of-the-art debiasing and FL methods for debiased FL.
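The abstract's idea — each client augments its local data with bias-conflicting samples before local training, and the server aggregates the client models — can be illustrated with a minimal sketch. This is not the thesis's method: the augmenter below (`bias_eliminating_augment`) is a hypothetical stand-in that simply mixes features across randomly paired samples while keeping the original labels, whereas the actual BEA is a learned module trained jointly with the FL framework; `local_step`, `fedavg_round`, and the logistic-regression model are likewise illustrative assumptions.

```python
# Hedged sketch of one debiased federated-learning round, assuming a simple
# logistic-regression model and FedAvg-style weight averaging. All function
# names are hypothetical illustrations, not the thesis's implementation.
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of logistic regression on a client's local batch."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    grad = X.T @ (p - y) / len(y)        # mean log-loss gradient
    return w - lr * grad

def bias_eliminating_augment(X, y, rng):
    """Stand-in augmenter: mix each sample's features with those of a
    randomly paired sample while keeping the original label, crudely
    weakening any spurious feature/label correlation in the client data."""
    idx = rng.permutation(len(y))
    X_mix = 0.5 * X + 0.5 * X[idx]
    return np.vstack([X, X_mix]), np.concatenate([y, y])

def fedavg_round(w, clients, rng):
    """One communication round: each client trains locally on augmented
    data, then the server averages the resulting weights (FedAvg)."""
    updates = []
    for X, y in clients:
        Xa, ya = bias_eliminating_augment(X, y, rng)
        w_local = w.copy()
        for _ in range(5):               # a few local epochs
            w_local = local_step(w_local, Xa, ya)
        updates.append(w_local)
    return np.mean(updates, axis=0)
```

In the thesis's setting the mixing coefficient and the pairing would be produced by the learned BEA rather than fixed, since the bias attribute is unknown in advance; the sketch only shows where such an augmenter sits in the federated round.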
dc.description.provenance [en]: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-15T16:52:25Z. No. of bitstreams: 0
dc.description.provenance [en]: Made available in DSpace on 2023-08-15T16:52:25Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Abstract i
Contents ii
List of Figures iv
List of Tables vi
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 Debiasing in Centralized Machine Learning 4
2.2 Federated Learning with Data Heterogeneity 5
Chapter 3 Proposed Method 7
3.1 Problem Definition and Method Overview 7
3.2 Bias-Eliminating Augmenter 8
3.2.1 Design and architecture 8
3.2.2 Learning of BEA 9
3.3 Training of FedBEAL 12
Chapter 4 Experiments 15
4.1 Datasets and Implementation Details 15
4.2 Quantitative Evaluation 16
4.2.1 Comparisons to debiasing and FL methods 16
4.2.2 Comparisons to MSDA methods 17
4.2.3 Debiasing server and client models 17
4.3 Qualitative Evaluation 19
Chapter 5 Conclusion 26
References 27
dc.language.iso: en
dc.subject [zh_TW]: 聯邦學習
dc.subject [en]: Federated Learning
dc.title [zh_TW]: 偏差消除之資料增強學習於去偏差化聯邦學習
dc.title [en]: Bias-Eliminating Augmentation Learning for Debiased Federated Learning
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee [zh_TW]: 孫紹華;陳祝嵩
dc.contributor.oralexamcommittee [en]: Shao-Hua Sun;Chu-Song Chen
dc.subject.keyword [zh_TW]: 聯邦學習
dc.subject.keyword [en]: Federated Learning
dc.relation.page: 34
dc.identifier.doi: 10.6342/NTU202301252
dc.rights.note: 同意授權(全球公開) (Authorized; open access worldwide)
dc.date.accepted: 2023-07-28
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering)
Appears in Collections: Graduate Institute of Communication Engineering (電信工程學研究所)

Files in this item:
ntu-111-2.pdf | 4.3 MB | Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.

© NTU Library All Rights Reserved