Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86507

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳尚澤(Shang-Tse Chen) | |
| dc.contributor.author | Pei-Chi Wang | en |
| dc.contributor.author | 王珮綺 | zh_TW |
| dc.date.accessioned | 2023-03-19T23:59:54Z | - |
| dc.date.copyright | 2022-08-19 | |
| dc.date.issued | 2022 | |
| dc.date.submitted | 2022-08-15 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86507 | - |
| dc.description.abstract | 近年來,具有公平意識的機器學習演算法獲得廣泛的討論,但多數研究侷限在特定的設定。前人提出的方法只能應用在特定的模型架構,亦或是限於二元編碼的敏感屬性。本文研究二元分類問題之公平性,並提出一個後處理演算法。該演算法承接後處理適用所有模型架構的優勢,並應用於多類別的敏感屬性。我們考慮一個更廣泛的公平性定義,其涵蓋數個目前廣泛使用的公平性指標。利用未標記資料提供的資訊,本文提出的演算法調整各組的閾值藉以達到特定的公平性定義。我們設計一個代理函數將此問題的最佳化從網格搜尋取代成基於梯度的迭代演算法,以避免敏感屬性增加或公平性指標的變化對於網格搜尋帶來的影響。本文提出的方法享有嚴格的理論保證,並且在數值實驗的準確度和公平性之間取得更好的平衡。 | zh_TW |
| dc.description.abstract | Fairness-aware machine learning algorithms have been widely discussed in recent years, but most prior work operates under limited settings: a method is either tied to a particular type of model or restricted to a binary sensitive attribute. We study the fair binary classification problem and propose a post-processing method that simultaneously applies to any classification model, to data with any number of groups induced by a sensitive attribute, and to a generic definition of fairness that covers several widely used fairness metrics. Using the information in unlabeled data, the proposed method estimates group-dependent thresholds that satisfy the given fairness notion. We show that a surrogate function allows the threshold-finding problem to be solved by gradient-based optimization instead of grid search, avoiding the cost that grid search incurs as the number of sensitive groups grows or the fairness metric changes. Our approach enjoys rigorous theoretical guarantees and often achieves a better trade-off between accuracy and fairness in numerical experiments. | en |
| dc.description.provenance | Made available in DSpace on 2023-03-19T23:59:54Z (GMT). No. of bitstreams: 1 U0001-1208202213404100.pdf: 1067799 bytes, checksum: a9638d6e5752a6e33888642a80101baa (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i Acknowledgements ii 摘要 iii Abstract iv Contents v List of Figures vii List of Tables viii 1 Introduction 1 2 Related Work 4 2.1 Post-processing for Fairness 4 2.2 Unlabeled Data in Algorithmic Fairness 5 3 Setup and Notation 6 4 Assessing Fairness by Unlabeled Set 9 4.1 Consistency of Estimated Violation of Fairness 10 5 Proposed Procedure 13 5.1 The Empirical Fair Classifier 13 5.2 The Surrogate Function 14 5.3 Optimization 16 5.4 The Proposed Method with a Rejection Option 17 5.5 Without Sensitive Attribute at Inference Time 18 6 Experimental Results 20 6.1 Experimental Setup 20 6.2 Comparison with Existing Fairness-Aware Methods 23 6.3 The Impact of the Size of the Dataset 23 6.4 Information Lacking 25 7 Conclusion 29 References 30 A Proof of the Asymptotic Property 35 A.1 Proof of Theorem 4.2 35 A.2 Proof of Lemma 4.5 37 B Detailed Numbers of the Experimental Results 40 | |
| dc.language.iso | en | |
| dc.subject | 條件公平性 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 二元分類問題 | zh_TW |
| dc.subject | 公平性 | zh_TW |
| dc.subject | 未標記資料 | zh_TW |
| dc.subject | 後處理 | zh_TW |
| dc.subject | fairness | en |
| dc.subject | unlabeled data | en |
| dc.subject | post-processing | en |
| dc.subject | binary classification | en |
| dc.subject | conditional fairness | en |
| dc.subject | machine learning | en |
| dc.title | 利用未標記資料改善分類公平性之後處理演算法 | zh_TW |
| dc.title | Improving Fair Classification by Leveraging Unlabeled Data in Post-processing | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 110-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 李政德(Cheng-Te Li), 江介宏(Jie-Hong Roland Jiang) | |
| dc.subject.keyword | 機器學習,二元分類問題,公平性,條件公平性,未標記資料,後處理 | zh_TW |
| dc.subject.keyword | machine learning,binary classification,fairness,conditional fairness,unlabeled data,post-processing | en |
| dc.relation.page | 48 | |
| dc.identifier.doi | 10.6342/NTU202202340 | |
| dc.rights.note | 同意授權(全球公開) | |
| dc.date.accepted | 2022-08-16 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
| dc.date.embargo-lift | 2023-09-01 | - |
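The English abstract above describes post-processing a trained classifier with group-dependent thresholds estimated from unlabeled data. As a rough, hypothetical sketch of that general idea — not the thesis's actual procedure, which optimizes a surrogate objective with gradient-based methods and covers a broader family of fairness notions — the demographic-parity special case can be approximated with group-wise score quantiles. All function names below are illustrative:

```python
import numpy as np

def group_thresholds_for_parity(scores, groups, target_rate):
    """Estimate a per-group decision threshold from unlabeled data.

    Uses only base-model scores and group membership (no labels):
    for each group, the (1 - target_rate) quantile of its scores is
    the threshold above which a target_rate fraction of that group
    falls, so every group is selected at the same rate
    (a demographic-parity-style post-processing step).
    """
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def predict_fair(scores, groups, thresholds):
    """Apply the group-dependent thresholds to new examples."""
    t = np.array([thresholds[g] for g in groups])
    return (scores >= t).astype(int)

# Illustrative usage with synthetic data and a 3-valued sensitive attribute.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)          # base-model scores in [0, 1]
groups = rng.integers(0, 3, size=1000)   # multi-class sensitive attribute
th = group_thresholds_for_parity(scores, groups, target_rate=0.3)
yhat = predict_fair(scores, groups, th)  # each group's positive rate ≈ 0.3
```

The quantile step above is a non-differentiable grid-search stand-in; the thesis instead replaces the indicator in the fairness constraint with a smooth surrogate so the thresholds can be found by gradient-based optimization.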
Appears in Collections: 資訊工程學系
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-1208202213404100.pdf | 1.04 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
