Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98588
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 魏志平 (zh_TW)
dc.contributor.advisor: Chih-Ping Wei (en)
dc.contributor.author: 翁子婷 (zh_TW)
dc.contributor.author: Tzu-Ting Weng (en)
dc.date.accessioned: 2025-08-18T00:59:21Z
dc.date.available: 2025-08-18
dc.date.copyright: 2025-08-15
dc.date.issued: 2025
dc.date.submitted: 2025-08-04
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98588
dc.description.abstract (zh_TW, translated): As venture capitalists (VCs) increasingly rely on machine learning to support their investment decisions, whether these algorithms perpetuate the discriminatory biases present in past funding outcomes has become a growing concern. Such biases largely arise because early-stage startups lack sufficient quantifiable data, so investors often fall back on subjective judgments of the founding team, which can invite demographic stereotyping and discrimination. To keep these biases from being further reinforced by algorithms, ensuring fairness in decision systems is key to avoiding resource misallocation in the entrepreneurial environment and to securing equal funding opportunities.
In this study, for the task of startup early success prediction, we consider three common sources of potential discrimination (geographic region, founder gender, and race) and implement and compare three fairness-aware methods: feature-blind, regularization-based, and gradient reversal. All three methods can handle multiple sensitive attributes of mixed data types. Our experimental results show that, although improving fairness slightly reduces predictive performance on the target task, both the regularization and gradient reversal methods effectively improve model fairness.
Beyond comparing model performance, this study also identifies the subgroups most affected by model bias; for example, startups whose founding teams are more than 75% female are the least favored by the baseline model. We further analyze which sensitive attribute contributes the most model bias. These findings offer practical insights for startups and VCs. For startups, adopting fairness-aware models improves their chances of obtaining funding on equal terms, unaffected by existing discrimination, fostering a more inclusive entrepreneurial environment. For investors, such models help surface investment opportunities that bias might otherwise hide and support more balanced portfolios.
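The fairness evaluation metrics referenced in the abstract are detailed in Section 4.3.1 of the thesis itself, which this record does not reproduce. As a hedged illustration only, two standard group-fairness metrics from the literature (statistical parity difference, and the equal-opportunity difference of Hardt et al., 2016) can be sketched in plain Python; the function names and the `_positive_rate` helper are choices of this sketch, not identifiers from the thesis:

```python
def _positive_rate(y_pred, mask):
    # Fraction of positive (1) predictions among the entries selected by mask.
    selected = [p for p, m in zip(y_pred, mask) if m]
    return sum(selected) / len(selected) if selected else 0.0

def statistical_parity_difference(y_pred, group):
    # P(yhat = 1 | group = 1) - P(yhat = 1 | group = 0);
    # 0 means both groups receive positive predictions at the same rate.
    return (_positive_rate(y_pred, [g == 1 for g in group])
            - _positive_rate(y_pred, [g == 0 for g in group]))

def equal_opportunity_difference(y_true, y_pred, group):
    # True-positive-rate gap between groups (Hardt et al., 2016):
    # P(yhat = 1 | y = 1, group = 1) - P(yhat = 1 | y = 1, group = 0).
    return (_positive_rate(y_pred, [g == 1 and t == 1 for g, t in zip(group, y_true)])
            - _positive_rate(y_pred, [g == 0 and t == 1 for g, t in zip(group, y_true)]))
```

For example, `statistical_parity_difference([1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])` evaluates to 2/3 - 1/3 ≈ 0.333, indicating the first group receives positive predictions at twice the rate of the second.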
dc.description.abstract (en): As venture capital (VC) firms increasingly adopt machine learning (ML) tools to support investment decisions, concerns arise that these tools may perpetuate historical biases embedded in past funding outcomes. These biases often stem from the limited availability of quantifiable data on early-stage startups: investment decisions depend heavily on subjective assessments of founding teams, which introduces risks of demographic stereotyping and discrimination. To prevent the reinforcement of such biases, ensuring fairness in ML-based decision systems is critical for mitigating systematic resource misallocation and promoting equitable access to capital.
This study investigates fairness-aware startup early success prediction by examining three commonly cited sources of potential bias: geographic region, founder gender, and race. We implement and compare three fairness-aware approaches (feature-blind, regularization-based, and gradient reversal), each capable of handling multiple sensitive attributes of mixed data types. Our empirical results demonstrate that, while introducing a modest trade-off in predictive performance, both the regularization and gradient reversal methods effectively enhance fairness.
Beyond performance evaluation, this study identifies the subgroups most affected by model biases, such as startups with over 75% female founders, and highlights which sensitive attribute contributes most to the observed disparities. The findings offer actionable insights for both startups and VC practitioners. For startups, the adoption of fairness-aware methods can promote more equitable access to funding opportunities and foster a more inclusive entrepreneurial landscape. For investors, these methods may help uncover overlooked ventures and support more balanced portfolio construction.
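The gradient reversal approach named in the abstract follows the general mechanism of Ganin et al. (2016): an adversary is trained to recover the sensitive attribute from a shared representation, while the encoder receives the adversary's gradient with its sign flipped. The minimal sketch below shows only that mechanism; the scalar setup, variable names, and squared-error adversary loss are illustrative assumptions, not the architecture the thesis actually uses (Section 3.3.3):

```python
def grl_backward(upstream_grad, lam=1.0):
    # Gradient reversal layer: the forward pass is the identity, but the
    # gradient flowing back to the encoder is negated (and scaled by lam),
    # so the encoder is trained to *worsen* the adversary's prediction of
    # the sensitive attribute.
    return -lam * upstream_grad

# Toy scalar walk-through: encoder output h = w * x feeds an adversary
# with loss L_adv = (h - s)^2, where s is the sensitive attribute.
w, x, s, lam = 0.5, 2.0, 0.0, 1.0
h = w * x                               # forward pass: the GRL acts as identity
grad_h = 2.0 * (h - s)                  # dL_adv/dh on the adversary side
grad_w = grl_backward(grad_h, lam) * x  # reversed gradient reaching the encoder: -4.0
# A plain gradient step on w using grad_w moves h *away* from s, stripping
# information about the sensitive attribute out of the representation.
```

Raising `lam` strengthens the adversarial pressure on the encoder at the cost of the main prediction task, which is one way the fairness-performance trade-off reported above can arise.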
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-18T00:59:21Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2025-08-18T00:59:21Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
致謝 (Acknowledgements) i
摘要 (Chinese Abstract) ii
Abstract iii
Table of Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Research Motivation 4
1.3 Research Objectives 5
Chapter 2 Literature Review 7
2.1 Predictive Features for Startup Early Success Prediction 7
2.2 Existing Studies on Mitigating Unfairness in Machine Learning 10
2.2.1 Pre-Processing Methods 10
2.2.2 In-Processing Methods 11
2.2.3 Post-Processing Methods 14
2.3 Summary and Limitations of Existing Literature 15
Chapter 3 Methodology 20
3.1 Definition of Startup Early Success 20
3.2 Predictive Features Used in Our Research 21
3.3 Fairness-Aware Modeling Approaches 25
3.3.1 Blind Method 25
3.3.2 Regularization Method 26
3.3.3 Gradient Reversal Method 27
Chapter 4 Experiments 33
4.1 Data Collection 33
4.2 Baseline and Model Settings 34
4.3 Evaluation Design 36
4.3.1 Fairness Evaluation Metrics 36
4.3.2 Evaluation Procedure and Performance Metrics 40
4.4 Evaluation Results 42
4.4.1 Fairness-aware Methods 42
4.4.2 Group-wise Disparities across Sensitive Attributes 45
4.4.3 Attribution of Prediction Unfairness to Sensitive Attributes 49
4.4.4 Impact of Fairness Interventions on a Single Sensitive Attribute 50
Chapter 5 Conclusion 53
5.1 Conclusion 53
5.2 Future Research Directions 55
References 57
Appendix 64
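Section 3.3.2 in the table of contents above covers a regularization method. As a generic, hedged sketch of how fairness regularization typically works (the MSE task term, the covariance penalty, and all names below are illustrative assumptions rather than the thesis's formulation; covariance-based fairness constraints appear in, e.g., Zafar et al., 2017), the task loss is augmented with a penalty on the dependence between predictions and the sensitive attribute:

```python
def covariance(a, b):
    # Sample covariance (dividing by n) between two equal-length sequences.
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    return sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n

def fairness_regularized_loss(y_true, y_score, sensitive, lam):
    # Task term: mean squared error, standing in for the actual task loss.
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_score)) / len(y_true)
    # Fairness term: penalize the (squared) covariance between the model's
    # scores and the sensitive attribute; near-zero covariance means the
    # scores carry little linear information about group membership.
    return mse + lam * covariance(y_score, sensitive) ** 2
```

Increasing `lam` trades task accuracy for weaker dependence between scores and the sensitive attribute, mirroring the modest fairness-performance trade-off the abstract reports.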
dc.language.iso: en
dc.subject: 公平性機器學習 (zh_TW)
dc.subject: 新創公司成功預測 (zh_TW)
dc.subject: 新創公司分析 (zh_TW)
dc.subject: 演算法偏見 (zh_TW)
dc.subject: 演算法公平 (zh_TW)
dc.subject: 預測建模 (zh_TW)
dc.subject: 表徵學習 (zh_TW)
dc.subject: 決策支援系統 (zh_TW)
dc.subject: Decision support systems (en)
dc.subject: Fairness-aware machine learning (en)
dc.subject: Representation learning (en)
dc.subject: Startup success prediction (en)
dc.subject: Startup analytics (en)
dc.subject: Algorithmic bias (en)
dc.subject: Algorithmic fairness (en)
dc.subject: Predictive modeling (en)
dc.title: 新創早期成功預測中偏見消除方法的比較研究 (zh_TW)
dc.title: From Bias to Balance: A Comparative Study of Bias Mitigation Methods in Startup Early Success Prediction (en)
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 向倩儀;楊錦生 (zh_TW)
dc.contributor.oralexamcommittee: Chien-Yi Hsiang;Chin-Sheng Yang (en)
dc.subject.keyword: 公平性機器學習, 新創公司成功預測, 新創公司分析, 演算法偏見, 演算法公平, 預測建模, 表徵學習, 決策支援系統 (zh_TW)
dc.subject.keyword: Fairness-aware machine learning, Startup success prediction, Startup analytics, Algorithmic bias, Algorithmic fairness, Predictive modeling, Representation learning, Decision support systems (en)
dc.relation.page: 78
dc.identifier.doi: 10.6342/NTU202503744
dc.rights.note: 同意授權 (authorized, open access worldwide)
dc.date.accepted: 2025-08-08
dc.contributor.author-college: 管理學院 (College of Management)
dc.contributor.author-dept: 資訊管理學系 (Department of Information Management)
dc.date.embargo-lift: 2025-08-18
Appears in collections: 資訊管理學系 (Department of Information Management)

Files in this item:
ntu-113-2.pdf (1.15 MB, Adobe PDF)