Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98588

| Title: | From Bias to Balance: A Comparative Study of Bias Mitigation Methods in Startup Early Success Prediction |
|---|---|
| Author: | 翁子婷 Tzu-Ting Weng |
| Advisor: | 魏志平 Chih-Ping Wei |
| Keywords: | Fairness-aware machine learning, Startup success prediction, Startup analytics, Algorithmic bias, Algorithmic fairness, Predictive modeling, Representation learning, Decision support systems |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | As venture capital (VC) firms increasingly adopt machine learning (ML) tools to support investment decisions, concerns arise regarding the potential perpetuation of historical biases embedded in past funding outcomes. These biases often stem from the limited availability of quantifiable data on early-stage startups. As a result, investment decisions depend heavily on subjective assessments of founding teams, which introduces risks of demographic stereotyping and discrimination. To prevent the reinforcement of such biases, ensuring fairness in ML-based decision systems is critical to mitigating systematic resource misallocation and promoting equitable access to capital. This study investigates fairness-aware startup early success prediction by examining three commonly cited sources of potential bias: geographic region, founder gender, and race. We implement and compare three fairness-aware approaches: feature-blind, regularization-based, and gradient reversal, each capable of handling multiple sensitive attributes of mixed data types. Our empirical results demonstrate that, while introducing a modest trade-off in predictive performance, both the regularization and gradient reversal methods effectively enhance fairness. Beyond performance evaluation, this study identifies the subgroups most affected by model bias, such as startups whose founding teams are more than 75% female (the group least favored by the baseline model), and highlights which sensitive attribute contributes most to the observed disparities. The findings offer actionable insights for both startups and VC practitioners. For startups, the adoption of fairness-aware methods can promote fairer access to funding opportunities, independent of pre-existing discrimination, and foster a more inclusive entrepreneurial landscape. For investors, these methods may help uncover ventures overlooked due to bias and support more balanced portfolio construction. (An illustrative sketch of the gradient-reversal setup is given after the metadata table below.) |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98588 |
| DOI: | 10.6342/NTU202503744 |
| Full-Text License: | Authorized (open access worldwide) |
| Full-Text Release Date: | 2025-08-18 |
| Appears in Collections: | Department of Information Management |
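The page carries only the abstract, so none of the three mitigation methods is spelled out in code. As a rough orientation, the sketch below shows one common way a gradient-reversal setup of the kind the abstract names can be wired up in PyTorch: a shared encoder, a success-prediction head, and one adversary head per sensitive attribute whose gradients are flipped before they reach the encoder. Everything here (the class names, `lambd`, the dimensions, and the binary encoding of the three sensitive attributes) is a hypothetical illustration, not the thesis's actual implementation; in particular, the thesis handles sensitive attributes of mixed data types, which this sketch simplifies to binary attributes.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales the gradient by -lambd on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder;
        # no gradient is needed for the lambd argument itself.
        return -ctx.lambd * grad_output, None

class FairStartupPredictor(nn.Module):
    """Hypothetical sketch: shared encoder + success head + adversary heads."""
    def __init__(self, in_dim, hidden_dim=64, n_sensitive=3, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Main task head: binary early-success prediction.
        self.success_head = nn.Linear(hidden_dim, 1)
        # One adversary head per sensitive attribute
        # (e.g., region, founder gender, race -- binarized for this sketch).
        self.adv_heads = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(n_sensitive)
        )

    def forward(self, x):
        z = self.encoder(x)
        y_hat = self.success_head(z)
        # Reversed gradients push the encoder to strip sensitive information
        # from z, while each adversary still tries to recover its attribute.
        z_rev = GradReverse.apply(z, self.lambd)
        s_hats = [head(z_rev) for head in self.adv_heads]
        return y_hat, s_hats

# One training step on toy data (all tensors here are synthetic placeholders).
model = FairStartupPredictor(in_dim=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(128, 32)                    # hypothetical startup features
y = torch.randint(0, 2, (128, 1)).float()   # early-success labels
s = torch.randint(0, 2, (128, 3)).float()   # three binarized sensitive attrs

y_hat, s_hats = model(x)
loss = bce(y_hat, y) + sum(
    bce(s_hat, s[:, i:i + 1]) for i, s_hat in enumerate(s_hats)
)
opt.zero_grad()
loss.backward()
opt.step()
```

In a setup like this, `lambd` governs how strongly the encoder is penalized for encoding sensitive information, so it acts as the knob behind the fairness-versus-performance trade-off the abstract reports for the gradient reversal method.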
Files in this item:
| File | Size | Format |  |
|---|---|---|---|
| ntu-113-2.pdf | 1.15 MB | Adobe PDF | View/Open |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
