NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95634
Full metadata record (DC field: value [language])
dc.contributor.advisor: 畢南怡 [zh_TW]
dc.contributor.advisor: Nan-Yi Bi [en]
dc.contributor.author: 洪立曄 [zh_TW]
dc.contributor.author: Li-Yeh Hung [en]
dc.date.accessioned: 2024-09-15T16:13:38Z
dc.date.available: 2024-09-16
dc.date.copyright: 2024-09-14
dc.date.issued: 2024
dc.date.submitted: 2024-08-11
dc.identifier.citation:
[1] Z. Akata, D. Balliet, M. De Rijke, F. Dignum, V. Dignum, G. Eiben, A. Fokkens, D. Grossi, K. Hindriks, H. Hoos, et al. A research agenda for hybrid intelligence: augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer, 53(8):18–28, 2020.
[2] O. Alnuaimi, L. Robert, and L. Maruping. Social loafing in brainstorming CMC teams: The role of moral disengagement. In 2009 42nd Hawaii International Conference on System Sciences, pages 1–9. IEEE, 2009.
[3] O. A. Alnuaimi, L. P. Robert, and L. M. Maruping. Team size, dispersion, and social loafing in technology-supported teams: A perspective on the theory of moral disengagement. Journal of Management Information Systems, 27(1):203–230, 2010.
[4] A. Bandura. Moral disengagement in the perpetration of inhumanities. Personality and social psychology review, 3(3):193–209, 1999.
[5] A. Bandura. Selective moral disengagement in the exercise of moral agency. Journal of moral education, 31(2):101–119, 2002.
[6] A. Bandura. Moral disengagement. The encyclopedia of peace psychology, 2011.
[7] A. Bandura, C. Barbaranelli, G. V. Caprara, and C. Pastorelli. Mechanisms of moral disengagement in the exercise of moral agency. Journal of personality and social psychology, 71(2):364, 1996.
[8] Á. A. Cabrera, A. Perer, and J. I. Hong. Improving human-AI collaboration with descriptions of AI behavior. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1):1–21, 2023.
[9] J. Chung and G. S. Monroe. Exploring social desirability bias. Journal of Business Ethics, 44:291–302, 2003.
[10] D. Dellermann, P. Ebel, M. Söllner, and J. M. Leimeister. Hybrid intelligence. Business & Information Systems Engineering, 61(5):637–643, 2019.
[11] A. R. Dennis and J. S. Valacich. Computer brainstorms: More heads are better than one. Journal of applied psychology, 78(4):531, 1993.
[12] J. R. Detert, L. K. Treviño, and V. L. Sweitzer. Moral disengagement in ethical decision making: a study of antecedents and outcomes. Journal of applied psychology, 93(2):374, 2008.
[13] M. Diehl and W. Stroebe. Productivity loss in brainstorming groups: Toward the solution of a riddle. Journal of personality and social psychology, 53(3):497, 1987.
[14] R. B. Gallupe, A. R. Dennis, W. H. Cooper, J. S. Valacich, L. M. Bastianutti, and J. F. Nunamaker Jr. Electronic brainstorming and group size. Academy of Management Journal, 35(2):350–369, 1992.
[15] T. Hagendorff. The ethics of AI ethics: An evaluation of guidelines. Minds and machines, 30(1):99–120, 2020.
[16] B. Huber, S. Shieber, and K. Z. Gajos. Automatically analyzing brainstorming language behavior with Meeter. Proceedings of the ACM on human-computer interaction, 3(CSCW):1–17, 2019.
[17] S. G. Isaksen and J. P. Gaulin. A reexamination of brainstorming research: Implications for research and practice. Gifted Child Quarterly, 49(4):315–329, 2005.
[18] J. A. Jiang, K. Wade, C. Fiesler, and J. R. Brubaker. Supporting serendipity: Opportunities and challenges for human-AI collaboration in qualitative analysis. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1):1–23, 2021.
[19] N. W. Kohn and S. M. Smith. Collaborative fixation: Effects of others’ ideas on brainstorming. Applied Cognitive Psychology, 25(3):359–371, 2011.
[20] J. D. Lee and K. A. See. Trust in automation: Designing for appropriate reliance. Human factors, 46(1):50–80, 2004.
[21] J. S. Lerner and P. E. Tetlock. Accounting for the effects of accountability. Psychological bulletin, 125(2):255, 1999.
[22] Z. Lin. Why and how to embrace AI such as ChatGPT in your academic life. Royal Society Open Science, 10(8):230658, 2023.
[23] P. R. Magnusson, J. Matthing, and P. Kristensson. Managing user involvement in service innovation: Experiments with innovating end users. Journal of Service Research, 6(2):111–124, 2003.
[24] S. R. Martin, J. J. Kish-Gephart, and J. R. Detert. Blind forces: Ethical infrastructures and moral disengagement in organizations. Organizational psychology review, 4(4):295–325, 2014.
[25] L. Memmert and E. Bittner. Human-AI collaboration for brainstorming: Effect of the presence of AI ideas on breadth of exploration. 2024.
[26] L. Memmert and N. Tavanapour. Towards human-AI-collaboration in brainstorming: empirical insights into the perception of working with a generative AI. 2023.
[27] A. A. Nichol, M. C. Halley, C. A. Federico, M. K. Cho, and P. L. Sankar. Not in my AI: Moral engagement and disengagement in health care AI development. In Pacific Symposium on Biocomputing, volume 28, page 496. NIH Public Access, 2023.
[28] J. F. Nunamaker, A. R. Dennis, J. S. Valacich, D. Vogel, and J. F. George. Electronic meeting systems. Communications of the ACM, 34(7):40–61, 1991.
[29] A. F. Osborn. Applied imagination. 1953.
[30] A. F. Osborn. Applied imagination, revised edition. Scribner, New York, NY, 1957.
[31] S. Palan and C. Schitter. Prolific.ac—a subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17:22–27, 2018.
[32] P. B. Paulus and V. R. Brown. Toward more creative and innovative group idea generation: A cognitive-social-motivational perspective of brainstorming. Social and Personality Psychology Compass, 1(1):248–265, 2007.
[33] A. Pinsonneault, H. Barki, R. B. Gallupe, and N. Hoppen. Electronic brainstorming: The illusion of productivity. Information Systems Research, 10(2):110–133, 1999.
[34] I. Rahwan, M. Cebrian, N. Obradovich, J. Bongard, J.-F. Bonnefon, C. Breazeal, J. W. Crandall, N. A. Christakis, I. D. Couzin, M. O. Jackson, et al. Machine behaviour. Nature, 568(7753):477–486, 2019.
[35] K. C. Runions and M. Bak. Online moral disengagement, cyberbullying, and cyberaggression. Cyberpsychology, Behavior, and Social Networking, 18(7):400–405, 2015.
[36] I. H. Sarker. AI-based modeling: techniques, applications and research issues towards automation, intelligent and smart systems. SN Computer Science, 3(2):158, 2022.
[37] I. Seeber, E. Bittner, R. O. Briggs, T. De Vreede, G.-J. De Vreede, A. Elkins, R. Maier, A. B. Merz, S. Oeste-Reiß, N. Randrup, et al. Machines as teammates: A research agenda on AI in team collaboration. Information & management, 57(2):103174, 2020.
[38] P. Siangliulue, K. C. Arnold, K. Z. Gajos, and S. P. Dow. Toward collaborative ideation at scale: Leveraging ideas from others to generate more creative and diverse ideas. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 937–945, 2015.
[39] D. Siemon, L. Eckardt, and S. Robra-Bissantz. Tracking down the negative group creativity effects with the help of an artificial intelligence-like support system. In 2015 48th Hawaii International Conference on System Sciences, pages 236–243. IEEE, 2015.
[40] D. Siemon and F. Wank. Collaboration with AI-based teammates - evaluation of the social loafing effect. In PACIS, page 146, 2021.
[41] D. W. Taylor, P. C. Berry, and C. H. Block. Does group participation when using brainstorming facilitate or inhibit creative thinking? Administrative science quarterly, pages 23–47, 1958.
[42] A. Vance, P. B. Lowry, and D. Eggett. Increasing accountability through user interface design artifacts. MIS quarterly, 39(2):345–366, 2015.
[43] R. Verganti, L. Vendraminelli, and M. Iansiti. Innovation and design in the age of artificial intelligence. Journal of product innovation management, 37(3):212–227, 2020.
[44] M. Vössing, N. Kühl, M. Lind, and G. Satzger. Designing transparency for effective human-AI collaboration. Information Systems Frontiers, 24(3):877–895, 2022.
[45] T. Wang, J. Chen, Q. Jia, S. Wang, R. Fang, H. Wang, Z. Gao, C. Xie, C. Xu, J. Dai, et al. Weaver: Foundation models for creative writing. arXiv preprint arXiv:2401.17268, 2024.
[46] Y. Zhu, S. M. Ritter, and A. Dijksterhuis. Creativity: Intrapersonal and interpersonal selection of creative ideas. The Journal of Creative Behavior, 54(3):626–635, 2020.
[47] I. Zigurs and B. K. Buckland. A theory of task/technology fit and group support systems effectiveness. MIS quarterly, pages 313–334, 1998.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95634
dc.description.abstract: 本論文探討在人工智慧協助腦力激盪過程中,道德掛鉤如何提升當責意識。隨著生成式人工智慧(GAI)的廣泛應用,本研究根據道德脫鉤理論,探討如何減輕當責意識降低與對AI生成內容被動依賴等風險。透過一系列涵蓋不同道德掛鉤設計的實驗,本研究評估了這些道德掛鉤對參與者貢獻程度與腦力激盪點子創新度的影響。研究發現指出,特定的道德掛鉤策略,尤其是提高貢獻透明度,顯著提高了參與者的當責意識和腦力激盪成果的多樣性。本研究不僅擴展了道德脫鉤的理論框架,也為人機協作腦力激盪的AI介面設計提供了建議。 [zh_TW]
dc.description.abstract: This thesis explores the role of moral engagement in enhancing accountability during human-AI brainstorming. With the proliferation of Generative Artificial Intelligence in problem-solving, this research investigates how moral mechanisms can mitigate risks such as reduced accountability and passive reliance on AI-generated content. Through a series of experimental designs involving two moral engagement interventions, the study assesses the impact on participants' perceived contribution and uniqueness of brainstorming ideas. The findings suggest that specific moral engagement strategies, particularly transparency creation, significantly improve participants' accountability and the diversity of brainstorming outcomes. This research contributes to the understanding of effective human-AI collaboration, offering insights for designing AI systems that foster ethical interaction and enhanced creative participation. [en]
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-09-15T16:13:38Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2024-09-15T16:13:38Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xiii
List of Tables xv
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Objectives 3
Chapter 2 Literature Review 5
2.1 Human-AI collaboration 5
2.2 Brainstorming 7
2.3 Accountability 9
2.4 Theory of Moral Disengagement 10
Chapter 3 Research Methodology 17
3.1 Procedure 17
3.2 Participants 18
3.3 Materials 19
3.3.1 Task 19
3.3.2 Interface 20
3.4 Conditions 21
3.5 Measures 22
3.5.1 Perceived contributions 22
3.5.1.1 Perceived Indispensability of an Individual’s Effort 23
3.5.1.2 The Diffusion of Responsibility 24
3.5.2 Uniqueness of ideas 25
Chapter 4 Results 27
4.1 Descriptive Analysis 27
4.2 Quantitative Analysis 28
4.2.1 Perceived Contributions 28
4.2.2 Uniqueness of ideas 30
4.2.3 Path Analysis 31
4.3 Qualitative Analysis 33
4.3.1 Opinions on the Helpfulness of AI Differ 33
4.3.2 Proportion of Ideas as a Reference of Contribution 34
4.3.3 Reasons to Have Low Contributions 34
Chapter 5 Discussions 37
5.1 Implications 37
5.1.1 Contributions in Human-AI Collaboration 37
5.1.2 Interplay Between Perceived Indispensability, Diffused Responsibility, and Perceived Contributions 39
5.1.3 Uniqueness of Brainstorming ideas 39
5.2 Limitations 41
5.2.1 Online Experiment Setting 41
5.2.2 Simplified Human-AI Interaction 41
5.2.3 Reliance on Self-reported Data 42
5.2.4 Use of Unvalidated Measurement Instruments 43
5.2.5 Absence of Manipulation Check 44
5.3 Future Works 44
5.3.1 Exploring Alternative Moral Engagement Mechanisms 44
5.3.2 Investigating Additional Mediators 45
5.3.3 Refinement and Customization of AI Models 46
Chapter 6 Conclusion 47
References 49
dc.language.iso: en
dc.title: 利用道德掛鉤於人機協作腦力激盪提升當責意識效果探討 [zh_TW]
dc.title: Exploring the Effect of Moral Engagement on Accountability in Human-AI Brainstorming [en]
dc.type: Thesis
dc.date.schoolyear: 112-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 彭志宏;游創文 [zh_TW]
dc.contributor.oralexamcommittee: Chih-Hung Peng;Chuang-Wen You [en]
dc.subject.keyword: 人機互動,腦力激盪,當責,生成式 AI [zh_TW]
dc.subject.keyword: HCI, Brainstorm, Accountability, GAI [en]
dc.relation.page: 54
dc.identifier.doi: 10.6342/NTU202401531
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2024-08-13
dc.contributor.author-college: 管理學院 (College of Management)
dc.contributor.author-dept: 資訊管理學系 (Department of Information Management)
Appears in collections: 資訊管理學系 (Department of Information Management)

Files in this item:
ntu-112-2.pdf: 859.58 kB, Adobe PDF (currently not authorized for public access)

