Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101812

| Title: | Exploring Guided Ideation in Multi-Agent AI Collaboration: Understanding Human Strategies and Outcomes in LLM-Supported Brainstorming |
| Author: | 曾昱婷 Yu-Ting Tseng |
| Advisor: | 陳炳宇 Bing-Yu Chen |
| Keywords: | Human–AI Collaboration, Creativity Support, Design Methods and Processes, Understanding Users and Human Behavior, Prompt Engineering |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | Traditional brainstorming is often constrained by groupthink, uneven participation, and limited perspective, leading to creative outcomes that lack diversity and depth. Recent studies suggest that multi-persona large language models (LLMs) have the potential to simulate cross-perspective dialogue and support idea generation. However, without human guidance, LLMs often default to shallow, repetitive, and unoriginal responses.
This study explores how two prompting strategies—persona prompting and the SCAMPER structured ideation framework—affect users’ interaction experience and creative outcomes when collaborating with multi-agent LLM systems. We adopted a 2×2 factorial design with 120 participants randomly assigned to one of four conditions (with/without persona × with/without SCAMPER). Participants engaged in brainstorming sessions with a dual-agent LLM system, completing two Alternative Uses Tasks (AUTs) involving a brick and a broom. Their ideas were evaluated by trained coders based on four creativity dimensions: novelty, workability, relevance, and specificity. Results from self-report questionnaires indicated that SCAMPER significantly reduced cognitive load (NASA-TLX), improved system learnability (SUS), and enhanced perceived autonomy (PAAS). While persona prompting did not directly improve usability ratings, it supported deeper engagement and agency through user-driven adjustments and selective interaction. However, it also introduced additional cognitive demands, particularly when participants had to manage multiple agent perspectives. Creativity assessments showed that persona prompting generally reduced workability, relevance, and specificity scores, and also lowered novelty in the broom task; in contrast, it slightly increased novelty in the brick task. SCAMPER's direct effects on creativity were limited, with a modest improvement in idea completeness observed in the brick task. Overall, the impact of prompting strategies on creative quality was highly context-dependent, moderated by the semantic properties of the task objects. Notably, creativity ratings also revealed a mismatch between subjective impressions of novelty and what was captured by standardized rubrics—especially when repeated persona framing led ideas that felt novel in context to appear redundant in aggregate. 
These findings highlight the need for evaluation frameworks that are more sensitive to interactional and semantic context.

Qualitative data further revealed how users actively regulated their interactions—selectively activating agents, disabling specific dialogues, and rewriting prompts to regain control. Users held diverse mental models of the AI agents, ranging from tool to teammate to director/observer roles. They also commented on the perceived quality of AI-generated ideas; while some felt that ideas became repetitive over time, others found them surprisingly inventive. These impressions aligned with patterns observed in the rated ideas, which often centered around functional enhancements or value-adding extensions. Although practical, such ideas were often categorized as common variants under existing rating scales, making it difficult to reflect their subjective novelty. The fixed persona design may have reinforced this pattern—ideas that initially felt creative in interaction began to appear repetitive when viewed across participants. Furthermore, the semantic characteristics of the task objects significantly shaped creative expression: bricks, due to their constrained affordances, more readily prompted conceptual transformation, while brooms, being more familiar, limited the range of idea divergence.

This study identifies a complementary relationship between structured prompting and persona diversity in creative support systems. SCAMPER offers procedural scaffolding that sustains ideation flow and reduces cognitive effort, while persona prompting introduces perspective contrast and interpretive tension—encouraging users to step outside a single semantic frame and explore alternative ways of understanding. Together, these strategies foster engagement and expand the stylistic and conceptual diversity of ideas.
However, our findings also reveal trade-offs: different prompting strategies shaped creative output in distinct ways, influencing novelty, workability, relevance, and specificity. This suggests that creativity is not solely determined by user strategies, but is actively co-shaped by the system’s interaction design. We argue that future multi-agent systems should support adjustable interaction control and context-sensitive cognitive alignment to meet diverse creative needs. Moreover, creative tool design should not only prioritize usability, but also critically reflect on how ideas are conceptually framed and communicated—how they emerge, what they mean, and how they shape the creative process. |
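The 2×2 design described above (with/without persona × with/without SCAMPER) can be sketched as prompt construction for the dual-agent system. The thesis does not publish its actual prompts, so every string, persona, and function name below is a hypothetical illustration of how the four conditions might be composed, not the study's implementation.

```python
from itertools import product

# Hypothetical personas for the dual-agent setup; the thesis's actual
# persona definitions are not published, so these are placeholders.
PERSONAS = ["a pragmatic engineer", "a playful artist"]

# Illustrative SCAMPER instruction (Substitute, Combine, Adapt, Modify,
# Put to another use, Eliminate, Reverse).
SCAMPER_HINT = (
    "Apply one SCAMPER operation (Substitute, Combine, Adapt, Modify, "
    "Put to another use, Eliminate, Reverse) to the object before answering."
)

def build_system_prompt(agent_index: int, use_persona: bool,
                        use_scamper: bool, task_object: str) -> str:
    """Compose a system prompt for one brainstorming agent under
    one of the four experimental conditions."""
    parts = [f"Brainstorm alternative uses for a {task_object}."]
    if use_persona:
        # Persona condition: each agent speaks from a distinct viewpoint.
        parts.insert(0, f"You are {PERSONAS[agent_index]}.")
    if use_scamper:
        # SCAMPER condition: add procedural scaffolding to the request.
        parts.append(SCAMPER_HINT)
    return " ".join(parts)

# Enumerate all four conditions for the 'brick' task (AUT object).
for use_persona, use_scamper in product([False, True], repeat=2):
    print(use_persona, use_scamper,
          build_system_prompt(0, use_persona, use_scamper, "brick"))
```

A real system would send each composed prompt as the system message of a separate LLM agent and interleave their replies; the sketch only shows how the two factors combine into four prompt variants.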
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101812 |
| DOI: | 10.6342/NTU202600324 |
| Full-Text Access: | Not authorized |
| Electronic Full-Text Release Date: | N/A |
| Appears in Collections: | Department of Information Management |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-114-1.pdf (Restricted Access) | 679.19 kB | Adobe PDF | |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
