Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/1234
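If a quick programmatic check of the handle is ever needed, the URI above can simply be fetched over HTTP. The snippet below is a minimal, illustrative sketch only; it assumes a Python 3 environment with the third-party `requests` package, neither of which is implied by the record itself.

```python
# Minimal sketch (not part of the original record): resolve the Handle URI above
# and report where it lands. Assumes Python 3 with the `requests` package installed.
import requests

HANDLE_URI = "http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/1234"

response = requests.get(HANDLE_URI, allow_redirects=True, timeout=10)
response.raise_for_status()  # raises an HTTPError if the handle does not resolve

print("Resolved to:", response.url)        # final URL after any redirects
print("HTTP status:", response.status_code)
```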
Full metadata record

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 許永真 | |
| dc.contributor.author | Yi-Ching Huang | en |
| dc.contributor.author | 黃怡靜 | zh_TW |
| dc.date.accessioned | 2021-05-12T09:34:39Z | - |
| dc.date.available | 2019-07-01 | |
| dc.date.available | 2021-05-12T09:34:39Z | - |
| dc.date.copyright | 2018-06-14 | |
| dc.date.issued | 2018 | |
| dc.date.submitted | 2018-06-12 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/handle/123456789/1234 | - |
| dc.description.abstract | 創造性任務因沒有正確答案且缺乏標準定義,讓許多人感到非常棘手,通常需要投入大量時間與精力去學習並精進專業知識與技能,才能夠完成此種任務。而且,在解決問題的過程中,必須取得外部回饋,並且透過不斷的迭代修改才能夠獲取高品質的結果。許多研究者已經能夠利用網路系統串連提供者與使用者,取得即時的外部回饋,幫助使用者完成任務。然而,大部分的研究只專注於回饋內容的改進,而忽略了最重要的目的,在於如何在此過程中輔助使用者學習,並有效的完成任務。所以,目的包含了獲得高品質的結果以及提高學習成效。此博士論文提出了一個解決創造性任務的迭代循環回饋的框架,分別探討產生有效且高品質回饋的生成機制以及有效整合回饋輔助編輯的方法,來促進有效的學習行為以及創造良好的使用經驗。我們設計與開發了一系列的智慧型回饋系統,運用群眾與機器合作的力量來幫助使用者快速取得符合需求的高品質寫作建議,並透過結構化的設計,引導他們有效地進行編輯,同時精進專業能力與提升作品品質。未來,在我們提出的互動回饋框架中,機器將擔任協調者與合作者的角色,根據使用者的偏好與行為,提供適當的回饋與引導,促進人與人以及人與機器良好的互動,一起合作完成困難的創造性任務。 | zh_TW |
| dc.description.abstract | Performing creative tasks is challenging because such tasks are typically open-ended and ill-defined. To solve these complex problems, people must spend considerable time and effort learning professional skills and improving their in-progress work through an iterative process. Feedback is a critical component of this process, helping people discover errors and iterate toward better solutions. To meet the demand for timely feedback, recent work has explored technologies that connect problem solvers with feedback providers online. However, most research focuses on improving the content of feedback and neglects the more important question of how to support problem solvers in learning and in effectively producing high-quality outcomes. In this dissertation, we explore several ways to support not only the feedback generation process but also the feedback integration process, focusing on writing tasks. We designed and developed intelligent systems that leverage the combined power of crowds and machines to help writers obtain effective feedback and to encourage good revision behaviors. First, we present our crowd-powered feedback system, StructFeed, and demonstrate a crowdsourcing approach for generating useful feedback that helps writers resolve high-level writing issues in their revisions. Next, we propose feedback orchestration, which guides writers through resolving writing issues in a particular workflow by orchestrating how feedback is presented. In the future, we plan to create an iterative feedback framework that enables collaboration between authors and feedback providers, so that both can benefit from the framework and collaboratively accomplish complex creative tasks. The system would act as a mediator that generates an ensemble of feedback from diverse feedback providers and orchestrates its presentation to guide authors toward high revision performance based on individual needs. | en |
| dc.description.provenance | Made available in DSpace on 2021-05-12T09:34:39Z (GMT). No. of bitstreams: 1 ntu-107-D00944010-1.pdf: 11198594 bytes, checksum: f752acd0db856db46c45c85cc707bb2e (MD5) Previous issue date: 2018 | en |
| dc.description.tableofcontents | 1 Introduction 1; 1.1 Background and Motivation 1; 1.2 Supporting Creative Task Solving 2; 1.3 Roadmap of Research 4; 1.3.1 StructFeed: Generate Effective Feedback by Crowdsourcing 4; 1.3.2 Feedback Orchestration: Facilitate Effective Revision 5; 1.3.3 Understand How Feedback Affects Revision Results 6; 1.3.4 Reflection Before/After Practice 7
2 Related Work 9; 2.1 Human Computation and Crowdsourcing 9; 2.1.1 Incentive, Microtasks, Quality Control 9; 2.1.2 Crowdsourcing for complex task 11; 2.2 Online Feedback Exchange 11; 2.3 Learning Science 13; 2.3.1 Self-Regulated Learning 13; 2.3.2 Reflection and Reflective Practice 14; 2.4 Writing Support Systems 14; 2.4.1 Automated Writing Evaluation 15; 2.4.2 Crowd-Powered Systems for Writing Support 15
3 StructFeed: Generating Structural Feedback by Crowdsourcing 17; 3.1 Introduction 17; 3.2 StructFeed 18; 3.2.1 Paragraph Unity and Topic Sentence 19; 3.2.2 Crowdsourcing Workflow 20; 3.2.3 Structural Feedback and Interface 22; 3.2.4 Implementation 23; 3.3 Unity Identification 23; 3.3.1 Crowd-Based Method 24; 3.3.2 ML-Based Methods 25; 3.3.3 Evaluation 27; 3.4 Field Deployment Study 29; 3.4.1 Study Design 29; 3.4.2 Tasks and Procedure 30; 3.4.3 Measure 30; 3.4.4 Results 31; 3.5 Discussion 32; 3.5.1 Crowd helps develop better rules for machine 32; 3.5.2 StructFeed not only identifies writing issues but promotes reflection 33; 3.5.3 Expert feedback performed worse than crowd feedback? 33; 3.6 Conclusion 34
4 Feedback Orchestration: Supporting Reflection and Awareness in Revision 35; 4.1 Introduction 35; 4.2 Formative Study 38; 4.2.1 Task and procedure 38; 4.2.2 Findings 39; 4.3 Feedback Orchestration 40; 4.3.1 Expert revision practice 40; 4.3.2 Support reflection and awareness 41; 4.4 System Implementation 41; 4.4.1 Taxonomy of writing feedback 42; 4.4.2 Feedback classification 44; 4.4.3 Revision workflow 45; 4.4.4 Revision interface 45; 4.5 Experiment 48; 4.5.1 Participants 48; 4.5.2 Task and procedure 49; 4.5.3 Measures 51; 4.5.4 Results 52; 4.5.5 Insights from interviews with participants 53; 4.6 Discussion and Implications 58; 4.6.1 Feedback categorization guides learning behaviors 58; 4.6.2 Flexible support for varying preference 59; 4.6.3 Low-to-high sequence facilitates difficult problem solving 60; 4.6.4 Decrease the workload of low-level, repetitive tasks 61; 4.6.5 Resolving editing conflicts and mental obstacles 61; 4.7 Limitations and Future work 62; 4.8 Conclusion 63
5 How Feedback Affects Revision Quality? 65; 5.1 Introduction 65; 5.2 Crowd Feedback vs Expert Feedback 66; 5.3 Experiment: Writing Revision Affected by Feedback of Different Types 67; 5.3.1 Participants 69; 5.3.2 Task and Procedure 69; 5.3.3 Measures 70; 5.3.4 Statistical Results 72; 5.3.5 Insights from Interview Data 73; 5.4 Discussion 75; 5.4.1 The Cost and Benefit of StructFeed 75; 5.4.2 The Utility of StructFeed Depends on Learners’ Level of Proficiency 75; 5.4.3 Gap between Expert Reviewer and Novice Writer 76; 5.4.4 Macro-Task vs. Micro-Task 76
6 Reflection After/Before Practice 79; 6.1 Introduction 79; 6.2 Related Work 81; 6.3 Learnersourcing for Drawing Support 81; 6.4 ShareSketch: Draw, Review and Share 82; 6.4.1 Sketch Interface 83; 6.4.2 Timeline Interface for Sketch History 83; 6.5 Before/After-Practice Reflection Workflow 83; 6.6 Pilot Study 85; 6.7 Results and Findings 85; 6.7.1 After-practice annotation augments before-practice annotation 86; 6.7.2 Before-practice reflection vs After-practice reflection 87; 6.8 Discussion 87; 6.8.1 Provide scaffolding for reflection and practice 88; 6.8.2 Learning points as feedback enhances creative task learning 88; 6.9 Future Work 88
7 Conclusion 89; 7.1 Restatement of Contributions 89; 7.2 Future Directions 90; 7.2.1 Hybrid Combination of Crowd and Machine 90; 7.2.2 Creative Knowledge Construction for Innovative Applications 91; 7.3 Summary 91
Bibliography 93 | |
| dc.language.iso | en | |
| dc.subject | 解決創造性任務 | zh_TW |
| dc.subject | 創造性學習 | zh_TW |
| dc.subject | 群眾外包 | zh_TW |
| dc.subject | 群眾回饋 | zh_TW |
| dc.subject | 線上回饋交換 | zh_TW |
| dc.subject | crowd feedback | en |
| dc.subject | crowdsourcing | en |
| dc.subject | creative learning | en |
| dc.subject | online feedback exchange | en |
| dc.subject | creative task solving | en |
| dc.title | 設計解決複雜的創造性任務 | zh_TW |
| dc.title | Designing for Complex Creative Task Solving | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 106-2 | |
| dc.description.degree | 博士 | |
| dc.contributor.oralexamcommittee | 王浩全,陳炳宇,張俊盛,梁容輝,何建儒 | |
| dc.subject.keyword | 解決創造性任務,線上回饋交換,創造性學習,群眾外包,群眾回饋 | zh_TW |
| dc.subject.keyword | creative task solving, online feedback exchange, creative learning, crowdsourcing, crowd feedback | en |
| dc.relation.page | 101 | |
| dc.identifier.doi | 10.6342/NTU201800935 | |
| dc.rights.note | 同意授權(全球公開) | |
| dc.date.accepted | 2018-06-12 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
Appears in Collections: 資訊網路與多媒體研究所
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-107-1.pdf | 10.94 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
