Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81376

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳炳宇(Bing-Yu Chen) | |
| dc.contributor.author | Yu-Ting Wang | en |
| dc.contributor.author | 王郁婷 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:46:28Z | - |
| dc.date.available | 2021-08-06 | |
| dc.date.available | 2022-11-24T03:46:28Z | - |
| dc.date.copyright | 2021-08-06 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-07-13 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81376 | - |
| dc.description.abstract | Icons are essential elements of applications, websites, and user interfaces of all kinds, and they are often designed and used together as icon sets. However, designers typically evaluate the usability of newly designed icons and icon sets only through informal testing within their company or team. Even though representative icon styles already exist for some common functions (e.g., search, next), many functions still lack established representative icons across different use cases (e.g., archive). Moreover, because of the time and budget constraints of interface design, designers rarely run formal usability tests (with a sufficient number of participants) for every icon. We therefore propose EvIcon, an interactive design evaluation tool that improves the efficiency of the iterative refinement and evaluation stages of icon design and provides two kinds of instant, data-driven feedback on newly designed icons. First, we use crowdsourcing to collect a large number of perceptual ratings of icons (covering semantic relevance and familiarity; 62,649 ratings in total) and use them to train deep learning models that give instant usability feedback on icons uploaded by the user. Second, we use the collected icon dataset (n = 2,000) and a Siamese neural network to help an icon set achieve sufficient visual distinguishability (a minimal illustrative sketch of such a model follows the metadata table below). Through a user study and interviews with novice and professional UI designers, we demonstrate how EvIcon helps the iterative refinement and evaluation stages of icon design. A follow-up crowdsourced experiment shows that icons designed with EvIcon achieve better semantic relevance and familiarity than icons designed without it. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:46:28Z (GMT). No. of bitstreams: 1 U0001-1207202116042900.pdf: 2541131 bytes, checksum: 8779c338a116a8711b107c2b991b8490 (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Acknowledgements 3; 摘要 5; Abstract 7; Contents 9; List of Figures 11; List of Tables 15; Chapter 1 Introduction 1; Chapter 2 Related Works 7 (2.1 Icon Design and Analysis 7; 2.2 Crowdsourced Human Computation 8; 2.3 Assistive Authoring Tool for Visual Design 9); Chapter 3 System Architecture 11; Chapter 4 Interface Design 13 (4.1 Main Canvas Panel 14; 4.2 Icon Suggestion Panel 15; 4.3 Perception Feedback Panel 15; 4.4 Distinguishability Visualization Panel 16); Chapter 5 Dataset Collection and Crowdsourced Perceptual Rating 19 (5.1 Icon Dataset Collection 19; 5.2 Crowdsourced Perceptual Ratings 21); Chapter 6 Computational Models for Data-driven Feedback 25 (6.1 Computational Model for Perception Feedback 25; 6.1.1 Implementation and Architecture of Classification Models 26; 6.1.2 Results and Findings of Classification Models 27; 6.2 Computational Model for Visual Distinguishability 28; 6.2.1 Implementation and Architecture of Siamese Network 28; 6.2.2 Results and Findings of Siamese Network 30); Chapter 7 Evaluation with UI Designers 31 (7.1 Procedure and Tasks 32; 7.2 Result 33; 7.2.1 Revised Icons 33; 7.2.2 Revised Icons For Specific Demographics 35; 7.2.3 Post-study Interview 36; 7.3 Statistical Results of Crowdsourced Evaluation 39); Chapter 8 Discussion and Limitations 45 (8.1 Interactions between Designers and Authoring Assistive Tool 45; 8.2 Extending Dataset and Using Advanced Computation Models 46; 8.3 Supporting Validations for General Use of Icons 47); Chapter 9 Conclusion 49; References 51 | |
| dc.language.iso | zh-TW | |
| dc.subject | 互動式輔助工具 | zh_TW |
| dc.subject | 圖標設計 | zh_TW |
| dc.subject | 群眾外包 | zh_TW |
| dc.subject | Icon Design | en |
| dc.subject | Crowdsourcing | en |
| dc.subject | Interactive Assistive Tool | en |
| dc.subject | Computational Design | en |
| dc.title | 資料驅動互動式回饋以輔助高易用性圖標設計 | zh_TW |
| dc.title | Data-Driven Interactive Feedback for Designing High-Usability Icon | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master's (碩士) |
| dc.contributor.oralexamcommittee | 詹力韋(Hsin-Tsai Liu),鄭龍磐(Chih-Yang Tseng),余能豪,蔡欣叡 | |
| dc.subject.keyword | 圖標設計, 群眾外包, 互動式輔助工具 | zh_TW |
| dc.subject.keyword | Icon Design, Crowdsourcing, Interactive Assistive Tool, Computational Design | en |
| dc.relation.page | 58 | |
| dc.identifier.doi | 10.6342/NTU202101410 | |
| dc.rights.note | Authorized (access restricted to campus only) |
| dc.date.accepted | 2021-07-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| Appears in Collections: | 資訊網路與多媒體研究所 |
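
The abstract above describes two data-driven components: classification models trained on crowdsourced perceptual ratings, and a Siamese neural network used to judge visual distinguishability within an icon set. Below is a minimal sketch of such a Siamese similarity model in PyTorch; the backbone, input resolution, embedding size, and contrastive-loss margin are illustrative assumptions, not the architecture actually used in the thesis.

```python
# Minimal sketch (PyTorch) of a Siamese network for icon visual similarity,
# in the spirit of the distinguishability model described in the abstract.
# Backbone, embedding size, and margin are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IconEncoder(nn.Module):
    """Maps a 1x64x64 grayscale icon to a unit-length embedding (assumed sizes)."""

    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)


def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Classic contrastive loss: pull similar pairs together,
    push dissimilar pairs at least `margin` apart in embedding space."""
    d = F.pairwise_distance(z1, z2)
    loss_similar = same_label * d.pow(2)
    loss_dissimilar = (1 - same_label) * F.relu(margin - d).pow(2)
    return (loss_similar + loss_dissimilar).mean()


if __name__ == "__main__":
    encoder = IconEncoder()
    icon_a = torch.rand(8, 1, 64, 64)          # a batch of icon pairs (random stand-ins)
    icon_b = torch.rand(8, 1, 64, 64)
    same = torch.randint(0, 2, (8,)).float()   # 1 = visually similar pair, 0 = dissimilar
    loss = contrastive_loss(encoder(icon_a), encoder(icon_b), same)
    print(f"contrastive loss: {loss.item():.4f}")
    # At inference time, pairwise embedding distances within an icon set can be
    # thresholded or visualized to flag icons that look too similar to each other.
```
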
Files in this item:
| File | Size | Format |
|---|---|---|
| U0001-1207202116042900.pdf (access restricted to NTU campus IPs; use the VPN service for off-campus access) | 2.48 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their license terms.
