Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/19639
Full metadata record
dc.contributor.advisor: 陳宏銘 (Homer H. Chen)
dc.contributor.author: Yi-Chan Wu [en]
dc.contributor.author: 吳易展 [zh_TW]
dc.date.accessioned: 2021-06-08T02:10:31Z
dc.date.copyright: 2019-03-11
dc.date.issued: 2016
dc.date.submitted: 2016-01-27
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/19639
dc.description.abstract [zh_TW]: 音樂作品所表現的情緒隨著樂曲的進行而改變。本論文將這樣的特性考慮到自動伴奏系統當中,以旋律與使用者所指定的情緒流動(emotion flow)作為輸入,以帶有情感的伴奏作為輸出。其中情緒流動是由一組情緒的正負度(valence)與高亢度(arousal)曲線所構成,兩者都是時間的函數。
伴奏由和弦進程(chord progression)與伴奏型態(accompaniment pattern)所構成。我們運用前者來控制音樂情緒的正負度,並用後者控制情緒的高亢度。在系統的實作上,我們同時考慮使用者輸入的旋律與正負度曲線,以動態規劃來產生和弦進程;本論文為此提出了一個數學模型來描述和弦進程與正負度在時間上的關係。至於伴奏型態,則是透過量化的高亢度曲線來產生。
我們以主觀的方式來衡量系統的性能:包括產生的伴奏搭配上旋律後是否符合指定的情緒、以及兩者之間是否和諧。平均而言,使用者所指定的、以及所感受到的高亢度曲線之間的互相關係數達0.85,而使用者所指定的、以及所感受到的正負度曲線之間的互相關係數達0.52。而如果只考慮專業音樂人士,兩個係數分別為0.92與0.88。結果顯示此系統在考慮情緒的條件下能產生適當的伴奏。
dc.description.abstract [en]: The emotion expressed by a music piece varies as the music unrolls in time. To create such dynamic emotion expression, we develop an algorithm that automatically generates the accompaniment for a melody according to an emotion flow specified by the user. The emotion flow is given in the form of arousal and valence curves, each as a function of time. The affective accompaniment is composed of chord progression and accompaniment pattern. Chord progression, which controls the valence of the composed music, is generated by dynamic programming using the input melody and valence data as constraints. A mathematical model is developed to describe the temporal relationship between valence and chord progression. Accompaniment pattern, which controls the arousal of the composed music, is determined according to the quantized values of the input arousal curve. The performance of the system is evaluated subjectively. The cross-correlation coefficient between the input arousal (valence) and the perceived arousal (valence) of the composed music is 0.85 (0.52). If only musician subjects are considered, the cross-correlation coefficients are 0.92 for arousal and 0.88 for valence. The results show that the proposed system is capable of generating subjectively appropriate accompaniments conforming to the user specification.
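The abstract describes generating the chord progression by dynamic programming, with the input melody and valence curve acting as constraints, and the table of contents below names three cost functions f1(c), f2(m, c), and f3(v, c). The following is a minimal illustrative sketch of how such a search could be set up, not the thesis implementation: it assumes one chord per bar and that the total cost decomposes into a chord-to-chord transition term plus per-bar melody/chord and valence/chord terms, a decomposition that may not match the thesis's actual formulation. All cost callbacks are placeholders supplied by the caller.

```python
# Illustrative sketch only, not the thesis implementation. It assumes one chord
# per bar and that the total cost decomposes into a chord-to-chord transition
# term plus per-bar melody/chord and valence/chord terms (loosely corresponding
# to the cost functions f1, f2, f3 named in the table of contents).
from typing import Callable, List, Sequence

def generate_chord_progression(
    chords: Sequence[str],                            # candidate chord labels, e.g. ["C", "Dm", "G", ...]
    melody: Sequence[Sequence[int]],                  # melody MIDI pitches grouped by bar
    valence: Sequence[float],                         # target valence per bar
    transition_cost: Callable[[str, str], float],     # cost of moving from one chord to the next
    melody_chord_cost: Callable[[Sequence[int], str], float],
    valence_chord_cost: Callable[[float, str], float],
) -> List[str]:
    n = len(melody)
    # dp[t][c]: minimal cost of harmonizing bars 0..t with chord c at bar t
    dp = [{c: float("inf") for c in chords} for _ in range(n)]
    back: List[dict] = [{c: None for c in chords} for _ in range(n)]

    for c in chords:  # first bar has no incoming transition
        dp[0][c] = melody_chord_cost(melody[0], c) + valence_chord_cost(valence[0], c)

    for t in range(1, n):
        for c in chords:
            local = melody_chord_cost(melody[t], c) + valence_chord_cost(valence[t], c)
            for prev in chords:
                cost = dp[t - 1][prev] + transition_cost(prev, c) + local
                if cost < dp[t][c]:
                    dp[t][c] = cost
                    back[t][c] = prev

    # Backtrack from the cheapest final chord to recover the full progression.
    best = min(dp[-1], key=dp[-1].get)
    progression = [best]
    for t in range(n - 1, 0, -1):
        best = back[t][best]
        progression.append(best)
    return progression[::-1]
```

For n bars and a chord vocabulary of size |C|, such a search costs O(n·|C|²), so the cheapest progression under the assumed cost decomposition is found without enumerating all |C|^n chord sequences.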
dc.description.provenance [en]: Made available in DSpace on 2021-06-08T02:10:31Z (GMT). No. of bitstreams: 1. ntu-105-R02942139-1.pdf: 962983 bytes, checksum: e8f8809f9c02c1075137ca7386d62ad2 (MD5). Previous issue date: 2016.
dc.description.tableofcontents: Acknowledgements i
Chinese Abstract ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES vii
Chapter 1 Introduction 1
Chapter 2 Related Work 4
Chapter 3 Emotion and Music Features 6
3.1 Emotion Model 6
3.2 Music Features Relevant to Valence and Arousal 7
Chapter 4 System Overview 8
4.1 Preprocessing 9
4.2 Chord Progression Generation 9
4.3 Accompaniment Pattern Generation 9
Chapter 5 Generating Chord Progression 10
5.1 Optimization Formulation 10
5.2 Cost Function f1(c) 11
5.3 Cost Function f2(m, c) 12
5.4 Cost Function f3(v, c) 12
5.5 Dynamic Programming 15
Chapter 6 Generating Accompaniment Pattern 16
Chapter 7 System Evaluation and Discussion 17
7.1 Setup 17
7.2 Evaluation Metric for Emotion Curves 19
7.3 Result of the First Subjective Test 20
7.4 Result of the Second Subjective Test 22
Chapter 8 Conclusion 24
REFERENCES 25
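Section 7.2 above concerns the evaluation metric for the emotion curves; the abstract reports cross-correlation coefficients between the specified and perceived arousal and valence curves (0.85 and 0.52 on average). As a rough illustration only, and under the assumption that the reported figure is a correlation between two equally sampled curves with an optional lag search (the thesis may define the metric differently), such a coefficient could be computed as follows.

```python
# Rough illustration only: correlation between a user-specified emotion curve
# and the curve perceived by listeners, with an optional lag search. The
# sampling, alignment, and exact definition used in the thesis may differ.
import numpy as np

def emotion_curve_correlation(specified, perceived, max_lag=0):
    """Peak correlation between two emotion curves over lags in [-max_lag, max_lag]."""
    x = np.asarray(specified, dtype=float)
    y = np.asarray(perceived, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)  # z-score each curve
    y = (y - y.mean()) / (y.std() + 1e-12)
    best = -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a = x[max(lag, 0):]                 # shift one curve relative to the other
        b = y[max(-lag, 0):]
        n = min(len(a), len(b))
        if n > 1:
            best = max(best, float(np.dot(a[:n], b[:n]) / n))
    return best

# Example: identical curves give a coefficient close to 1.
print(emotion_curve_correlation([0.1, 0.4, 0.8, 0.5], [0.1, 0.4, 0.8, 0.5]))
```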
dc.language.iso: en
dc.title: 依據指定之情緒流動產生自動伴奏 [zh_TW]
dc.title: Generation of Affective Accompaniment in Accordance with a Specified Emotion Flow [en]
dc.type: Thesis
dc.date.schoolyear: 104-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 張智星 (Jyh-Shing Roger Jang), 楊奕軒 (Yi-Hsuan Yang), 丁建均 (Jian-Jiun Ding)
dc.subject.keyword: 自動伴奏, 情緒流動, 音樂情緒, 演算法作曲 [zh_TW]
dc.subject.keyword: Automatic accompaniment, emotion flow, music emotion, algorithmic composition [en]
dc.relation.page: 29
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2016-01-27
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) [zh_TW]
Appears in collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
File: ntu-105-1.pdf — 940.41 kB, Adobe PDF (not authorized for public access)


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
