NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27292
Full metadata record (DC field: value [language])
dc.contributor.advisor: 吳家麟 (Ja-Ling Wu)
dc.contributor.author: Yin-Tzu Lin [en]
dc.contributor.author: 林映孜 [zh_TW]
dc.date.accessioned: 2021-06-12T18:00:19Z
dc.date.available: 2011-02-01
dc.date.copyright: 2008-02-01
dc.date.issued: 2008
dc.date.submitted: 2008-01-28
dc.identifier.citation: [1] J. P. Bello, et al., “A tutorial on onset detection in music signals,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 1035-1047, 2005.
[2] Y. Zhu, M. S. Kankanhalli, and S. Gao, “Music key detection for musical audio,” in the Proceedings of the 11th International Multimedia Modelling Conference (MMM '05), 2005.
[3] C.-H. Chuan and E. Chew, “Audio key finding: Considerations in system design and case studies on Chopin's 24 Preludes,” EURASIP Journal on Advances in Signal Processing, vol. 2007, no. 1, pp. 156-156, 2007.
[4] N. C. Maddage, M. S. Kankanhalli, and H. Li, “A hierarchical approach for music chord modeling based on the analysis of tonal characteristics,” in the Proceedings of IEEE International Conference on Multimedia and Expo (ICME '06), 2006.
[5] K. Lee and M. Slaney, “Automatic chord recognition from audio using an HMM with supervised learning,” in the Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR '06), University of Victoria, Canada, 2006.
[6] G. E. Poliner, et al., “Melody transcription from music audio: Approaches and evaluation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1247-1256, 2007.
[7] G. E. Poliner and D. P. W. Ellis, “A discriminative model for polyphonic piano transcription,” EURASIP Journal on Advances in Signal Processing, vol. 2007, no. 1, 2007.
[8] M. P. Ryynänen and A. Klapuri, “Polyphonic music transcription using note event modeling,” in the Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, USA, 2005.
[9] J. Woodruff and B. Pardo, “Using pitch, amplitude modulation, and spatial cues for separation of harmonic instruments from stereo music recordings,” EURASIP Journal on Advances in Signal Processing, vol. 2007, no. 1, 2007.
[10] Y. Li and D. Wang, “Separation of singing voice from music accompaniment for monaural recordings,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1475-1487, 2007.
[11] M. Fingerhut, “Music information retrieval, or how to search for (and maybe find) music and do away with incipits,” in the Proceedings of IAML-IASA Congress, Oslo, Norway, 2004.
[12] “The International Conferences on Music Information Retrieval (ISMIR),” http://www.ismir.net/.
[13] N. Orio, “Music retrieval: A tutorial and review,” Foundations and Trends® in Information Retrieval, vol. 1, no. 1, 2006.
[14] L. Stein, Structure & style: The study and analysis of musical forms, Summy-Birchard Inc., Evanston, USA, 1979.
[15] T. A. Pankhurst, “The fundamental structure,” Schenkerguide.com: A Guide to Schenkerian Analysis, 2001, http://www.schenkerguide.com/fundamentalstructure.html (accessed Jan. 2, 2008).
[16] P. Mavromatis and M. Brown, “Parsing context-free grammars for music: A computational model of Schenkerian analysis,” in the Proceedings of the 8th International Conference on Music Perception and Cognition (ICMPC8), Evanston, USA, 2004.
[17] É. Gilbert and D. Conklin, “A probabilistic context-free grammar for melodic reduction,” in the Proceedings of the International Workshop on Artificial Intelligence and Music (IJCAI '07), Hyderabad, India, 2007.
[18] A. Marsden, “Automatic derivation of musical structure: A tool for research on Schenkerian analysis,” in the Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR '07), Vienna, Austria, 2007.
[19] F. Lerdahl and R. Jackendoff, A generative theory of tonal music, MIT Press, Cambridge, Massachusetts, 1983.
[20] F. Lerdahl, Tonal pitch space, Oxford University Press, New York, 2001.
[21] D. Temperley, The cognition of basic musical structures, MIT Press, Cambridge, Massachusetts, 2001.
[22] M. Hamanaka, K. Hirata, and S. Tojo, “ATTA: Implementing GTTM on a computer,” in the Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR '07), Vienna, Austria, 2007.
[23] J. Tenney and L. Polansky, “Temporal gestalt perception in music,” Journal of Music Theory, vol. 24, no. 2, pp. 205-241, 1980.
[24] E. Cambouropoulos, “Musical rhythm: A formal model for determining local boundaries, accents and metre in a melodic surface,” in M. Leman, Editor, Music, gestalt, and computing: Studies in cognitive and systematic musicology, Springer Verlag, Berlin, pp. 277-293, 1997.
[25] R. Bod, “Memory-based models of melodic analysis: Challenging the gestalt principles,” Journal of New Music Research, vol. 31, no. 1, pp. 27-37, 2002.
[26] C.-H. Chuan and E. Chew, “A dynamic programming approach to the extraction of phrase boundaries from tempo variations in expressive performances,” in the Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR '07), Vienna, Austria, 2007.
[27] P.-H. Weng, “An automatic musical form analysis system for rondo and fugue,” Department of Computer Science, National Tsing Hua University, HsinChu, Taiwan, 2004.
[28] J. Foote, “Automatic audio segmentation using a measure of audio novelty,” in the Proceedings of IEEE International Conference on Multimedia and Expo (ICME '00), New York, NY, USA, 2000.
[29] R. Dannenberg and N. Hu, “Pattern discovery techniques for music audio,” in the Proceedings of the International Symposium on Music Information Retrieval (ISMIR '02), Paris, France, 2002.
[30] W. Chai, “Structural analysis of musical signals via pattern matching,” in the Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), 2003.
[31] W. Chai and B. Vercoe, “Music thumbnailing via structural analysis,” in the Proceedings of the 11th ACM international conference on Multimedia (MM '03), Berkeley, CA, USA, 2003.
[32] M. Goto, “A chorus-section detecting method for musical audio signals,” in the Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), 2003.
[33] L. Lu, M. Wang, and H.-J. Zhang, “Repeating pattern discovery and structure analysis from acoustic music data,” in the Proceedings of the 6th ACM SIGMM international workshop on Multimedia information retrieval (SIGMM '04), New York, NY, USA, 2004.
[34] N. C. Maddage, C. Xu, M. S. Kankanhalli, and X. Shao, “Content-based music structure analysis with applications to music semantics understanding,” in the Proceedings of the 12th Annual ACM International Conference on Multimedia (MM '04), New York, NY, USA, 2004.
[35] B. S. Ong and P. Herrera, “Semantic segmentation of music audio contents,” in the Proceedings of the International Computer Music Conference (ICMC '05), Barcelona, Spain, 2005.
[36] W. Chai, “Semantic segmentation and summarization of music: Methods based on tonality and recurrent structure,” IEEE Signal Processing Magazine, vol. 23, no. 2, pp. 124-132, 2006.
[37] J. Paulus and A. Klapuri, “Music structure analysis by finding repeated parts,” in the Proceedings of the 1st ACM workshop on Audio and music computing multimedia (AMCMM '06), Santa Barbara, California, USA, 2006.
[38] Y. Shiu, H. Jeong, and C. C. J. Kuo, “Similarity matrix processing for music structure analysis,” in the Proceedings of the 1st ACM workshop on Audio and music computing multimedia (AMCMM '06), Santa Barbara, California, USA, 2006.
[39] M. Müller and F. Kurth, “Towards structural analysis of audio recordings in the presence of musical variations,” EURASIP Journal on Advances in Signal Processing, vol. 2007, no. 1, pp. 163-163, 2007.
[40] K. Jensen, “Multiple scale music segmentation using rhythm, timbre, and harmony,” EURASIP Journal on Advances in Signal Processing, vol. 2007, no. 1, pp. 159-159, 2007.
[41] S.-F. He, “A flexible music clip generating scheme based on music structure analysis,” Master Thesis, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, 2007.
[42] E. Peiszer, “Automatic audio segmentation: Segment boundary and structure detection in popular music,” Master Thesis, Information & Software Engineering Group, Institute of Software Technology and Interactive Systems, Vienna University of Technology, Vienna, Austria, 2007.
[43] J. C. Brown, “Calculation of a constant Q spectral transform,” Journal of the Acoustical Society of America, vol. 89, no. 1, pp. 425-434, 1991.
[44] M. A. Bartsch and G. H. Wakefield, “To catch a chorus: Using chroma-based representations for audio thumbnailing,” in the Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA '01), New Paltz, NY, USA, 2001.
[45] A. Rauber, E. Pampalk, and D. Merkl, “Using psychoacoustic models and self organizing maps to create a hierarchical structuring of music by sound similarity,” in the Proceedings of the 3rd International Symposium on Music Information Retrieval (ISMIR '02), Paris, France, 2002.
[46] T. Lidy and A. Rauber, “Evaluation of feature extractors and psycho-acoustic transformations for music genre classification,” in the Proceedings of the 6th International Conference on Music Information Retrieval (ISMIR '05), London, UK, 2005.
[47] M. Levy, M. Sandler, and M. Casey, “Extraction of high-level musical structure from audio data and its application to thumbnail generation,” in the Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), 2006.
[48] D. Temperley, “Studies of musical grouping in psychology,” in The cognition of basic musical structures, MIT Press, Cambridge, Massachusetts, p. 56, 2001.
[49] “TiMidity++,” http://www.timidity.jp/.
[50] D. Temperley and D. Sleator, “The Melisma Music Analyzer,” http://www.link.cs.cmu.edu/music-analysis/.
[51] K. Lee and M. Slaney, “Automatic chord recognition from audio using a supervised HMM trained with audio-from-symbolic data,” in the Proceedings of the 1st ACM workshop on Audio and music computing multimedia (AMCMM '06), Santa Barbara, California, USA, 2006.
[52] S. Dixon, “Automatic extraction of tempo and beat from expressive performances,” Journal of New Music Research, vol. 30, no. 1, pp. 39-58, 2001.
[53] “Audio beat tracking,” MIREX 2006, http://www.music-ir.org/mirex/2006/index.php/Audio_Beat_Tracking.
[54] “KernScores library of the Center for Computer Assisted Research in the Humanities at Stanford University,” http://kern.ccarh.org/.
[55] “Classical Music Archives,” http://www.classicalarchives.com.
[56] 張美惠, 莫札特鋼琴奏鳴曲之研究 [A study of Mozart's piano sonatas], Taipei: 全音樂譜出版社有限公司, 1991.
[57] “Muzio Clementi,” http://en.wikipedia.org/wiki/Muzio_Clementi.
[58] “Ludwig van Beethoven,” http://en.wikipedia.org/wiki/Beethoven.
[59] “Naxos Music Library,” http://ntu.naxosmusiclibrary.com/.
[60] “Chord (music),” http://en.wikipedia.org/wiki/Chord_%28music%29.
[61] “Wikipedia, the free encyclopedia,” http://en.wikipedia.org/.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27292
dc.description.abstract: Appreciating and understanding classical music is not easy for most people. In this thesis, we propose an alternative method of music structure analysis to help users enjoy classical music more easily. Previous research on music structure analysis falls into two categories: analysis of a symbolic representation of the music (e.g., scores or MIDI files) and analysis of its waveform. At present, the methods used for symbolic analysis are difficult to apply to waveform analysis, while most waveform-based studies are limited to identifying repeated structures in the music and rarely draw on music theory. We therefore propose a method for analyzing the structure of classical music from its waveform that incorporates deeper music-theoretic knowledge, such as the cadence. In addition, we approach structure analysis from the viewpoint of musical form analysis. [zh_TW]
dc.description.abstract: It is a non-trivial task for untrained listeners to appreciate classical music pieces. Thus, we present a method of music structure analysis to help people understand the messages conveyed in the music. Previous work on music structure analysis can be classified by its two types of input sources: symbolic representations (scores or MIDI files) and waveform formats (MP3, WAV, etc.). The methods used for analyzing symbolic representations cannot easily be applied to waveform-format analysis owing to the immaturity of polyphonic transcription techniques. Accordingly, most researchers analyze waveform-format music by recognizing repetitive patterns in low-level features extracted directly from the waveform. This methodology, however, often neglects the importance of music theory. Therefore, in this thesis, we focus on analyzing waveform-format classical music and attempt to segment the music based on more in-depth music theory, such as the cadence. Moreover, instead of relying on low-level repeating structure, we address the structure analysis problem from the semantic perspective of “musical form”. [en]
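The short sketch below illustrates the repetition-oriented, low-level baseline that the abstract contrasts against: chroma features, a self-similarity matrix, and a Foote-style novelty curve (cf. [28] and [44] in the citation list above). It is a minimal illustration only, assuming the numpy and librosa Python packages and a hypothetical input file name "sonata.wav"; it is not the cadence-based segmentation method proposed in this thesis.

# Minimal sketch of a repetition-based segmentation baseline (NOT the thesis's method):
# chroma features -> cosine self-similarity matrix -> Foote-style novelty curve.
# Assumes numpy and librosa; "sonata.wav" is a placeholder file name.
import numpy as np
import librosa

HOP = 2048
y, sr = librosa.load("sonata.wav", sr=22050)                      # hypothetical audio file
chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=HOP)  # 12 x T chromagram

# Cosine self-similarity between every pair of chroma frames (T x T matrix).
unit = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)
ssm = unit.T @ unit

# Checkerboard-kernel novelty: +1 within the two blocks around t, -1 across them,
# so contrast between consecutive sections yields a peak at the section boundary.
L = 32                                                            # kernel half-width in frames
sign = np.concatenate([-np.ones(L), np.ones(L)])
kernel = np.outer(sign, sign)
T = ssm.shape[0]
novelty = np.zeros(T)
for t in range(L, T - L):
    novelty[t] = np.sum(kernel * ssm[t - L:t + L, t - L:t + L])

# Simple peak picking over the novelty curve gives candidate boundary times.
peaks = [t for t in range(1, T - 1)
         if novelty[t] > novelty[t - 1] and novelty[t] >= novelty[t + 1]
         and novelty[t] > 0.3 * novelty.max()]
print(librosa.frames_to_time(peaks, sr=sr, hop_length=HOP))

Boundaries found this way reflect only surface repetition; by contrast, the thesis collects boundary candidates from cadence and long-note cues and then refines them, as outlined in Chapters 3 and 4 of the table of contents below.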
dc.description.provenance: Made available in DSpace on 2021-06-12T18:00:19Z (GMT). No. of bitstreams: 1. ntu-97-R94922169-1.pdf: 1292203 bytes, checksum: 869a9a39c25bf296c9219972f79cd198 (MD5). Previous issue date: 2008 [en]
dc.description.tableofcontents: 誌謝 (Acknowledgements) ii
摘要 (Abstract in Chinese) iii
Abstract iv
Chapter 1  Introduction 1
 1.1 Motivation 1
 1.2 Music Structure Analysis 1
 1.3 Related Works 4
  1.3.1 The types of music content 4
  1.3.2 Symbolic Domain 5
  1.3.3 Waveform Domain 7
 1.4 Research Statement 10
 1.5 The Cadence 10
 1.6 Thesis organization 12
Chapter 2  System Overview 13
 2.1 Basic Idea 13
 2.2 System Framework 15
Chapter 3  Boundary Candidates Collection 17
 3.1 Inter-onset Intervals and Inter-onset Energy 17
 3.2 Cadence Group Detection 18
 3.3 Long Notes Detection 20
Chapter 4  Boundary Refinement 23
 4.1 Basic Idea 23
 4.2 Boundary Property Extraction 25
 4.3 Remove Fake Boundary Candidates 27
Chapter 5 Experiments 29
 5.1 The Dataset 29
 5.2 Results of Boundary Candidates Collection 32
 5.3 Results of Boundary Refinement 33
 5.4 Discussion 36
Chapter 6 Conclusion 38
 6.1 Contribution 38
 6.2 Applications 38
  6.2.1 Browsing 38
  6.2.2 Musical Form Recognition 39
 6.3 Future Works 40
Appendix A Music Theory 41
 A.1 Beat / Onset 41
 A.2 Chord 41
 A.3 Scale degree 43
Appendix B Glossary of Terms 44
Bibliography 50
dc.language.iso: en
dc.subject: 音樂結構分析 (music structure analysis) [zh_TW]
dc.subject: 音樂分段 (music segmentation) [zh_TW]
dc.subject: 終止式 (cadence) [zh_TW]
dc.subject: 曲式分析 (musical form analysis) [zh_TW]
dc.subject: musical form [en]
dc.subject: music structure analysis [en]
dc.subject: the cadence [en]
dc.subject: music segmentation [en]
dc.title: 適用於音樂結構分析之終止式偵測 [zh_TW]
dc.title: Cadences Detection for Music Structure Analysis [en]
dc.type: Thesis
dc.date.schoolyear: 96-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 莊永裕 (Yung-Yu Chuang), 陳炳宇 (Bing-Yu Chen), 朱威達 (Wei-Ta Chu)
dc.subject.keyword: 音樂結構分析, 曲式分析, 終止式, 音樂分段 [zh_TW]
dc.subject.keyword: music structure analysis, musical form, the cadence, music segmentation [en]
dc.relation.page: 55
dc.rights.note: 有償授權 (paid-access authorization)
dc.date.accepted: 2008-01-29
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File | Size | Format | Access
ntu-97-1.pdf | 1.26 MB | Adobe PDF | Restricted access (未授權公開取用)

