Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/4878

Full metadata record
DC field: value [language]
dc.contributor.advisor: 吳家麟 (Ja-Ling Wu)
dc.contributor.author: Yin-Tzu Lin [en]
dc.contributor.author: 林映孜 [zh_TW]
dc.date.accessioned: 2021-05-14T17:49:26Z
dc.date.available: 2019-03-13
dc.date.available: 2021-05-14T17:49:26Z
dc.date.copyright: 2015-03-13
dc.date.issued: 2015
dc.date.submitted: 2015-01-16
dc.identifier.citation:
[1] J.-C. Chen, W.-T. Chu, J.-H. Kuo, C.-Y. Weng, and J.-L. Wu, “Tiling slideshow,” in Proc. ACM MM, 2006, pp. 25–34.
[2] M. Goto, “A Chorus Section Detection Method for Musical Audio Signals and Its Application to a Music Listening Station,” IEEE Trans. ASLP, vol. 14, no. 5, pp. 1783–1794, Sep. 2006. [Online]. Available: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=1677997
[3] D. Davis, “Principles of Physics I - Chapt 16. Characteristics of Sound,” 2002. [Online]. Available: http://www.ux1.eiu.edu/~cfadd/1150/16Waves/char.html
[4] H.-Y. Lin, Y.-T. Lin, M.-C. Tien, and J.-L. Wu, “Music Paste: Concatenating Music Clips Based on Chroma and Rhythm Features,” in Proc. ISMIR, Kobe, 2009. [Online]. Available: http://ismir2009.ismir.net/proceedings/PS2-4.pdf
[5] I.-T. Liu, Y.-T. Lin, and J.-L. Wu, “Music Cut and Paste: A Personalized Musical Medley Generating System,” in Proc. ISMIR, Curitiba, PR, Brazil, 2013.
[6] Y.-T. Lin, I.-T. Liu, J.-S. R. Jang, and J.-L. Wu, “Audio Musical Dice Game: A User-preference-aware Medley Generating System,” ACM TOMM, 2014.
[7] P. Hanna, P. Ferraro, and M. Robine, “On Optimizing the Editing Algorithms for Evaluating Similarity Between Monophonic Musical Sequences,” J. New Music Res., vol. 36, no. 4, pp. 267–279, 2007. [Online]. Available: http://www.tandfonline.com/doi/abs/10.1080/09298210801927861
[8] D. Cope, Experiments in Musical Intelligence. Madison, Wisconsin, USA: A-R Editions, 1996.
[9] S. Wenger and M. Magnor, “Constrained Example-based Audio Synthesis,” in Proc. ICME, Barcelona, Spain, 2011. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6011902
[10] Z. Liu, C. Wang, J. Wang, H. Wang, and Y. Bai, “Adaptive Music Resizing with Stretching, Cropping and Insertion,” Multimedia Syst., vol. 19, no. 4, pp. 359–380, Jul. 2012. [Online]. Available: http://www.springerlink.com/index/10.1007/s00530-012-0289-6
[11] R. Cole and E. Schwartz. (2012) Virginia Tech Multimedia Music Dictionary. [Online]. Available: http://www.music.vt.edu/musicdictionary/
[12] A. Latham, “The Oxford Companion to Music,” 2011. [Online]. Available: http://www.oxfordmusiconline.com/subscriber/book/omo_t114
[13] D. M. Randel, The Harvard Dictionary of Music. Belknap Press, 2003. [Online]. Available: http://www.credoreference.com/book/harvdictmusic
[14] “iTunes Store Sets New Record with 25 Billion Songs Sold,” Apple Press Info. [Online]. Available: https://www.apple.com/pr/library/2013/02/06iTunes-Store-Sets-New-Record-with-25-Billion-Songs-Sold.html
[15] G. Loy, Musimathics. USA: The MIT Press, 2006, vol. 1, pp. 295–296, 347–350.
[16] G. T. Fechner, Elements of Psychophysics 1. New York: Holt, Rinehart & Winston, 1860.
[17] Y.-T. Lin, C.-L. Lee, J.-S. Jang, and J.-L. Wu, “Bridging Music via Sound Effects,” in Proc. IEEE ISM, 2014. (Best Student Paper Award)
[18] D. Cope, “Experiments in Music Intelligence,” in Proc. ICMC, San Francisco, USA, 1987.
[19] M.-K. Shan and S.-C. Chiu, “Algorithmic Compositions Based on Discovered Musical Patterns,” Multimedia Tools and Applications, vol. 46, no. 1, pp. 1–23, May 2010. [Online]. Available: http://link.springer.com/10.1007/s11042-009-0303-y
[20] D. Schwarz, “Corpus-based Concatenative Synthesis,” Signal Processing Magazine, IEEE, vol. 24, no. 2, pp. 92–104, 2007. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4117932
[21] ——, “Current Research in Concatenative Sound Synthesis,” in Proc. ICMC, Barcelona, Spain, 2005. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.8530&rep=rep1&type=pdf
[22] R. B. Dannenberg, “Concatenative Synthesis Using Score-Aligned Transcriptions Music Analysis and Segmentation,” in Proc. ICMC, New Orleans, USA, 2006, pp. 352–355.
[23] D. Schwarz, R. Cahen, and S. Britton, “Principles and Applications of Interactive Corpus Based Concatenative Synthesis,” Journees d’Informatique Musicale (JIM), 2008.
[24] G. Bernardes, C. Guedes, and B. Pennycook, “EarGram: An Application for Interactive Exploration of Large Databases of Audio Snippets for Creative Purposes,” in Proc. CMMR, London, UK, 2012, pp. 19–22.
[25] R. Kobayashi, “Sound Clustering Synthesis Using Spectral Data,” in Proc. ICMC, Singapore, 2003. [Online]. Available: http://nagasm.org/ASL/icmc2003/closed/CR1052.PDF
[26] G. Griffin, Y. Kim, and D. Turnbull, “Beat-Sync-Mash-Coder: a Web Application for Real-Time Creation of Beat-Synchronous Music Mashups,” in Proc. ICASSP, Dallas, Texas, USA, 2010, pp. 2–5. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5495743
[27] M. E. P. Davies, P. Hamel, K. Yoshii, and M. Goto, “AutoMashUpper: An Automatic Multi-Song Mashup System,” in Proc. ISMIR, Curitiba, PR, Brazil, 2013.
[28] B. Logan, “Content-based Playlist Generation: Exploratory Experiments,” in Proc. ISMIR, Paris, France, 2002, pp. 2–3. [Online]. Available: http://pdf.aminer.org/000/439/408/content_based_playlist_generation_exploratory_experiments.pdf
[29] A. Flexer, D. Schnitzer, M. Gasser, and G. Widmer, “Playlist Generation Using Start and End Songs,” in Proc. ISMIR, Philadelphia, USA, Sep. 2008, pp. 173–178. [Online]. Available: http://ismir2008.ismir.net/papers/ISMIR2008_143.pdf
[30] L. Chiarandini, M. Zanoni, and A. Sarti, “A System for Dynamic Playlist Generation Driven by Multimodal Control Signals and Descriptors,” in Proc. MMSP, Hangzhou, China, 2011. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6093850
[31] Q. Lin, L. Lu, C. Weare, and F. Seide, “Music Rhythm Characterization with Application to Workout-Mix Generation,” in Proc. ICASSP, Dallas, Texas, USA, 2010, pp. 69–72. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5496203
[32] C. Baccigalupo and E. Plaza, “Case-based Sequential Ordering of Songs for Playlist Recommendation,” LNCS, vol. 4106, pp. 286–300, 2006. [Online]. Available: http://link.springer.com/chapter/10.1007/11805816_22
[33] T. Jehan, “Creating Music by Listening,” PhD Dissertation, Massachusetts Institute of Technology, 2005.
[34] S. Basu, “Mixing with Mozart,” in Proc. ICMC, Miami, USA, 2004. [Online]. Available: http://research.microsoft.com/en-us/um/people/sumitb/papers/MixingWithMozart_icmc2004.pdf
[35] D. Cliff, “Hang the DJ : Automatic Sequencing and Seamless Mixing of Dance-Music Tracks,” HP Labs Technical Report, Tech. Rep., 2000.
[36] H. Ishizaki, K. Hoashi, and Y. Takishima, “Full-Automatic DJ Mixing System with Optimal Tempo Adjustment based on Measurement Function of User Discomfort,” in Proc. ISMIR, Kobe, Japan, 2009, pp. 135–140. [Online]. Available: http://ismir2009.ismir.net/proceedings/PS1-14.pdf
[37] S. Dixon, “Evaluation of the Audio Beat Tracking System BeatRoot,” J. New Music Res., vol. 36, no. 1, pp. 39–50, 2007. [Online]. Available: http://www.elec.qmul.ac.uk/people/simond/beatroot/
[38] J. P. Bello, L. Daudet, S. A. Abdallah, C. Duxbury, M. Davies, and M. B. Sandler, “A Tutorial on Onset Detection in Music Signals,” IEEE Trans. ASLP, vol. 13, no. 5, pp. 1035–1047, 2005.
[39] M. Kennedy and J. Bourne, “The Oxford Dictionary of Music.” [Online]. Available: http://www.oxfordmusiconline.com/subscriber/book/omo_t237
[40] E. Pampalk, “Computational Models of Music Similarity and Their Application in Music Information Retrieval,” PhD Dissertation, Vienna University of Technology, 2006. [Online]. Available: http://www.pampalk.at/publications/presentations/sigmus06similarity.pdf
[41] T. Pohle, D. Schnitzer, M. Schedl, P. Knees, and G. Widmer, “On Rhythm and General Music Similarity,” in Proc. ISMIR, Kobe, Japan, 2009, pp. 525–530. [Online]. Available: http://ismir2009.ismir.net/proceedings/OS6-1.pdf
[42] M. Cicconet. Rhythm Features. [Online]. Available: http://w3.impa.br/~cicconet/cursos/ae/spmirPresentation.html
[43] J. C. Brown and M. S. Puckette, “An Efficient Algorithm for the Calculation of a Constant Q Transform,” Journal of the Acoustical Society of America, vol. 92, no. 5, pp. 2698–2701, 1992.
[44] D. Root, P. V. Bohlman, J. Cross, H. Meconi, and J. H. Roberts, “Grove Music Online,” 2014. [Online]. Available: http://www.oxfordmusiconline.com/subscriber/book/omo_gmo
[45] Y. Ni, M. McVicar, P. Santos-Rodriguez, and T. D. Bie, “An End-to-End Machine Learning System for Harmonic Analysis of Music,” IEEE Trans. ASLP, vol. 20, no. 6, pp. 1771–1783, 2012. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6155600
[46] S. B. Davis and P. Mermelstein, “Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences,” IEEE Trans. ASLP, vol. 28, no. 4, pp. 357–366, 1980.
[47] M. Cooper and J. Foote, “Automatic Music Summarization via Similarity Analysis,” in Proc. ISMIR, Paris, France, 2002, pp. 81–85.
[48] S. Webber, DJ Skills: The Essential Guide to Mixing and Scratching. Focal Press, 2007.
[49] C. Burkhart, “The Phrase Rhythm of Chopin’s A-Flat Major Mazurka, Op. 59, No. 2,” in Engaging Music: Essays in Music Analysis, D. J. Stein, Ed. Oxford University Press, 2005, pp. 3–12.
[50] J. Paulus, M. Müller, and A. Klapuri, “Audio-based Music Structure Analysis,” in Proc. ISMIR, Utrecht, Netherlands, 2010, pp. 625–636.
[51] J. Foote, “Visualizing Music and Audio Using Self-Similarity,” in Proc. ACM MM, 1999, pp. 77–80. [Online]. Available: http://portal.acm.org/citation.cfm?doid=319463.319472
[52] B. Logan and S. Chu, “Music Summarization Using Key Phrases,” in Proc. ICASSP, vol. 2, Istanbul, Turkey, 2000, pp. 749–752. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=859068
[53] J. Pauwels, F. Kaiser, and G. Peeters, “Combining Harmony-based and Novelty-based Approaches for Structural Segmentation,” in Proc. ISMIR, Curitiba, PR, Brazil, 2013.
[54] T. L. Nwe, A. Shenoy, and Y. Wang, “Singing Voice Detection in Popular Music,” in Proc. ACM MM, 2004, pp. 324–327. [Online]. Available: http://dl.acm.org/citation.cfm?id=1027602
[55] N. C. Maddage, C. Xu, M. S. Kankanhalli, and X. Shao, “Content-based Music Structure Analysis with Applications to Music Semantics Understanding,” in Proc. ACM MM, 2004, pp. 112–119. [Online]. Available: http://dl.acm.org/citation.cfm?id=1027549
[56] L. Regnier and G. Peeters, “Singing Voice Detection in Music Tracks Using Direct Voice Vibrato Detection,” in Proc. ICASSP, Taipei, Taiwan, 2009, pp. 1685–1688. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4959926
[57] Y. Li and D. Wang, “Separation of Singing Voice From Music Accompaniment for Monaural Recordings,” IEEE Trans. ASLP, vol. 15, no. 4, pp. 1475–1487, 2007.
[58] I. Leonidas and J.-L. Rouas, “Exploiting Semantic Content for Singing Voice Detection,” in Proc. IEEE ICSC, ser. ICSC ’12, Washington, DC, USA, 2012, pp. 134–137. [Online]. Available: http://dx.doi.org/10.1109/ICSC.2012.18
[59] K. Thomas, “Just Noticeable Difference and Tempo Change,” Journal of Scientific Psychology, 2007.
[60] M. Dolson, “The Phase Vocoder: a Tutorial,” Computer Music Journal, vol. 10, no. 4, pp. 14–27, 1986.
[61] M. J. Swain and D. H. Ballard, “Color Indexing,” International Journal of Computer Vision, vol. 7, no. 1, pp. 11–32, 1991.
[62] P. Hanna, M. Robine, and T. Rocher, “An Alignment Based System for Chord Sequence Retrieval,” in Proc. JCDL, Austin, Texas, USA, 2009, p. 101. [Online]. Available: http://portal.acm.org/citation.cfm?doid=1555400.1555417
[63] J. H. Jensen, M. G. Christensen, D. P. W. Ellis, and S. H. Jensen, “Quantitative Analysis of a Common Audio Similarity Measure,” IEEE Trans. ASLP, vol. 17, no. 4, pp. 693–703, 2009. [Online]. Available: http://dx.doi.org/10.1109/TASL.2008.2012314
[64] J. G. D. Forney, “The Viterbi Algorithm,” Proc. of the IEEE, vol. 61, no. 3, pp. 302–309, 1973. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1450960
[65] L. Barrington, A. B. Chan, and G. Lanckriet, “Modeling Music as a Dynamic Texture,” IEEE Trans. ASLP, vol. 18, no. 3, pp. 602–612, 2010. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5337999
[66] R. J. Weiss and J. P. Bello, “Identifying Repeated Patterns in Music Using Sparse Convolutive Non-Negative Matrix Factorization.” in Proc. ISMIR, Utrecht, Netherlands, 2010.
[67] R. Likert, “A Technique for the Measurement of Attitudes,” Archives of Psychology, vol. 22, no. 140, pp. 1–55, 1932. [Online]. Available: http://psycnet.apa.org/psycinfo/1933-01885-001
[68] R. B. Zajonc, “Attitudinal Effects of Mere Exposure,” Journal of Personality and Social Psychology, vol. 9, no. 2, Part 2, pp. 1–27, 1968. [Online]. Available: http://www.sciencedirect.com/science/article/B6X01-4NPKJ6B-1/2/3ceefd618facdf822d5a8478ea0b0e78
[69] D. Turnbull and G. Lanckriet, “A Supervised Approach for Detecting Boundaries in Music Using Difference Features and Boosting,” in Proc. ISMIR, Vienna, Austria, 2007, pp. 42–49.
[70] M.-Y. Su, Y.-H. Yang, Y.-C. Lin, and H. H. Chen, “An Integrated Approach to Music Boundary Detection,” in Proc. ISMIR, Kobe, Japan, 2009, pp. 705–710.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/4878
dc.description.abstract: We refer to the creation of new music by concatenating existing audio music as concatenative audio music re-composition. This dissertation develops a series of techniques for such re-composition. The re-composed music can be applied to personal films or slideshows, or to non-stop dance medleys. Based on content-analysis techniques, music theory, and psychoacoustics, we propose several schemes for arranging and selecting materials. First, the connecting point between two pieces of music can be determined from similarity, phrase endings, or bar information. Next, to keep the beat flowing smoothly across a junction, we propose a psychoacoustics-based tempo-adjustment method; for materials whose tempo or volume differ too much, we correspondingly propose a dual tempo-adjustment method that considers double-beat relations and a volume-normalization method. For material selection we propose two schemes. The straightforward scheme first removes extreme clips via pairwise comparison and then ranks the rest by similarity at the connecting points. The graph-based scheme first processes the materials into musical phrases and, through content-analysis techniques, builds what we call a musical dice graph. With this graph we can offer a personalized medley-generation service that produces pleasing medleys according to user-specified conditions, such as the structure or must-use clips. In addition, we developed a graphical interface that lets users choose songs, set parameters, and modify connecting points. Experiments verify the effectiveness of each step, present comparisons among the methods, and help users choose appropriate parameters. [zh_TW]
dc.description.abstract: In this dissertation, systematic techniques are developed to help users make new music by concatenating existing audio materials, i.e., concatenative audio music re-composition. The re-composed music can be used as background music for personal films and slideshows, or as non-stop dance suites. Based on content-analysis techniques, music theory, and psychoacoustics, various composition and selection schemes have been studied in detail. Appropriate connecting positions are located on the basis of similarity values, phrase boundaries, or bar information. Psychoacoustics-based tempo-adjustment methods then smooth the tempo of the concatenated music pieces; for clips with markedly different tempi or volumes, dual tempo adjustment and volume normalization schemes are proposed and investigated, respectively. Two schemes are proposed for selecting materials from music collections. The straightforward scheme filters out unfitting clips by pairwise comparison and orders the remaining clips by their similarity values at the found connecting points. The graph-assisted scheme first constructs a musical dice graph from pre-processed clips based on music-signal analyses; with this graph, a personalized medley-creation service generates pleasing medleys that conform to user-specified conditions, such as the medley structure or must-use clips. A GUI also lets users choose music clips, specify parameters, and adjust concatenation boundaries. Experimental results show the effectiveness of the individual components, compare the methods, and provide guidelines for choosing parameters. [en]
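To make the abstract's similarity-based connection locating concrete, below is a minimal Python sketch, not the dissertation's implementation: it assumes beat-synchronous chroma features (cf. Chapters 3-4 in the table of contents) computed with the librosa library, and the function names beat_chroma and best_transition are hypothetical.

import numpy as np
import librosa

def beat_chroma(path):
    """Load a clip and compute beat-synchronous, L2-normalized chroma."""
    y, sr = librosa.load(path, sr=22050)
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    # Average chroma frames within each beat interval: one column per beat.
    sync = librosa.util.sync(chroma, beats, aggregate=np.mean)
    return librosa.util.normalize(sync, norm=2, axis=0), tempo

def best_transition(chroma_a, chroma_b, span=8):
    """Slide the last `span` beats of clip A over clip B and return the
    beat offset in B whose chroma is most similar to A's ending."""
    tail = chroma_a[:, -span:]
    best_sim, best_j = -np.inf, 0
    for j in range(chroma_b.shape[1] - span + 1):
        head = chroma_b[:, j:j + span]
        # Columns are unit vectors, so this is the mean cosine similarity.
        sim = float(np.sum(tail * head)) / span
        if sim > best_sim:
            best_sim, best_j = sim, j
    return best_j, best_sim

In the full system the two clips would then be tempo-adjusted, volume-normalized, and crossfaded around the chosen beat offset; those steps are omitted here.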
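The graph-assisted selection scheme can likewise be illustrated in miniature: clips become nodes of a musical dice graph, directed edges carry a concatenation cost, and a medley is a low-cost walk. The names build_dice_graph, smoothest_medley, and transition_score are illustrative assumptions; the dissertation's scheme additionally handles medley structure and user preferences, which this sketch omits.

import heapq
import itertools
from collections import defaultdict

def build_dice_graph(clips, transition_score, threshold=0.6):
    """Keep a directed edge a -> b only when the best transition from a
    to b scores above `threshold`; edge cost is the dissimilarity."""
    graph = defaultdict(list)
    for a in clips:
        for b in clips:
            if a != b:
                s = transition_score(a, b)   # e.g. score from best_transition
                if s >= threshold:
                    graph[a].append((1.0 - s, b))
    return graph

def smoothest_medley(graph, start, length):
    """Uniform-cost search for the cheapest walk of `length` clips that
    begins at a user-chosen, must-use clip and never repeats a clip."""
    tie = itertools.count()                  # tie-breaker for the heap
    heap = [(0.0, next(tie), [start])]
    while heap:
        cost, _, path = heapq.heappop(heap)
        if len(path) == length:
            return path, cost                # first goal popped is cheapest
        for edge_cost, nxt in graph[path[-1]]:
            if nxt not in path:
                heapq.heappush(heap, (cost + edge_cost, next(tie), path + [nxt]))
    return None, float("inf")

Because edge costs are non-negative, the first complete path popped from the heap is optimal, which is why the search can stop at the first medley of the requested length.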
dc.description.provenance: Made available in DSpace on 2021-05-14T17:49:26Z (GMT). No. of bitstreams: 1. ntu-104-D98944002-1.pdf: 7146074 bytes, checksum: 2c8819c2c6776ccfa3073a0834cad560 (MD5). Previous issue date: 2015 [en]
dc.description.tableofcontents:
Certification of the Oral Examination Committee iii
Acknowledgements v
Curriculum Vitae vii
Abstract (in Chinese) ix
Abstract xi
1 Introduction 1
1.1 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Media Re-composition . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Types of Music Re-composition . . . . . . . . . . . . . . . . . . 3
1.1.3 Concatenative Audio Music Re-composition . . . . . . . . . . . 5
1.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Thorough Investigation of Material Concatenation Methods . . . 7
1.3.2 Personalized Material Selection Scheme . . . . . . . . . . . . . . 9
1.4 Thesis Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Review of the Literature 11
2.1 Music Re-composition in Symbolic Domain . . . . . . . . . . . . . . . . 11
2.2 Self Re-composition – Audio Retargeting . . . . . . . . . . . . . . . 12
2.3 Short Material Re-composition – Concatenative Synthesis . . . . . . . . 12
2.4 Overlaid Material Re-composition – Mashup Creation . . . . . . . . . . . 13
2.5 Material Selection – Playlist Generation . . . . . . . . . . . . . . . . . . 14
2.6 Material Concatenation – Automatic DJ tools . . . . . . . . . . . . . . . 14
3 Domain Knowledge and Audio Music Features 17
3.1 Temporal Related Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Pitch Related Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Dynamics Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Timbre Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4 Concatenation Methods 25
4.1 Transition Segments Locating Process . . . . . . . . . . . . . . . . . . . 25
4.1.1 At the Most Similar Position . . . . . . . . . . . . . . . . . . . . 25
4.1.2 At the Phrase Boundary . . . . . . . . . . . . . . . . . . . . . . 27
4.1.3 With Bar Alignment . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Tempo Adjustment Process . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.1 Transition Duration Determination . . . . . . . . . . . . . . . . . 31
4.2.2 Dual Tempo Adjustment . . . . . . . . . . . . . . . . . . . . . . 34
4.3 Synthesis Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3.1 Volume Normalization . . . . . . . . . . . . . . . . . . . . . . . 35
4.3.2 Crossfading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5 Material Selection 37
5.1 Straightforward Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.1.1 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.1.2 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2 Graph-assisted and Personalized Scheme . . . . . . . . . . . . . . . . . . 42
5.2.1 Musical Dice Graph Construction . . . . . . . . . . . . . . . . . 43
5.2.2 Medley Generation . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3 User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6 Experiments 51
6.1 Evaluations on Concatenation Methods . . . . . . . . . . . . . . . . . . 52
6.1.1 Overlap Duration of Similarity-based Transition Segments . . . . 52
6.1.2 Similarity Measurements in Similarity-based Transition Segments 53
6.1.3 Effectiveness of Phrase Detection . . . . . . . . . . . . . . . . . 54
6.1.4 Comparison Between Similarity-based and Phrase-based Transition Segments Locating Methods . . . . . . . . . . . . . . . . . 58
6.1.5 The Just Noticeable Difference of Tempo . . . . . . . . . . . . . 59
6.1.6 Effectiveness of Bar Alignment and Dual Tempo Adjustment . . . 60
6.2 Evaluations on Selection Schemes . . . . . . . . . . . . . . . . . . . . . 63
6.2.1 Effectiveness of Clustering Criteria . . . . . . . . . . . . . . . . 63
6.2.2 Effectiveness of Path Finding . . . . . . . . . . . . . . . . . . . 65
6.3 Overall Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.4.1 The Influence of Accompanying Visual Content . . . . . . . . . . 68
6.4.2 The Influence of User Familiarity with the Songs . . . . . . . . . 69
6.4.3 Other Criteria that Might Contribute to Better Clip Selection . . . 70
6.4.4 Comparison with Human Created Medley . . . . . . . . . . . . . 70
7 Conclusions and Future Work 73
7.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Bibliography 75
dc.language.iso: en
dc.subject: 音樂銜接技術 (music concatenation techniques) [zh_TW]
dc.subject: 音樂編輯 (music editing) [zh_TW]
dc.subject: 什錦歌 (musical medley) [zh_TW]
dc.subject: music editing [en]
dc.subject: musical medley [en]
dc.subject: concatenating music [en]
dc.title: 基於銜接技術之音樂改作 [zh_TW]
dc.title: Concatenative Audio Music Re-composition [en]
dc.type: Thesis
dc.date.schoolyear: 103-1
dc.description.degree: Doctoral (博士)
dc.contributor.coadvisor: 張智星 (Jyh-Shing Roger Jang)
dc.contributor.oralexamcommittee: 王新民 (Hsin-Min Wang), 陳恆佑 (Herng-Yow Chen), 鄭文皇 (Wen-Huang Cheng), 楊奕軒 (Yi-Hsuan Yang)
dc.subject.keyword: 音樂編輯 (music editing), 音樂銜接技術 (music concatenation techniques), 什錦歌 (musical medley) [zh_TW]
dc.subject.keyword: music editing, concatenating music, musical medley [en]
dc.relation.page: 82
dc.rights.note: Release authorized (worldwide open access)
dc.date.accepted: 2015-01-16
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) [zh_TW]
Appears in collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in this item:
File: ntu-104-1.pdf (6.98 MB, Adobe PDF)


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
