Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/50300

Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 顏嗣鈞 (Hsu-Chun Yen)
dc.contributor.author: Shih-Hung Liu [en]
dc.contributor.author: 劉士弘 [zh_TW]
dc.date.accessioned: 2021-06-15T12:35:33Z
dc.date.available: 2016-08-24
dc.date.copyright: 2016-08-24
dc.date.issued: 2016
dc.date.submitted: 2016-07-30
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/50300
dc.description.abstract: With the rapid growth of the Internet and the advent of the big-data era, automatic summarization has become a popular research topic in recent years. Given a predefined summarization ratio, extractive summarization selects from a text or spoken document a set of important sentences that represent its gist or main themes. In prior work, a language modeling (LM) framework combined with Kullback-Leibler (KL) divergence for selecting important sentences has been shown to perform well on both text and speech summarization tasks. Building on this framework, this dissertation proposes several improvements to the LM approach. First, we investigate how sentence clarity information affects speech summarization, and use clarity to reinterpret how important, representative sentences should be selected. Second, we study adaptation methods for sentence models: leveraging the notion of relevance, we re-estimate each sentence's language model from its own relevance information so that the model more precisely captures the sentence's semantic content. Third, we propose overlapped clustering to additionally capture pairwise sentence-relatedness information; the overlapped clusters can also serve as sentence priors, enabling more effective selection of summary sentences. Finally, conventional sentence models consider each word in isolation and ignore long-distance context, so we propose incorporating proximity and position information into sentence modeling to further improve summarization performance. The speech summarization experiments use the Mandarin broadcast news corpus (MATBN) with a Mandarin large-vocabulary continuous speech recognition (LVCSR) system; the results show that our proposed methods yield clear performance improvements over several existing unsupervised summarization methods. [translated from zh_TW]
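The KL-divergence selection criterion that the abstract builds on can be written in its standard form (our notation, not quoted from the dissertation): each candidate sentence S of a document D is scored by

```latex
\mathrm{KL}(D \,\|\, S) \;=\; \sum_{w \in V} P(w \mid D)\,\log \frac{P(w \mid D)}{P(w \mid S)},
```

where P(w|D) and P(w|S) are unigram language models of the document and the (smoothed) sentence over vocabulary V; sentences with smaller divergence from the document model are selected into the summary.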
dc.description.abstract: Extractive speech summarization, which has garnered much research over the years, aims to select an indicative set of sentences from a spoken document so as to succinctly cover its most important aspects. In this dissertation, we cast extractive speech summarization as an ad-hoc information retrieval (IR) problem and investigate various language modeling (LM) methods for important sentence selection. The main contributions of this dissertation are four-fold. First, we propose a novel clarity measure for important sentence selection, which quantifies the thematic specificity of each individual sentence and serves as a crucial indicator orthogonal to the relevance measure provided by LM-based methods. Second, we explore a novel sentence modeling paradigm built on the notion of relevance, in which the relationship between a candidate summary sentence and the spoken document to be summarized is unveiled through different granularities of context for relevance modeling; both lexical and topical cues inherent in the spoken document are exploited for sentence modeling. Third, we explore a novel approach that generates overlapped clusters to extract sentence-relatedness information from the document to be summarized, which can both enhance the estimation of various sentence models and capture sentence-level structural relationships for better summarization performance. Fourth, we explore several effective formulations of proximity cues and propose a position-aware language modeling framework that uses various granularities of position-specific information for sentence modeling.
Extensive experiments are conducted on a Mandarin broadcast news summarization dataset with a Mandarin large-vocabulary continuous speech recognition (LVCSR) system, and the empirical results demonstrate the performance merits of our methods compared to several existing well-developed and state-of-the-art methods. [en]
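To make the LM-based selection idea concrete, here is a minimal sketch (not the dissertation's code) of ranking sentences by the KL divergence between a document unigram model and Jelinek-Mercer-smoothed sentence models; all function names and the smoothing weight are illustrative choices.

```python
# Illustrative sketch of KL-divergence-based extractive summarization.
# Assumptions: unigram models, whitespace tokenization, Jelinek-Mercer
# smoothing toward the document model (alpha is a free parameter).
from collections import Counter
import math

def unigram_lm(tokens):
    """Maximum-likelihood unigram distribution over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl_score(doc_lm, sent_tokens, alpha=0.5):
    """Negative KL(document || smoothed sentence model); higher is better."""
    sent_lm = unigram_lm(sent_tokens)
    divergence = 0.0
    for w, p_doc in doc_lm.items():
        # Back off to the document model so the sentence model never
        # assigns zero probability to a word that occurs in the document.
        p_sent = alpha * sent_lm.get(w, 0.0) + (1.0 - alpha) * p_doc
        divergence += p_doc * math.log(p_doc / p_sent)
    return -divergence

def summarize(sentences, ratio=0.3):
    """Rank sentences by score and keep the top fraction as the summary."""
    doc_tokens = [w for s in sentences for w in s.split()]
    doc_lm = unigram_lm(doc_tokens)
    ranked = sorted(sentences,
                    key=lambda s: kl_score(doc_lm, s.split()),
                    reverse=True)
    k = max(1, round(ratio * len(sentences)))
    return ranked[:k]
```

The sentence whose word distribution best matches the whole document incurs the smallest divergence and is selected first, which is the core ranking behavior the dissertation's more refined models (relevance, clarity, proximity, position) build upon.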
dc.description.provenance: Made available in DSpace on 2021-06-15T12:35:33Z (GMT). No. of bitstreams: 1. ntu-105-D98921032-1.pdf: 1652867 bytes, checksum: 498139f6b48b972e1e8fa37deb6d12f5 (MD5). Previous issue date: 2016. [en]
dc.description.tableofcontents:
Thesis Committee Certification
Acknowledgements
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Background
1.2 Research Spectrum
1.3 Extractive Text Summarization
1.4 From Text to Speech
1.5 Contributions
1.6 Outline of This Dissertation
Chapter 2 Literature Survey
2.1 Unsupervised Approaches
2.1.1 The Vector-Space Methods
2.1.2 The Graph-based Methods
2.1.3 The Combinatorial Optimization Methods
2.2 Supervised Approaches
Chapter 3 The Fundamental Language Modeling Approach
3.1 Document Likelihood Measure
3.2 KL-Divergence
Chapter 4 Leveraging Relevance Cues to Enhance LM
4.1 Pseudo-Relevance Feedback Technique
4.2 Relevance Model
4.2.1 Incorporating Topic Information into RM
4.2.2 Word Relevance Modeling
4.3 Simple Mixture Model
4.3.1 Regularized Simple Mixture Model
Chapter 5 Sentence Clarity Measure and Relatedness
5.1 Sentence Clarity Measure
5.2 Sentence Relatedness Information
Chapter 6 Exploiting Proximity Information
6.1 Window-based Method
6.2 HAL-based Method
6.3 Kernel-based Method
Chapter 7 Position-Aware Language Modeling
7.1 Passage-based Language Model (PaLM)
7.2 Overlapping Passage-based Language Model (OPaLM)
7.3 Position-Specific Language Model (PoLM)
7.4 Position-Specific Relevance Model (PRM)
Chapter 8 Experiment Setup
8.1 Speech and Language Corpora
8.2 Summary Annotation
8.3 Evaluation Metric
8.4 Features for Supervised Summarizers
8.5 Subword-level Index Units
Chapter 9 Experimental Results
9.1 Baseline Experiments
9.2 Experiments on Exploiting Relevance Cues
9.2.1 Experiments on the Simple Mixture Model and Its Extension
9.2.2 Experiments on the Relevance Model and Its Extensions
9.3 Experiments on Using the Clarity Measure
9.3.1 Results of the Clarity Measure
9.3.2 Integration of Variant Relevance Models
9.3.3 Further Analysis of Clarity
9.3.4 A Real Example
9.4 Experiments on Proximity-based LM
9.5 Experiments on Sentence Relatedness
9.6 Experiments on Position-Aware LM
9.6.1 Experiments on the Passage-based LM Methods
9.6.2 Experiments on the Positional LM Methods
9.6.3 Further Extension of the Position-Specific RM Method
9.7 Experiments on Using a Supervised Summarization Method
9.8 Fusion of Different Levels of Index Features
Chapter 10 Conclusion and Future Work
Bibliography
dc.language.iso: en
dc.subject: 庫爾貝克-萊伯勒離散度 (Kullback-Leibler divergence) [zh_TW]
dc.subject: 位置語言模型 (position-aware language model) [zh_TW]
dc.subject: 鄰近語言模型 (proximity-based language model) [zh_TW]
dc.subject: 重疊分群 (overlapped clustering) [zh_TW]
dc.subject: 關聯性 (relevance) [zh_TW]
dc.subject: 語句明確度 (sentence clarity) [zh_TW]
dc.subject: 語言模型 (language modeling) [zh_TW]
dc.subject: 節錄式自動摘要 (extractive summarization) [zh_TW]
dc.subject: overlapped clustering [en]
dc.subject: proximity-based LM [en]
dc.subject: position-aware LM [en]
dc.subject: extractive speech summarization [en]
dc.subject: clarity measure [en]
dc.subject: relevance language modeling [en]
dc.title: 改善語言模型於中文廣播新聞節錄式摘要 [zh_TW]
dc.title: Improved Language Modeling Approaches for Mandarin Broadcast News Extractive Summarization [en]
dc.type: Thesis
dc.date.schoolyear: 104-2
dc.description.degree: 博士 (doctoral)
dc.contributor.coadvisor: 許聞廉 (Wen-Lian Hsu)
dc.contributor.oralexamcommittee: 王勝德, 雷欽隆, 郭斯彥, 曾元顯, 王新民
dc.subject.keyword: 節錄式自動摘要, 語言模型, 庫爾貝克-萊伯勒離散度, 語句明確度, 關聯性, 重疊分群, 鄰近語言模型, 位置語言模型 [zh_TW]
dc.subject.keyword: extractive speech summarization, clarity measure, relevance language modeling, overlapped clustering, proximity-based LM, position-aware LM [en]
dc.relation.page: 96
dc.identifier.doi: 10.6342/NTU201601686
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2016-08-01
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
Appears in Collections: Department of Electrical Engineering

Files in This Item:
File | Size | Format
ntu-105-1.pdf (restricted, not publicly accessible) | 1.61 MB | Adobe PDF