Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81254

Full metadata record

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 雷欽隆(Chin-Laung Lei) | |
| dc.contributor.author | Yan-Ci Su | en |
| dc.contributor.author | 蘇彥齊 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:38:54Z | - |
| dc.date.available | 2021-08-04 | |
| dc.date.available | 2022-11-24T03:38:54Z | - |
| dc.date.copyright | 2021-08-04 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-07-29 | |
| dc.identifier.citation | [1] D. Bagchi, P. Plantinga, A. Stiff, and E. Fosler-Lussier. Spectral feature mapping with mimic loss for robust speech recognition. CoRR, abs/1803.09816, 2018. [2] M. Berouti, R. Schwartz, and J. Makhoul. Enhancement of speech corrupted by acoustic noise. In ICASSP '79. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 4, pages 208–211, 1979. [3] H.-S. Choi, J.-H. Kim, J. Huh, A. Kim, J.-W. Ha, and K. Lee. Phase-aware speech enhancement with deep complex U-Net, 2019. [4] H. Erdogan, J. R. Hershey, S. Watanabe, and J. Le Roux. Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 708–712, 2015. [5] S. Fu, C. Liao, Y. Tsao, and S. Lin. MetricGAN: Generative adversarial networks based black-box metric scores optimization for speech enhancement. CoRR, abs/1905.04874, 2019. [6] F. G. Germain, Q. Chen, and V. Koltun. Speech denoising with deep feature losses, 2018. [7] Y. Hu and P. C. Loizou. Evaluation of objective quality measures for speech enhancement. IEEE Transactions on Audio, Speech, and Language Processing, 16(1):229–238, 2008. [8] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5967–5976, 2017. [9] J. Lim and A. Oppenheim. All-pole modeling of degraded speech. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(3):197–210, 1978. [10] J.-H. Kim, J. Yoo, S. Chun, A. Kim, and J.-W. Ha. Multi-domain processing via hybrid denoising networks for speech enhancement, 2018. [11] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014. Published as a conference paper at the 3rd International Conference on Learning Representations (ICLR), San Diego, 2015. [12] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013. [13] C. Macartney and T. Weyde. Improved speech enhancement with the Wave-U-Net. CoRR, abs/1811.11307, 2018. [14] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, and Z. Wang. Multi-class generative adversarial networks with the L2 loss function. CoRR, abs/1611.04076, 2016. [15] M. Mirza and S. Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014. [16] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. CoRR, abs/1802.05957, 2018. [17] S. Pascual, A. Bonafonte, and J. Serrà. SEGAN: Speech enhancement generative adversarial network. CoRR, abs/1703.09452, 2017. [18] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. CoRR, abs/1604.07379, 2016. [19] M. S. P.V., N. Adiga, V. Tsiaras, and Y. Stylianou. A non-causal FFTNet architecture for speech enhancement. In Interspeech 2019, September 2019. [20] D. Rethage, J. Pons, and X. Serra. A WaveNet for speech denoising. CoRR, abs/1706.07162, 2017. [21] A. W. Rix, J. G. Beerends, M. P. Hollier, and A. P. Hekstra. Perceptual evaluation of speech quality (PESQ) - a new method for speech quality assessment of telephone networks and codecs. In 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 2, pages 749–752, 2001. [22] N. Shah, H. A. Patil, and M. H. Soni. Time-frequency mask-based speech enhancement using convolutional generative adversarial network. In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pages 1246–1251, 2018. [23] K. Tan and D. Wang. Complex spectral mapping with a convolutional recurrent network for monaural speech enhancement. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6865–6869, 2019. [24] C. Tang, C. Luo, Z. Zhao, W. Xie, and W. Zeng. Joint time-frequency and time domain learning for speech enhancement. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI), pages 3788–3794, 2020. [25] J. Thiemann, N. Ito, and E. Vincent. The diverse environments multi-channel acoustic noise database (DEMAND): A database of multichannel environmental noise recordings. The Journal of the Acoustical Society of America, 133:3591, 2013. [26] C. Trabelsi, O. Bilaniuk, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, and C. J. Pal. Deep complex networks. CoRR, abs/1705.09792, 2017. [27] C. Valentini-Botinhao. Noisy speech database for training speech enhancement algorithms and TTS models, 2017. [28] C. Veaux, J. Yamagishi, and S. King. The Voice Bank corpus: Design, collection and data analysis of a large regional accent speech database. In 2013 International Conference Oriental COCOSDA held jointly with 2013 Conference on Asian Spoken Language Research and Evaluation (O-COCOSDA/CASLRE), pages 1–4, 2013. [29] S. Venkataramani and P. Smaragdis. End-to-end source separation with adaptive front-ends. CoRR, abs/1705.02514, 2017. [30] N. L. Westhausen and B. T. Meyer. Dual-signal transformation LSTM network for real-time noise suppression, 2020. [31] D. S. Williamson, Y. Wang, and D. Wang. Complex ratio masking for monaural speech separation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(3):483–492, 2016. [32] H. Zhang, X. Zhang, and G. Gao. Training supervised speech separation system to improve STOI and PESQ directly. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5374–5378, 2018. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81254 | - |
| dc.description.abstract | Early speech enhancement models had several shortcomings: they performed poorly on heavily noisy or non-stationary signals, and they could not accurately remove high-frequency noise, which motivated deep-learning-based approaches. Most deep learning models take as input a spectrogram computed from the noisy signal, while a few operate directly on the raw waveform. Spectrograms help a model learn the information carried in the signal more easily, but conventional deep learning models cannot handle the complex values produced by the transform, so many methods process only the real part or the magnitude of the signal. Complex-valued neural networks later resolved this problem, so our method also adopts a complex neural network architecture, combined with a U-Net structure. In addition, the distance between the model's output and the clean signal does not accurately reflect perceptual quality, so we take signal quality scores as the training objective and adopt the MetricGAN technique: by training a separate discriminator model, our model learns to produce higher-quality speech. Our method has several advantages. First, the complex-valued architecture lets the model see the complete spectrogram information. Second, we use both spectrogram and waveform information, giving the model richer signal content. Third, by treating quality scores as the training target, the speech our model produces attains higher quality scores. Our experiments use the VoiceBank and DEMAND datasets as training and test sets; the training set contains 28 speakers and a total of 40 different noise conditions. Evaluated on a range of objective metrics, our model outperforms other methods on these scores. (Minimal sketches of the complex convolution and the MetricGAN objective follow the metadata table below.) | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:38:54Z (GMT). No. of bitstreams: 1 U0001-2707202117040300.pdf: 3654613 bytes, checksum: c4c238848a026ba0362d9cb1398f7f08 (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i; Chinese Abstract iii; Abstract v; Contents vii; List of Figures ix; List of Tables xi; Denotation xiii; Chapter 1 Introduction 1; Chapter 2 Related Work 5; Chapter 3 Background 9; 3.1 Conditional GAN and LSGAN 9; 3.2 Metric GAN 10; 3.3 Complex Neural Network 11; 3.3.1 Complex Convolution 12; 3.3.2 Complex Activation 13; 3.4 Phase-aware DCUNet 13; 3.4.1 Deep Complex U-Net 13; 3.4.2 Complex-valued masking 14; 3.4.3 Weighted-SDR loss 14; Chapter 4 Methodology 17; 4.1 ComplexMetricGAN 17; 4.2 Network Architecture 19; 4.2.1 Generator 20; 4.2.2 Discriminator 21; 4.3 Training process 21; Chapter 5 Evaluation 23; 5.1 Dataset 23; 5.2 Objective score 24; 5.3 Result 25; 5.4 Ablation Experiment 26; 5.5 Examples 27; Chapter 6 Conclusion 29; References 31 | |
| dc.language.iso | en | |
| dc.subject | Time and Time-Frequency Domain | zh_TW |
| dc.subject | Speech enhancement | zh_TW |
| dc.subject | Deep learning | zh_TW |
| dc.subject | Complex neural network | zh_TW |
| dc.subject | Generative Adversarial Network | zh_TW |
| dc.subject | Deep learning | en |
| dc.subject | Speech enhancement | en |
| dc.subject | Time and Time-Frequency Domain | en |
| dc.subject | Generative Adversarial Network | en |
| dc.subject | Complex neural network | en |
| dc.title | Cross-domain Speech Enhancement Model based on Complex Neural Network and Generative Adversarial Network | zh_TW |
| dc.title | Cross-domain Speech Enhancement Model based on Complex Neural Network and Generative Adversarial Network | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 顏嗣鈞(Hsu-Chun Yen),郭斯彥(Sy-Yen Kuo) | |
| dc.subject.keyword | Speech enhancement, Deep learning, Complex neural network, Generative Adversarial Network, Time and Time-Frequency Domain | zh_TW |
| dc.subject.keyword | Speech enhancement, Deep learning, Complex neural network, Generative Adversarial Network, Time and Time-Frequency Domain | en |
| dc.relation.page | 35 | |
| dc.identifier.doi | 10.6342/NTU202101811 | |
| dc.rights.note | Authorized for release (open access restricted to campus) | |
| dc.date.accepted | 2021-07-30 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
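
To make the complex-network component described in the abstract concrete, here is a minimal PyTorch sketch of a complex-valued 2-D convolution in the spirit of deep complex networks [26] and Deep Complex U-Net [3]. It is an illustrative sketch only: the class name `ComplexConv2d`, the tensor shapes, and the two-real-convolutions decomposition are assumptions for exposition, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real convolutions (illustrative sketch).

    A complex kernel W = A + iB applied to a complex input h = x + iy gives
    W * h = (A*x - B*y) + i(B*x + A*y), so one complex layer can be realized
    with two ordinary real-valued Conv2d layers.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)  # A
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)  # B

    def forward(self, x_re, x_im):
        out_re = self.conv_re(x_re) - self.conv_im(x_im)  # real part: A*x - B*y
        out_im = self.conv_im(x_re) + self.conv_re(x_im)  # imag part: B*x + A*y
        return out_re, out_im

# Usage on a dummy complex spectrogram (batch, channels, freq bins, frames):
x_re = torch.randn(1, 1, 257, 100)
x_im = torch.randn(1, 1, 257, 100)
y_re, y_im = ComplexConv2d(1, 8)(x_re, x_im)
```

Stacking such layers in an encoder-decoder with skip connections yields the U-Net-style complex architecture the abstract refers to, which sees the full complex spectrogram rather than only its real part or magnitude.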
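Likewise, here is a minimal sketch of the MetricGAN-style objective [5] the abstract describes: the discriminator is trained to regress a normalized quality score (e.g. PESQ rescaled to [0, 1]) rather than to classify real vs. fake, and the generator is trained to push that learned surrogate score toward 1. The names `discriminator`, `enhanced`, `clean`, and `quality_score` are hypothetical placeholders, not the thesis's code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(discriminator, enhanced, clean, quality_score):
    # quality_score: precomputed normalized metric (e.g. PESQ mapped to [0, 1])
    # for the (enhanced, clean) pair, shaped like the discriminator's output.
    pred_enh = discriminator(enhanced.detach(), clean)  # block generator grads
    pred_clean = discriminator(clean, clean)            # clean speech targets 1.0
    return (F.mse_loss(pred_clean, torch.ones_like(pred_clean))
            + F.mse_loss(pred_enh, quality_score))

def generator_loss(discriminator, enhanced, clean):
    # The generator is rewarded when the learned metric surrogate rates its
    # output as highly as clean speech (score 1.0), instead of minimizing a
    # plain distance to the clean signal, which correlates poorly with
    # perceived quality.
    pred = discriminator(enhanced, clean)
    return F.mse_loss(pred, torch.ones_like(pred))
```

Because the learned surrogate is differentiable, the quality score itself becomes a training signal, which is the key point the abstract makes about using quality scores as the objective.
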
| Appears in Collections: | Department of Electrical Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-2707202117040300.pdf (access restricted to NTU campus IPs; use the NTU VPN service from off campus) | 3.57 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
