NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Networking and Multimedia
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83188
Full metadata record (DC field: value [language])
dc.contributor.advisor: 林守德 [zh_TW]
dc.contributor.advisor: Shou-de Lin [en]
dc.contributor.author: 賴政毅 [zh_TW]
dc.contributor.author: Cheng-Yi Lai [en]
dc.date.accessioned: 2023-01-10T17:13:17Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-01-07
dc.date.issued: 2022
dc.date.submitted: 2022-10-30
dc.identifier.citation:
[1] F. Auger, M. Hilairet, J. M. Guerrero, E. Monmasson, T. Orlowska-Kowalska, and S. Katsura. Industrial applications of the Kalman filter: A review. IEEE Transactions on Industrial Electronics, 60(12):5458–5471, 2013.
[2] S. Bai, J. Z. Kolter, and V. Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
[3] W. Bao, J. Yue, and Y. Rao. A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS ONE, 12(7):e0180944, 2017.
[4] P. Barsocchi, A. Crivello, D. La Rosa, and F. Palumbo. A multisource and multivariate dataset for indoor localization methods based on WLAN and geo-magnetic field fingerprinting. In 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pages 1–8. IEEE, 2016.
[5] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 19, 2006.
[6] C. Chandra, M. S. Moore, and S. K. Mitra. An efficient method for the removal of impulse noise from speech and audio signals. In 1998 IEEE International Symposium on Circuits and Systems (ISCAS), volume 4, pages 206–208. IEEE, 1998.
[7] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[8] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1251–1258, 2017.
[9] K. Ekstrand. AKWF FREE (waveform samples), n.d.
[10] F. G. Germain, Q. Chen, and V. Koltun. Speech denoising with deep feature losses. arXiv preprint arXiv:1806.10522, 2018.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[12] G. Hebrail and A. Berard. UCI Machine Learning Repository: Individual household electric power consumption data set, 2012.
[13] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
[16] S. Mehta, M. Rastegari, A. Caspi, L. Shapiro, and H. Hajishirzi. ESPNet: Efficient spatial pyramid of dilated convolutions for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 552–568, 2018.
[17] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pages 1045–1048. Makuhari, 2010.
[18] J. Oh, D. Kim, and S.-Y. Yun. Spectrogram-channels U-Net: A source separation model viewing each channel as the spectrogram of each source. arXiv preprint arXiv:1810.11520, 2018.
[19] S. K. Prasadh, S. S. Natrajan, and S. Kalaivani. Efficiency analysis of noise reduction algorithms: Analysis of the best algorithm of noise reduction from a set of algorithms. In 2017 International Conference on Inventive Computing and Informatics (ICICI), pages 1137–1140. IEEE, 2017.
[20] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
[21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[22] R. Wan, S. Mei, J. Wang, M. Liu, and F. Yang. Multivariate temporal convolutional network: A deep neural networks approach for multivariate time series forecasting. Electronics, 8(8):876, 2019.
[23] S. Zhang, B. Guo, A. Dong, J. He, Z. Xu, and S. X. Chen. Cautionary tales on air quality improvement in Beijing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2205):20170457, 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83188
dc.description.abstract: 深度學習在去除音頻資料雜訊中的應用引起了很多關注,但對多元時間序列資料的研究相對較少。在本文中,我們致力於透過行動裝置上的深度類神經網路從嘈雜的多元時間序列數據中提取信號的直流分量。為了解決這個問題,我們測試了不同方法對模型性能的影響。從資料增強和雜訊模擬等資料處理方法,到模型結構和特徵工程等模型構建方法,我們比較其性能並分析結果。除了這些實驗之外,我們還提出了一種稱為MPSE的新型損失函數,以幫助模型專注於小振幅信號,以保證模型性能。將我們的最佳設置模型和提出的損失函數與基準進行比較,結果表明它們具有出色的性能和強健性。我們相信這些實驗和分析可以幫助未來需要使用類似資料的研究。 [zh_TW]
dc.description.abstract: Applications of deep learning to denoising audio data have attracted much attention, but relatively little research has addressed multivariate time series data. In this thesis, we focus on extracting the DC component of a signal from noisy multivariate time series data via deep neural networks on mobile devices. To tackle the problem, we examine how different methods affect model performance: from data-processing methods such as data augmentation and noise simulation, to model-construction methods such as model structure and feature engineering, we compare their performance and analyze the results. Beyond these experiments, we also propose a novel loss function called MPSE, which helps the model focus on small-amplitude signals to guarantee its performance. Comparing our best-setting model and the proposed loss function against their respective baselines shows excellent performance and robustness. We believe these experiments and analyses can help future studies that work with similar data. [en]
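The record does not spell out how MPSE is defined. As a rough illustration only, assuming the acronym denotes a mean-percentage-squared-error style objective (a guess, not the thesis's actual definition), the sketch below shows why a relative-error loss makes a model attend to small-amplitude targets: dividing by the target magnitude up-weights errors on small signals that plain MSE treats the same as errors on large ones.

```python
import numpy as np

def mse_loss(pred, target):
    """Ordinary mean squared error, for comparison."""
    return float(np.mean((pred - target) ** 2))

def mpse_loss(pred, target, eps=1e-8):
    """Hypothetical MPSE-style loss: mean of squared relative (percentage)
    errors. eps guards against division by zero for near-zero targets."""
    rel_err = (pred - target) / (np.abs(target) + eps)
    return float(np.mean(rel_err ** 2))

# Identical absolute error (0.1) on a large and a small target:
# MSE scores both terms equally, while the relative loss is dominated
# by the small-amplitude term.
target = np.array([10.0, 0.1])
pred = np.array([10.1, 0.2])
print(mse_loss(pred, target))
print(mpse_loss(pred, target))
```

Under this reading, the relative loss for the example above is several orders of magnitude larger than the MSE, entirely because of the small-amplitude channel, which matches the stated motivation of keeping the model accurate on small signals.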
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-01-10T17:13:16Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2023-01-10T17:13:17Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xiii
List of Tables xv
Denotation xvii
Chapter 1 Introduction 1
1.1 Research background and motivation 1
1.2 Research purposes 3
1.3 Problem definition 4
1.3.1 Data assumptions 4
1.3.2 Evaluation method 5
1.4 Challenges 6
1.5 Contribution 7
1.6 Following structure 7
Chapter 2 Related Work 9
2.1 Frequency analysis for forecasting stock data 9
2.2 Signal separation by STFT and U-Net 10
2.3 Multi-head structure for multivariate data 10
2.4 Deep feature losses for speech denoising 11
2.5 Noise reduction baselines 12
2.6 Brief conclusion 13
Chapter 3 Preliminaries 15
3.1 Data augmentation 15
3.2 Feature engineering 16
3.3 Model structure 17
3.3.1 GRU 18
3.3.2 CNN 18
3.3.3 ResNet 19
3.3.4 Extracting long-term features 20
3.4 Baseline 21
Chapter 4 Proposed loss function 23
Chapter 5 Dataset evaluation and simulation 25
5.1 Real dataset analysis 25
5.1.1 Collect ambient light dataset 25
5.1.2 Analyze ambient light dataset 26
5.2 How to simulate real dataset 27
5.2.1 Find suitable public dataset that meets queries as signal 28
5.2.2 Adjust scale, diversity and sign of public dataset 29
5.2.3 Data augmentation and padding 29
5.2.4 Generate suitable periodic noise waveform 29
5.2.5 Combine signal and noise in each channel 30
5.2.6 Simulation result 30
Chapter 6 Experiment setup 31
6.1 Using datasets 31
6.2 Formula to simulate OLED light 32
6.3 Default parameter settings 33
Chapter 7 Discussion and Experiments 35
7.1 Find the best setting 35
7.2 Different noise setting and simulation methods 38
7.3 Different signal preprocess 40
7.4 Different data augmentation settings 41
7.5 More noise patterns with same number of instances 43
7.6 Compare with baselines 44
7.7 Ablation of feature engineering data 46
7.8 Compare with loss functions 46
Chapter 8 Conclusion 49
Chapter 9 Current limitations and future work 51
9.1 Current limitations 51
9.2 Future work 52
References 53
Appendix A — Experiment result: Noise with bandstop filter 57
dc.language.iso: en
dc.subject: 深度學習 [zh_TW]
dc.subject: 雜訊抑制 [zh_TW]
dc.subject: 行動裝置 [zh_TW]
dc.subject: 多元時間序列 [zh_TW]
dc.subject: 雜訊去除 [zh_TW]
dc.subject: 直流分量 [zh_TW]
dc.subject: Mobile device [en]
dc.subject: Multivariate time series [en]
dc.subject: Deep learning [en]
dc.subject: Noise reduction [en]
dc.subject: Denoising [en]
dc.subject: DC component [en]
dc.title: 通過深度類神經網路從受雜訊汙染的多元時間序列數據中提取信號的直流分量 [zh_TW]
dc.title: Extract the DC component of the signal from noisy multivariate time series data via deep neural networks [en]
dc.title.alternative: Extract the DC component of the signal from noisy multivariate time series data via deep neural networks
dc.type: Thesis
dc.date.schoolyear: 111-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 葉彌妍;解巽評 [zh_TW]
dc.contributor.oralexamcommittee: Mi-Yen Yeh; Hsun-Ping Hsieh [en]
dc.subject.keyword: 多元時間序列, 深度學習, 雜訊抑制, 雜訊去除, 直流分量, 行動裝置 [zh_TW]
dc.subject.keyword: Multivariate time series, Deep learning, Noise reduction, Denoising, DC component, Mobile device [en]
dc.relation.page: 57
dc.identifier.doi: 10.6342/NTU202210008
dc.rights.note: 同意授權(限校園內公開) (Authorization granted; access restricted to campus)
dc.date.accepted: 2022-11-01
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)
Appears in collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in this item:
U0001-0342221027444060.pdf (4.28 MB, Adobe PDF)
Access is restricted to NTU campus IPs (off-campus users can connect via the library VPN service).