Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83188

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林守德 | zh_TW |
| dc.contributor.advisor | Shou-de Lin | en |
| dc.contributor.author | 賴政毅 | zh_TW |
| dc.contributor.author | Cheng-Yi Lai | en |
| dc.date.accessioned | 2023-01-10T17:13:17Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-01-07 | - |
| dc.date.issued | 2022 | - |
| dc.date.submitted | 2022-10-30 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83188 | - |
| dc.description.abstract | 深度學習在去除音頻資料雜訊中的應用引起了很多關注,但對多元時間序列資料的研究相對較少。在本文中,我們致力於透過行動裝置上的深度類神經網路從嘈雜的多元時間序列數據中提取信號的直流分量。為了解決這個問題,我們測試了不同方法對模型性能的影響。從資料增強和雜訊模擬等資料處理方法,到模型結構和特徵工程等模型構建方法,我們比較其性能並分析結果。除了這些實驗之外,我們還提出了一種稱為MPSE的新型損失函數,以幫助模型專注於小振幅信號,以保證模型性能。將我們的最佳設置模型和提出的損失函數與基準進行比較,結果表明它們具有出色的性能和強健性。我們相信這些實驗和分析可以幫助未來需要使用類似資料的研究。 | zh_TW |
| dc.description.abstract | Applications of deep learning to denoising audio data have attracted considerable attention, but relatively little research addresses multivariate time series data. In this thesis, we address extracting the DC component of a signal from noisy multivariate time series data via deep neural networks on mobile devices. To solve this problem, we examine how different methods affect model performance: from data processing methods such as data augmentation and noise simulation, to model construction methods such as model structure and feature engineering, we compare their performance and analyze the results. Beyond these experiments, we also propose a novel loss function, called MPSE, that helps the model focus on small-amplitude signals and thus safeguards model performance. Our best-setting model and the proposed loss function are each compared with their baselines, and the results show excellent performance and robustness. We believe these experiments and analyses can help future studies that use similar data. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-01-10T17:13:16Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-01-10T17:13:17Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xiii
List of Tables xv
Denotation xvii
Chapter 1 Introduction 1
1.1 Research background and motivation 1
1.2 Research purposes 3
1.3 Problem definition 4
1.3.1 Data assumptions 4
1.3.2 Evaluation method 5
1.4 Challenges 6
1.5 Contribution 7
1.6 Following structure 7
Chapter 2 Related Work 9
2.1 Frequency analysis for forecasting stock data 9
2.2 Signal separation by STFT and U-Net 10
2.3 Multi-head structure for multivariate data 10
2.4 Deep feature losses for speech denoising 11
2.5 Noise reduction baselines 12
2.6 Brief conclusion 13
Chapter 3 Preliminaries 15
3.1 Data augmentation 15
3.2 Feature engineering 16
3.3 Model structure 17
3.3.1 GRU 18
3.3.2 CNN 18
3.3.3 Resnet 19
3.3.4 Extracting long-term features 20
3.4 Baseline 21
Chapter 4 Proposed loss function 23
Chapter 5 Dataset evaluation and simulation 25
5.1 Real dataset analysis 25
5.1.1 Collect ambient light dataset 25
5.1.2 Analyze ambient light dataset 26
5.2 How to simulate real dataset 27
5.2.1 Find suitable public dataset that meets queries as signal 28
5.2.2 Adjust scale, diversity and sign of public dataset 29
5.2.3 Data augmentation and padding 29
5.2.4 Generate suitable periodic noise waveform 29
5.2.5 Combine signal and noise in each channel 30
5.2.6 Simulation result 30
Chapter 6 Experiment setup 31
6.1 Using datasets 31
6.2 Formula to simulate OLED light 32
6.3 Default parameter settings 33
Chapter 7 Discussion and Experiments 35
7.1 Find the best setting 35
7.2 Different noise setting and simulation methods 38
7.3 Different signal preprocess 40
7.4 Different data augmentation settings 41
7.5 More noise patterns with same number of instances 43
7.6 Compare with baselines 44
7.7 Ablation of feature engineering data 46
7.8 Compare with loss functions 46
Chapter 8 Conclusion 49
Chapter 9 Current limitations and future work 51
9.1 Current limitations 51
9.2 Future work 52
References 53
Appendix A — Experiment result: Noise with bandstop filter 57 | - |
| dc.language.iso | en | - |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 雜訊抑制 | zh_TW |
| dc.subject | 行動裝置 | zh_TW |
| dc.subject | 多元時間序列 | zh_TW |
| dc.subject | 雜訊去除 | zh_TW |
| dc.subject | 直流分量 | zh_TW |
| dc.subject | Mobile device | en |
| dc.subject | Multivariate time series | en |
| dc.subject | Deep learning | en |
| dc.subject | Noise reduction | en |
| dc.subject | Denoising | en |
| dc.subject | DC component | en |
| dc.title | 通過深度類神經網路從受雜訊汙染的多元時間序列數據中提取信號的直流分量 | zh_TW |
| dc.title | Extract the DC component of the signal from noisy multivariate time series data via deep neural networks | en |
| dc.title.alternative | Extract the DC component of the signal from noisy multivariate time series data via deep neural networks | - |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-1 | - |
| dc.description.degree | Master | - |
| dc.contributor.oralexamcommittee | 葉彌妍;解巽評 | zh_TW |
| dc.contributor.oralexamcommittee | Mi-Yen Yeh;Hsun-Ping Hsieh | en |
| dc.subject.keyword | 多元時間序列, 深度學習, 雜訊抑制, 雜訊去除, 直流分量, 行動裝置 | zh_TW |
| dc.subject.keyword | Multivariate time series, Deep learning, Noise reduction, Denoising, DC component, Mobile device | en |
| dc.relation.page | 57 | - |
| dc.identifier.doi | 10.6342/NTU202210008 | - |
| dc.rights.note | Authorization granted (access restricted to campus network) | - |
| dc.date.accepted | 2022-11-01 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Graduate Institute of Networking and Multimedia | - |
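
The abstract above describes regressing the DC component of a clean signal from a noisy multivariate window, together with a loss (MPSE) that emphasizes small-amplitude signals, but neither the model architecture nor the MPSE formula appears in this record. The sketch below is therefore only a minimal illustration of that problem setting: the small 1-D CNN, the `amplitude_weighted_mse` stand-in for MPSE, and all layer sizes are assumptions, not the author's definitions.

```python
import torch
import torch.nn as nn

class DCExtractor(nn.Module):
    """Toy stand-in for the thesis model: maps a noisy multivariate window
    of shape (batch, channels, time) to one DC estimate per channel."""

    def __init__(self, n_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # collapse the time axis
            nn.Flatten(),
            nn.Linear(16, n_channels),    # one DC value per channel
        )

    def forward(self, x):
        return self.net(x)


def amplitude_weighted_mse(pred, target, eps=1e-6):
    """Hypothetical stand-in for the MPSE idea in the abstract: scale the
    squared error by the inverse target magnitude so that small-amplitude
    signals still contribute to the loss. The real MPSE formula is defined
    in the thesis itself, not here."""
    return torch.mean((pred - target) ** 2 / (target.abs() + eps))


# Tiny usage example on random data.
batch, channels, window = 8, 4, 256
noisy = torch.randn(batch, channels, window)
dc_target = noisy.mean(dim=-1)            # per-channel mean as a dummy DC target
model = DCExtractor(channels)
loss = amplitude_weighted_mse(model(noisy), dc_target)
loss.backward()
```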
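Similarly, Chapter 5 of the table of contents outlines a data simulation recipe: use a public dataset as the clean signal, adjust its scale, generate a periodic noise waveform, and combine signal and noise in each channel. The details of each step are not in this record, so the snippet below is only a rough sketch of such a pipeline; every constant (scale, noise period, amplitude, channel count) is a placeholder rather than a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 4, 2048

# Stand-in for Sections 5.2.1-5.2.2: a public dataset used as the clean
# signal, rescaled per channel (random walks serve as dummy signals here).
clean = rng.standard_normal((n_channels, n_samples)).cumsum(axis=1)
clean = clean / np.abs(clean).max(axis=1, keepdims=True)

# Stand-in for Section 5.2.4: a periodic noise waveform (a sine plus one harmonic).
t = np.arange(n_samples)
noise = 0.3 * np.sin(2 * np.pi * t / 50) + 0.1 * np.sin(2 * np.pi * t / 25)

# Stand-in for Section 5.2.5: combine signal and noise in each channel,
# with a per-channel phase shift so the channels are not identical.
shifts = rng.integers(0, 50, size=n_channels)
noisy = np.stack([clean[c] + np.roll(noise, shifts[c]) for c in range(n_channels)])

# In this setting the regression target would be the DC component of the
# clean signal over the window, e.g. its per-channel mean.
dc_target = clean.mean(axis=1)
```
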
| Appears in Collections: | Graduate Institute of Networking and Multimedia | |
Files in this item:
| File | Size | Format |
|---|---|---|
| U0001-0342221027444060.pdf (access restricted to NTU campus IPs; off-campus users please use the VPN service) | 4.28 MB | Adobe PDF |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
