Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97922

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 劉建豪 | zh_TW |
| dc.contributor.advisor | Chien-Hao Liu | en |
| dc.contributor.author | 吳世瑜 | zh_TW |
| dc.contributor.author | Shih-Yu Wu | en |
| dc.date.accessioned | 2025-07-23T16:06:55Z | - |
| dc.date.available | 2025-07-24 | - |
| dc.date.copyright | 2025-07-23 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-14 | - |
| dc.identifier.citation | [1] H. Liu et al., “An epidermal sEMG tattoo-like patch as a new human–machine interface for patients with loss of voice,” Microsyst. Nanoeng., vol. 6, Art. no. 16, 2020.
[2] A. Kapur, S. Kapur, and P. Maes, “Alterego: A personalized wearable silent speech interface,” in Proc. 23rd Int. Conf. Intelligent User Interfaces, 2018, pp. 43–53.
[3] S.-W. Byun and S.-P. Lee, “Implementation of hand gesture recognition device applicable to smart watch based on flexible epidermal tactile sensor array,” Micromachines, vol. 10, no. 10, Art. no. 692, Oct. 2019.
[4] O. A. Araromi et al., “Ultra-sensitive and resilient compliant strain gauges for soft machines,” Nature, vol. 587, pp. 219–224, Nov. 2020.
[5] M. Eskes et al., “Predicting 3D lip shapes using facial surface EMG,” PLOS ONE, vol. 12, no. 4, Art. no. e0175025, Apr. 2017.
[6] T. Sun et al., “Decoding of facial strains via conformable piezoelectric interfaces,” Nat. Biomed. Eng., vol. 4, pp. 954–972, Oct. 2020.
[7] R. Maiti et al., “In vivo measurement of skin surface strain and sub-surface layer deformation induced by natural tissue stretching,” J. Mech. Behav. Biomed. Mater., vol. 62, pp. 556–569, 2016.
[8] D. R. Brown III, R. Ludwig, A. Pelteku, G. Bogdanov, and K. Keenaghan, “A novel non-acoustic voiced speech sensor,” Meas. Sci. Technol., vol. 15, no. 7, pp. 1291–1299, Jul. 2004.
[9] D. R. Brown, K. Keenaghan, and S. Desimini, “Measuring glottal activity during voiced speech using a tuned electromagnetic resonating collar sensor,” Meas. Sci. Technol., vol. 16, no. 11, pp. 2381–2390, Nov. 2005.
[10] N. Pham et al., “Detection of microsleep events with a behind-the-ear wearable system,” IEEE Trans. Mobile Comput., vol. 22, no. 2, pp. 841–857, Feb. 2023.
[11] D. M. E. Freeman and A. E. G. Cass, “A perspective on microneedle sensor arrays for continuous monitoring of the body's chemistry,” Appl. Phys. Lett., vol. 121, no. 7, Art. no. 070501, Aug. 2022.
[12] Y. Kunimi et al., “E-mask: A mask-shaped interface for silent speech interaction with flexible strain sensors,” in Proc. Augmented Humans Int. Conf. 2022, Mar. 2022, pp. 26–34.
[13] D. Ravenscroft et al., “Machine learning methods for automatic silent speech recognition using a wearable graphene strain gauge sensor,” Sensors, vol. 22, no. 1, Art. no. 299, Dec. 2021.
[14] D. Ravenscroft, I. Prattis, T. Kandukuri, Y. A. Samad, and L. G. Occhipinti, “A wearable graphene strain gauge sensor with haptic feedback for silent communications,” in Proc. 2021 IEEE Int. Conf. Flexible and Printable Sensors and Systems (FLEPS), Jun. 2021, pp. 1–4.
[15] B. Denby et al., “Silent speech interfaces,” Speech Commun., vol. 52, no. 4, pp. 270–287, 2010.
[16] T. Kim et al., “Ultrathin crystalline-silicon-based strain gauges with deep learning algorithms for silent speech interfaces,” Nat. Commun., vol. 13, Art. no. 5815, Sep. 2022.
[17] Y. Wang et al., “All-weather, natural silent speech recognition via machine-learning-assisted tattoo-like electronics,” npj Flex. Electron., vol. 5, Art. no. 20, Aug. 2021.
[18] Y. Wang et al., “A durable nanomesh on-skin strain gauge for natural skin motion monitoring with minimum mechanical constraints,” Sci. Adv., vol. 6, no. 33, Art. no. eabb7043, 2020.
[19] M. Abdoli-Eramaki, C. Damecour, J. Christenson, and J. Stevenson, “The effect of perspiration on the sEMG amplitude and power spectrum,” J. Electromyogr. Kinesiol., vol. 22, no. 6, pp. 908–913, 2012.
[20] S. Han et al., “Multiscale nanowire-microfluidic hybrid strain sensors with high sensitivity and stretchability,” npj Flex. Electron., vol. 2, Art. no. 16, 2018.
[21] T. Hueber et al., “Eigentongue feature extraction for an ultrasound-based silent speech interface,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Honolulu, HI, USA, vol. 1, 2007, pp. 1245–1248.
[22] T. Hueber, G. Chollet, B. Denby, M. Stone, and L. Zouari, “Ouisper: Corpus based synthesis driven by articulatory data,” in Proc. Int. Congr. Phonetic Sci., Saarbrücken, Germany, 2007, pp. 2193–2196.
[23] T. Hueber, G. Chollet, B. Denby, G. Dreyfus, and M. Stone, “Continuous-speech phone recognition from ultrasound and optical images of the tongue and lips,” in Proc. Interspeech, Antwerp, Belgium, 2007, pp. 658–661.
[24] T. Hueber, G. Chollet, B. Denby, G. Dreyfus, and M. Stone, “Phone recognition from ultrasound and optical video sequences for a silent speech interface,” in Proc. Interspeech, Brisbane, Australia, 2008, pp. 2032–2035.
[25] T. Hueber et al., “Development of a silent speech interface driven by ultrasound and optical images of the tongue and lips,” Speech Commun., vol. 52, no. 4, pp. 288–300, 2010.
[26] J. S. Perkell et al., “Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements,” J. Acoust. Soc. Am., vol. 92, no. 6, pp. 3078–3096, 1992.
[27] P. Hoole and N. Nguyen, “Electromagnetic articulography in coarticulation research,” Forschungsber. Inst. Phonet. Sprachl. Kommun. Univ. München, vol. 35, pp. 177–184, 1997.
[28] S. C. S. Jou, T. Schultz, M. Walliczek, F. Kraft, and A. Waibel, “Towards continuous speech recognition using surface electromyography,” in Proc. Interspeech, Sep. 2006, pp. 573–576.
[29] T. Schultz and M. Wand, “Modeling coarticulation in EMG-based continuous speech recognition,” Speech Commun., vol. 52, no. 4, pp. 341–353, 2010.
[30] A. Porbadnigk, M. Wester, J. Calliess, and T. Schultz, “EEG-based speech recognition—impact of temporal effects,” in Proc. Int. Conf. Bio-Inspired Systems and Signal Processing (BIOSIGNALS), vol. 1, Jan. 2009, pp. 376–381.
[31] Y.-S. Moon, I.-C. Ryu, W.-H. Son, S.-H. Lee, and S.-Y. Choi, “Stress-endurable temperature sensor designed for temperature compensation on a pressure sensor,” Sensors Mater., vol. 27, no. 1, pp. 125–133, 2015.
[32] N. Lu, C. Lu, S. Yang, and J. Rogers, “Highly sensitive skin-mountable strain gauges based entirely on elastomers,” Adv. Funct. Mater., vol. 22, no. 19, pp. 4044–4050, 2012.
[33] W. Yan et al., “Giant gauge factor of Van der Waals material based strain sensors,” Nat. Commun., vol. 12, Art. no. 2018, Apr. 2021.
[34] J. Ramírez, D. Rodriquez, A. D. Urbina, A. M. Cardenas, and D. J. Lipomi, “Combining high sensitivity and dynamic range: Wearable thin-film composite strain sensors of graphene, ultrathin palladium, and PEDOT:PSS,” ACS Appl. Nano Mater., vol. 2, no. 4, pp. 2222–2229, 2019.
[35] S. M. Won et al., “Piezoresistive strain sensors and multiplexed arrays using assemblies of single-crystalline silicon nanoribbons on plastic substrates,” IEEE Trans. Electron Devices, vol. 58, no. 11, pp. 4074–4078, 2011.
[36] C. Zizoua, M. Raison, S. Boukhenous, M. Attari, and S. Achiche, “Development of a bracelet with strain-gauge matrix for movement intention identification in traumatic amputees,” IEEE Sens. J., vol. 17, no. 8, pp. 2464–2471, 2017.
[37] V. Kedambaimoole et al., “MXene wearables: Properties, fabrication strategies, sensing mechanism and applications,” Mater. Adv., vol. 3, no. 9, pp. 3784–3808, 2022.
[38] D. Xiang et al., “Flexible strain sensors with high sensitivity and large working range prepared from biaxially stretched carbon nanotubes/polyolefin elastomer nanocomposites,” J. Appl. Polym. Sci., vol. 140, no. 4, Art. no. e53371, 2023.
[39] M. K. Kim et al., “Flexible submental sensor patch with remote monitoring controls for management of oropharyngeal swallowing disorders,” Sci. Adv., vol. 5, no. 12, Art. no. eaay3210, 2019.
[40] S. Yao et al., “Nanomaterial-enabled flexible and stretchable sensing systems: Processing, integration, and applications,” Adv. Mater., vol. 32, no. 15, Art. no. 1902343, 2020.
[41] H. Yang et al., “Wireless Ti₃C₂Tₓ MXene strain sensor with ultrahigh sensitivity and designated working windows for soft exoskeletons,” ACS Nano, vol. 14, no. 9, pp. 11860–11875, 2020.
[42] Z. Yuan et al., “Wrinkle structured network of silver-coated carbon nanotubes for wearable sensors,” Nanoscale Res. Lett., vol. 14, pp. 1–8, 2019.
[43] D.-H. Kim et al., “Epidermal electronics,” Science, vol. 333, no. 6044, pp. 838–843, 2011.
[44] C. Wang et al., “Stretchable, multifunctional epidermal sensor patch for surface electromyography and strain measurements,” Adv. Intell. Syst., vol. 3, no. 11, Art. no. 2100031, 2021.
[45] C. S. Smith, “Piezoresistance effect in germanium and silicon,” Phys. Rev., vol. 94, no. 1, pp. 42–49, Apr. 1954.
[46] Y. Kanda, “A graphical representation of the piezoresistance coefficients in silicon,” IEEE Trans. Electron Devices, vol. 29, no. 1, pp. 64–70, 1982.
[47] K. Sim et al., “High fidelity tape transfer printing based on chemically induced adhesive strength modulation,” Sci. Rep., vol. 5, Art. no. 16133, Nov. 2015.
[48] H. Park et al., “A skin-integrated transparent and stretchable strain sensor with interactive color-changing electrochromic displays,” Nanoscale, vol. 9, no. 22, pp. 7631–7640, 2017.
[49] Z. Yan et al., “Stretchable micromotion sensor with enhanced sensitivity using serpentine layout,” ACS Appl. Mater. Interfaces, vol. 11, no. 13, pp. 12261–12271, 2019.
[50] J. L. Carter, C. A. Kelly, J. E. Marshall, and M. J. Jenkins, “Effect of thickness on the electrical properties of PEDOT:PSS/Tween 80 films,” Polym. J., vol. 56, no. 2, pp. 107–114, 2024.
[51] C.-H. Tseng et al., “Electropolymerized poly(3,4-ethylenedioxythiophene)/screen-printed reduced graphene oxide–chitosan bilayer electrodes for flexible supercapacitors,” ACS Omega, vol. 6, no. 25, pp. 16455–16464, 2021.
[52] H. Yoo et al., “Silent speech recognition with strain sensors and deep learning analysis of directional facial muscle movement,” ACS Appl. Mater. Interfaces, vol. 14, no. 48, pp. 54157–54169, 2022.
[53] Y. Deng, J. T. Heaton, and G. S. Meltzner, “Towards a practical silent speech recognition system,” in Proc. Interspeech, Sep. 2014, pp. 1164–1168.
[54] J. R. Vergara and P. A. Estévez, “A review of feature selection methods based on mutual information,” Neural Comput. Appl., vol. 24, pp. 175–186, 2014.
[55] M.-K. Liu, Y.-T. Lin, Z.-W. Qiu, C.-K. Kuo, and C.-K. Wu, “Hand gesture recognition by a MMG-based wearable device,” IEEE Sens. J., vol. 20, no. 24, pp. 14703–14712, 2020.
[56] N. Kim, T. Lim, K. Song, S. Yang, and J. Lee, “Stretchable multichannel electromyography sensor array covering large area for controlling home electronics with distinguishable signals from multiple muscles,” ACS Appl. Mater. Interfaces, vol. 8, no. 32, pp. 21070–21076, 2016.
[57] M. Kim, B. Cao, T. Mau, and J. Wang, “Speaker-independent silent speech recognition from flesh-point articulatory movements using an LSTM neural network,” IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 25, no. 12, pp. 2323–2336, 2017.
[58] H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1226–1238, 2005.
[59] J. Rogers and S. Gunn, “Identifying feature relevance using a random forest,” in Proc. Int. Stat. Optim. Perspect. Workshop: Subspace, Latent Structure and Feature Selection, Berlin, Germany: Springer, 2005, pp. 173–184.
[60] P. Tao, H. Yi, C. Wei, L. Y. Ge, and L. Xu, “A method based on weighted F-score and SVM for feature selection,” in Proc. 2013 25th Chinese Control and Decision Conf., May 2013, pp. 4287–4290.
[61] J. A. Gonzalez-Lopez, A. Gomez-Alanis, J. M. M. Doñas, J. L. Pérez-Córdoba, and A. M. Gomez, “Silent speech interfaces for speech restoration: A review,” IEEE Access, vol. 8, Art. no. 177995, Sep. 2020.
[62] 鄧麒宏, “易於辨識的手勢之分析” [Analysis of easily recognizable hand gestures], M.S. thesis, Dept. of Computer Science, National Tsing Hua University, pp. 1–19, Nov. 2014.
[63] D. V. S. Reddy and T. R. Kumar, “Soft spoken murmur analysis using novel random forest algorithm compared with convolutional neural network for improving accuracy,” SPAST Rep., vol. 1, no. 3, pp. 1–9, 2024.
[64] M. Zhang et al., “Inductive conformal prediction for silent speech recognition,” J. Neural Eng., vol. 17, no. 6, Art. no. 066019, 2020.
[65] M. Matsumoto and J. Hori, “Classification of silent speech using support vector machine and relevance vector machine,” Appl. Soft Comput., vol. 20, pp. 95–102, 2014.
[66] F. Qi, C. Bao, and Y. Liu, “A novel two-step SVM classifier for voiced/unvoiced/silence classification of speech,” in Proc. 2004 Int. Symp. Chinese Spoken Lang. Process., Dec. 2004, pp. 77–80.
[67] S. Nassimi, N. Mohamed, W. AbuMoghli, and M. W. Fakhr, “Silent speech recognition with Arabic and English words for vocally disabled persons,” Int. J. Adv. Res. Artif. Intell., vol. 3, no. 6, pp. 1–5, 2014.
[68] J. Wu et al., “A novel silent speech recognition approach based on parallel inception convolutional neural network and Mel frequency spectral coefficient,” Front. Neurorobot., vol. 16, Art. no. 971446, 2022.
[69] R. He, J. Cheng, and F. Wang, “Lithography equipment,” in Handbook of Integrated Circuit Industry, 2023, pp. 1327–1359.
[70] N. P. Mphasha and M. S. Rabothata, “Advanced surface modification techniques,” in Surface Engineering – Foundational Concepts, Techniques and Future Trends. London, UK: IntechOpen, 2025, ch. 70.
[71] M. A. Sutton, N. Li, D. C. Joy, A. P. Reynolds, and X. Li, “Scanning electron microscopy for quantitative small and large deformation measurements Part I: SEM imaging at magnifications from 200 to 10,000,” Exp. Mech., vol. 47, pp. 775–787, 2007.
[72] “I-V characteristic curves,” ElectronicsTutorials. [Online]. Available: https://www.electronics-tutorials.ws/blog/i-v-characteristic-curves.html. [Accessed: May 8, 2025].
[73] P. Karthikeyan, B. Babu, K. Siva, and S. Chellamuthu, “Experimental investigation on mechanical behavior of carbon nanotubes–alumina hybrid epoxy nanocomposites,” Dig. J. Nanomater. Biostruct., vol. 11, no. 2, pp. 625–632, 2016.
[74] National Instruments, PCIe/PXIe/USB-63xx Device Features, Feb. 2018. [Online]. Available: https://docs-be.ni.com/bundle/pcie-pxie-usb-63xx-features/raw/resource/enus/370784k.pdf. [Accessed: May 8, 2025].
[75] C.-W. O. Yang et al., “Enhancing the versatility and performance of soft robotic grippers, hands, and crawling robots through three-dimensional-printed multifunctional buckling joints,” Soft Robot., vol. 11, no. 5, pp. 741–754, 2024.
[76] KemLab Inc., APOL-LO 3200 Data Sheet, Apr. 2018. [Online]. Available: https://www.kemlab.com/_files/ugd/5b8579_a8dd77c4036e4f199a9a6c899311b7c8.pdf. [Accessed: May 8, 2025].
[77] C.-H. Lee, W.-C. Tsai, and J.-C. Chuang, “Mechanical reliability of high-power modules via simulation-based machine learning,” Eng. Appl. Artif. Intell., vol. 142, Art. no. 111019, Aug. 2025.
[78] S. Huang et al., “Ultraminiaturized stretchable strain sensors based on single silicon nanowires for imperceptible electronic skins,” Nano Lett., vol. 20, no. 4, pp. 2478–2485, Apr. 2020.
[79] A. Altmann, L. Toloşi, O. Sander, and T. Lengauer, “Permutation importance: A corrected feature importance measure,” Bioinformatics, vol. 26, no. 10, pp. 1340–1347, May 2010.
[80] S. Yang and N. Lu, “Gauge factor and stretchability of silicon-on-polymer strain gauges,” Sensors, vol. 13, no. 7, pp. 8577–8594, 2013. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97922 | - |
| dc.description.abstract | 近年來,無聲語音辨識系統 (Silent speech recognition) 因其在特殊溝通場景下的潛力而備受關注。然而,現有技術多依賴肌電訊號或影像辨識,常面臨侵入性、設備體積龐大或易受環境干擾等挑戰,限制了其實際應用。與此同時,多數研究致力於辨識大規模詞彙,導致系統複雜度與運算負擔俱增,難以實現即時控制。為此,本研究另闢蹊徑,採取一種目標導向的精簡化策略,專注於辨識一組有限但關鍵的指令,旨在開發一套反應快速、低延遲且高度可靠的無聲語音辨識系統,並以無線氣動仿生手掌之即時控制作為系統效能的最終驗證。
本系統硬體部分包含可撓式單晶矽壓阻感測器、惠斯通電橋放大電路、資料擷取器以及用於訊號處理的電腦。系統針對六種目標指令嘴型,擷取因無聲發音引致臉部多處微小變形所對應的時變電阻訊號。這些原始訊號經過前處理、特徵萃取與選擇後,採用隨機森林 (Random forest) 演算法建立分類模型。為提升系統穩定性並降低誤觸,訓練資料中特別納入了佔總資料量50%的「空白指令」。此類別不僅包含日常口部動作,更刻意選用發音嘴型與目標指令相似的混淆詞彙進行訓練。本研究招募三位受試者,將八通道感測陣列對稱貼附於其臉頰及下顎區域,以擷取多通道的表面形變訊號。此外,為有效過濾非指令動作,我們加入了以分類最大正確率閾值為判準的後處理,顯著提升了系統在真實應用中的可靠性。最終,系統將辨識出的無聲指令透過藍牙即時無線傳輸至以Arduino微控制器為核心的氣動仿生手掌控制系統,成功實現了從無聲語音輸入到具體手勢動作輸出的完整應用流程。
本論文的主要研究成果包含: (1) 提出一套精簡化、可穿戴的無聲語音辨識系統,應用於無線控制氣動仿生手掌。經由三位受試者及總計1008筆資料的驗證,系統平均辨識準確率達到91.1%,macro-F1分數為0.91。 (2) 所開發之單晶矽壓阻感測器在30%拉伸應變條件下,其靈敏度 (Gauge factor) 達到5.5,展現了優異的力學感測性能與可靠性。 (3) 透過特徵索引優先載入等處理流程優化,單次指令分類的平均處理時間從3184毫秒顯著縮短至1164毫秒。這些成果成功展現了本系統在輔助溝通與人機互動介面領域,具備快速處理能力與實際應用的潛力。 | zh_TW |
| dc.description.abstract | Silent Speech Recognition (SSR) has garnered considerable attention in recent years due to its potential in specialized communication scenarios. However, existing technologies, often reliant on surface electromyography (sEMG) or image recognition, face challenges such as invasiveness, bulkiness, and susceptibility to environmental interference, limiting their practical application. Concurrently, many studies focus on large-vocabulary recognition, leading to increased system complexity and computational load, thereby hindering real-time control. To address this, our research pioneers a target-oriented, streamlined strategy, focusing on recognizing a limited yet critical set of commands. The aim is to develop a responsive, low-latency, and highly reliable SSR system, with its performance ultimately validated through the real-time control of a wireless pneumatic bionic hand.
The system hardware comprises flexible single-crystal silicon piezoresistive sensors, a Wheatstone bridge amplification circuit, a data acquisition (DAQ) unit, and a computer for signal processing. The system targets six command-specific mouth shapes, capturing time-varying resistance signals corresponding to minute facial deformations caused by silent articulation. These raw signals undergo preprocessing, feature extraction, and feature selection, followed by model construction and classification using a Random Forest algorithm. To enhance system stability and reduce false positives, the training dataset uniquely incorporates a 50% proportion of "blank commands." This category includes not only daily oral movements but also deliberately chosen confounding words with mouth shapes similar to the target commands. Three participants were recruited for this study, with an eight-channel sensor array symmetrically attached to their cheek and jaw regions to capture multi-channel surface deformation signals. Furthermore, a post-processing strategy based on a maximum classification accuracy threshold was implemented to effectively filter non-command actions, significantly improving system reliability in real-world applications. Finally, the recognized silent commands are transmitted in real-time via Bluetooth to a pneumatic bionic hand control system centered around an Arduino microcontroller, successfully demonstrating a complete application pipeline from silent speech input to tangible gesture output. The main contributions of this thesis include: (1) The development of a streamlined, wearable SSR system for wireless control of a pneumatic bionic hand. Validated with three participants and a total of 1008 data samples, the system achieved an average recognition accuracy of 91.1% and a macro-F1 score of 0.91. 
(2) The developed single-crystal silicon piezoresistive sensor exhibited a Gauge Factor (GF) of 5.5 under 30% tensile strain, demonstrating excellent mechanical sensing performance and reliability. (3) Through processing flow optimizations, such as prioritized feature index loading, the average single-command classification time was significantly reduced from 3184 ms to 1164 ms. These achievements successfully demonstrate the system's rapid processing capabilities and practical application potential in assistive communication and human-machine interaction. | en |
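The Wheatstone-bridge amplification stage mentioned in the abstract converts each sensor's resistance change into a measurable voltage. As a generic textbook sketch (the thesis's actual bridge resistances and excitation voltage are not given here), a quarter-bridge with three fixed resistors $R$ and one gauge $R_g = R(1 + \Delta R/R)$ driven by an excitation voltage $V_{ex}$ gives

$$ V_{out} = V_{ex}\left(\frac{R + \Delta R}{2R + \Delta R} - \frac{1}{2}\right) = \frac{V_{ex}\,\Delta R}{2(2R + \Delta R)} \approx \frac{V_{ex}}{4}\cdot\frac{\Delta R}{R} \quad \text{for } \Delta R \ll R, $$

so the bridge output is approximately linear in the relative resistance change for small strains.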
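The gauge factor reported in contribution (2) is the standard strain-gauge sensitivity, relating relative resistance change to applied strain. Substituting the reported numbers (simple arithmetic on the abstract's figures, not additional measured data):

$$ \mathrm{GF} = \frac{\Delta R / R_0}{\varepsilon}, \qquad \frac{\Delta R}{R_0} = \mathrm{GF}\cdot\varepsilon = 5.5 \times 0.30 = 1.65, $$

i.e. the sensor's resistance swings by up to about 165% of its baseline over the full 30% strain range, which is the span the readout electronics must accommodate.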
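The random-forest classification with a maximum-probability rejection threshold described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data: the command labels, cluster means, 100-tree forest, and the 0.6 threshold are assumptions for demonstration, not the thesis's actual features or tuned values.

```python
# Hedged sketch of random-forest command classification with a
# max-probability rejection threshold (illustrative values throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
COMMANDS = ["open", "close", "grip", "point", "ok", "wave"]  # placeholder labels

# Synthetic 8-channel strain features: one Gaussian cluster per command,
# plus a "blank" class standing in for non-command facial motion.
X_train, y_train = [], []
for i, cmd in enumerate(COMMANDS + ["blank"]):
    X_train.append(rng.normal(loc=i, scale=0.2, size=(40, 8)))
    y_train += [cmd] * 40
X_train = np.vstack(X_train)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def classify(features, threshold=0.6):
    """Return a command only when the forest is confident; else 'blank'."""
    proba = clf.predict_proba([features])[0]
    best = int(np.argmax(proba))
    if proba[best] < threshold:
        return "blank"  # reject low-confidence, non-command motion
    return clf.classes_[best]

print(classify(np.zeros(8)))  # a sample near the "open" cluster
```

A recognized label would then be sent over Bluetooth to the Arduino-based pneumatic-hand controller; the rejection branch is what keeps everyday mouth movements from triggering a gesture.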
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-23T16:06:55Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-07-23T16:06:55Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 口試委員會審定書 i
誌謝 ii
中文摘要 iii
ABSTRACT iv
目次 vi
圖次 ix
表次 xiv
符號說明 xv
第1章 緒論 1
1.1 前言 1
1.2 研究背景與動機 3
1.3 論文架構 6
第2章 文獻回顧與理論基礎 9
2.1 無聲語音辨識 9
2.1.1 影像、光學與雷達式 9
2.1.2 生理訊號 (sEMG/EEG) 10
2.1.3 表面應變感測法 13
2.2 應變感測器 16
2.2.1 壓阻效應 16
2.2.2 結構的幾何形狀與材料 17
2.2.3 導電線材料 — 導電高分子 (Conducting polymers) 20
2.3 感測訊號處理與特徵擷取技術 22
2.3.1 前處理 — 正規化、濾波 22
2.3.2 特徵擷取與選擇 (Feature extraction) 24
2.4 分類方法概述與模型選擇 25
2.4.1 隨機森林 (Random forest, RF) 25
2.4.2 支持向量機 (Support vector machine, SVM) 27
2.4.3 深度學習 28
第3章 實驗儀器與設備原理 30
3.1 製程設備 30
3.1.1 曝光機 (Aligner) 30
3.1.2 電感耦合電漿蝕刻機 (Inductively coupled plasma reactive ion etching, ICP-RIE) 31
3.1.3 離子佈植設備 (Ion implantation) 33
3.1.4 快速熱退火處理設備 35
3.2 量測與分析設備 36
3.2.1 掃描式電子顯微鏡 36
3.2.2 兩點探針I-V量測系統 37
3.2.3 拉伸測試平台 39
3.2.4 雷射共軛聚焦顯微鏡 (Laser confocal microscope) 40
3.3 實驗設備 42
3.3.1 感測訊號擷取模組 (DAQ) 42
3.3.2 Arduino 44
3.3.3 氣動手掌 45
3.4 實驗用材料與藥品說明 47
第4章 實驗流程與製程 50
4.1 感測器設計原則與模擬分析 51
4.1.1 感測器設計 51
4.1.2 有限元素模擬分析 (FEA) 52
4.2 感測圖案製作與初步製程 55
4.2.1 離子佈植 (Ion implantation) 56
4.2.2 光阻塗佈 (Photoresist spin coating) 58
4.2.3 接觸式曝光 (Photolithography exposure) 59
4.2.4 曝光後烤與顯影 (PEB & Development) 60
4.2.5 乾式蝕刻 (Dry etching) 61
4.3 PDMS 轉印與柔性基板整合 63
4.3.1 感測結構蝕刻釋放 (Sensor structure etching and release) 64
4.3.2 PDMS製備與拔取 66
4.3.3 柔性基板前處理與轉印 69
4.4 導線製作與封裝處理 71
4.4.1 導電高分子 PEDOT:PSS備製 72
4.4.2 導線圖案定義 73
4.4.3 電性連接檢測 75
4.5 多通道感測器組裝與貼附量測佈局 77
4.6 製程總覽與詳細步驟 79
第5章 訊號處理與驗證 82
5.1 拉伸試驗與訊號響應驗證 82
5.2 多通道應變訊號同步擷取系統 86
5.3 訊號及特徵處理 89
5.3.1 前處理 89
5.3.2 特徵提取 90
5.3.3 特徵選擇與分析比較 91
5.4 模型訓練與分類結果 95
5.4.1 模型性能比較與混淆矩陣分析 97
5.4.2 空白指令與最大正確率設定 105
5.4.3 分類處理速度優化策略 107
5.5 系統整合與氣動手掌應用展示 110
第6章 結論與未來展望 114
6.1 結論 114
6.2 未來展望 116
參考文獻 119 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 單晶矽 | zh_TW |
| dc.subject | 無聲語音辨識 | zh_TW |
| dc.subject | 可撓式 | zh_TW |
| dc.subject | 壓阻式感測器 | zh_TW |
| dc.subject | 微機電系統 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 氣動手掌 | zh_TW |
| dc.subject | Micro-Electro-Mechanical Systems (MEMS) | en |
| dc.subject | Piezoresistive Sensor | en |
| dc.subject | Single-Crystal Silicon | en |
| dc.subject | Machine Learning | en |
| dc.subject | Silent Speech Recognition | en |
| dc.subject | Flexible | en |
| dc.subject | Pneumatic Hand | en |
| dc.title | 可撓式壓阻感測器之無聲語音辨識系統設計與無線氣動手掌控制之應用 | zh_TW |
| dc.title | Design of a Silent Speech Recognition System Based on Flexible Piezoresistive Sensors and Its Application in Wireless Pneumatic Hand Control | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 莊嘉揚;周元昉 | zh_TW |
| dc.contributor.oralexamcommittee | Jia-Yang Juang;Yuan-Fang Chou | en |
| dc.subject.keyword | 無聲語音辨識, 可撓式, 壓阻式感測器, 單晶矽, 微機電系統, 機器學習, 氣動手掌 | zh_TW |
| dc.subject.keyword | Silent Speech Recognition, Flexible, Piezoresistive Sensor, Single-Crystal Silicon, Micro-Electro-Mechanical Systems (MEMS), Machine Learning, Pneumatic Hand | en |
| dc.relation.page | 125 | - |
| dc.identifier.doi | 10.6342/NTU202501758 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2025-07-15 | - |
| dc.contributor.author-college | 工學院 | - |
| dc.contributor.author-dept | 機械工程學系 | - |
| dc.date.embargo-lift | 2025-07-24 | - |
| Appears in Collections: | Department of Mechanical Engineering | |
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-2.pdf | 30.71 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their individual license terms.
