Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100980

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 闕志達 | zh_TW |
| dc.contributor.advisor | Tzi-Dar Chiueh | en |
| dc.contributor.author | 陳法諭 | zh_TW |
| dc.contributor.author | Fa-Yu Chen | en |
| dc.date.accessioned | 2025-11-26T16:20:25Z | - |
| dc.date.available | 2025-11-27 | - |
| dc.date.copyright | 2025-11-26 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-10-08 | - |
| dc.identifier.citation | [1] A. Vaswani et al., "Attention is all you need," in Proceedings of Advances in neural information processing systems, Long Beach, CA, USA, December 2017, vol. 30, pp. 5998-6008.
[2] N. Jouppi et al., "TPU v4: An optically reconfigurable supercomputer for machine learning with hardware support for embeddings," in Proceedings of the 50th annual international symposium on computer architecture, Orlando, FL, USA, June 2023, pp. 1-14.
[3] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, "Convolutional sequence to sequence learning," in Proceedings of International conference on machine learning, Sydney, Australia, August 2017: PMLR, pp. 1243-1252.
[4] D. Hendrycks and K. Gimpel, "Gaussian error linear units (GELUs)," arXiv preprint arXiv:1606.08415, 2016.
[5] J. Wang, J. Wu, and L. Huang, "Understanding the failure of batch normalization for transformers in NLP," in Proceedings of Advances in Neural Information Processing Systems, New Orleans, LA, USA, November 2022, vol. 35, pp. 37617-37630.
[6] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, November 1998.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Proceedings of 2009 IEEE conference on computer vision and pattern recognition, Miami Beach, FL, USA, June 2009: IEEE, pp. 248-255.
[8] S. V. Lab. "ImageNet." https://www.image-net.org/download.php (accessed 06-27, 2025).
[9] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[10] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jégou, "Training data-efficient image transformers & distillation through attention," in Proceedings of International conference on machine learning, Vienna, Austria, July 2021: PMLR, pp. 10347-10357.
[11] M. Ding, B. Xiao, N. Codella, P. Luo, J. Wang, and L. Yuan, "DaViT: Dual attention vision transformers," in Proceedings of European conference on computer vision, Tel Aviv, Israel, October 2022: Springer, pp. 74-92.
[12] Z. Liu et al., "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF international conference on computer vision, Montreal, Canada, October 2021, pp. 9992-10002.
[13] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," 2018.
[14] C. Raffel et al., "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of machine learning research, vol. 21, no. 1, pp. 5485-5551, January 2020, Art. no. 140.
[15] H. Touvron et al., "LLaMA: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023.
[16] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," OpenAI blog, vol. 1, no. 8, p. 9, 2019.
[17] J. Achiam et al., "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[18] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in Proceedings of 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), Brisbane, Queensland, Australia, April 2015: IEEE, pp. 5206-5210.
[19] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," in Proceedings of International conference on machine learning, Honolulu, Hawaii, USA, July 2023: PMLR, pp. 28492-28518.
[20] H. Wang et al., "BitNet: Scaling 1-bit transformers for large language models," arXiv preprint arXiv:2310.11453, 2023.
[21] S. Ma et al., "The era of 1-bit LLMs: All large language models are in 1.58 bits," arXiv preprint arXiv:2402.17764, vol. 1, no. 4, 2024.
[22] S. Ma et al., "BitNet b1.58 2B4T Technical Report," arXiv preprint arXiv:2504.12285, 2025.
[23] H. Wang, S. Ma, and F. Wei, "BitNet a4.8: 4-bit Activations for 1-bit LLMs," arXiv preprint arXiv:2411.04965, 2024.
[24] H. Wang, S. Ma, and F. Wei, "BitNet v2: Native 4-bit Activations with Hadamard Transformation for 1-bit LLMs," arXiv preprint arXiv:2504.18415, 2025.
[25] Z. Yuan, R. Zhou, H. Wang, L. He, Y. Ye, and L. Sun, "ViT-1.58b: Mobile vision transformers in the 1-bit era," arXiv preprint arXiv:2406.18051, 2024.
[26] F. Li, B. Liu, X. Wang, B. Zhang, and J. Yan, "Ternary weight networks," arXiv preprint arXiv:1605.04711, 2016.
[27] Y. Bhalgat, J. Lee, M. Nagel, T. Blankevoort, and N. Kwak, "LSQ+: Improving low-bit quantization through learnable offsets and better initialization," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, Seattle, WA, USA, June 2020, pp. 696-697.
[28] Y. Bengio, N. Léonard, and A. Courville, "Estimating or propagating gradients through stochastic neurons for conditional computation," arXiv preprint arXiv:1308.3432, 2013.
[29] W. Byun, J. Woo, and S. Mukhopadhyay, "FPGA Acceleration With Hessian-Based Comprehensive Intra-Layer Mixed-Precision Quantization for Transformer Models," IEEE Access, vol. 13, pp. 70282-70297, April 2025.
[30] A. Marchisio, D. Dura, M. Capra, M. Martina, G. Masera, and M. Shafique, "SwiftTron: An efficient hardware accelerator for quantized transformers," in Proceedings of 2023 International Joint Conference on Neural Networks (IJCNN), Queensland, Australia, June 2023: IEEE, pp. 1-9.
[31] K. Chen, Y. Gao, H. Waris, W. Liu, and F. Lombardi, "Approximate softmax functions for energy-efficient deep neural networks," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 31, no. 1, pp. 4-16, November 2022.
[32] J. Yu et al., "NN-LUT: Neural approximation of non-linear operations for efficient transformer inference," in Proceedings of the 59th ACM/IEEE Design Automation Conference, San Francisco, CA, USA, July 2022, pp. 577-582.
[33] K. Banerjee, R. R. Gupta, K. Vyas, and B. Mishra, "Exploring alternatives to softmax function," arXiv preprint arXiv:2011.11538, 2020.
[34] C. Peltekis, K. Alexandridis, and G. Dimitrakopoulos, "Reusing softmax hardware unit for GELU computation in transformers," in Proceedings of 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS), Dubai, UAE, April 2024: IEEE, pp. 159-163.
[35] J. Chen, "Design and Implementation of an Energy-Efficient Binary-Weight Transformer Accelerator Chip," M.S. thesis, National Taiwan University, Taipei, 2024.
[36] Y.-H. Huang, P.-H. Kuo, and J.-D. Huang, "Hardware-Friendly Activation Function Designs and Its Efficient VLSI Implementations for Transformer-Based Applications," in Proceedings of 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hangzhou, China, June 2023: IEEE, pp. 1-5.
[37] T. Mohaidat, M. R. K. Khan, and K. Khalil, "Curvature-Based Piecewise Linear Approximation Method of GELU Activation Function in Neural Networks," in Proceedings of 2024 2nd International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), Mount Pleasant, MI, USA, September 2024: IEEE, pp. 1-5.
[38] D. Wang, C.-T. Lin, G. K. Chen, P. Knag, R. K. Krishnamurthy, and M. Seok, "DIMC: 2219TOPS/W 2569F2/b digital in-memory computing macro in 28nm based on approximate arithmetic hardware," in Proceedings of 2022 IEEE international solid-state circuits conference (ISSCC), San Francisco, CA, USA, February 2022, vol. 65: IEEE, pp. 266-268.
[39] Y. He et al., "An RRAM-based digital computing-in-memory macro with dynamic voltage sense amplifier and sparse-aware approximate adder tree," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 2, pp. 416-420, September 2022.
[40] M.-G. Lin, J.-P. Wang, C.-Y. Chang, and A.-Y. A. Wu, "Approximate Adder Tree Design with Sparsity-Aware Encoding and In-Memory Swapping for SRAM-based Digital Compute-In-Memory Macros," in Proceedings of 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS), Abu Dhabi, UAE, April 2024: IEEE, pp. 362-366.
[41] A. Dave, F. Frustaci, F. Spagnolo, M. Yayla, J.-J. Chen, and H. Amrouch, "HW/SW codesign for approximation-aware binary neural networks," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 13, no. 1, pp. 33-47, February 2023.
[42] NVIDIA Corporation. "NVIDIA H100 Tensor Core GPU Architecture." https://resources.nvidia.com/en-us-data-center-overview/gtc22-whitepaper-hopper (accessed 06-27, 2025).
[43] S. Moon, H.-G. Mun, H. Son, and J.-Y. Sim, "A 127.8 TOPS/W arbitrarily quantized 1-to-8b scalable-precision accelerator for general-purpose deep learning with reduction of storage, logic and latency waste," in Proceedings of 2023 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, February 2023: IEEE, pp. 21-23.
[44] Y. Qin et al., "A 28nm 49.7 TOPS/W Sparse Transformer Processor with Random-Projection-Based Speculation, Multi-Stationary Dataflow, and Redundant Partial Product Elimination," in Proceedings of 2023 IEEE Asian Solid-State Circuits Conference (A-SSCC), Haikou, China, November 2023: IEEE, pp. 1-3.
[45] S. Kim, S. Kim, W. Jo, S. Kim, S. Hong, and H.-J. Yoo, "20.5 C-Transformer: A 2.6-18.1 μJ/Token Homogeneous DNN-Transformer/Spiking-Transformer Processor with Big-Little Network and Implicit Weight Generation for Large Language Models," in Proceedings of 2024 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, February 2024, vol. 67: IEEE, pp. 368-370.
[46] T. Tambe et al., "22.9 A 12nm 18.1 TFLOPs/W sparse transformer processor with entropy-based early exit, mixed-precision predication and fine-grained power management," in Proceedings of 2023 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, February 2023: IEEE, pp. 342-344.
[47] H. Mun, H. Son, S. Moon, J. Park, B. Kim, and J.-Y. Sim, "A 28 nm 66.8 TOPS/W sparsity-aware dynamic-precision deep-learning processor," in Proceedings of 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), Kyoto, Japan, June 2023: IEEE, pp. 1-2.
[48] K. Prabhu et al., "MINOTAUR: A Posit-Based 0.42–0.50-TOPS/W Edge Transformer Inference and Training Accelerator," IEEE Journal of Solid-State Circuits, vol. 60, no. 4, pp. 1311-1323, March 2025.
[49] B. Keller et al., "A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm," IEEE Journal of Solid-State Circuits, April 2023.
[50] K. Li, M. Huang, A. Li, S. Yang, Q. Cheng, and H. Yu, "A 29.12-TOPS/W Vector Systolic Accelerator With NAS-Optimized DNNs in 28-nm CMOS," IEEE Journal of Solid-State Circuits, vol. 60, no. 10, pp. 3790-3801, April 2025.
[51] C.-L. Hsiung and T.-S. Chang, "Low-Power Vision Transformer Accelerator With Hardware-Aware Pruning and Optimized Dataflow," IEEE Transactions on Circuits and Systems I: Regular Papers, pp. 1-10, July 2025.
[52] C.-Y. Li, Y.-F. Shyu, and C.-H. Yang, "An 157TOPS/W Transformer Learning Processor Supporting Forward Pass Only with Zeroth-Order Optimization," in Proceedings of 2025 Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), Kyoto, Japan, June 2025: IEEE, pp. 1-3.
[53] "Scaphandre." https://github.com/hubblo-org/scaphandre (accessed 08-22, 2025). | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100980 | - |
| dc.description.abstract | 近年來,變形器神經網路(Transformer)由於其優異的表現,被廣泛應用於電腦視覺、自然語言處理、語音辨識等各式應用領域,並基於最初的transformer模型,針對各式任務的需求延伸出多種變體模型。儘管transformer在多個應用領域都有著超越傳統非transformer模型的表現,但高昂的硬體建置成本與模型推論時的能耗需求,都使得transformer系列模型在邊緣裝置與嵌入式系統等對於算力與能耗有高度限制場域的部署成為一大挑戰。因此,在transformer模型表現日新月異的同時,許多研究也著眼於尋找在模型效能與硬體成本間的平衡點,其中,如何降低模型參數量與推論時的能耗,以便於邊緣運算裝置的部署,便是我們希望能解決的問題。
本文以維持transformer模型效能為首要目標,並以此為基礎追求大幅降低模型參數量、硬體成本、推論能耗的解決方案。針對transformer模型中的運算,提出了包括三元(Ternary)權重量化、非線性函數(包括Softmax與GELU)近似、加法近似等技巧,並完成相對應硬體加速器晶片的設計與實現,同時透過量化感知訓練的方式,確保在施加上述近似技巧後,模型仍盡可能保持原有效能。 首先是三元權重量化,我們將transformer模型的權重(Weight)量化至-1, 0與1,而輸入(Activation)則是採用INT-4的量化精度。這樣的精度設定相比二元(Binary)權重加上INT-4輸入的組合經實驗證明可以有效降低模型的效能損失,同時不會顯著增加硬體的面積與功耗。 在非線性函數近似的方面,我們針對Softmax函數提出了硬體友善的近似版本,Multimax,透過僅計算單一矩陣row中最大的N個數值並賦予相對應比例,可以大幅降低面積與功耗。以Multimax計算電路本身而言,相比於32位元單精度浮點數的Softmax電路,面積與功耗分別可以降低95.8%與97.2%。本晶片可依照軟體模擬時的參數設定,支援Multimax的N為1, 3或5。另一個非線性函數,GELU,我們則是提出以分段二項式進行近似的QGELU,並將二項式中的參數設計為硬體僅需Shift & Add便可以完成的運算,讓整體成本進一步降低。以QGELU計算電路本身而言,相比於32位元單精度浮點數的GELU電路,面積與功耗分別可以降低98.3%與99.9%。 最後是加法電路的近似,本文提出多模態加法近似,可以支援精確運算、LSB近似加法以及末兩位近似加法運算,可以在對於運算精確度較不敏感的transformer運算中採用較大程度的近似,以節省推論時的功耗,並在對於精確度較敏感的部分維持高精確度的加法運算以維持模型效能。近似加法器我們採用OAI21與AAI21的混合雙層近似加法器架構,相比普通加法器,面積與功耗分別可以下降86.2%與90.8%。若以單一PE Cube進行分析,開啟LSB近似加法與末兩位近似加法運算時分別可以節省4.3%與21.4%的功耗。 本研究所提出之加速晶片於40奈米製程與200MHz的工作頻率下,可以達到32.3 TOPS/W的能源效率與1.50 TOPS/mm²的面積效率,超越其他研究所提出之神經網路加速晶片。而使用ViT-Base進行inference時,運算效能可達1003.1 Tokens/J。 | zh_TW |
| dc.description.abstract | In recent years, Transformer neural networks have achieved remarkable success in computer vision, natural language processing, and speech recognition, resulting in numerous task-specific variants. Despite their superior performance, Transformers face challenges of high hardware cost and energy consumption, hindering deployment in resource-constrained environments such as edge devices. As performance continues to improve, research increasingly focuses on balancing accuracy and efficiency. This work addresses the key challenge of reducing parameters and inference power consumption to enable deployment on edge platforms.
This work focuses on preserving Transformer performance while reducing parameters, hardware cost, and inference energy consumption. We introduce optimization techniques targeting key computations, including ternary weight quantization, nonlinear function approximations (Softmax, GELU), and approximate adders. A hardware accelerator chip is designed to support these methods, with quantization-aware training ensuring minimal accuracy loss. We first apply ternary weight quantization, mapping weights to {-1, 0, +1} with activations in 4-bit precision (INT4). Compared to binary weights with INT4 activations, this setting reduces performance degradation while incurring minimal hardware and power overhead. For nonlinear function approximation, we propose Multimax, a hardware-friendly alternative to Softmax that selects the top-N values per row and assigns them proportional weights. Compared to a 32-bit floating-point Softmax circuit, Multimax reduces area by 95.8% and power by 97.2%, with configurable N (1, 3, or 5). For the GELU activation function, we introduce QGELU, a piecewise polynomial approximation designed for efficient hardware implementation. The polynomial coefficients are chosen so that evaluation requires only shift-and-add operations, further reducing complexity. Compared to a standard 32-bit floating-point GELU circuit, QGELU achieves a 98.3% reduction in area and a 99.9% reduction in power consumption. Lastly, we propose a multi-mode approximate adder supporting exact, LSB-approximate, and two-LSB-approximate addition to reduce inference energy. By applying the lower-precision modes to less accuracy-sensitive stages, the design balances efficiency and accuracy. Implemented with a two-layer OAI21–AAI21 hybrid structure, it achieves 86.2% area and 90.8% power savings over conventional adders. At the PE cube level, the LSB and two-LSB modes yield 4.3% and 21.4% power savings, respectively. The proposed accelerator chip, implemented in a 40 nm process and operating at 200 MHz, achieves an energy efficiency of 32.3 TOPS/W and an area efficiency of 1.50 TOPS/mm², outperforming previously reported neural network accelerators. When performing inference with ViT-Base, the efficiency reaches 1003.1 tokens/J. (Illustrative sketches of these techniques follow the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-11-26T16:20:25Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-11-26T16:20:25Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 致謝 i
摘要 iii
Abstract v
目次 viii
圖次 xii
表次 xvi
第一章 緒論 1
1.1 研究背景 1
1.2 研究動機與目標 3
1.3 論文組織與貢獻 4
第二章 變形器神經網路(Transformer)介紹 6
2.1 多頭注意力(Multi-Head Attention, MHA) 7
2.2 多層感知器(Multi-Layer Perceptron, MLP) 11
2.3 層正則化(Layer Normalization) 14
2.4 第二章總結 16
第三章 三元權重與近似運算神經網路訓練與推論 18
3.1 訓練模型與資料集 18
3.2 影像辨識任務 18
3.2.1 ImageNet資料集 18
3.2.2 Vision Transformer簡介 19
3.2.2.1 ViT與DeiT 20
3.2.2.2 DaViT 21
3.2.2.3 Swin 23
3.3 文字生成任務 25
3.3.1 OLM & WikiText-2資料集 26
3.3.2 GPT-2簡介 27
3.4 語音轉文字任務 27
3.4.1 LibriSpeech資料集 28
3.4.2 Whisper簡介 28
3.5 三元權重神經網路簡介與實驗結果 29
3.5.1 神經網路量化 29
3.5.1.1 三元權重量化 33
3.5.1.2 輸入量化 34
3.5.2 三元權重神經網路訓練 36
3.6 複雜函數近似 38
3.6.1 Multimax 39
3.6.1.1 Softmax硬體實作技巧比較 39
3.6.1.2 構想與實現 40
3.6.2 QGELU 42
3.6.2.1 GELU硬體實作技巧比較 43
3.6.2.2 構想與實現 43
3.7 加法近似運算 46
3.7.1 加法近似技巧文獻回顧 47
3.7.2 OAI21+AAI21雙層近似加法器 50
3.7.3 近似加法硬體實作結果分析 53
3.8 模型訓練結果彙整 57
3.8.1 影像辨識任務訓練結果 57
3.8.2 文字生成任務訓練結果 59
3.8.3 語音轉文字任務訓練結果 60
3.8.4 稀疏性分析 60
3.8.5 加法近似訓練結果 60
3.8.6 模型輸出結果比較 64
3.9 第三章總結 65
第四章 晶片架構設計 67
4.1 晶片架構總覽 67
4.1.1 運算電路概覽 67
4.1.2 記憶體配置 68
4.2 PE(Processing Element) Cube 68
4.2.1 PE Unit 69
4.2.2 PE Cube 71
4.3 PPU(Post-Processing Unit) 74
4.3.1 架構介紹 74
4.3.2 Accumulator 75
4.3.3 Requantizer 77
4.3.4 QGELU 78
4.3.5 Pre-Sort 80
4.4 Multimax Calculation Unit 81
4.5 Self-Test Circuit 84
4.6 晶片架構 85
4.7 晶片計算流程 86
4.8 晶片支援運算總結 88
4.9 第四章總結 90
第五章 晶片實現 93
5.1 晶片設計流程 93
5.2 晶片驗證流程 94
5.3 晶片參數 95
5.3.1 晶片布局 95
5.3.2 晶片數據分析 97
5.3.3 晶片比較表 104
5.4 效能與功耗分析 108
5.5 晶片量測 111
5.6 第五章總結 114
第六章 研究結語與展望 116
參考文獻 119 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 三元權重 | - |
| dc.subject | Multimax | - |
| dc.subject | QGELU | - |
| dc.subject | 高能效 | - |
| dc.subject | 加法近似電路 | - |
| dc.subject | Ternary Weights | - |
| dc.subject | Multimax | - |
| dc.subject | QGELU | - |
| dc.subject | Energy Efficiency | - |
| dc.subject | Approximate Adder Circuits | - |
| dc.title | 三元權重變形器神經網路加速電路之設計與晶片實現 | zh_TW |
| dc.title | Ternary-Weight Transformer Acceleration Circuit Design and Chip Implementation | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 楊家驤;劉宗德;馬席彬 | zh_TW |
| dc.contributor.oralexamcommittee | Chia-Hsiang Yang;Tsung-Te Liu;Hsi-Pin Ma | en |
| dc.subject.keyword | 三元權重,Multimax,QGELU,高能效,加法近似電路 | zh_TW |
| dc.subject.keyword | Ternary Weights,Multimax,QGELU,Energy Efficiency,Approximate Adder Circuits | en |
| dc.relation.page | 123 | - |
| dc.identifier.doi | 10.6342/NTU202504546 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2025-10-08 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電子工程學研究所 | - |
| dc.date.embargo-lift | 2025-11-27 | - |
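
The abstract above describes ternary weight quantization (weights in {-1, 0, +1}) paired with INT4 activations and quantization-aware training. As a minimal, hedged sketch only (the thesis's training code is not part of this record), the following PyTorch snippet shows one common way to fake-quantize weights to ternary values with a straight-through estimator. The function names, the 0.75·mean(|W|) threshold, and the per-tensor scale are assumptions borrowed from ternary weight networks [26] and may differ from the thesis's implementation.

```python
import torch

def ternarize(w: torch.Tensor, thresh_scale: float = 0.75) -> torch.Tensor:
    """Fake-quantize a weight tensor to alpha * {-1, 0, +1} with a
    straight-through estimator (STE). The 0.75 * mean(|w|) threshold follows
    ternary weight networks [26]; the thesis's exact rule may differ."""
    delta = thresh_scale * w.abs().mean()                        # per-tensor threshold
    mask = (w.abs() > delta).float()                             # positions kept non-zero
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)   # scaling factor
    w_q = alpha * torch.sign(w) * mask                           # ternary values * alpha
    return w + (w_q - w).detach()                                # STE: quantized forward, FP backward

def fake_quant_int4(x: torch.Tensor, scale: float) -> torch.Tensor:
    """Symmetric INT4 fake quantization of activations (levels -8..7)."""
    q = torch.clamp(torch.round(x / scale), -8, 7)
    return x + (q * scale - x).detach()                          # STE again
```

During quantization-aware training, these fake-quantized tensors would replace the full-precision ones in the forward pass while the optimizer keeps updating the latent full-precision weights.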
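
Multimax is described as keeping only the N largest values in each attention row and assigning them corresponding proportions, with N configurable as 1, 3, or 5. The sketch below models that behavior functionally; the power-of-two rank weights in RANK_WEIGHTS are purely illustrative placeholders, since the abstract does not state the actual proportions.

```python
import torch

# Placeholder rank-to-weight tables; the real Multimax proportions are not
# given in the abstract, so these power-of-two fractions are illustrative only.
RANK_WEIGHTS = {1: [1.0],
                3: [0.5, 0.25, 0.25],
                5: [0.5, 0.25, 0.125, 0.0625, 0.0625]}

def multimax(scores: torch.Tensor, n: int = 3) -> torch.Tensor:
    """Softmax substitute: keep only the N largest scores in each row and give
    them fixed, rank-dependent weights that sum to 1; every other position is 0."""
    topv, topi = scores.topk(n, dim=-1)           # values and indices of the N largest scores
    w = torch.zeros_like(topv)
    w[...] = scores.new_tensor(RANK_WEIGHTS[n])   # broadcast rank weights over all rows
    out = torch.zeros_like(scores)
    return out.scatter(-1, topi, w)               # place weights at the top-N positions
```

Because only N entries per row are non-zero and the proportions are fixed, such a scheme avoids the exponentials, divisions, and full-row normalization of an exact Softmax, which is consistent with the area and power savings reported in the abstract.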
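
QGELU is described as a piecewise second-order polynomial approximation of GELU whose coefficients can be evaluated with shifts and adds only. The snippet below is a rough illustrative fit in that spirit, using the identity GELU(x) = x + GELU(-x) and the shift-and-add-friendly coefficients 0.1875 (2⁻³ + 2⁻⁴) and 0.375 (2⁻² + 2⁻³); the thesis's actual segment boundaries and coefficients are not given here and will differ.

```python
import torch

def qgelu_sketch(x: torch.Tensor) -> torch.Tensor:
    """Piecewise-quadratic GELU approximation whose coefficients
    (0.1875 = 2^-3 + 2^-4, 0.375 = 2^-2 + 2^-3) need only shifts and adds.
    Segment boundaries and coefficients are a rough illustrative fit,
    NOT the thesis's QGELU parameters."""
    t = -x.abs()                                    # exploit GELU(x) = x + GELU(-x)
    h = torch.where(t > -2.0,
                    0.1875 * t * t + 0.375 * t,     # quadratic segment on (-2, 0]
                    torch.zeros_like(t))            # GELU is ~0 for inputs below -2
    return torch.relu(x) + h                        # add x back on the positive side
```

The symmetry trick means only the negative branch needs a fitted segment, so the hardware only ever evaluates one small quadratic plus a conditional add, mirroring the low-cost structure the abstract claims for QGELU.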
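
The multi-mode approximate adder supports exact addition, LSB approximation, and two-LSB approximation. The behavioral model below stands in for the OAI21/AAI21 cell-level design with a generic lower-bits-OR scheme, just to show how the modes trade accuracy on the lowest bits for cheaper logic; the bit width, mode encoding, and error behavior are assumptions, not the chip's actual circuitry.

```python
def approx_add(a: int, b: int, mode: int = 0, width: int = 12) -> int:
    """Behavioral model of a multi-mode adder: mode 0 = exact,
    mode 1 = approximate the LSB, mode 2 = approximate the two LSBs.
    Approximated low bits are OR-ed (lower-OR-adder style) and generate no
    carry into the exact upper part; this models only the idea, not the
    thesis's OAI21/AAI21 logic."""
    mask_hi = (1 << width) - (1 << mode)       # bits added exactly
    mask_lo = (1 << mode) - 1                  # bits handled approximately
    hi = ((a & mask_hi) + (b & mask_hi)) & ((1 << width) - 1)
    lo = (a | b) & mask_lo                     # cheap OR instead of add, no carry-out
    return hi | lo

# Example: two-LSB approximation of 13 + 7 (exact sum is 20)
print(approx_add(13, 7, mode=2))               # prints 19: small error from the OR-ed low bits
```

In an accelerator, the exact mode would be reserved for accuracy-sensitive stages while the approximate modes are enabled where the abstract reports the 4.3% and 21.4% PE-cube power savings.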
Appears in Collections: 電子工程學研究所
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-114-1.pdf | 10.22 MB | Adobe PDF |
