Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101482

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 丁建均 | zh_TW |
| dc.contributor.advisor | Jian-Jiun Ding | en |
| dc.contributor.author | 陳祈安 | zh_TW |
| dc.contributor.author | Chi-An Chen | en |
| dc.date.accessioned | 2026-02-04T16:08:01Z | - |
| dc.date.available | 2026-02-05 | - |
| dc.date.copyright | 2026-02-04 | - |
| dc.date.issued | 2026 | - |
| dc.date.submitted | 2026-01-26 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101482 | - |
| dc.description.abstract | 神經視訊編碼 (Neural Video Coding, NVC) 近年來在壓縮效率上已展現出超越傳統視訊編碼標準 (如 H.265/HEVC, H.266/VVC) 的潛力。然而,如何將其實現於實時應用 (如視訊會議、直播) 仍是一大挑戰。儘管 DCVC-RT 等高效能架構已突破了計算複雜度的瓶頸,但目前針對此類先進 NVC 模型的實時碼率控制 (Rate Control) 機制研究仍然匱乏,且現有方法難以適應神經網路特有的非線性與非穩態特徵。 本論文提出了一種針對實時神經視訊編碼器 DCVC-RT 的自適應狀態空間碼率控制系統。首先,透過全域搜索分析,本研究確立了 DCVC-RT 的碼率與量化參數 (QP) 之間呈現對數關係 (Logarithmic R-Q Model),並利用此線性化特性引入遞迴最小平方法 (Recursive Least Squares, RLS) 進行在線參數估計。相較於傳統梯度下降法,RLS 演算法具備極快的收斂速度與穩定性。針對 DCVC-RT 特有的週期性特徵刷新機制,本研究進一步提出了「雙狀態 RLS (Dual-State RLS)」策略,將一般幀與刷新幀的參數估計分離,有效解決了刷新幀作為異常值導致的模型參數劇烈震盪 (Model Shattering) 問題。此外,本系統結合了滑動視窗預算分配、層級化質量結構以及智慧輸出平滑化機制,以確保長期流量的穩定性與視覺品質。實驗結果顯示,在 UVG、MCL-JCV 與 HEVC Class B 等多個標準數據集上,本研究所提出的方法均優於現有的基準方法。特別是在 UVG 數據集上,相比於基準算法,本方法實現了平均 14.9% 的 BD-Rate 性能提升,並將平均碼率誤差 (BRE) 控制在 1.92% 的高精準度範圍內。同時,計算複雜度分析顯示本方法的額外時間開銷僅約 1%,證實了其在實時應用中的可行性。本研究成功填補了高效能 NVC 與實時傳輸需求之間的缺口,為神經視訊編碼的實際部署提供了關鍵的技術解決方案。 | zh_TW |
| dc.description.abstract | Neural Video Coding (NVC) shows great potential to surpass traditional standards like H.265/HEVC and H.266/VVC in compression efficiency. However, realizing NVC in real-time applications remains challenging. While architectures like DCVC-RT have reduced computational complexity, effective real-time Rate Control (RC) mechanisms for these models are scarce, as existing methods struggle with the non-linear and non-stationary nature of neural networks. This thesis proposes an adaptive state-space RC system specifically designed for the real-time neural video codec DCVC-RT. Global search analysis reveals a logarithmic relationship between bitrate and the Quantization Parameter (QP) in DCVC-RT. Leveraging this linearity, a Recursive Least Squares (RLS) algorithm is introduced for online parameter estimation, offering superior convergence and stability over gradient-based methods. To address the periodic feature refresh mechanism in DCVC-RT, a "Dual-State RLS" strategy is proposed. By separating parameter estimation for normal and refresh frames, this strategy prevents model shattering caused by outliers. Furthermore, the system integrates sliding window budget allocation, a hierarchical quality structure, and smart output smoothing to ensure traffic stability and visual quality. Experimental results on the UVG, MCL-JCV, and HEVC Class B datasets demonstrate that the proposed method significantly outperforms baselines. On the UVG dataset, it achieves a 14.9% BD-Rate improvement and maintains an average Bitrate Error (BRE) of 1.92%. With approximately 1% additional time overhead, the method proves feasible for real-time use. This study bridges the gap between high-performance NVC and real-time transmission needs, offering a key solution for practical deployment. | en |
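The abstract's central idea (a logarithmic R-Q model whose parameters are tracked online with RLS, and inverted to pick a QP for a target bitrate) can be sketched as follows. This is a minimal illustration, not the thesis' actual code: the model form `ln(rate) = a·QP + b`, the class name, and the forgetting-factor default are all assumptions made for the example.

```python
import numpy as np

class RLSRateModel:
    """Online fit of a log-domain R-Q model via recursive least squares.

    Assumed model form (an illustration, not the thesis' exact equations):
        ln(rate) = a * QP + b
    so each observation is x = [qp, 1], y = ln(rate), theta = [a, b].
    """

    def __init__(self, forgetting=0.95):
        self.lam = forgetting              # forgetting factor (lambda)
        self.theta = np.zeros(2)           # parameter estimate [a, b]
        self.P = np.eye(2) * 1e4           # inverse correlation matrix

    def update(self, qp, rate):
        """Fold one (QP, measured bitrate) observation into the model."""
        x = np.array([qp, 1.0])
        y = np.log(rate)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)       # RLS gain vector
        e = y - self.theta @ x             # a-priori prediction error
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def qp_for_rate(self, target_rate):
        """Invert the fitted model to choose a QP for a target bitrate."""
        a, b = self.theta
        return (np.log(target_rate) - b) / a
```

A "Dual-State RLS" in the sense described above would simply keep two such estimators, routing refresh-frame observations to one and normal-frame observations to the other, so that a refresh-frame outlier cannot corrupt the normal-frame model.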
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2026-02-04T16:08:01Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2026-02-04T16:08:01Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 致謝 ...... i 摘要 ...... ii ABSTRACT ...... iii CONTENTS ...... iv LIST OF FIGURES ...... vii LIST OF TABLES ...... ix Chapter 1 Introduction ...... 1 1.1 Background ...... 1 1.2 Problem Statement ...... 2 1.3 Motivation ...... 3 1.4 Primary Contributions ...... 4 1.5 Thesis Organization ...... 5 Chapter 2 Fundamentals ...... 7 2.1 Video Compression Basics ...... 7 2.1.1 Information Theory - Entropy ...... 7 2.1.2 Redundancy ...... 8 2.1.3 The Rate-Distortion (R-D) Trade-off ...... 11 2.2 From Traditional Standards to Neural Video Coding ...... 13 2.2.1 The Generalized Hybrid Coding Pipeline ...... 13 2.2.2 Temporal Prediction: Motion Vectors to Optical Flow ...... 15 2.2.3 Spatial Transformation: Linear to Non-linear ...... 16 2.2.4 Differentiable Quantization and Optimization ...... 17 2.2.5 Entropy Coding: Context Models to Learned Priors ...... 19 Chapter 3 Literature Review ...... 23 3.1 State-of-the-Art Neural Video Codecs ...... 23 3.1.1 Developmental Progression ...... 23 3.1.2 Conditional Coding and DCVC Architecture ...... 24 3.1.3 Single Model, Variable R-D Performance ...... 27 3.1.4 Wider Quality Range and Real-Time Adaptation ...... 30 3.1.5 Summary ...... 32 3.2 Rate Control Algorithms and Limitations ...... 33 3.2.1 Rate Control in Traditional Codecs ...... 33 3.2.2 Rate Control Approaches in Neural Video Compression ...... 36 3.2.3 Comparative Analysis and Limitations ...... 41 Chapter 4 Proposed Method: Adaptive State-Space Design for Real-Time NVC Rate Control ...... 47 4.1 System Overview ...... 47 4.2 R-Q Modeling and Empirical Analysis ...... 49 4.3 Adaptive Parameter Estimation ...... 53 4.3.1 Recursive Least Squares (RLS) Algorithm ...... 53 4.3.2 Comparison with Gradient-based Methods ...... 55 4.3.3 Challenge: Vulnerability to Refresh Frames ...... 56 4.3.4 Proposed Solution: Dual-State RLS ...... 57 4.3.5 Parameter Stabilization (First-Stage Smoothing) ...... 59 4.4 Bitrate Allocation Strategy ...... 60 4.4.1 GOP-based Allocation with Sliding Window ...... 60 4.4.2 Hierarchical Quality Structure and Feature Propagation ...... 60 4.4.3 Dynamic GOP Rebalancing ...... 61 4.5 Dynamic Smoothing and Control ...... 62 4.5.1 Smart Output Smoothing (Second-Stage Smoothing) ...... 62 4.5.2 Algorithm Summary ...... 63 Chapter 5 Experiments and Results ...... 65 5.1 Experimental Setup ...... 65 5.1.1 Datasets ...... 65 5.2 Evaluation Benchmarks ...... 67 5.2.1 Rate Control Accuracy ...... 67 5.2.2 R-D Performance ...... 67 5.2.3 Time Overhead ...... 68 5.3 Implementation Details ...... 69 5.3.1 Experimental Settings ...... 69 5.3.2 Baseline Methods ...... 69 5.3.3 Parameter Settings ...... 71 5.4 Performance Comparison ...... 73 5.4.1 Quantitative Results on UVG Dataset ...... 73 5.4.2 Generalization on Diverse Datasets ...... 74 5.4.3 Environment Specifications and Computational Overhead ...... 77 5.5 Parametric Analysis and Robustness ...... 77 5.5.1 Robustness to Initialization ...... 78 5.5.2 Sensitivity to Update Dynamics ...... 79 5.6 Summary ...... 80 Chapter 6 Conclusion ...... 83 6.1 Summary of Contributions ...... 83 6.2 Limitations and Future Work ...... 84 References ...... 87 | - |
| dc.language.iso | en | - |
| dc.subject | 神經視訊編碼 | - |
| dc.subject | 碼率控制 | - |
| dc.subject | 遞迴最小平方法 | - |
| dc.subject | 實時視訊通訊 | - |
| dc.subject | DCVC-RT | - |
| dc.subject | Neural Video Coding | - |
| dc.subject | Rate Control | - |
| dc.subject | Recursive Least Squares | - |
| dc.subject | Real-Time Video Communication | - |
| dc.subject | DCVC-RT | - |
| dc.title | 基於遞迴最小平方法之實時神經視訊編碼碼率控制 | zh_TW |
| dc.title | Real-Time Rate Control for Neural Video Coding via Recursive Least Squares | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 盧奕璋;蘇柏齊;曾易聰 | zh_TW |
| dc.contributor.oralexamcommittee | Yi-Chang Lu;Po-Chyi Su;Yi-Chong Zeng | en |
| dc.subject.keyword | 神經視訊編碼,碼率控制,遞迴最小平方法,實時視訊通訊,DCVC-RT | zh_TW |
| dc.subject.keyword | Neural Video Coding,Rate Control,Recursive Least Squares,Real-Time Video Communication,DCVC-RT | en |
| dc.relation.page | 93 | - |
| dc.identifier.doi | 10.6342/NTU202600297 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2026-01-27 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電信工程學研究所 | - |
| dc.date.embargo-lift | N/A | - |
| Appears in Collections: | 電信工程學研究所 |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-114-1.pdf (restricted access) | 16.82 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
