Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54149

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 丁建均 | |
| dc.contributor.author | Ching-Wen Hsiao | en |
| dc.contributor.author | 蕭景文 | zh_TW |
| dc.date.accessioned | 2021-06-16T02:42:04Z | - |
| dc.date.available | 2019-07-23 | |
| dc.date.copyright | 2015-07-23 | |
| dc.date.issued | 2015 | |
| dc.date.submitted | 2015-07-21 | |
| dc.identifier.citation | A. Fundamentals
[1] J. B. O’Neal, “Differential Pulse-Code Modulation (DPCM) with Entropy Coding,” IEEE Transactions on Information Theory, vol. IT-21, pp. 169-174, 1976.
[2] D. A. Huffman, “A method for the construction of minimum redundancy codes,” Proceedings of the IRE, vol. 40, pp. 1098-1101, 1952.
[3] I. H. Witten, R. M. Neal, and J. G. Cleary, “Arithmetic coding for data compression,” Communications of the ACM, vol. 30, pp. 520-540, 1987.
[4] P. Soille, Morphological Image Analysis: Principles and Applications, Berlin, Germany: Springer Verlag, 2003.
B. Binary Feature Point
[5] C. Harris and M. Stephens, “A combined corner and edge detector,” Proceedings of the 4th Alvey Vision Conference, pp. 189-192, 1988.
[6] D. G. Lowe, “Object recognition from local scale-invariant features,” IEEE International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
[7] S. Leutenegger, M. Chli, and R. Siegwart, “BRISK: Binary robust invariant scalable keypoints,” IEEE International Conference on Computer Vision, pp. 2548-2555, 2011.
[8] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” IEEE International Conference on Computer Vision, pp. 2564-2571, 2011.
[9] A. Alahi, R. Ortiz, and P. Vandergheynst, “FREAK: Fast retina keypoint,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 510-517, 2012.
[10] V. Chandrasekhar, M. Makar, G. Takacs, D. Chen, S. S. Tsai, N. M. Cheung, R. Grzeszczuk, Y. Reznik, and B. Girod, “Survey of SIFT compression schemes,” IEEE International Conference on Pattern Recognition, pp. 1-8, 2010.
[11] J. Chen, L. Y. Duan, R. Ji, H. Yao, and W. Gao, “Sorting local descriptors for low bit rate mobile visual search,” IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 1029-1032, 2011.
[12] J. Ascenso and F. Pereira, “Lossless compression of binary image descriptors for visual sensor networks,” IEEE International Conference on Digital Signal Processing, pp. 1-8, 2013.
[13] A. Moffat, R. M. Neal, and I. H. Witten, “Arithmetic coding revisited,” ACM Transactions on Information Systems, vol. 16, pp. 256-294, 1998.
[14] A. Said, “Introduction to arithmetic coding - theory and practice,” Hewlett Packard Laboratories Report, 2004.
C. Chain Code
[15] H. Freeman, “Computer processing of line drawing images,” ACM Computing Surveys, vol. 6, pp. 57-59, 1974.
[16] H. Freeman, “Application of the generalized chain coding scheme to map data processing,” Proceedings of IEEE Pattern Recognition and Image Processing, pp. 220-226, 1978.
[17] H. Hampel, R. B. Arps, C. Chamzas, D. Delleert, D. L. Duttweiler, T. Endoh, W. Equitz, F. Ono, R. Pasco, and I. Sebestyen, “Technical features of the JBIG standard for progressive bi-level image compression,” Signal Processing: Image Communication, vol. 4, no. 2, pp. 103-111, 1992.
[18] Y. K. Liu and B. Zalik, “An efficient chain code with Huffman coding,” Pattern Recognition, vol. 38, pp. 553-557, 2005.
[19] E. Bribiesca, “A new chain code,” Pattern Recognition, vol. 32, pp. 235-251, 1999.
[20] Y. K. Liu, W. Wei, P. J. Wang, and B. Zalik, “Compressed vertex chain codes,” Pattern Recognition, vol. 40, pp. 2908-2913, 2007.
[21] H. Sanchez-Cruz, “Proposing a new code by considering pieces of discrete straight lines in contour shapes,” Journal of Visual Communication and Image Representation, vol. 21, pp. 311-324, 2010.
[22] B. Žalik and N. Lukač, “Chain code lossless compression using move-to-front transform and adaptive run-length encoding,” Signal Processing: Image Communication, vol. 29, pp. 96-106, 2014.
[23] H. Sanchez-Cruz and R. M. Rodriguez-Dagnino, “Compressing bilevel images by means of a three-bit chain code,” Optical Engineering, vol. 44, pp. 1-8, 2005.
[24] M. Burrows and D. Wheeler, “A block sorting lossless data compression algorithm,” Digital Equipment Corporation, Palo Alto, CA, Technical Report 124, 1994.
[25] H. Sanchez-Cruz, E. Bribiesca, and R. M. Rodriguez-Dagnino, “Efficiency of chain codes to represent binary objects,” Pattern Recognition, vol. 40, no. 6, pp. 1660-1674, 2007.
[26] S. Zahir and K. Dhou, “A new chain coding based method for binary image compression and reconstruction,” Picture Coding Symposium, pp. 1321-1324, 2007.
[27] L. Linghua, L. Yining, L. Yongkui, and B. Žalik, “Evaluation and comparison on the techniques of vertex chain codes,” Journal of Software, vol. 7, pp. 2840-2848, 2012.
[28] H. Sánchez-Cruz and H. H. López-Valdez, “Equivalence of chain codes,” Journal of Electronic Imaging, vol. 23, pp. 13-31, 2014.
D. Polygonal Approximation
[29] U. Ramer, “An iterative procedure for the polygonal approximation of plane curves,” Computer Graphics and Image Processing, pp. 244-256, 1972.
[30] A. Dziech, A. Ukasha, and R. Baran, “A new method for contour compression,” Proceedings of the 5th WSEAS International Conference on Signal, Speech and Signal Processing, pp. 282-286, 2005.
[31] A. Ukasha, E. Saffor, and M. A. S. Hassan, “Contour compression using trapezoid method,” Computer and Information Sciences, pp. 1-6, 2008.
[32] D. K. Prasad, C. Quek, M. K. H. Leung, and S. Y. Cho, “A parameter independent line fitting method,” Asian Conference on Pattern Recognition, pp. 441-445, 2011.
[33] D. K. Prasad, “PRO: A novel approach to precision and reliability optimization based dominant point detection,” Journal of Optimization, 2013.
[34] A. R. Backes and O. M. Bruno, “Polygonal approximation of digital planar curves through vertex betweenness,” Information Sciences, vol. 222, pp. 795-804, 2013.
[35] E. J. Aguilera-Aguilera, A. Carmona-Poyato, F. J. Madrid-Cuevas, and R. Medina-Carnicer, “The computation of polygonal approximations for 2D contours based on a concavity tree,” Journal of Visual Communication and Image Representation, pp. 1905-1917, 2014.
[36] M. T. Parvez, “Optimized polygonal approximations through vertex relocations in contour neighborhoods,” Image and Vision Computing, vol. 34, pp. 1-10, 2015.
[37] A. Masood, “Optimized polygonal approximation by dominant point deletion,” Pattern Recognition, vol. 41, pp. 227-239, 2008.
[38] A. Masood, “Dominant point deletion by reverse polygonization of digital curves,” Image and Vision Computing, vol. 26, pp. 702-715, 2008.
[39] A. Carmona-Poyato, F. J. Madrid-Cuevas, R. Medina-Carnicer, and R. Muñoz-Salinas, “Polygonal approximation of digital planar curves through break point suppression,” Pattern Recognition, vol. 43, no. 1, pp. 14-25, 2010.
[40] M. T. Parvez and S. A. Mahmoud, “Polygonal approximation of digital planar curves through adaptive optimizations,” Pattern Recognition Letters, vol. 31, no. 13, pp. 1997-2005, 2010.
[41] M. T. Parvez and S. A. Mahmoud, “Polygonal approximation of planar curves using triangular suppression,” Proceedings of IEEE Information Sciences, Signal Processing and their Applications, pp. 622-625, 2010.
[42] J. M. Inesta, M. Buendia, and M. A. Sart, “Reliable polygonal approximations of imaged real objects through dominant point detection,” Pattern Recognition, vol. 31, pp. 685-697, 1998.
E. Database
[43] Zalik’s chain codes dataset: http://gemma.uni-mb.si/chaincodes/
[44] MPEG7 shape dataset: http://www.dabi.temple.edu/~shape/MPEG7/dataset.html
F. Source Code
[45] Matlab implementation of the arithmetic encoder and decoder: http://www.mathworks.com/matlabcentral/fileexchange/2818-huffman-coding-and-arithmetic-coding/content/Arith07.m
[46] Matlab implementation of the Burrows-Wheeler transform: http://www.mathworks.com/matlabcentral/fileexchange/4904-bwt-encoder-and-decoder/content/bwt_matlab_code/bwt.m
[47] Execution software of the move-to-front transform with adaptive run-length coding method in [22]: http://gemma.uni-mb.si/chaincodes/
[48] Matlab source code of the linefit RDP method in [32]: https://docs.google.com/file/d/0B10RxHxW3I92dG9SU0pNMV84alk/edit?usp=drive_web&urp=https://sites.google.com/site/dilipprasad/source-c&pli=1 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/54149 | - |
| dc.description.abstract | Shape is an important feature in fields such as object recognition, template matching, and image analysis. To depict shapes efficiently, a major challenge is how to store the two key shape features, corners and contours, with very little data. This thesis therefore proposes three data compression techniques that efficiently compress corner and contour information.
In corner compression, a series of corners is first reordered according to distance information; an active matrix then stores the corner positions, and adaptive arithmetic coding with context modeling raises the coding efficiency. Compression methods for the other information of binary feature points are also proposed: the angle part exploits the correlation with the feature points' distance information, and the descriptor part uses an asymmetric reference-point selection scheme to strengthen predictive coding.
In lossless contour compression, the algorithm first applies a slight morphological deformation to shrink the contour to be processed. It then adopts a concept close to the Freeman angle chain code, but divides the code into a main chain code and a sub chain code: by observation, a chain mostly continues in its current direction or turns 45 degrees to the left or right, so to reduce symbol diversity the remaining directions share one symbol in the main chain code and are distinguished only in the sub chain code. Symbol substitution in the main chain code further condenses the symbol combinations that digital contours easily produce. Probability statistics then convert the symbols into a Huffman code that serves as an intermediate code, and a distribution transform changes the distribution of the two bits (0 and 1), both clustering identical bits and raising the probability of a 0. Finally, context modeling improves the compression efficiency of adaptive arithmetic coding.
In lossy contour compression, the essence of the algorithm is to approximate the original shape with polynomial curves connecting important vertices, called dominant points. These are found by a first screening based on curvature; after the initial dominant points are obtained, third-order polynomials with optimization approximate the original shape and the error against it is computed. If the error is too large, a new dominant point is added at a suitable position until the error becomes tolerable. A search over the eight-connected neighborhood of each dominant point then moves it to a better position, if one exists, and adjusts the cubic-polynomial parameters. Dominant points are recorded with a concept similar to the corner compression above, correlations among the cubic-polynomial parameters are also exploited, and an improved adaptive arithmetic coder effectively reduces the required data size. | zh_TW |
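The lossless contour scheme described in the abstract above splits a relative-direction chain code into a main stream and a sub stream. The following Python is a minimal illustrative sketch of that idea, not the thesis's implementation: it assumes a closed 8-connected contour given as an ordered point list, and the symbol alphabet ('F', 'L', 'R', 'X') is an assumption chosen for readability, not the thesis's actual symbols.

```python
# Sketch: relative-direction chain coding with a main/sub symbol split.
# Illustrative only -- the symbol choices here are assumptions, not the
# thesis's exact scheme.

# 8-connectivity steps, indexed by Freeman direction 0..7
# (0 = east, increasing counter-clockwise).
STEPS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
STEP_TO_DIR = {s: d for d, s in enumerate(STEPS)}

def freeman_chain(points):
    """Absolute Freeman codes for a closed 8-connected contour."""
    n = len(points)
    codes = []
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        codes.append(STEP_TO_DIR[(x1 - x0, y1 - y0)])
    return codes

def split_relative(codes):
    """Convert absolute codes to relative turns, then split them into a
    main stream ('F' straight, 'L'/'R' for 45-degree turns, 'X' for
    anything else) and a sub stream holding only the rare wide turns."""
    main, sub = [], []
    for prev, cur in zip(codes, codes[1:] + codes[:1]):
        turn = (cur - prev) % 8          # 0..7, in 45-degree units
        if turn == 0:
            main.append('F')
        elif turn == 1:
            main.append('L')
        elif turn == 7:
            main.append('R')
        else:
            main.append('X')             # rare: disambiguate in sub stream
            sub.append(turn)
    return main, sub

# A small axis-aligned square contour (8-connected, counter-clockwise).
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
codes = freeman_chain(square)
main, sub = split_relative(codes)
print(codes)       # absolute directions, e.g. [0, 0, 2, 2, 4, 4, 6, 6]
print(main, sub)   # main stream dominated by few symbols; corners in sub
```

Because most turns on a digital contour are straight or 45 degrees, the main stream has low symbol diversity and compresses well with a Huffman or arithmetic coder, while the short sub stream carries the rest.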
| dc.description.abstract | Shape is an important feature for object recognition, template matching, and image analysis. To represent it efficiently, techniques that encode corners and contours at very low bit rates are necessary. Therefore, in this thesis, three data compression techniques are proposed to reduce the data size required for corners and contours.
For corner compression, we rearrange the corners based on distance information and use an active matrix to encode their positions. Adaptive arithmetic coding and context modeling are also adopted to improve the efficiency. In addition, techniques to compress the other information in binary feature points are proposed: to encode the angles, the correlation with the distance information of the feature points is considered, and for descriptors, an asymmetric reference-point selection scheme is proposed to improve the predictive coding.
For lossless contour compression, the proposed algorithm first applies a morphological operation to shrink the contour slightly, and then uses a concept similar to the angle Freeman chain code. The chain code, however, is split into a main chain code and a sub chain code: from observation, the angles between consecutive directions are mostly 0 degrees or 45 degrees to the right or left. To decrease the symbol diversity, all angles other than 0, 45, and -45 degrees are represented by the same symbol in the main chain code and are distinguished only in the sub chain code. Moreover, symbol substitution in the main chain code simplifies the symbol combinations most common in digital contours. After that, a Huffman code derived from the symbol statistics is applied to the main chain code as an intermediate code, and a distribution transform alters the distribution of zeros and ones to improve compression efficiency: not only do identical bits become more clustered, but the probability of a zero also increases. Finally, adaptive arithmetic coding with context modeling reduces the data size of the contour considerably.
The central concept of the proposed lossy contour compression is to approximate the original shape by a combination of vertices, called dominant points, and polynomial curves. First, the initial dominant points are chosen according to a curvature measure, and third-order polynomials with optimization are used to approximate the original contour. After the error between the approximated and original contours is calculated, new dominant points are iteratively added at suitable positions until the error is tolerable. By tuning over the eight-connected neighborhood of each dominant point, the dominant point and its polynomial coefficients are replaced whenever a better position exists. To encode the coordinates of the dominant points, a concept similar to the proposed corner compression techniques is adopted; we also found correlation among the polynomial coefficients, so the data size can be significantly decreased by an improved adaptive arithmetic coder. | en |
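The lossy scheme's refinement loop, fit a cubic between dominant points, measure the error, and insert a new dominant point where the fit is worst, can be sketched as follows. This is a simplified illustration under stated assumptions: it uses `numpy.polyfit` least squares rather than the thesis's optimization, a plain point-wise deviation rather than its integral minimum point distance, and omits the neighborhood tuning step.

```python
# Sketch of the dominant-point refinement loop: approximate a contour
# segment by a parametric cubic and split at the worst-error point
# until the error is tolerable. Illustrative, not the thesis's exact
# optimization or error metric.
import numpy as np

def fit_cubic(seg):
    """Least-squares cubics x(t), y(t) over a point segment; returns
    the coefficient pair and the point-wise deviations."""
    t = np.linspace(0.0, 1.0, len(seg))
    xs, ys = seg[:, 0], seg[:, 1]
    cx, cy = np.polyfit(t, xs, 3), np.polyfit(t, ys, 3)
    err = np.hypot(np.polyval(cx, t) - xs, np.polyval(cy, t) - ys)
    return (cx, cy), err

def approximate(contour, tol):
    """Recursively add dominant points where the cubic fit is worst.
    Returns segment-end indices (dominant points) and one coefficient
    pair per segment."""
    seg = np.asarray(contour, dtype=float)
    (cx, cy), err = fit_cubic(seg)
    if err.max() <= tol or len(seg) <= 5:
        return [0, len(seg) - 1], [(cx, cy)]
    k = int(err.argmax())                  # new dominant point here
    k = max(1, min(k, len(seg) - 2))       # keep both halves non-trivial
    left_idx, left_c = approximate(seg[:k + 1], tol)
    right_idx, right_c = approximate(seg[k:], tol)
    idx = left_idx + [k + i for i in right_idx[1:]]
    return idx, left_c + right_c

# A quarter-circle arc as a stand-in for a contour segment.
theta = np.linspace(0, np.pi / 2, 50)
arc = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
dominant, curves = approximate(arc, tol=0.05)
print(dominant)    # indices of the points kept as dominant points
```

Only the dominant-point coordinates and the per-segment cubic coefficients would then be entropy-coded, which is where the corner-compression scheme and the coefficient correlations mentioned above come in.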
| dc.description.provenance | Made available in DSpace on 2021-06-16T02:42:04Z (GMT). No. of bitstreams: 1 ntu-104-R02942031-1.pdf: 7513981 bytes, checksum: 794805673f87d3d7775b4a357d3eecd0 (MD5) Previous issue date: 2015 | en |
| dc.description.tableofcontents | Acknowledgements i
Chinese Abstract ii
ABSTRACT iv
CONTENTS vi
LIST OF FIGURES ix
LIST OF TABLES xiii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Organization 2
Chapter 2 Fundamentals 3
2.1 Compression Basics 3
2.1.1 Predictive Coding 3
2.1.2 Entropy Coding 5
2.2 Morphological Processes 8
2.2.1 Erosion 8
2.2.2 Dilation 9
2.2.3 Hole Filling 10
2.3 Chain Code 11
2.3.1 Freeman Chain Codes 11
2.3.2 Vertex Chain Codes 14
2.4 Polygonal Approximation 17
2.4.1 Ramer Method 18
2.4.2 Other Dominant Point Detection Methods 19
2.4.3 Dominant Point Deletion Methods 24
Chapter 3 The Proposed Binary Feature Point Compression Techniques 27
3.1 Introduction 27
3.2 Related Work 28
3.3 Overview of the Proposed Framework 30
3.3.1 Position Encoding 30
3.3.2 Angle Encoding 33
3.3.3 Descriptor Coding 35
3.4 Simulation Results 41
3.5 Summary 44
Chapter 4 The Proposed Lossless Contour Compression Techniques 45
4.1 Introduction 45
4.2 Related Work 46
4.3 Overview of the Proposed Framework 47
4.3.1 Morphology Operation 48
4.3.2 Chain Code and Division 49
4.3.3 Huffman Code 51
4.3.4 Distribution Transform 52
4.3.5 Context Modeling and Adaptive Arithmetic Coding 55
4.4 Simulation Results 57
4.5 Summary 62
Chapter 5 The Proposed Lossy Contour Compression Techniques 64
5.1 Introduction 64
5.2 Related Work 65
5.3 Overview of the Proposed Framework 66
5.3.1 Curvature Measure 67
5.3.2 Dominant Points Searching 68
5.3.3 Connection Curves Approximation 69
5.3.4 Integral Minimum Point Distance 74
5.3.5 Neighborhood Tuning 76
5.3.6 Dominant Points Encoding 77
5.3.7 Curve Coefficients Encoding 79
5.4 Simulation Results 80
5.5 Summary 91
Chapter 6 Conclusions and Future Work 92
6.1 Conclusions 92
6.2 Future Work 93
REFERENCE 95
PUBLICATION 101 | |
| dc.language.iso | en | |
| dc.subject | 輪廓近似 | zh_TW |
| dc.subject | 影像壓縮 | zh_TW |
| dc.subject | 資料壓縮 | zh_TW |
| dc.subject | 熵編碼 | zh_TW |
| dc.subject | 預測編碼 | zh_TW |
| dc.subject | 二元特徵點 | zh_TW |
| dc.subject | 鏈碼 | zh_TW |
| dc.subject | Binary feature point | en |
| dc.subject | Predictive coding | en |
| dc.subject | Entropy coding | en |
| dc.subject | Data compression | en |
| dc.subject | Image compression | en |
| dc.subject | Chain code | en |
| dc.subject | Contour approximation | en |
| dc.title | 適用於角點與輪廓之無損與有損壓縮技術 | zh_TW |
| dc.title | Lossless and Lossy Compression Techniques for Corners and Contours | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 103-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 郭景明,葉敏宏,簡鳳村 | |
| dc.subject.keyword | 預測編碼,熵編碼,資料壓縮,影像壓縮,二元特徵點,鏈碼,輪廓近似 | zh_TW |
| dc.subject.keyword | Predictive coding, Entropy coding, Data compression, Image compression, Binary feature point, Chain code, Contour approximation | en |
| dc.relation.page | 101 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2015-07-21 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
| Appears in Collections: | Graduate Institute of Communication Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-104-1.pdf (Restricted access) | 7.34 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
