Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8005

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力 | |
| dc.contributor.author | Yi-Chun Lin | en |
| dc.contributor.author | 林意淳 | zh_TW |
| dc.date.accessioned | 2021-05-19T18:02:22Z | - |
| dc.date.available | 2024-03-22 | |
| dc.date.available | 2021-05-19T18:02:22Z | - |
| dc.date.copyright | 2019-03-22 | |
| dc.date.issued | 2014 | |
| dc.date.submitted | 2014-08-19 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8005 | - |
| dc.description.abstract | 視覺感測器因其具有競爭性的價格和豐富的感測資訊量,因此,在過去幾年間被廣泛的運用在特定的區域,並且有目的性的收集感興趣目標物的影像資訊。在許多應用上都可見到其提供協助或是監視功能的蹤跡,例如:工業機器人、軍事防禦和監控系統。隨著視覺感測系統使用的快速增加,也產生了越來越多需要被傳輸的影像資料。然而,要在一個共享且頻寬有限制的網路上進行大量的影像傳輸,是非常困難且具有挑戰性的。其引起的嚴重延遲以及封包丟失會大大的降低系統表現和影像分析結果。為了兼顧系統表現和可靠穩定的傳輸,在本論文中基於資訊密度、資料獨特性和系統動態性,提出針對影像資料的解析度和傳輸量的控制方法。就理論面而言,影像資料的解析度控制被轉換成量化回饋穩定性問題並且以 Lyapunov 的方法加以證明。就實際應用上,所設計的影像解析度控制策略被實現在 PTZ 相機上,並且在室內和室外的實驗場景都得到極佳的表現。另一方面,影像資料的傳輸量控制可視為影像摘要問題,為了確保在經過資料減量過程後,系統表現依舊維持在令人滿意的範圍內。因此,在本論文,提出基於感知運動能量來設計針對獨特性資料的取樣策略並藉此移除重複性高的資料。接著,將其實現在大量且豐富的實驗場景,不僅可以得到將近 50% 的優秀資料減量結果,更重要的是,系統表現也維持在可接受範圍內。再者,在本論文中所提出影像資料解析度跟傳輸量的控制方法也與其他方法做比較,並藉此展現其卓越的優勢。 | zh_TW |
| dc.description.abstract | Visual sensors have been widely deployed in recent years in industrial, military, and public settings to collect abundant video data about objects of interest, owing to their competitive price and rich sensing capability. They appear in applications such as industrial robotics, military defense, and surveillance for assistance and monitoring purposes. The rapid adoption of visual sensing systems has increased the amount of video data to be transmitted, which makes transmission over a shared, bandwidth-limited communication network difficult and challenging. Moreover, control performance and video analysis results can be greatly degraded by constraints such as severe delays and packet dropouts induced by excessive transmitted video data. To account for both desired performance and reliable transmission, this dissertation proposes quality and quantity control of video packet data based on information density, data uniqueness, and system dynamics. Quality control is modeled as a quantized feedback stabilization problem and proved stable in the sense of Lyapunov. In practical applications, the designed quality control policies are implemented on a camera with zoom functionality and tested in indoor and outdoor environments, clearly demonstrating improved human tracking and detection performance. Quantity control, on the other hand, is modeled as a video summarization problem: to prevent system performance from degrading during data reduction, keyframe extraction rules based on perceived motion energy are designed to remove similar frames. A data reduction ratio of nearly 50%, together with acceptable tracking, detection, and transmission results, is demonstrated on abundant typical and experimental videos. Furthermore, the proposed quality and quantity control methods are compared with other approaches to show their advantages. | en |
| dc.description.provenance | Made available in DSpace on 2021-05-19T18:02:22Z (GMT). No. of bitstreams: 1 ntu-103-D96921002-1.pdf: 7651678 bytes, checksum: 9a63648418060d8368cc6ddda2595b97 (MD5) Previous issue date: 2014 | en |
| dc.description.tableofcontents | 摘要 (Abstract in Chinese); ABSTRACT; CONTENTS; LIST OF FIGURES; LIST OF TABLES; CHAPTER 1 INTRODUCTION (1.1 Motivation; 1.2 Contribution; 1.3 Organization); CHAPTER 2 LITERATURE SURVEY (2.1 Quality Control; 2.2 Quantity Control); CHAPTER 3 REGION-OF-INTEREST-BASED QUALITY CONTROL (3.1 Quantized State Feedback Stabilization; 3.2 Region-of-Interest-Based Zoom Control; 3.3 Experimental Results of Region-of-Interest-Based Quality Control: 3.3.1 Description, 3.3.2 Indoor Environment, 3.3.3 Outdoor Environment; 3.4 Summary); CHAPTER 4 KEYFRAME-BASED QUANTITY CONTROL (4.1 Keyframe Extraction Based on Perceived Motion Energy (PME); 4.2 Sampling Strategy; 4.3 Experimental Results of PME-Based Keyframe Extraction Analysis: 4.3.1 Description, 4.3.2 Experimental Results (Part I: Visual Sensor Is Mobile; Part II: Visual Sensor Is Stationary), 4.3.3 Comparison; 4.4 Summary); CHAPTER 5 CONCLUSION AND FUTURE WORK (5.1 Conclusion; 5.2 Future Work); REFERENCES | |
| dc.language.iso | en | |
| dc.title | 視覺感測系統之資訊密度的解析度控制與資料獨特性的傳輸量控制 | zh_TW |
| dc.title | Region-of-Interest-Based Quality Control and Keyframe-Based Quantity Control in Visual Sensing System | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 102-2 | |
| dc.description.degree | 博士 (Ph.D.) | |
| dc.contributor.oralexamcommittee | 張帆人,顏炳郎,簡忠漢,李後燦,黃正民 | |
| dc.subject.keyword | 解析度控制,傳輸量控制,資料獨特性,動態取樣,量化回饋穩定性,變焦控制 | zh_TW |
| dc.subject.keyword | Quality control, Quantity control, Keyframe extraction, Dynamic sampling, Quantized feedback stabilization, Zoom control | en |
| dc.relation.page | 204 | |
| dc.rights.note | Authorization granted (open access worldwide) | |
| dc.date.accepted | 2014-08-20 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| dc.date.embargo-lift | 2024-03-22 | - |
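The abstract models quality (resolution) control as a quantized feedback stabilization problem. As a minimal illustrative sketch of that idea, not the thesis's actual controller, the toy below stabilizes a scalar unstable plant through a saturating uniform quantizer whose sensitivity `mu` is coarsened ("zoomed out") while the quantizer saturates and refined afterwards, in the spirit of dynamic-quantizer schemes such as Brockett and Liberzon's. All function names and parameter values here are assumptions for illustration.

```python
import numpy as np

def dynamic_quantizer(x, mu, n_levels=8):
    """Uniform quantizer with step mu, saturating at +/- n_levels * mu.
    Returns (quantized value, saturation flag)."""
    q = mu * np.clip(np.round(x / mu), -n_levels, n_levels)
    saturated = abs(x) > (n_levels + 0.5) * mu
    return q, saturated

def simulate(a=1.5, gain=1.5, x0=40.0, mu0=1.0, steps=60):
    """Closed loop x[k+1] = a*x[k] - gain*q(x[k]) with 'zoom' adaptation:
    coarsen mu while the quantizer saturates, refine it otherwise."""
    x, mu, traj = x0, mu0, []
    for _ in range(steps):
        q, sat = dynamic_quantizer(x, mu)
        mu = mu * 2.0 if sat else max(mu * 0.9, 1e-3)  # zoom out / zoom in
        x = a * x - gain * q
        traj.append(x)
    return traj

traj = simulate()
```

Even though the plant is unstable (a = 1.5) and the initial state lies far outside the quantizer's range, zooming out brings the state into range, after which the quantization error (bounded by mu/2) and the shrinking mu drive the state into a small neighborhood of the origin.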
| Appears in Collections: | Department of Electrical Engineering | |
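On the quantity-control side, the abstract describes keyframe extraction from a perceived-motion-energy (PME) curve, keeping frames at motion peaks and discarding similar neighbors. The sketch below is a simplified stand-in, not the thesis's method: the actual PME model is built from motion vectors with a triangle pattern, whereas here a plain frame-difference energy proxy is used, and all function names are hypothetical.

```python
import numpy as np

def motion_energy(frames):
    """Per-frame motion-energy proxy: mean absolute intensity change
    between consecutive frames (frames: sequence of 2-D grayscale arrays)."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))
    return np.concatenate([[0.0], diffs.mean(axis=(1, 2))])

def select_keyframes(energy, min_gap=3):
    """Pick frame indices at local maxima of the energy curve (motion peaks),
    skipping peaks closer than min_gap frames to the previous keyframe."""
    keys, last = [], -min_gap
    for i in range(1, len(energy) - 1):
        if energy[i] > energy[i - 1] and energy[i] >= energy[i + 1] \
                and i - last >= min_gap:
            keys.append(i)
            last = i
    return keys
```

For example, an energy curve `[0, 1, 5, 1, 0, 0, 2, 8, 2, 0]` yields keyframes at the two motion peaks, indices 2 and 7; all other frames are treated as near-duplicates and dropped, which is the mechanism behind the roughly 50% data reduction the abstract reports.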
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-103-1.pdf | 7.47 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
