Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/45424

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳良基(Liang-Gee Chen) | |
| dc.contributor.author | Yu-Han Chen | en |
| dc.contributor.author | 陳昱翰 | zh_TW |
| dc.date.accessioned | 2021-06-15T04:19:25Z | - |
| dc.date.available | 2011-12-29 | |
| dc.date.copyright | 2009-12-29 | |
| dc.date.issued | 2009 | |
| dc.date.submitted | 2009-11-12 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/45424 | - |
| dc.description.abstract | Making computers handle tasks as intelligently as the human brain will be the main computing demand of the future. Moore's Law tells us that each chip will carry ever more computing resources, but has machine intelligence grown in step with computing power? The "hundred-step rule" teaches us that when we recognize a cat in a photo, the brain does not compute the result but retrieves related information from memory, which differs from how today's computers operate. To develop an intelligent machine, we can learn from the brain and build an efficient memory system rather than a computing system. In this thesis, we develop an Intelligence Processing Unit (IPU) to support future intelligent applications. First, we mimic the brain's memory-prediction structure to build a software system that provides multiple intelligent functions; next, we mimic the brain's memory-association mechanism to build an efficient hardware platform. We also propose the Cortex Compiler, which supports the mapping from the memory-prediction structure onto the hardware platform. Finally, these parts combine into a complete design flow for future intelligent applications.
In the first part of the thesis, we build a brain-mimicking computational model for vision analysis, focusing on processing after information enters the visual cortex. The neocortex is the core of the brain's intelligence, and the memory-prediction structure may be its unified operating mechanism; the hippocampus and the thalamus are also important participants in memory-prediction processing. The hippocampus helps form the memory-prediction structure, and the thalamus modulates the strength of individual signals during information integration. Combining the mechanisms of the neocortex, hippocampus, and thalamus, we propose a memory-prediction framework that simultaneously supports recognition, mining, and synthesis. The framework provides hierarchical, spatial, and temporal memory-prediction structures, which can synthesize predicted image information to improve vision-analysis results. For noisy images, the proposed model not only raises recognition accuracy but also removes noise and restores the image. It can also reconstruct occluded patterns, or imagine a learned image pattern when no input is present. The memory-prediction mechanism extends to other intelligent functions: for example, when we look for a car in an image, the concept of a car predicts its likely patterns, letting us quickly attend to the region containing the car instead of aimlessly searching the whole image. This shows that the memory-prediction framework may indeed be the unified, fundamental mechanism in the neocortex that produces human intelligence. In the second part of the thesis, we build a Silicon Brain hardware platform consisting of System Control, the Neocortex Processor, and the Cortex Compiler, corresponding respectively to the thalamus, neocortex, and hippocampus of our brain model. System Control handles the input and output of the memory-prediction network. The Neocortex Processor is a distributed memory-association system; to achieve localized data access, it passes information forward automatically (push-based processing) instead of fetching data through main memory as in conventional processors (pull-based processing). We then propose a dataflow-in-memory mechanism that distributes the computations needed for memory association across many small memories, making data access more efficient. We also adopt a network-on-chip for data transfer to provide better scalability. In addition, we adopt a brain-like information-gating mechanism: only memory patterns with high confidence scores emit signals to activate their associated patterns, so only part of the recognition network is activated. The proposed Neocortex Processor thus achieves brain-like fast response, energy efficiency, and good scalability. Finally, the Cortex Compiler partitions the memory-prediction network and places it onto the Neocortex Processor to increase hardware utilization. We implemented a 14-core IPU in UMC 90-nm technology with a core area of 10.6 mm2. The system recognizes a 64x64 image within 0.15 seconds while consuming 137 mW. Compared with a 1.86-GHz CPU, its response is 8x faster using less than 1/20 of the hardware resources and less than 1/300 of the power. Compared with other neural-network simulators, the proposed IPU achieves 85.2x better power efficiency (defined as the number of simulated neurons per unit power). Compared with other intelligent recognition systems, it provides a faster and more complete design flow, better scalability, and more intelligent functions. In summary, the proposed IPU system has the potential to approach human intelligence and is suitable for future intelligent applications. | zh_TW |
| dc.description.abstract | Intel forecasts that intelligent processing for Recognition, Mining, and Synthesis (RMS) will dominate the computation requirements of its 2015 workload model. According to Moore's Law, a single chip will offer ever more computing resources; however, does machine intelligence grow with available computing power? The "hundred-step rule" suggests that when we recognize a kitten in a photo, the human brain does not compute the result but retrieves associated information from past experience. To build an intelligent machine, we could therefore mimic the brain and build a memory system rather than a computing system like the current computer. In this thesis, we develop an Intelligence Processing Unit (IPU) to support future intelligent applications by combining silicon technology with a brain-mimicking model. First, we mimic the memory-prediction structure of the brain to build a software model that provides the required intelligent functions. Then, we mimic the memory association mechanism of the brain to build an efficient hardware platform for intelligent processing. We also propose the Cortex Compiler, which maps the memory-prediction structure onto the hardware platform. Together, the software brain model, Cortex Compiler, and hardware platform form a complete design flow for future intelligent applications.
In the first part of the thesis, we build a brain-mimicking model for vision analysis, focusing on information processing in the visual cortex. The neocortex is the core of human intelligence, and memory-prediction has recently been proposed as its unified processing mechanism. The hippocampus and thalamus are also key components of memory-prediction processing in the brain. During dreaming, the hippocampus recollects the day's experience and builds connections between associated patterns as long-term memory; that is, it builds the memory-prediction structure. The thalamus is the information gateway that modulates processing by adjusting the weighting values used in information fusion. We propose a memory-prediction framework that combines the functions of the hippocampus, thalamus, and neocortex and achieves the capabilities of recognition, mining, and synthesis. The provided hierarchical, spatial, and temporal memory-prediction structure helps synthesize predicted image patterns to refine vision-analysis results. For noisy pattern recognition, the proposed model not only improves recognition accuracy but also recovers the pattern. The model further provides image completion for occluded images and imagination when no input is given. Finally, the memory-prediction framework extends to other intelligent functions such as attention: if we want to find a car in an image, the concept of the car predicts its possible patterns, so we can quickly attend to the car from a rough view rather than exhaustively searching the whole image. This demonstrates that the proposed memory-prediction model can serve as a unified, basic building block of human intelligence. In the second part, we mimic brain circuits to build a Silicon Brain hardware platform. Silicon Brain contains System Control (thalamus), Neocortex Processor (neocortex), and Cortex Compiler (hippocampus).
System Control is the input/output interface that reads the input data of the memory-prediction network and outputs the analysis results. Neocortex Processor is a distributed memory association system for the memory-prediction network. Push-based processing, rather than the pull-based processing of conventional processors, is adopted to localize data processing and reduce data-access latency. We then propose a dataflow-in-memory technique: memory association operations for the memory-prediction network are distributed across many small memory units, making memory access efficient. An on-chip network provides data communication through virtual channels with good routability and scalability. In addition, an information-gating mechanism is proposed: only information with high confidence is forwarded to activate its fan-outs in the memory-prediction network, so only parts of the recognition network are activated. As a result, the proposed Neocortex Processor achieves the brain-like features of fast response, energy efficiency, and good scalability. Cortex Compiler partitions the memory-prediction network and allocates it onto Neocortex Processor; relay-cell insertion and cortex-placement techniques are proposed to reduce sequential data processing and improve hardware utilization. We implemented a 200-MHz, 14-core IPU system in UMC 90-nm technology with a core area of 10.6 mm^2. The system recognizes one 64x64 image in 0.15 seconds while consuming 137 mW of power. Its throughput is 8x that of a 1.86-GHz CPU with fewer than 1/20 of the transistors and less than 1/300 of the power. Compared with other neural-network simulators and emulators, the proposed 14-core IPU system achieves at least 85.2x better power efficiency (number of neurons per watt). We also compare our design with other intelligent recognition processors.
Our IPU platform provides a rapid design flow, better scalability, and more intelligent functions. In conclusion, the proposed IPU system has the potential to approach human-like intelligence and is suitable for future intelligent applications. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-15T04:19:25Z (GMT). No. of bitstreams: 1 ntu-98-F92943018-1.pdf: 4957513 bytes, checksum: b7634d2e71750316dd13318787319e10 (MD5) Previous issue date: 2009 | en |
| dc.description.tableofcontents | Abstract
1 Introduction
  1.1 Trend of Intelligent Computing
  1.2 Challenges of Intelligence Processing Unit Design
  1.3 Research Topics and Main Contributions
    1.3.1 Brain-mimicking Unified Model
    1.3.2 Brain-mimicking Architecture Design
  1.4 Dissertation Organization
I Brain-mimicking Unified Model for Vision Analysis
2 Background Knowledge of Intelligent Visual Processing
  2.1 Introduction
  2.2 Human Visual System
  2.3 Previous Works
    2.3.1 HTM
  2.4 Memory-prediction
    2.4.1 Integration of Recognition, Mining, and Synthesis
  2.5 Summary
3 Vision Analyzer with Unified Recognition-Mining-Synthesis (RMS) Brain Model
  3.1 Concept
    3.1.1 Spatial Prediction
    3.1.2 Temporal Prediction
  3.2 Hippocampus Model: Off-line Mining of Visual Information Association
  3.3 Neocortex Model: HST Prediction
    3.3.1 Prediction Data Generation
    3.3.2 Information Fusion
    3.3.3 Meet-In-The-Middle Early Termination Strategy
  3.4 Thalamus Attention Model: Content-adaptive Information Fusion
  3.5 Thalamus Attention Model: Occluded Image Masking
  3.6 Achieved Intelligent Functions
    3.6.1 Noise Pattern Recognition and Recovery
    3.6.2 Image Completion
    3.6.3 Imagination
  3.7 Discussion
  3.8 Summary
II Brain-mimicking Architecture Design for Intelligence Processing Unit
4 Design Issues for Intelligent Computing Platform
  4.1 Hardware Design Issues
  4.2 Previous Intelligent Computing Platform
  4.3 Challenges for Intelligent Computing Platform
    4.3.1 Memory-bounded Computing System
    4.3.2 Scalability Issues for Intelligent Computing
  4.4 Architecture Analysis
    4.4.1 Information Processing Mechanism
    4.4.2 Memory
    4.4.3 Communication
  4.5 Summary
5 Architecture Design of Intelligence Processing Unit
  5.1 Computation Analysis of Proposed Memory-prediction Model
  5.2 Silicon Brain
  5.3 Neocortex Processor
    5.3.1 Push-based Processing
    5.3.2 Cortical Column Processing Unit
  5.4 Cortex Compiler
    5.4.1 Cortex Placement
    5.4.2 Relay Cell Insertion
  5.5 Simulation Result
  5.6 Summary
6 Chip Implementation
  6.1 Introduction
  6.2 Partition and Rescheduling for Large Scale Recognition Network
  6.3 Chip Implementation Results
  6.4 Summary
7 Conclusion
  7.1 Principle Contributions
    7.1.1 Unified Brain-inspired Model with Integration of Recognition, Mining, and Synthesis
    7.1.2 Brain-inspired Architecture Design for Intelligence Processing Unit
  7.2 Future Directions
    7.2.1 Autonomous Recognition System with On-line Learning
    7.2.2 Temporal Sequence Analysis
    7.2.3 Many-chip System for Large Scale Brain Network
Bibliography | |
| dc.language.iso | zh-TW | |
| dc.subject | 大腦啟發 | zh_TW |
| dc.subject | 智慧處理 | zh_TW |
| dc.subject | 視覺辨識 | zh_TW |
| dc.subject | 積體電路設計 | zh_TW |
| dc.subject | 記憶-預測 | zh_TW |
| dc.subject | Brain-inspired | en |
| dc.subject | Memory-prediction | en |
| dc.subject | VLSI design | en |
| dc.subject | Visual recognition | en |
| dc.subject | Intelligent Processing | en |
| dc.title | 針對智慧型視覺辨識應用之大腦啟發演算法及架構設計 | zh_TW |
| dc.title | Brain-inspired Algorithm and Architecture Design for Intelligent Visual Recognition Applications | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 98-1 | |
| dc.description.degree | 博士 (Ph.D.) | |
| dc.contributor.oralexamcommittee | 簡韶逸(Shao-Yi Chien),吳安宇(An-Yeu Wu),陳永昌(Yung-Chang Chen),賴永康(Yeong-Kang Lai),溫瓌岸(Kuei-Ann Wen),黃聖傑(Sheng-Chieh Huang),王駿發(Jhing-Fa Wang) | |
| dc.subject.keyword | 大腦啟發,智慧處理,視覺辨識,積體電路設計,記憶-預測, | zh_TW |
| dc.subject.keyword | Brain-inspired,Intelligent Processing,Visual recognition,VLSI design,Memory-prediction, | en |
| dc.relation.page | 129 | |
| dc.rights.note | 有償授權 (paid authorization) | |
| dc.date.accepted | 2009-11-13 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
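The abstract above describes an information-gating, push-based propagation mechanism: a memory pattern forwards activation to its fan-outs only when its confidence score is high, so just part of the recognition network is ever activated. As a loose illustration of that idea only, here is a minimal sketch; the network, threshold, and all names below are assumptions for illustration, not taken from the thesis itself.

```python
# Hypothetical sketch of confidence-gated, push-based propagation.
# Activated nodes push signals to their fan-outs; low-confidence
# nodes are gated out, so only part of the network fires.
from collections import deque

def propagate(fanout, confidence, seeds, threshold=0.6):
    """Return the set of nodes activated by push-based propagation."""
    activated = set()
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if node in activated or confidence.get(node, 0.0) < threshold:
            continue  # information gating: suppress low-confidence patterns
        activated.add(node)
        for nxt in fanout.get(node, []):
            queue.append(nxt)  # push activation forward to fan-outs
    return activated

# Toy three-node association chain: edge -> wheel -> car.
fanout = {"edge": ["wheel"], "wheel": ["car"], "car": []}
confidence = {"edge": 0.9, "wheel": 0.8, "car": 0.7}
print(propagate(fanout, confidence, ["edge"]))  # all three nodes activate
```

If the confidence of an intermediate node drops below the threshold, everything downstream of it stays idle, which is the energy-saving behavior the abstract attributes to information gating.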
| Appears in Collections: | 電機工程學系 (Department of Electrical Engineering) | |
Files in This Item:
| File | Size | Format |
|---|---|---|---|
| ntu-98-1.pdf (not authorized for public access) | 4.84 MB | Adobe PDF |
Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
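The abstract's content-adaptive information fusion step blends a noisy bottom-up observation with a top-down memory prediction, weighting the prediction more heavily where the input is unreliable (e.g. occluded). The toy sketch below illustrates that idea only; the per-element weighting rule and all names are assumptions, not the thesis's actual algorithm.

```python
# Hypothetical sketch of content-adaptive fusion: each element is a
# blend w*observation + (1 - w)*prediction, where w is the estimated
# reliability of the bottom-up input at that position.
def fuse(observation, prediction, reliability):
    """Blend bottom-up observation with top-down prediction per element."""
    return [w * o + (1.0 - w) * p
            for o, p, w in zip(observation, prediction, reliability)]

obs  = [0.9, 0.1, 0.5]   # bottom-up input, partly noisy
pred = [1.0, 0.0, 1.0]   # top-down pattern predicted from memory
rel  = [1.0, 1.0, 0.0]   # third element occluded -> trust the prediction
print(fuse(obs, pred, rel))  # -> [0.9, 0.1, 1.0]
```

Where reliability is zero the prediction fully replaces the input, which is one way to picture the occluded-image masking and pattern-recovery functions described in the abstract.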
