NTU Theses and Dissertations Repository › 電機資訊學院 (College of Electrical Engineering and Computer Science) › 電機工程學系 (Department of Electrical Engineering)
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64995
Full metadata record
dc.contributor.advisor: 于天立 (Tian-Li Yu)
dc.contributor.author: Jen-Hao Chang [en]
dc.contributor.author: 張仁豪 [zh_TW]
dc.date.accessioned: 2021-06-16T23:13:48Z
dc.date.available: 2014-08-27
dc.date.copyright: 2012-08-27
dc.date.issued: 2012
dc.date.submitted: 2012-08-02
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64995
dc.description.abstract [zh_TW]: 即使是目前最先進的人工智慧技術在歸納能力方面仍然遠不及人類,人類可以利用基本的背景知識從觀察到的事物中學習並歸納出通則。許多關於人類心智的研究都想要解釋人類的學習能力。在許多關於人類心智的假說中,此論文基於表徵主義、功能主義、神經達爾文主義的想法,設計了一個學習模型。本論文探討此學習模型在很少的基本背景知識下學習通則的能力。本論文提出的學習模型分為三個部分,而表徵主義、功能主義、神經達爾文主義分別為這三個部分奠定了基礎。表徵主義為此學習模型提供了表達知識的方式:此模型中所有的知識皆以符號為最小單位;功能主義為此學習模型提供了結合知識的方式:此模型的知識可以用函數的方式來結合;神經達爾文主義為此學習模型提供了歸納知識的方式:此模型利用優勝劣敗的演化方式來學習通則。本論文分析了這個學習模型:此學習模型的表達能力和圖靈機(universal Turing machine)相同;此模型的歸納能力是可靠(sound)但不完備(incomplete)的。本論文利用三個實驗來測試這個學習模型的能力,實驗結果說明此學習模型可以在這些問題中歸納出通則。此論文證實了基於這三個關於人類心智假說的學習模型具備了歸納學習的能力;即使基於很少的背景知識,此學習模型仍然可以從基本的例子中歸納出通則;這個學習模型的能力可以利用給予額外的知識來提高歸納學習的效率。
dc.description.abstract [en]: Even the best artificial intelligence technology cannot match humans in inductive ability. Humans can learn general rules from observed instances based only on bare-bone prior knowledge, and much research on the human mind tries to explain this learning ability. Among the hypotheses about the human mind, this thesis designs a learning model based on the ideas of representationalism, functionalism, and neural Darwinism, and investigates the learning ability of the proposed model when it is given only bare-bone prior knowledge. The proposed model is composed of three parts, founded respectively on representationalism, functionalism, and neural Darwinism. The knowledge representation of the model is based on representationalism: symbols are the basic components of the representation. The combination of knowledge is based on functionalism: knowledge in the model can be combined in the form of functions. The inductive ability is based on neural Darwinism: the model exploits evolutionary processes to learn general rules. This thesis analyzes the proposed model: its expressive power is equivalent to that of a universal Turing machine, and its inductive process is sound but incomplete. Three experiments are used to test the learning ability of the model, and the results show that the model is able to induce general rules from the input instances of these experiments. This thesis demonstrates that a learning model presuming representationalism, functionalism, and neural Darwinism is able to perform inductive learning; that even with bare-bone prior knowledge, the model can still learn general rules from instances; and that the model can exploit additional analogical knowledge to improve the efficiency of its inductive learning.
dc.description.provenance [en]: Made available in DSpace on 2021-06-16T23:13:48Z (GMT). No. of bitstreams: 1. ntu-101-R99921037-1.pdf: 1217905 bytes, checksum: e1d734f22ea2475c3cacf1922fff1e79 (MD5). Previous issue date: 2012.
dc.description.tableofcontents:
口試委員會審定書 (Committee Certification) i
Acknowledgments iii
致謝 (Acknowledgments, in Chinese) v
中文摘要 (Abstract in Chinese) vii
Abstract ix
Introduction 1
1 Background 5
1.1 Representationalism 5
1.2 Functionalism 7
1.3 Neural Darwinism 7
1.4 Evolutionary Computation 9
1.5 Inductive Theorem Proving 11
1.6 Summary 12
2 Design the Learning Model 13
2.1 Based on Cognitive Hypotheses 13
2.1.1 Based on Representationalism 14
2.1.2 Based on Functionalism 17
2.1.3 Based on Neural Darwinism 17
2.2 Overview of the Model 19
2.2.1 Position of the Model 19
2.2.2 The Learning Procedure and Block Diagram 20
2.3 Provided Prior Knowledge 21
2.4 Implementation Detail 22
2.4.1 Induction and Validation 22
2.4.2 Knowledge Base Architecture 24
2.4.3 Fitness Function 25
2.4.4 Evolutionary Algorithms 27
2.4.5 Encountered Problems 29
2.5 Compared to Existing Methods 30
2.6 Summary 32
3 Analyses and Experiments 35
3.1 Analyses 35
3.1.1 Expressive Ability 36
3.1.2 Soundness and Completeness 37
3.1.3 Decidability 37
3.2 Experiment 1: Minesweeper 38
3.2.1 Problem Description 38
3.2.2 Problem Encoding 39
3.2.3 Experiment Results 40
3.3 Experiment 2: Minesweeper plus Analogy 45
3.3.1 The Analogy Information 45
3.3.2 Experiment Results 48
3.4 Experiment 3: Mathematical Ring 51
3.4.1 Problem Description and Encoding 51
3.4.2 Experiment Results 52
3.5 Summary 56
4 Conclusion 59
Bibliography 61
dc.language.iso: en
dc.subject [zh_TW]: 人工智慧
dc.subject [zh_TW]: 歸納學習
dc.subject [zh_TW]: 認知心理學假說
dc.subject [zh_TW]: 演化計算
dc.subject [zh_TW]: 基本預備知識
dc.subject [zh_TW]: 知識組合
dc.subject [en]: Evolutionary computation
dc.subject [en]: Knowledge combination
dc.subject [en]: Bare-bone prior knowledge
dc.subject [en]: Inductive learning
dc.subject [en]: Artificial intelligence
dc.subject [en]: Cognitive hypotheses
dc.title [zh_TW]: 基於表徵、功能、神經達爾文主義的學習模型探討
dc.title [en]: Research on a Learning Model Presuming Representationalism, Functionalism, and Neural Darwinism
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 鄭士康 (Shyh-Kang Jeng), 陳穎平 (Ying-ping Chen)
dc.subject.keyword [zh_TW]: 人工智慧, 歸納學習, 認知心理學假說, 演化計算, 基本預備知識, 知識組合
dc.subject.keyword [en]: Artificial intelligence, Inductive learning, Cognitive hypotheses, Evolutionary computation, Bare-bone prior knowledge, Knowledge combination
dc.relation.page: 62
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2012-08-03
dc.contributor.author-college [zh_TW]: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept [zh_TW]: 電機工程學研究所 (Graduate Institute of Electrical Engineering)
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
File: ntu-101-1.pdf (Restricted Access), Size: 1.19 MB, Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
