Optimizing for Mental Representations in the Evolution of Artificial Cognitive Systems.
Record Type:
Bibliographic - Electronic Resource : Monograph/item
Title:
Optimizing for Mental Representations in the Evolution of Artificial Cognitive Systems.
Author:
Kirkpatrick, Douglas Andrew.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Pagination:
181 p.
Note:
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained By:
Dissertations Abstracts International, 83-03B.
Subject:
Artificial intelligence.
Electronic Resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28651549
ISBN:
9798535549712
Dissertation Note:
Thesis (Ph.D.)--Michigan State University, 2021.
Restrictions:
This item must not be sold to any third party vendors.
Abstract:
Mental representations, or sensor-independent internal models of the environment, are used to interpret the world and make decisions based upon that understanding. For example, a human sees dark clouds in the sky, recalls that dark clouds often mean rain (a mental representation), and consequently decides to wear a raincoat. I seek to identify, understand, and encourage the evolution of these representations in silico. Previous work identified an information-theoretic tool, referred to as R, that measures mental representations in artificial cognitive systems (e.g., Markov Brains or Recurrent Neural Networks). Further work found that selecting for R, along with task performance, in the evolution of artificial cognitive systems leads to better overall performance on a given task. Here I explore the implications and opportunities of this modified selection process, referred to as R-augmentation. After an overview of common methods, techniques, and computational substrates in Chapter 1, a series of working chapters experimentally demonstrates the capabilities and possibilities of R-augmentation. First, in Chapter 2, I address concerns regarding potential limitations of R-augmentation. This includes a refutation of suspected negative impacts on the system's ability to generalize within-domain and on its robustness to sensor noise. On the contrary, systems evolved with R-augmentation tend to perform better than those evolved without it in the context of noisy environments and different computational components. In Chapter 3 I examine how R-augmentation works across different cognitive structures, focusing on the evolution of genetic-programming-related structures and the effect that augmentation has on the distribution of their representations. For Chapter 4, in the context of the all-component Markov Brain (referred to as a Buffet Brain, see [Hintze et al., 2019]), I analyze potential reasons why R-augmentation works; the mechanism seems to be based on evolutionary dynamics as opposed to structural or component differences. Next, I demonstrate a novel usage of R-augmentation in Chapter 5: with R-augmentation, one can use far fewer training examples during evolution and the resulting systems still perform approximately as well as those trained on the full set of examples. This advantage of increased performance at small sample sizes appears in some cases of in-domain and out-of-domain generalization, with the "worst-case" scenario being that the networks created by R-augmentation perform as well as their unaugmented equivalents. Lastly, in Chapter 6 I move beyond R-augmentation to explore using other neuro-correlates, particularly the distribution of representations (called smearedness), as part of the fitness function. I investigate the possibility of using MAP-Elites to identify an optimal value of smearedness for augmentation, or for use as an optimization method in its own right. Taken together, these investigations demonstrate both the capabilities and limitations of R-augmentation, and open up pathways for future research.
Subjects--Index Terms:
Evolutionary artificial intelligence
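The selection scheme described in the abstract can be sketched concretely. Below is a minimal illustration: R is described in the related literature as the information shared between environment states and internal (brain) states, conditioned on the sensor states, so the sketch estimates the conditional mutual information I(E;B|S) from state counts and folds it into fitness as a weighted bonus. The function names, the count-based estimator, and the weighted-sum combination are illustrative assumptions, not the dissertation's actual implementation.

```python
import math
from collections import Counter

def conditional_mutual_information(samples):
    """Estimate I(E;B|S) in bits from (env, brain, sensor) state triples.

    This mirrors the general form of the representation measure R:
    information shared between environment and internal states,
    conditioned on the sensors.
    """
    n = len(samples)
    c_s = Counter(s for _, _, s in samples)          # counts of sensor states
    c_es = Counter((e, s) for e, _, s in samples)    # joint env/sensor counts
    c_bs = Counter((b, s) for _, b, s in samples)    # joint brain/sensor counts
    c_ebs = Counter(samples)                         # full joint counts
    total = 0.0
    for (e, b, s), k in c_ebs.items():
        # (k/n) * log2( p(e,b,s) p(s) / (p(e,s) p(b,s)) ); the 1/n factors cancel.
        total += (k / n) * math.log2((k * c_s[s]) / (c_es[(e, s)] * c_bs[(b, s)]))
    return total

def augmented_fitness(task_score, r_value, weight=1.0):
    # One plausible combination of task performance and R; the dissertation's
    # exact weighting scheme may differ.
    return task_score + weight * r_value

# Toy usage with binary states; in practice the (E, B, S) triples would come
# from recording an agent's states over its lifetime in the task environment.
samples = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 25
R = conditional_mutual_information(samples)   # 1.0 bit for this toy data
print(augmented_fitness(task_score=0.8, r_value=R, weight=0.25))
```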
LDR 04462nmm a2200421 4500
001 2348619
005 20220912135619.5
008 241004s2021 ||||||||||||||||| ||eng d
020 $a 9798535549712
035 $a (MiAaPQ)AAI28651549
035 $a AAI28651549
040 $a MiAaPQ $c MiAaPQ
100 1 $a Kirkpatrick, Douglas Andrew. $0 (orcid)0000-0003-4225-5362 $3 3687984
245 10 $a Optimizing for Mental Representations in the Evolution of Artificial Cognitive Systems.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 181 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500 $a Advisor: Hintze, Arend; Adami, Christoph C.
502 $a Thesis (Ph.D.)--Michigan State University, 2021.
506 $a This item must not be sold to any third party vendors.
590 $a School code: 0128.
650 4 $a Artificial intelligence. $3 516317
650 4 $a Neurosciences. $3 588700
650 4 $a Evolution & development. $3 3172418
650 4 $a Genetics. $3 530508
650 4 $a Computer science. $3 523869
650 4 $a Random variables. $3 646291
650 4 $a Experiments. $3 525909
650 4 $a Genetic algorithms. $3 533907
650 4 $a Sensors. $3 3549539
650 4 $a Brain. $3 525115
650 4 $a Noise. $3 598816
653 $a Evolutionary artificial intelligence
653 $a Evolutionary computation
653 $a Genetic algorithms
653 $a Neural networks
653 $a Robustness of solutions
653 $a Artificial cognitive systems
653 $a Mental representation
690 $a 0800
690 $a 0412
690 $a 0317
690 $a 0369
690 $a 0984
710 2 $a Michigan State University. $b Computer Science - Doctor of Philosophy. $3 2104328
773 0 $t Dissertations Abstracts International $g 83-03B.
790 $a 0128
791 $a Ph.D.
792 $a 2021
793 $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28651549
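Since the record above is shown as a flattened textual MARC display, a short sketch of how such lines can be read back into structured fields may be useful. The layout rules assumed here (a three-character tag, optional indicators, then $-prefixed subfields) match only this simplified display, not the binary MARC 21 transmission format; parse_marc_display is a hypothetical helper, not part of any library's API.

```python
import re

def parse_marc_display(text):
    """Parse 'TAG [indicators] $a value $b value' display lines into tuples."""
    fields = []
    for line in text.strip().splitlines():
        tag, _, rest = line.strip().partition(" ")
        if tag == "LDR" or tag < "010":
            # Leader and control fields (001-009) carry raw data, no subfields.
            fields.append((tag, None, rest.strip()))
            continue
        indicators, _, body = rest.strip().partition("$")
        subfields = re.findall(r"\$(\w)\s*([^$]*)", "$" + body)
        fields.append((tag, indicators.strip() or "  ",
                       [(code, value.strip()) for code, value in subfields]))
    return fields

sample = "245 10 $a Optimizing for Mental Representations in the Evolution of Artificial Cognitive Systems."
print(parse_marc_display(sample))
# [('245', '10', [('a', 'Optimizing for Mental Representations in the Evolution of Artificial Cognitive Systems.')])]
```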
Holdings:
Barcode: W9471057
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Use Type: Normal
Loan Status: On Shelf
Holds: 0