Learning Neural Representations that Support Efficient Reinforcement Learning.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Learning Neural Representations that Support Efficient Reinforcement Learning.
Author:
Stachenfeld, Kimberly.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2018.
Description:
155 p.
Notes:
Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
Contained By:
Dissertation Abstracts International, 79-10B(E).
Subject:
Neurosciences.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10824319
ISBN:
9780438050419
Thesis (Ph.D.)--Princeton University, 2018.
Abstract:
Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal compromises in autonomy, hand-engineering, and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility, generalizing learned information rapidly to new environments by learning invariants and features of the environment that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation feature learning," in which the animal or agent learns to form representations with information potentially relevant to the downstream RL process. We survey different representational objectives that have received attention in neuroscience and in machine learning. The focus of this survey is to first highlight conditions under which these seemingly unrelated objectives are actually mathematically equivalent. We use this to motivate a breakdown of properties of learned representations that are meaningfully different and can inform contrasting hypotheses for neuroscience. We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial.
We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
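The "predictive representation" described in the abstract is known in the RL literature as the successor representation (SR). As an illustrative sketch only (the 20-state ring environment, the discount factor, and the random-walk policy below are assumptions, not details taken from this record), the SR and a low-dimensional eigenvector basis of the kind the abstract attributes to grid cells can be computed as:

```python
import numpy as np

# Illustrative sketch of the "predictive representation" discussed in the
# abstract, known in the RL literature as the successor representation (SR):
#   M = (I - gamma * T)^{-1},
# the expected discounted future occupancy of each state. The environment
# (a 20-state ring), the discount factor, and the random-walk policy are
# assumptions for illustration, not details taken from this record.

n_states = 20
gamma = 0.95

# Transition matrix for a random walk on a ring: step left/right with p = 0.5.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s - 1) % n_states] = 0.5
    T[s, (s + 1) % n_states] = 0.5

# Successor representation: row s gives the discounted future occupancy of
# every state, starting from s. Each row peaks at s itself and falls off
# with (policy-dependent) distance, qualitatively like a place field.
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# A low-dimensional basis for M -- the role the abstract attributes to
# entorhinal grid cells -- can be read off its eigendecomposition; on a
# ring the eigenvectors are periodic modes at multiple spatial scales.
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)  # symmetrize for eigh

print(M.shape)        # (20, 20)
print(M[0].argmax())  # occupancy is largest at the start state itself: 0
```

Row-normalizing or truncating to the top eigenvectors gives the multiscale, noise-suppressing basis the abstract argues for in its grid-cell account.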
MARC Record:
LDR    03447nmm a2200313 4500
001    2164072
005    20181026115419.5
008    190424s2018 ||||||||||||||||| ||eng d
020    $a 9780438050419
035    $a (MiAaPQ)AAI10824319
035    $a (MiAaPQ)princeton:12624
035    $a AAI10824319
040    $a MiAaPQ $c MiAaPQ
100 1  $a Stachenfeld, Kimberly. $3 3352106
245 10 $a Learning Neural Representations that Support Efficient Reinforcement Learning.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 155 p.
500    $a Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
500    $a Adviser: Matthew M. Botvinick.
502    $a Thesis (Ph.D.)--Princeton University, 2018.
520    $a Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal compromises in autonomy, hand-engineering, and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility, generalizing learned information rapidly to new environments by learning invariants and features of the environment that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation feature learning," in which the animal or agent learns to form representations with information potentially relevant to the downstream RL process. We survey different representational objectives that have received attention in neuroscience and in machine learning. The focus of this survey is to first highlight conditions under which these seemingly unrelated objectives are actually mathematically equivalent. We use this to motivate a breakdown of properties of learned representations that are meaningfully different and can inform contrasting hypotheses for neuroscience. We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
590    $a School code: 0181.
650  4 $a Neurosciences. $3 588700
650  4 $a Quantitative psychology. $3 2144748
650  4 $a Cognitive psychology. $3 523881
690    $a 0317
690    $a 0632
690    $a 0633
710 2  $a Princeton University. $b Neuroscience. $3 2099004
773 0  $t Dissertation Abstracts International $g 79-10B(E).
790    $a 0181
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10824319
Items (1 record):
Inventory Number: W9363619
Location: Electronic resources
Item Class: 11. Online reading
Material Type: E-book
Call Number: EB
Usage Class: Normal
Loan Status: On shelf
No. of Reservations: 0