Approximate Bayesian Methods for Optimal Neural Coding and Decision-Making.
Record Type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Approximate Bayesian Methods for Optimal Neural Coding and Decision-Making.
Author:
Morais, Michael J.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021.
Physical Description:
139 p.
Notes:
Source: Dissertations Abstracts International, Volume: 82-12, Section: B.
Contained By:
Dissertations Abstracts International, 82-12B.
Subject:
Neurosciences.
Electronic Resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28496964
ISBN:
9798515256630
LDR     03604nmm a2200385 4500
001     2281660
005     20210920103611.5
008     220723s2021 ||||||||||||||||| ||eng d
020     $a 9798515256630
035     $a (MiAaPQ)AAI28496964
035     $a AAI28496964
040     $a MiAaPQ $c MiAaPQ
100 1   $a Morais, Michael J. $3 3560341
245 10  $a Approximate Bayesian Methods for Optimal Neural Coding and Decision-Making.
260 1   $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300     $a 139 p.
500     $a Source: Dissertations Abstracts International, Volume: 82-12, Section: B.
500     $a Advisor: Pillow, Jonathan W.
502     $a Thesis (Ph.D.)--Princeton University, 2021.
506     $a This item must not be sold to any third party vendors.
520     $a One fundamental goal of theoretical neuroscience is to understand the normative principles governing the functional organization of neural circuits, and, in turn, to what extent they can be considered optimal. Calling neural representations of information in the brain "optimal" implies a multifarious equilibrium that balances robustness against flexibility, completeness against relevance, and so on, but it need only imply a solution to some optimization program. The exact forms of these programs vary with the modeling goals, neural circuits, tasks, or even animals under investigation. With this dissertation, we explore how we can define neural codes as optimal when they generate optimal behavior -- an easy principle to state, but a hard one to implement. Such a principle would bridge a gap between classical hypotheses of optimal neural coding, efficient coding and the Bayesian brain, with a common unified theory. In the first study, we analyzed neural population activity in V1 while monkeys performed a visual detection task, and found that a majority of the total choice-related variability is already present in V1 population activity. Such a prominent contribution of non-stimulus activity in classically sensory regions cannot be incorporated into existing models of neural coding, and demands models that can jointly optimize coding and decision-making within a single neural population. In the second study, we derived power-law efficient codes, a natural generalization of classical efficient codes, and showed they are sufficient to replicate and explain a diverse set of psychophysical results. This broader family can maximize mutual information or minimize error of perceptual decisions, suggesting that psychophysical phenomena used to validate normative models could be more general features of perceptual systems than previously appreciated. In the third study, we translated the problem of joint model learning and decision-making into Bayesian machine learning, and extended a family of methods for decision-aware approximate inference to include a novel algorithm that we called loss-calibrated expectation propagation. How this problem can be solved by a non-biophysical system could be a constructive reference point for future studies into joint coding and decision-making, and the normative principles that drive decision-related variability in optimal sensory neural codes.
590     $a School code: 0181.
650  4  $a Neurosciences. $3 588700
650  4  $a Statistics. $3 517247
650  4  $a Logic. $3 529544
653     $a Approximate inference
653     $a Bayesian statistics
653     $a Decision-making
653     $a Efficient coding
653     $a Neural coding
653     $a Perception
690     $a 0317
690     $a 0463
690     $a 0395
710 2   $a Princeton University. $b Neuroscience. $3 2099004
773 0   $t Dissertations Abstracts International $g 82-12B.
790     $a 0181
791     $a Ph.D.
792     $a 2021
793     $a English
856 40  $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28496964
Holdings (1 item):
Barcode:
W9433393
Location:
Electronic Resources
Circulation Category:
11.線上閱覽_V (Online Reading)
Material Type:
E-book
Call Number:
EB
Use Type:
Normal
Loan Status:
On shelf
Hold Status:
0