Learning and Inferring Representations of Data in Neural Networks.
Record type: Bibliographic, electronic resource : Monograph/item
Title: Learning and Inferring Representations of Data in Neural Networks.
Author: Livezey, Jesse A.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2017
Description: 91 p.
Note: Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
Contained by: Dissertation Abstracts International, 78-11B(E).
Subject: Biophysics.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10281928
ISBN: 9780355034042
LDR    03159nmm a2200313 4500
001    2159849
005    20180703084808.5
008    190424s2017 ||||||||||||||||| ||eng d
020    $a 9780355034042
035    $a (MiAaPQ)AAI10281928
035    $a (MiAaPQ)berkeley:16970
035    $a AAI10281928
040    $a MiAaPQ $c MiAaPQ
100 1  $a Livezey, Jesse A. $3 3347744
245 10 $a Learning and Inferring Representations of Data in Neural Networks.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2017
300    $a 91 p.
500    $a Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
500    $a Adviser: Michael R. DeWeese.
502    $a Thesis (Ph.D.)--University of California, Berkeley, 2017.
520    $a Finding useful representations of data in order to facilitate scientific knowledge generation is a ubiquitous concept across disciplines. Until the development of machine learning and statistical methods with hidden or latent representations, useful representations of data were generated "by hand" through scientific modeling or simple measurement observations. Scientific models often make explicit the underlying structure of a system which generates the data we observe and measure. To test a model, inferences must be made about the free parameters and the distributions of latent or unmeasured variables in the model conditioned on the data collected. At this time, many scientific disciplines such as astronomy, particle physics, wildlife conservation, and neuroscience have been moving towards collecting datasets that are large and complex enough so that no human will ever look at and analyze all measurements by hand. Datasets of this scale present an interesting scientific opportunity: to be able to derive insight into the structure of natural systems by creating models which can adapt themselves to the latent structure of large amounts of data, often called data-driven hypothesis testing. The three topics of this work fall under this umbrella, but are largely independent research directions. First, we show how deep learning can be used to infer representations of neural data which can be used to find the limits of information content in sparsely sampled neural activity and applied to improving the performance of brain-computer interfaces. Second, we derive a circuit model for a network of neurons which implements approximate inference in a probabilistic model given the biological constraint of neuron-local computations. Finally, we provide a theoretical and empirical analysis of a family of methods for learning linear representations which have low coherence (cosine-similarity) and show that linear methods have limited applicability as compared to nonlinear, recurrent models which solve the same problem. Together, these results provide insight into how scientists and the brain can learn useful representations of data in deep and single-layer networks.
590    $a School code: 0028.
650  4 $a Biophysics. $3 518360
650  4 $a Statistics. $3 517247
650  4 $a Neurosciences. $3 588700
690    $a 0786
690    $a 0463
690    $a 0317
710 2  $a University of California, Berkeley. $b Physics. $3 1671059
773 0  $t Dissertation Abstracts International $g 78-11B(E).
790    $a 0028
791    $a Ph.D.
792    $a 2017
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10281928
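
Note: the abstract's third topic refers to the coherence (cosine similarity) of a learned linear representation. For reference, a minimal statement of the standard mutual-coherence definition is given below; this definition is an assumption supplied here, since the record itself does not define the term. For a dictionary A with columns a_1, ..., a_n:

\mu(A) = \max_{i \neq j} \frac{\lvert \langle a_i, a_j \rangle \rvert}{\lVert a_i \rVert_2 \, \lVert a_j \rVert_2}

That is, the largest pairwise cosine similarity between distinct dictionary elements; "low coherence" means the learned basis vectors are close to orthogonal.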
Holdings (1 record):
Barcode: W9359396
Location: Electronic resource
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0