Learning and Inferring Representations of Data in Neural Networks.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Learning and Inferring Representations of Data in Neural Networks. / Livezey, Jesse A.
Author:
Livezey, Jesse A.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2017.
Description:
91 p.
Notes:
Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
Contained By:
Dissertation Abstracts International, 78-11B(E).
Subject:
Biophysics. - Statistics. - Neurosciences.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10281928
ISBN:
9780355034042
MARC record:
LDR 03159nmm a2200313 4500
001 2159849
005 20180703084808.5
008 190424s2017 ||||||||||||||||| ||eng d
020 $a 9780355034042
035 $a (MiAaPQ)AAI10281928
035 $a (MiAaPQ)berkeley:16970
035 $a AAI10281928
040 $a MiAaPQ $c MiAaPQ
100 1 $a Livezey, Jesse A. $3 3347744
245 10 $a Learning and Inferring Representations of Data in Neural Networks.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2017
300 $a 91 p.
500 $a Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
500 $a Adviser: Michael R. DeWeese.
502 $a Thesis (Ph.D.)--University of California, Berkeley, 2017.
520 $a Finding useful representations of data in order to facilitate scientific knowledge generation is a ubiquitous concept across disciplines. Until the development of machine learning and statistical methods with hidden or latent representations, useful representations of data were generated "by hand" through scientific modeling or simple measurement observations. Scientific models often make explicit the underlying structure of a system which generates the data we observe and measure. To test a model, inferences must be made about the free parameters and the distributions of latent or unmeasured variables in the model conditioned on the data collected. At this time, many scientific disciplines such as astronomy, particle physics, wildlife conservation, and neuroscience have been moving towards collecting datasets that are large and complex enough so that no human will ever look at and analyze all measurements by hand. Datasets of this scale present an interesting scientific opportunity: to be able to derive insight into the structure of natural systems by creating models which can adapt themselves to the latent structure of large amounts of data, often called data-driven hypothesis testing. The three topics of this work fall under this umbrella, but are largely independent research directions. First, we show how deep learning can be used to infer representations of neural data which can be used to find the limits of information content in sparsely sampled neural activity and applied to improving the performance of brain-computer interfaces. Second, we derive a circuit model for a network of neurons which implements approximate inference in a probabilistic model given the biological constraint of neuron-local computations. Finally, we provide a theoretical and empirical analysis of a family of methods for learning linear representations which have low coherence (cosine similarity) and show that linear methods have limited applicability as compared to nonlinear, recurrent models which solve the same problem. Together, these results provide insight into how scientists and the brain can learn useful representations of data in deep and single-layer networks.
590 $a School code: 0028.
650 4 $a Biophysics. $3 518360
650 4 $a Statistics. $3 517247
650 4 $a Neurosciences. $3 588700
690 $a 0786
690 $a 0463
690 $a 0317
710 2 $a University of California, Berkeley. $b Physics. $3 1671059
773 0 $t Dissertation Abstracts International $g 78-11B(E).
790 $a 0028
791 $a Ph.D.
792 $a 2017
793 $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10281928
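Note: the abstract (MARC field 520) describes learning linear representations with low coherence, i.e., low cosine similarity between learned features. As a point of reference only, the following is a minimal NumPy sketch of how the mutual coherence of a dictionary is conventionally computed; the matrix W, its shape, and the random example are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def coherence(W):
        # Mutual coherence: the largest absolute cosine similarity
        # between any two distinct columns (features) of W.
        U = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns
        G = np.abs(U.T @ U)        # pairwise cosine similarities
        np.fill_diagonal(G, 0.0)   # ignore each feature's self-similarity
        return G.max()

    # Hypothetical example: 128 features in a 64-dimensional space.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 128))
    print(f"coherence = {coherence(W):.3f}")  # 0 would mean mutually orthogonal features

An overcomplete dictionary (more features than dimensions, as in this example) can never reach zero coherence; the Welch bound gives the minimum achievable value.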
Items
Inventory Number: W9359396
Location Name: Electronic Resources (電子資源)
Item Class: 11.線上閱覽_V (Online Reading)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note: (none)
Attachments: (none)