Computational Models in fMRI : from the Neural Basis of Visual Recognition Memory to Predicting Brain Activation for Arbitrary Tasks.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Computational Models in fMRI :
Remainder of title:
from the Neural Basis of Visual Recognition Memory to Predicting Brain Activation for Arbitrary Tasks.
Author:
Walters, Jonathon.
Description:
1 online resource (140 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
Contained By:
Dissertations Abstracts International, 85-04B.
Subject:
Neurosciences.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615131 (click for full text, PQDT)
ISBN:
9798380471466
Thesis (Ph.D.)--Stanford University, 2023.
Includes bibliographical references
This dissertation explores specific and generalized approaches in the computational modeling of human brain data, from localizing brain regions important for visual recognition memory to predicting cortex-wide activation for arbitrary tasks.

In Part 1, we investigate recognition memory for naturalistic scenes, focusing on subsequent memory effects (SMEs), which reveal areas in the brain whose activity during an experience is associated with whether that experience is later remembered or forgotten. Behaviorally, people are remarkably consistent in the images they remember or forget, indicating that inherent properties of images heavily influence human vision and memory. Emerging evidence suggests that this phenomenon of image memorability may be an automatic, stimulus-driven, high-level perceptual signal represented along the path from perception to memory. If true, memorability is an item-level confound present in decades of studies showing SMEs in a consistent set of brain regions. To address this issue, we used a recent large-scale functional magnetic resonance imaging (fMRI) dataset in which 8 subjects were each shown up to 30,000 images over the course of a year, with image repetition delays ranging on the order of seconds to months. We found that, after controlling for image memorability and encoding-related reaction time, SMEs nevertheless persisted in the expected brain regions, providing a critical validation of decades of research into visual recognition memory. Additionally, while we replicated prior work showing image memorability effects (IMEs) in high-level visual cortex, we also found IMEs in early visual cortex, a surprising novel finding that was perhaps uniquely detectable by this dataset. Finally, we observed an overlap of SMEs and IMEs in areas thought to be selective for either effect, disrupting a narrative in the literature of a clean neural separation between these stimulus-driven and observer-driven effects.

Part 2 generalizes the analyses in Part 1 with a more flexible, predictive modeling framework that learns to map from a comprehensive space of psychological functions (perceptual, motor, and cognitive) to cortex-wide patterns of activation. Fundamentally, a deep understanding of the brain basis of cognition should enable accurate prediction of brain activity patterns for any psychological task, based on the cognitive functions engaged by that task. Encoding models (EMs), a class of computational models that predict neural responses from known features (e.g., stimulus properties), have succeeded in circumscribed domains like visual neuroscience, but implementing domain-general EMs that predict brain-wide activity for arbitrary tasks has been limited mainly by availability of datasets that 1) sufficiently span a large space of psychological functions, and 2) are sufficiently annotated with such functions to allow robust EM specification. To address these issues, we introduce cognitive encoding models (CEMs), which predict cortical activation patterns for arbitrary tasks based on their perceptual, motor, and cognitive demands, as specified by a formal ontology. CEMs were trained and tested using the Multi-Domain Task Battery dataset of 24 subjects engaging in 44 diverse task conditions over the course of 32 fMRI scans (5.5 hours of task data per subject). We found that CEMs predicted cortical activation maps of held-out tasks with high accuracy, and we probed the trained models for insights into a) hierarchical relationships between psychological functions, b) the degree to which brains of different individuals similarly encode such functions, and c) the functional specialization of large-scale resting-state networks. Our implementation and validation of CEMs provides a proof of principle for the utility of formal ontologies in cognitive neuroscience and motivates the use of CEMs in the further testing of cognitive theories.

Taken together, these modeling approaches set the stage for several fruitful research directions, in particular the development of stimulus-computable encoder-decoder models of brain and behavior that continually use the latest models in machine learning and artificial intelligence to characterize, in a data-driven manner, how representations in the brain are transformed from perception to memory in service of future behavior.
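The encoding-model idea summarized in the abstract (map a task's annotated psychological functions to a cortex-wide activation pattern, then test on a held-out task) can be sketched in a few lines. This is a purely illustrative, minimal version with synthetic data, not the dissertation's actual pipeline: the feature count, the ridge penalty, and all variable names are assumptions, and the "ontology" here is just a random binary task-by-function matrix.

```python
# Minimal sketch of a cognitive encoding model (CEM): ridge regression from
# task-by-function annotations to voxel activations, scored on a held-out task.
# All data below are synthetic; only the task count (44) echoes the abstract.
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_functions, n_voxels = 44, 20, 1000
X = rng.integers(0, 2, size=(n_tasks, n_functions)).astype(float)  # ontology annotations per task
W_true = rng.normal(size=(n_functions, n_voxels))                  # synthetic encoding weights
Y = X @ W_true + 0.1 * rng.normal(size=(n_tasks, n_voxels))        # simulated activation maps

# Leave one task out for evaluation
test = 0
train = np.arange(n_tasks) != test

# Closed-form ridge solution: W = (X'X + lam*I)^{-1} X'Y
lam = 1.0
Xtr, Ytr = X[train], Y[train]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_functions), Xtr.T @ Ytr)

# Predict the held-out task's activation map and correlate with the truth
pred = X[test] @ W
r = np.corrcoef(pred, Y[test])[0, 1]
print(f"held-out task prediction accuracy (Pearson r): {r:.2f}")
```

With synthetic data this linear-in-features setup recovers the held-out map almost exactly; the interesting empirical question the dissertation addresses is how well the same scheme works when X comes from a formal ontology and Y from real fMRI.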
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
Subjects--Topical Terms: Neurosciences.
Index Terms--Genre/Form: Electronic books.
LDR    05638nmm a2200337K 4500
001    2363084
005    20231109104755.5
006    m o d
007    cr mn ---uuuuu
008    241011s2023 xx obm 000 0 eng d
020    $a 9798380471466
035    $a (MiAaPQ)AAI30615131
035    $a (MiAaPQ)STANFORDyb752th1520
035    $a AAI30615131
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Walters, Jonathon. $3 3703833
245 10 $a Computational Models in fMRI : $b from the Neural Basis of Visual Recognition Memory to Predicting Brain Activation for Arbitrary Tasks.
264  0 $c 2023
300    $a 1 online resource (140 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
500    $a Advisor: Poldrack, Russell.
502    $a Thesis (Ph.D.)--Stanford University, 2023.
504    $a Includes bibliographical references
520    $a This dissertation explores specific and generalized approaches in the computational modeling of human brain data, from localizing brain regions important for visual recognition memory to predicting cortex-wide activation for arbitrary tasks. In Part 1, we investigate recognition memory for naturalistic scenes, focusing on subsequent memory effects (SMEs), which reveal areas in the brain whose activity during an experience is associated with whether that experience is later remembered or forgotten. Behaviorally, people are remarkably consistent in the images they remember or forget, indicating that inherent properties of images heavily influence human vision and memory. Emerging evidence suggests that this phenomenon of image memorability may be an automatic, stimulus-driven, high-level perceptual signal represented along the path from perception to memory. If true, memorability is an item-level confound present in decades of studies showing SMEs in a consistent set of brain regions. To address this issue, we used a recent large-scale functional magnetic resonance imaging (fMRI) dataset in which 8 subjects were each shown up to 30,000 images over the course of a year, with image repetition delays ranging on the order of seconds to months. We found that, after controlling for image memorability and encoding-related reaction time, SMEs nevertheless persisted in the expected brain regions, providing a critical validation of decades of research into visual recognition memory. Additionally, while we replicated prior work showing image memorability effects (IMEs) in high-level visual cortex, we also found IMEs in early visual cortex, a surprising novel finding that was perhaps uniquely detectable by this dataset. Finally, we observed an overlap of SMEs and IMEs in areas thought to be selective for either effect, disrupting a narrative in the literature of a clean neural separation between these stimulus-driven and observer-driven effects. Part 2 generalizes the analyses in Part 1 with a more flexible, predictive modeling framework that learns to map from a comprehensive space of psychological functions (perceptual, motor, and cognitive) to cortex-wide patterns of activation. Fundamentally, a deep understanding of the brain basis of cognition should enable accurate prediction of brain activity patterns for any psychological task, based on the cognitive functions engaged by that task. Encoding models (EMs), a class of computational models that predict neural responses from known features (e.g., stimulus properties), have succeeded in circumscribed domains like visual neuroscience, but implementing domain-general EMs that predict brain-wide activity for arbitrary tasks has been limited mainly by availability of datasets that 1) sufficiently span a large space of psychological functions, and 2) are sufficiently annotated with such functions to allow robust EM specification. To address these issues, we introduce cognitive encoding models (CEMs), which predict cortical activation patterns for arbitrary tasks based on their perceptual, motor, and cognitive demands, as specified by a formal ontology. CEMs were trained and tested using the Multi-Domain Task Battery dataset of 24 subjects engaging in 44 diverse task conditions over the course of 32 fMRI scans (5.5 hours of task data per subject). We found that CEMs predicted cortical activation maps of held-out tasks with high accuracy, and we probed the trained models for insights into a) hierarchical relationships between psychological functions, b) the degree to which brains of different individuals similarly encode such functions, and c) the functional specialization of large-scale resting-state networks. Our implementation and validation of CEMs provides a proof of principle for the utility of formal ontologies in cognitive neuroscience and motivates the use of CEMs in the further testing of cognitive theories. Taken together, these modeling approaches set the stage for several fruitful research directions, in particular the development of stimulus-computable encoder-decoder models of brain and behavior that continually use the latest models in machine learning and artificial intelligence to characterize, in a data-driven manner, how representations in the brain are transformed from perception to memory in service of future behavior.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web
650  4 $a Neurosciences. $3 588700
650  4 $a Graphs. $3 3685259
650  4 $a Memory. $3 522110
650  4 $a Ontology. $3 530874
650  4 $a Neural networks. $3 677449
655  7 $a Electronic books. $2 lcsh $3 542853
690    $a 0317
690    $a 0800
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a Stanford University. $3 754827
773 0  $t Dissertations Abstracts International $g 85-04B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615131 $z click for full text (PQDT)
Holdings (1 record)
Inventory Number: W9485440
Location: 電子資源 (Electronic Resources)
Item Class: 11.線上閱覽_V (Online viewing)
Material Type: 電子書 (E-book)
Call Number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0