Resource Constrained Neural Architecture Design.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Resource Constrained Neural Architecture Design.
Author: Xiong, Yunyang.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Extent: 183 p.
Notes: Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained by: Dissertations Abstracts International, 83-03B.
Subject: Artificial intelligence.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28718016
ISBN: 9798535599861
LDR 03598nmm a2200397 4500
001 2348639
005 20220912135624.5
008 241004s2021 ||||||||||||||||| ||eng d
020 $a 9798535599861
035 $a (MiAaPQ)AAI28718016
035 $a AAI28718016
040 $a MiAaPQ $c MiAaPQ
100 1 $a Xiong, Yunyang. $3 3688009
245 10 $a Resource Constrained Neural Architecture Design.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300 $a 183 p.
500 $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500 $a Advisor: Singh, Vikas.
502 $a Thesis (Ph.D.)--The University of Wisconsin - Madison, 2021.
506 $a This item must not be sold to any third party vendors.
520 $a Deep neural networks have been highly effective for a wide range of applications in computer vision, natural language processing, speech recognition, medical imaging, and biology. Large amounts of annotated data, dedicated deep learning computing hardware such as the NVIDIA GPU and Google TPU, and the innovative neural network architectures and algorithms have all contributed to rapid advances over the last decade. Despite the foregoing improvements, the ever-growing amount of compute and data resources needed for training neural networks (whose sizes are growing quickly) as well as a need for deploying these models on embedded devices call for designing deep neural networks under various types of resource constraints. For example, low latency and real-time response of deep neural networks can be critical for various applications. While the complexity of deep neural networks can be reduced by model compression, different applications with diverse resource constraints pose unique challenges for neural network architecture design. For instance, each type of device has its own hardware idiosyncrasies and requires different deep architectures to achieve the best accuracy-efficiency trade-off. Consequently, designing neural networks that are adaptive and scalable to applications with diverse resource requirements is not trivial. We need methods that are capable of addressing different application-specific challenges paying attention to: (1) problem type (e.g., classification, object detection, sentence prediction), (2) resource challenges (e.g., strict inference compute, memory, and latency constraint, limited training computational resources, small sample sizes in scientific/biomedical problems). In this dissertation, we describe algorithms that facilitate neural architecture design while effectively addressing application- and domain-specific resource challenges. For diverse application domains, we study neural architecture design strategies respecting different resource needs ranging from test time efficiency to training efficiency and sample efficiency. We show the effectiveness of these ideas for learning with smaller datasets as well as enabling the deployment of deep learning systems on embedded devices with limited computational resources which may enable reducing the environmental effects of using such models.
590 $a School code: 0262.
650 4 $a Artificial intelligence. $3 516317
650 4 $a Language. $3 643551
650 4 $a Cellular telephones. $3 607843
650 4 $a Accuracy. $3 3559958
650 4 $a Deep learning. $3 3554982
650 4 $a Human performance. $3 3562051
650 4 $a Autonomous vehicles. $3 2179092
650 4 $a Voice recognition. $3 3564741
650 4 $a Input output. $3 3686610
650 4 $a Neural networks. $3 677449
650 4 $a Maps. $3 544078
650 4 $a Decomposition. $3 3561186
650 4 $a Approximation. $3 3560410
650 4 $a Design. $3 518875
650 4 $a Natural language processing. $3 1073412
650 4 $a Algorithms. $3 536374
650 4 $a Medical imaging. $3 3172799
650 4 $a Medical screening. $3 735005
650 4 $a Efficiency. $3 753744
653 $a Accuracy-efficiency trade-off
653 $a Computer vision
653 $a Deep neural networks
653 $a Natural language processing
653 $a Neural architecture design
653 $a Resource constraints
690 $a 0800
690 $a 0389
690 $a 0574
690 $a 0679
710 2 $a The University of Wisconsin - Madison. $b Computer Sciences. $3 2099760
773 0 $t Dissertations Abstracts International $g 83-03B.
790 $a 0262
791 $a Ph.D.
792 $a 2021
793 $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28718016
Holdings:
Barcode: W9471077
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0