Enhancing LLM performance = efficacy, fine-tuning, and inference techniques /
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Enhancing LLM performance / edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh.
Other title:
efficacy, fine-tuning, and inference techniques
Added author:
Passban, Peyman.
Publisher:
Cham : Springer Nature Switzerland, 2025.
Description:
xvii, 183 p. : ill. (some col.), digital ; 24 cm.
Contents note:
Introduction and Fundamentals -- SPEED: Speculative Pipelined Execution for Efficient Decoding -- Efficient LLM Inference on CPUs -- KronA: Parameter-Efficient Tuning with Kronecker Adapter -- LoDA: Low-Dimensional Adaptation of Large Language Models -- Sparse Fine-Tuning for Inference Acceleration of Large Language Models -- TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences -- Class-Based Feature Knowledge Distillation -- On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification -- An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition -- Remaining Issues for AI.
Contained By:
Springer Nature eBook
Subject:
Machine learning.
Electronic resource:
https://doi.org/10.1007/978-3-031-85747-8
ISBN:
9783031857478
Enhancing LLM performance [electronic resource] : efficacy, fine-tuning, and inference techniques / edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh. - Cham : Springer Nature Switzerland : Imprint: Springer, 2025. - xvii, 183 p. : ill. (some col.), digital ; 24 cm. - (Machine translation: technologies and applications, ISSN 2522-803X ; v. 7)
This book is a pioneering exploration of the state-of-the-art techniques that drive large language models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. With their combined experience across academia, research, and industry, the authors provide insights into the tools and strategies required to improve LLM performance while reducing computational demands. More than a technical guide, the book bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs in diverse sectors. Readers will find extensive discussions of the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insight and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and related subfields, including machine learning and computational linguistics.
Standard No.: 10.1007/978-3-031-85747-8 (doi)
Subjects--Topical Terms: Machine learning.
LC Class. No.: Q325.5
Dewey Class. No.: 006.31
LDR 03100nmm a2200337 a 4500
001 2412443
003 DE-He213
005 20250704131705.0
006 m d
007 cr nn 008maaau
008 260204s2025 sz s 0 eng d
020 $a 9783031857478 $q (electronic bk.)
020 $a 9783031857461 $q (paper)
024 7 $a 10.1007/978-3-031-85747-8 $2 doi
035 $a 978-3-031-85747-8
040 $a GP $c GP
041 0 $a eng
050 4 $a Q325.5
072 7 $a UYQM $2 bicssc
072 7 $a MAT029000 $2 bisacsh
072 7 $a UYQM $2 thema
082 04 $a 006.31 $2 23
090 $a Q325.5 $b .E58 2025
245 00 $a Enhancing LLM performance $h [electronic resource] : $b efficacy, fine-tuning, and inference techniques / $c edited by Peyman Passban, Andy Way, Mehdi Rezagholizadeh.
260 $a Cham : $b Springer Nature Switzerland : $b Imprint: Springer, $c 2025.
300 $a xvii, 183 p. : $b ill. (some col.), digital ; $c 24 cm.
490 1 $a Machine translation: technologies and applications, $x 2522-803X ; $v v. 7
505 0 $a Introduction and Fundamentals -- SPEED: Speculative Pipelined Execution for Efficient Decoding -- Efficient LLM Inference on CPUs -- KronA: Parameter-Efficient Tuning with Kronecker Adapter -- LoDA: Low-Dimensional Adaptation of Large Language Models -- Sparse Fine-Tuning for Inference Acceleration of Large Language Models -- TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences -- Class-Based Feature Knowledge Distillation -- On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification -- An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition -- Remaining Issues for AI.
520 $a This book is a pioneering exploration of the state-of-the-art techniques that drive large language models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. With their combined experience across academia, research, and industry, the authors provide insights into the tools and strategies required to improve LLM performance while reducing computational demands. More than a technical guide, the book bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs in diverse sectors. Readers will find extensive discussions of the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insight and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and related subfields, including machine learning and computational linguistics.
650 0 $a Machine learning. $3 533906
650 0 $a Natural language processing (Computer science) $3 565309
650 14 $a Machine Learning. $3 3382522
650 24 $a Natural Language Processing (NLP). $3 3755514
700 1 $a Passban, Peyman. $3 3787714
700 1 $a Way, Andy. $3 3529385
700 1 $a Rezagholizadeh, Mehdi. $3 3787715
710 2 $a SpringerLink (Online service) $3 836513
773 0 $t Springer Nature eBook
830 0 $a Machine translation: technologies and applications ; $v volume 7. $3 3787716
856 40 $u https://doi.org/10.1007/978-3-031-85747-8
950 $a Education (SpringerNature-41171)
Holdings
Barcode: W9517941
Location: Electronic resources
Circulation category: 11. Online reading
Material type: E-book
Call number: EB Q325.5
Use type: Normal
Loan status: On shelf
Attachments: 0