Information retrieval evaluation in a changing world : lessons learned from 20 years of CLEF
Record Type:
Bibliographic - Electronic Resource : Monograph/item
Title/Author:
Information retrieval evaluation in a changing world / edited by Nicola Ferro, Carol Peters.
Other Title:
lessons learned from 20 years of CLEF
Other Authors:
Ferro, Nicola.
Peters, Carol.
Publisher:
Cham : Springer International Publishing, 2019.
Description:
xxii, 595 p. : ill., digital ; 24 cm.
Contents Note:
From Multilingual to Multimodal: The Evolution of CLEF over Two Decades -- The Evolution of Cranfield -- How to Run an Evaluation Task -- An Innovative Approach to Data Management and Curation of Experimental Data Generated through IR Test Collections -- TIRA Integrated Research Architecture -- EaaS: Evaluation-as-a-Service and Experiences from the VISCERAL Project -- Lessons Learnt from Experiments on the Ad-Hoc Multilingual Test Collections at CLEF -- The Challenges of Language Variation in Information Access -- Multi-lingual Retrieval of Pictures in ImageCLEF -- Experiences From the ImageCLEF Medical Retrieval and Annotation Tasks -- Automatic Image Annotation at ImageCLEF -- Image Retrieval Evaluation in Specific Domains -- 'Bout Sound and Vision: CLEF beyond Text Retrieval Tasks -- The Scholarly Impact and Strategic Intent of CLEF eHealth Labs from 2012-2017 -- Multilingual Patent Text Retrieval Evaluation: CLEF-IP -- Biodiversity Information Retrieval through Large Scale Content-Based Identification: A Long-Term Evaluation -- From XML Retrieval to Semantic Search and Beyond -- Results and Lessons of the Question Answering Track at CLEF -- Evolution of the PAN Lab on Digital Text Forensics -- RepLab: an Evaluation Campaign for Online Monitoring Systems -- Continuous Evaluation of Large-scale Information Access Systems: A Case for Living Labs -- The Scholarly Impact of CLEF 2010-2017 -- Reproducibility and Validity in CLEF -- Visual Analytics and IR Experimental Evaluation -- Adopting Systematic Evaluation Benchmarks in Operational Settings.
Contained By:
Springer Nature eBook
Subject:
Information retrieval.
Electronic Resource:
https://doi.org/10.1007/978-3-030-22948-1
ISBN:
9783030229481
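As a quick sanity check, an ISBN-13 such as the one above can be validated from its check digit: the thirteen digits are weighted alternately 1 and 3, and a valid ISBN's weighted sum is a multiple of 10. A minimal sketch in plain Python (the function name is ours, not part of any catalog API):

```python
# ISBN-13 validation: weight the 13 digits alternately 1 and 3;
# the weighted sum of a valid ISBN is divisible by 10.
def isbn13_is_valid(isbn: str) -> bool:
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (1 if i % 2 == 0 else 3)
               for i, d in enumerate(digits)) % 10 == 0

print(isbn13_is_valid("9783030229481"))  # True: the e-book ISBN above
print(isbn13_is_valid("9783030229474"))  # True: the print ISBN from the 020 field
```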
Series:
The information retrieval series, ISSN 1871-7500 ; v. 41
Summary:
This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades. CLEF's main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing "what has been achieved", but above all on "what has been learnt". The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
Standard No.: 10.1007/978-3-030-22948-1 (doi)
Subjects--Corporate Names:
Cross-Language Evaluation Forum. Conference.
Subjects--Topical Terms:
Information retrieval.
LC Class. No.: ZA3075 .I54 2019
Dewey Class. No.: 025.04
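The Standard No. above is a DOI; prefixed with https://doi.org/ it becomes the resolvable link shown under Electronic Resource. A minimal sketch of resolving it, assuming network access and using only the Python standard library (urllib follows the doi.org redirect chain automatically):

```python
# Resolve the record's DOI: doi.org answers with redirects that end at
# the publisher's landing page for the e-book.
import urllib.request

with urllib.request.urlopen("https://doi.org/10.1007/978-3-030-22948-1") as resp:
    print(resp.geturl())  # final URL after redirects (the Springer page)
    print(resp.status)    # 200 if the landing page loaded
```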
MARC Record:
LDR 04512nmm a2200349 a 4500
001 2242718
003 DE-He213
005 20200710160833.0
006 m d
007 cr nn 008maaau
008 211207s2019 sz s 0 eng d
020    $a 9783030229481 $q (electronic bk.)
020    $a 9783030229474 $q (paper)
024 7  $a 10.1007/978-3-030-22948-1 $2 doi
035    $a 978-3-030-22948-1
040    $a GP $c GP
041 0  $a eng
050  4 $a ZA3075 $b .I54 2019
072  7 $a UNH $2 bicssc
072  7 $a COM030000 $2 bisacsh
072  7 $a UNH $2 thema
072  7 $a UND $2 thema
082 04 $a 025.04 $2 23
090    $a ZA3075 $b .I43 2019
245 00 $a Information retrieval evaluation in a changing world $h [electronic resource] : $b lessons learned from 20 years of CLEF / $c edited by Nicola Ferro, Carol Peters.
260    $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2019.
300    $a xxii, 595 p. : $b ill., digital ; $c 24 cm.
490 1  $a The information retrieval series, $x 1871-7500 ; $v v.41
505 0  $a From Multilingual to Multimodal: The Evolution of CLEF over Two Decades -- The Evolution of Cranfield -- How to Run an Evaluation Task -- An Innovative Approach to Data Management and Curation of Experimental Data Generated through IR Test Collections -- TIRA Integrated Research Architecture -- EaaS: Evaluation-as-a-Service and Experiences from the VISCERAL Project -- Lessons Learnt from Experiments on the Ad-Hoc Multilingual Test Collections at CLEF -- The Challenges of Language Variation in Information Access -- Multi-lingual Retrieval of Pictures in ImageCLEF -- Experiences From the ImageCLEF Medical Retrieval and Annotation Tasks -- Automatic Image Annotation at ImageCLEF -- Image Retrieval Evaluation in Specific Domains -- 'Bout Sound and Vision: CLEF beyond Text Retrieval Tasks -- The Scholarly Impact and Strategic Intent of CLEF eHealth Labs from 2012-2017 -- Multilingual Patent Text Retrieval Evaluation: CLEF-IP -- Biodiversity Information Retrieval through Large Scale Content-Based Identification: A Long-Term Evaluation -- From XML Retrieval to Semantic Search and Beyond -- Results and Lessons of the Question Answering Track at CLEF -- Evolution of the PAN Lab on Digital Text Forensics -- RepLab: an Evaluation Campaign for Online Monitoring Systems -- Continuous Evaluation of Large-scale Information Access Systems: A Case for Living Labs -- The Scholarly Impact of CLEF 2010-2017 -- Reproducibility and Validity in CLEF -- Visual Analytics and IR Experimental Evaluation -- Adopting Systematic Evaluation Benchmarks in Operational Settings.
520    $a This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades. CLEF's main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing "what has been achieved", but above all on "what has been learnt". The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
610 20 $a Cross-Language Evaluation Forum. $b Conference. $3 3502134
650  0 $a Information retrieval. $3 566853
650 14 $a Information Storage and Retrieval. $3 761906
650 24 $a Natural Language Processing (NLP) $3 3381674
700 1  $a Ferro, Nicola. $3 2058115
700 1  $a Peters, Carol. $3 893119
710 2  $a SpringerLink (Online service) $3 836513
773 0  $t Springer Nature eBook
830  0 $a Information retrieval series ; $v v.41. $3 3502135
856 40 $u https://doi.org/10.1007/978-3-030-22948-1
950    $a Computer Science (SpringerNature-11645)
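A record like the one above is normally consumed with a MARC parser rather than read by hand. A minimal sketch using the pymarc library (our choice; any MARC 21 parser would do), assuming the record has been exported to a hypothetical binary file clef_record.mrc:

```python
# Read a MARC 21 binary file and print the fields shown above:
# leader, 245 (title), 020 (ISBNs), 856 (electronic location).
from pymarc import MARCReader

with open("clef_record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        print("Leader:", record.leader)  # e.g. 04512nmm a2200349 a 4500
        for field in record.get_fields("245"):
            print("Title:", " ".join(field.get_subfields("a", "b")))
        for field in record.get_fields("020"):
            print("ISBN:", " ".join(field.get_subfields("a")))
        for field in record.get_fields("856"):
            print("URL:", " ".join(field.get_subfields("u")))
```

In the leader, position 06 (m, computer file) and position 07 (m, monograph) are what the display above renders as the "Bibliographic - Electronic Resource : Monograph/item" record type.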
Holdings:
Barcode: W9403764
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: eBook
Call Number: EB ZA3075 .I54 2019
Use Type: Normal
Loan Status: On Shelf
Holds: 0