Verification and Validation of Machine Learning Safety in Learning-Enabled Autonomous Systems.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title / Author:
Verification and Validation of Machine Learning Safety in Learning-Enabled Autonomous Systems.
Author:
Huang, Wei.
Extent:
1 online resource (187 pages)
Note:
Source: Dissertations Abstracts International, Volume: 84-10, Section: A.
Contained by:
Dissertations Abstracts International, 84-10A.
Subject:
Autonomous underwater vehicles.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30394654 (click for full text, PQDT)
ISBN:
9798377690405
Verification and Validation of Machine Learning Safety in Learning-Enabled Autonomous Systems. / Huang, Wei. - 1 online resource (187 pages)
Source: Dissertations Abstracts International, Volume: 84-10, Section: A.
Thesis (Ph.D.)--The University of Liverpool (United Kingdom), 2023.
Includes bibliographical references.
The past few years have witnessed tremendous progress in machine learning (ML) models, especially deep neural networks. This achievement of human-level intelligence has promoted the wide application of learning-enabled systems (LESs), which contain ML models as components, in many safety-critical tasks such as robot-assisted surgery and self-driving cars. These safety-critical tasks in turn raise concern over whether modern ML techniques can meet safety requirements: ML models have been shown to be vulnerable to robustness, security, and transparency problems. For example, small, even humanly imperceptible, perturbations of the inputs can change the final prediction. There is therefore an urgent need for rigorous analysis techniques that give an objective evaluation of the safety and security of LESs and their ML components. Unfortunately, this is very challenging because ML models tend to be large-scale and hard to analyse directly (they are commonly referred to as "black boxes"). In this thesis, we tackle the challenge through testing of ML components and practical verification of LESs. These techniques belong to Verification and Validation (V&V), which is widely applied to traditional systems, such as airborne software and automotive systems, to rigorously engineer their development; here, we adapt it to work with LESs and ML models.

We start with an introduction, preliminaries, and a literature review of safety problems in LESs. We then develop two black-box testing methods for the robustness of DL models: one is based on coverage-guided testing, a well-known software engineering technique, while the other takes the distribution of operational data into account. In the next chapter, the mechanism of backdoor attacks on tree ensembles is studied for the first time, followed by two techniques for debug-testing backdoors: one detects backdoor inputs at runtime, and the other synthesizes the backdoor knowledge from the tree ensemble. Beyond debug-testing robustness and backdoors, we present new metrics for evaluating DL models: apart from the coverage rate provided by coverage-guided testing, reliability, defined as generalisation times robustness, can assess the overall performance of ML models. The proposed evaluation approach is further applied to assess a YOLOv3 model on Autonomous Underwater Vehicles. Finally, we study how failures of CNN models propagate to the whole LES, and for this purpose develop practical verification methods for the robustness of LESs. We close with a comprehensive discussion of contributions, findings, and future work, and a summarised conclusion.
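The robustness problem the abstract describes, where a small input perturbation flips a model's prediction, can be sketched as follows. This is an illustrative toy example (a linear classifier with made-up weights and an FGSM-style worst-case step under an L-infinity budget), not code from the thesis.

```python
import numpy as np

# Toy linear classifier: score = w . x; predict class 1 if the score is positive.
w = np.array([0.4, -0.3, 0.2])

def predict(x):
    return int(w @ x > 0)

x = np.array([0.1, 0.2, 0.3])     # clean input; score = 0.04 - 0.06 + 0.06 = 0.04 > 0
eps = 0.2                         # perturbation budget (L-infinity norm)

# FGSM-style worst-case step: move every coordinate against the sign of its
# weight, lowering the score by eps * ||w||_1 = 0.2 * 0.9 = 0.18.
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prediction flips: 1 -> 0
```

The perturbation stays within `eps` of the clean input in every coordinate, yet the decision changes, which is exactly the failure mode that motivates robustness testing of DL models.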
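The coverage-guided testing mentioned above measures how thoroughly a test suite exercises a network's internals. A minimal sketch of one such metric, neuron coverage (the fraction of hidden units activated above a threshold by at least one test input), is shown below; the network and its weights are hypothetical, and the thesis's own coverage criteria may differ.

```python
import numpy as np

# Toy one-hidden-layer ReLU network with randomly generated (illustrative) weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))        # 4 inputs -> 8 hidden neurons

def hidden(x):
    return np.maximum(0.0, x @ W)  # ReLU activations of the hidden layer

def neuron_coverage(test_suite, threshold=0.0):
    # A neuron is "covered" if some input in the suite drives its activation
    # above the threshold; coverage is the covered fraction of all neurons.
    acts = np.stack([hidden(x) for x in test_suite])
    covered = (acts > threshold).any(axis=0)
    return float(covered.mean())

suite = [rng.normal(size=4) for _ in range(20)]
print(f"coverage: {neuron_coverage(suite):.2f}")
```

A coverage-guided tester would then mutate inputs to raise this number, on the intuition that uncovered neurons correspond to untested behaviour.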
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023
Mode of access: World Wide Web
ISBN: 9798377690405
Subjects--Topical Terms:
Autonomous underwater vehicles.
Index Terms--Genre/Form:
Electronic books.
LDR  04076nmm a2200373K 4500
001  2355132
005  20230515064621.5
006  m o d
007  cr mn ---uuuuu
008  241011s2023 xx obm 000 0 eng d
020  $a 9798377690405
035  $a (MiAaPQ)AAI30394654
035  $a (MiAaPQ)Liverpool_3168023
035  $a AAI30394654
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Huang, Wei. $3 1023420
245 10 $a Verification and Validation of Machine Learning Safety in Learning-Enabled Autonomous Systems.
264  0 $c 2023
300  $a 1 online resource (187 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Dissertations Abstracts International, Volume: 84-10, Section: A.
500  $a Advisor: Huang, Xiaowei ; Zhao, Xingyu.
502  $a Thesis (Ph.D.)--The University of Liverpool (United Kingdom), 2023.
504  $a Includes bibliographical references
520  $a The past few years have witnessed tremendous progress in machine learning (ML) models, especially deep neural networks. This achievement of human-level intelligence has promoted the wide application of learning-enabled systems (LESs), which contain ML models as components, in many safety-critical tasks such as robot-assisted surgery and self-driving cars. These safety-critical tasks in turn raise concern over whether modern ML techniques can meet safety requirements: ML models have been shown to be vulnerable to robustness, security, and transparency problems. For example, small, even humanly imperceptible, perturbations of the inputs can change the final prediction. There is therefore an urgent need for rigorous analysis techniques that give an objective evaluation of the safety and security of LESs and their ML components. Unfortunately, this is very challenging because ML models tend to be large-scale and hard to analyse directly (they are commonly referred to as "black boxes"). In this thesis, we tackle the challenge through testing of ML components and practical verification of LESs. These techniques belong to Verification and Validation (V&V), which is widely applied to traditional systems, such as airborne software and automotive systems, to rigorously engineer their development; here, we adapt it to work with LESs and ML models. We start with an introduction, preliminaries, and a literature review of safety problems in LESs. We then develop two black-box testing methods for the robustness of DL models: one is based on coverage-guided testing, a well-known software engineering technique, while the other takes the distribution of operational data into account. In the next chapter, the mechanism of backdoor attacks on tree ensembles is studied for the first time, followed by two techniques for debug-testing backdoors: one detects backdoor inputs at runtime, and the other synthesizes the backdoor knowledge from the tree ensemble. Beyond debug-testing robustness and backdoors, we present new metrics for evaluating DL models: apart from the coverage rate provided by coverage-guided testing, reliability, defined as generalisation times robustness, can assess the overall performance of ML models. The proposed evaluation approach is further applied to assess a YOLOv3 model on Autonomous Underwater Vehicles. Finally, we study how failures of CNN models propagate to the whole LES, and for this purpose develop practical verification methods for the robustness of LESs. We close with a comprehensive discussion of contributions, findings, and future work, and a summarised conclusion.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538  $a Mode of access: World Wide Web
650  4 $a Autonomous underwater vehicles. $3 3444520
650  4 $a Software reliability. $3 3687003
650  4 $a Neural networks. $3 677449
650  4 $a Seeds. $3 573605
650  4 $a Algorithms. $3 536374
650  4 $a Decision trees. $3 827433
650  4 $a Agronomy. $3 2122783
650  4 $a Computer science. $3 523869
650  4 $a Robotics. $3 519753
650  4 $a Transportation. $3 555912
655  7 $a Electronic books. $2 lcsh $3 542853
690  $a 0800
690  $a 0285
690  $a 0984
690  $a 0771
690  $a 0709
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a The University of Liverpool (United Kingdom). $3 1684840
773 0  $t Dissertations Abstracts International $g 84-10A.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30394654 $z click for full text (PQDT)
Holdings (1 item):
Barcode: W9477488
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0
Notes: -
Attachments: -