Development and Application of Robustness Evaluation Techniques for AI/ML Models Derived From Biological Data.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Development and Application of Robustness Evaluation Techniques for AI/ML Models Derived From Biological Data.
Author:
Chuah, Joshua Ru.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2024.
Description:
134 p.
Notes:
Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
Contained By:
Dissertations Abstracts International, 85-12B.
Subject:
Bioinformatics.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31144281
ISBN:
9798383059241
Chuah, Joshua Ru.
Development and Application of Robustness Evaluation Techniques for AI/ML Models Derived From Biological Data.
- Ann Arbor : ProQuest Dissertations & Theses, 2024 - 134 p.
Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
Thesis (Ph.D.)--Rensselaer Polytechnic Institute, 2024.
Artificial intelligence (AI) and machine learning (ML) models are frequently used to analyze large, complex biomedical datasets. These types of models are commonly used for tasks such as disease diagnosis, biomarker identification, and network analysis. However, the data that these models are derived from and used on are often characterized by significant amounts of noise resulting from patient-to-patient heterogeneity, different measurement protocols, and other commonly encountered sources of noise. This creates a problem for the robustness of these models and one outcome is that relatively few AI/ML models have seen widespread clinical use. As such, evaluation and subsequent improvement of AI/ML model robustness is vital for clinical translation.

This dissertation examines methods which will allow researchers to quantify model robustness, and further demonstrates how to develop more robust AI/ML models. First, this work defines a framework which can be used to evaluate the robustness of an already-trained biomarker-based diagnostic model. This is done by measuring the quality of the biomarkers used to generate the classifier and observing the classifier's performance when the data is perturbed by several sources of noise. Next, a detailed investigation was performed that looked at the robustness of deep learning medical image classification models in response to being trained by data that was artificially perturbed. One key outcome from this evaluation was that it was demonstrated that perturbing training samples results in excellent classifier performance not only for noisy testing data but also does not sacrifice performance on unperturbed images. This is especially important as a classifier will need to be able to perform well on several distributions of data to truly be generalizable across multiple datasets. Finally, a method for the creation of multi-omic co-expression networks of longitudinal biological data was developed. The robustness of this model was assessed by noise perturbation of the data, and further verified by comparing the model outcomes to known biological information.

By understanding how to measure and improve AI/ML model robustness, robust models can be generated that perform well on diverse sets of data. In conclusion, this dissertation lays the foundation for advancing the clinical applicability of AI/ML models by establishing methodologies to assess and enhance their robustness in the face of inherent data noise.
ISBN: 9798383059241
Subjects--Topical Terms: Bioinformatics.
Subjects--Index Terms: Machine learning
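The dissertation itself is not reproduced in this record, so the short Python sketch below is only a generic illustration of the noise-perturbation robustness evaluation described in the abstract above: a trained classifier is tested on data perturbed with increasing levels of noise and its accuracy is tracked. The synthetic data, the choice of classifier, the noise levels, and the repeat count are all assumptions made for this sketch, not details taken from the author's work.

# Illustrative only: generic noise-perturbation robustness check for a trained classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a biomarker feature matrix (samples x features).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Add Gaussian noise scaled to each feature's standard deviation and observe
# how test accuracy degrades as the perturbation grows.
feature_sd = X_train.std(axis=0)
for noise_level in [0.0, 0.1, 0.25, 0.5, 1.0]:
    accs = []
    for _ in range(20):  # average over independent noise draws
        X_noisy = X_test + rng.normal(0.0, noise_level, size=X_test.shape) * feature_sd
        accs.append(accuracy_score(y_test, clf.predict(X_noisy)))
    print(f"noise level {noise_level:.2f}: "
          f"accuracy {np.mean(accs):.3f} +/- {np.std(accs):.3f}")

The same kind of perturbation, applied to the training set rather than the test set, corresponds to the noise-augmentation strategy the abstract reports for the medical image classifiers.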
LDR    03737nmm a2200421 4500
001    2401911
005    20241022111605.5
006    m o d
007    cr#unu||||||||
008    251215s2024 ||||||||||||||||| ||eng d
020    $a 9798383059241
035    $a (MiAaPQ)AAI31144281
035    $a AAI31144281
035    $a 2401911
040    $a MiAaPQ $c MiAaPQ
100 1  $a Chuah, Joshua Ru. $0 (orcid)0009-0008-0165-9292 $3 3772129
245 10 $a Development and Application of Robustness Evaluation Techniques for AI/ML Models Derived From Biological Data.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300    $a 134 p.
500    $a Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
500    $a Advisor: Hahn, Juergen.
502    $a Thesis (Ph.D.)--Rensselaer Polytechnic Institute, 2024.
520    $a Artificial intelligence (AI) and machine learning (ML) models are frequently used to analyze large, complex biomedical datasets. These types of models are commonly used for tasks such as disease diagnosis, biomarker identification, and network analysis. However, the data that these models are derived from and used on are often characterized by significant amounts of noise resulting from patient-to-patient heterogeneity, different measurement protocols, and other commonly encountered sources of noise. This creates a problem for the robustness of these models and one outcome is that relatively few AI/ML models have seen widespread clinical use. As such, evaluation and subsequent improvement of AI/ML model robustness is vital for clinical translation. This dissertation examines methods which will allow researchers to quantify model robustness, and further demonstrates how to develop more robust AI/ML models. First, this work defines a framework which can be used to evaluate the robustness of an already-trained biomarker-based diagnostic model. This is done by measuring the quality of the biomarkers used to generate the classifier and observing the classifier's performance when the data is perturbed by several sources of noise. Next, a detailed investigation was performed that looked at the robustness of deep learning medical image classification models in response to being trained by data that was artificially perturbed. One key outcome from this evaluation was that it was demonstrated that perturbing training samples results in excellent classifier performance not only for noisy testing data but also does not sacrifice performance on unperturbed images. This is especially important as a classifier will need to be able to perform well on several distributions of data to truly be generalizable across multiple datasets. Finally, a method for the creation of multi-omic co-expression networks of longitudinal biological data was developed. The robustness of this model was assessed by noise perturbation of the data, and further verified by comparing the model outcomes to known biological information. By understanding how to measure and improve AI/ML model robustness, robust models can be generated that perform well on diverse sets of data. In conclusion, this dissertation lays the foundation for advancing the clinical applicability of AI/ML models by establishing methodologies to assess and enhance their robustness in the face of inherent data noise.
590    $a School code: 0185.
650  4 $a Bioinformatics. $3 553671
650  4 $a Biostatistics. $3 1002712
650  4 $a Biomedical engineering. $3 535387
650  4 $a Medical imaging. $3 3172799
653    $a Machine learning
653    $a Metabolomics
653    $a Multiomics
653    $a Biomedical datasets
653    $a Noise perturbation
690    $a 0715
690    $a 0308
690    $a 0541
690    $a 0574
690    $a 0800
710 2  $a Rensselaer Polytechnic Institute. $b Biomedical Engineering. $3 3178857
773 0  $t Dissertations Abstracts International $g 85-12B.
790    $a 0185
791    $a Ph.D.
792    $a 2024
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31144281
Location: Electronic resources (電子資源)
Items: 1 record
Inventory Number: W9510231
Location Name: Electronic resources (電子資源)
Item Class: 11. Online Reading_V (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: General use (Normal)
Loan Status: On shelf
No. of reservations: 0