Compelling Robot Behaviors through Supervised Learning and Choreorobotics.
Record Type:
Bibliographic record - Electronic resource : Monograph/item
Title/Author:
Compelling Robot Behaviors through Supervised Learning and Choreorobotics. / Cuan, Caitlin Rose.
Author:
Cuan, Caitlin Rose.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2023.
Description:
159 p.
Note:
Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
Contained By:
Dissertations Abstracts International, 85-04B.
Subject:
Robots.
Electronic Resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615087
ISBN:
9798380483872
Thesis (Ph.D.)--Stanford University, 2023.
Abstract:
As robots transition from industrial and research settings into everyday human environments, robots must be able to (1) learn from humans while benefiting from the full range of the humans' knowledge and (2) learn to interact with humans in safe, intuitive, and social ways. This thesis describes a series of compelling robot behaviors, where human perception and interaction are foregrounded in a variety of tasks. Supervised learning is incorporated in three projects to improve robots' capabilities in dynamic, changing environments.

In the first project, robots learned a door-opening task from human teleoperators. During data collection, many failure modes occurred, such as the human teacher using the robot to apply too much force to the door handle or being uncertain about when the robot was contacting the door. To improve this, we wrote software to map the robot's force-torque sensor data to real-time haptic feedback for the human demonstrator. We found that when haptic feedback was provided to the human during data collection, performance improved for both human data collection (efficiency of learning from demonstration teacher examples) and autonomous robot door opening (successful deployment of the learned policy on the robot).

In the second project, robots learned to navigate in response to human gestures by using imitation learning paired with model predictive control. Humans and the environment were observed through a series of images, which were fed to a neural network that output navigation points. These navigation points were then fed to a model predictive control algorithm. This was a novel combination of a high-level neural network with a low-level navigation planner. We observed that trained policies correctly responded to human gestures in the majority of cases across different gesture scenarios. Thus, visual imitation learning paired with model predictive control effectively resulted in gesture-aware navigation.

In the third project, robots' movement generated music in real time through software. Through two experiments, we showed that augmenting a robot with motion-generated music increased robot likeability and perceived intelligence.

In the fourth project, robots learned to move in groups based on a choreographer's preferences while running the music generation software from the third project. In an experiment for the fourth project, we demonstrated that our multi-robot flocking system with gesture and musical accompaniment effectively engaged and enthralled humans. We trained a model to select between different navigation modes, based on how a human choreographer made selections. We observed that humans' perceptions of the robots and their overall experience were not significantly altered by the method (human choreographer selection, model prediction, or a control condition) by which the robots' behaviors were modulated.

The compelling robot behaviors described in this thesis elucidate how teaching interfaces and interactions with robots in everyday settings can be appealing, effective, and delightful.
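The second project's split between a learned high-level policy and a low-level planner can be illustrated with a minimal sketch. This is not code from the thesis: a placeholder function stands in for the trained image-to-waypoint network, and a simple sampling-based receding-horizon controller with unicycle dynamics stands in for the model predictive controller. All function and parameter names here are hypothetical.

import numpy as np

def waypoint_policy(image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network: image -> (x, y) navigation point.
    A real implementation would be a CNN trained by imitation learning."""
    # For illustration only, return a fixed goal in the robot frame.
    return np.array([1.5, 0.5])

def simulate(state, controls, dt=0.1):
    """Roll out unicycle dynamics: state = (x, y, heading), control = (v, omega)."""
    x, y, th = state
    traj = []
    for v, w in controls:
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y, th))
    return np.array(traj)

def mpc_step(state, waypoint, horizon=10, samples=256, rng=None):
    """Sampling-based receding-horizon control: draw candidate control sequences,
    keep the one whose rollout ends closest to the waypoint, and return its first control."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_cost, best_u0 = np.inf, (0.0, 0.0)
    for _ in range(samples):
        controls = np.column_stack([
            rng.uniform(0.0, 0.8, horizon),    # linear velocity v
            rng.uniform(-1.0, 1.0, horizon),   # angular velocity omega
        ])
        traj = simulate(state, controls)
        cost = np.linalg.norm(traj[-1, :2] - waypoint)
        if cost < best_cost:
            best_cost, best_u0 = cost, tuple(controls[0])
    return best_u0

if __name__ == "__main__":
    state = np.array([0.0, 0.0, 0.0])          # x, y, heading
    image = np.zeros((64, 64, 3))              # placeholder camera frame
    goal = waypoint_policy(image)              # high-level: network picks a navigation point
    v, w = mpc_step(state, goal)               # low-level: controller tracks that point
    print(f"first control: v={v:.2f} m/s, omega={w:.2f} rad/s")

The design motivation for this split, as described in the abstract, is that the learned policy only needs to produce navigation points at the rate images arrive, while the low-level planner handles continuous control of the robot toward each point.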
MARC Record:
LDR     04160nmm a2200349 4500
001     2403598
005     20241118135843.5
006     m o d
007     cr#unu||||||||
008     251215s2023 ||||||||||||||||| ||eng d
020     $a 9798380483872
035     $a (MiAaPQ)AAI30615087
035     $a (MiAaPQ)STANFORDkb164ng7952
035     $a AAI30615087
040     $a MiAaPQ $c MiAaPQ
100 1   $a Cuan, Caitlin Rose. $3 3773870
245 10  $a Compelling Robot Behaviors through Supervised Learning and Choreorobotics.
260 1   $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300     $a 159 p.
500     $a Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
500     $a Advisor: Okamura, Allison.
502     $a Thesis (Ph.D.)--Stanford University, 2023.
520     $a As robots transition from industrial and research settings into everyday human environments, robots must be able to (1) learn from humans while benefiting from the full range of the humans' knowledge and (2) learn to interact with humans in safe, intuitive, and social ways. This thesis describes a series of compelling robot behaviors, where human perception and interaction are foregrounded in a variety of tasks. Supervised learning is incorporated in three projects to improve robots' capabilities in dynamic, changing environments. In the first project, robots learned a door-opening task from human teleoperators. During data collection, many failure modes occurred, such as the human teacher using the robot to apply too much force to the door handle or being uncertain about when the robot was contacting the door. To improve this, we wrote software to map the robot's force-torque sensor data to real-time haptic feedback for the human demonstrator. We found that when haptic feedback was provided to the human during data collection, performance improved for both human data collection (efficiency of learning from demonstration teacher examples) and autonomous robot door opening (successful deployment of the learned policy on the robot). In the second project, robots learned to navigate in response to human gestures by using imitation learning paired with model predictive control. Humans and the environment were observed through a series of images, which were fed to a neural network that output navigation points. These navigation points were then fed to a model predictive control algorithm. This was a novel combination of a high-level neural network with a low-level navigation planner. We observed that trained policies correctly responded to human gestures in the majority of cases across different gesture scenarios. Thus, visual imitation learning paired with model predictive control effectively resulted in gesture-aware navigation. In the third project, robots' movement generated music in real time through software. Through two experiments, we showed that augmenting a robot with motion-generated music increased robot likeability and perceived intelligence. In the fourth project, robots learned to move in groups based on a choreographer's preferences while running the music generation software from the third project. In an experiment for the fourth project, we demonstrated that our multi-robot flocking system with gesture and musical accompaniment effectively engaged and enthralled humans. We trained a model to select between different navigation modes, based on how a human choreographer made selections. We observed that humans' perceptions of the robots and their overall experience were not significantly altered by the method (human choreographer selection, model prediction, or a control condition) by which the robots' behaviors were modulated. The compelling robot behaviors described in this thesis elucidate how teaching interfaces and interactions with robots in everyday settings can be appealing, effective, and delightful.
590     $a School code: 0212.
650  4  $a Robots. $3 529507
650  4  $a Musical instruments. $3 550167
650  4  $a Music. $3 516178
650  4  $a Feedback. $3 677181
650  4  $a Systems design. $3 3433840
650  4  $a Neural networks. $3 677449
650  4  $a Sound. $3 542298
650  4  $a Design. $3 518875
650  4  $a Robotics. $3 519753
690     $a 0413
690     $a 0800
690     $a 0389
690     $a 0771
710 2   $a Stanford University. $3 754827
773 0   $t Dissertations Abstracts International $g 85-04B.
790     $a 0212
791     $a Ph.D.
792     $a 2023
793     $a English
856 40  $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30615087
Holdings:
Barcode: W9511918
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: E-book
Call Number: EB
Use Type: Normal
Loan Status: On shelf
Holds: 0