Robust structure-based autonomous color learning on a mobile robot.
Record type:
Bibliographic - Language material, printed : Monograph/item
Title / Author:
Robust structure-based autonomous color learning on a mobile robot.
Author:
Sridharan, Mohan.
Physical description:
144 p.
Notes:
Advisers: Benjamin Kuipers; Peter Stone.
Contained By:
Dissertation Abstracts International, 68-10B.
Subject:
Artificial Intelligence.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3284681
ISBN:
9780549263067
LDR    04607nam 2200337 a 45
001    958949
005    20110704
008    110704s2007 ||||||||||||||||| ||eng d
020    $a 9780549263067
035    $a (UMI)AAI3284681
035    $a AAI3284681
040    $a UMI $c UMI
100 1  $a Sridharan, Mohan. $3 1282416
245 10 $a Robust structure-based autonomous color learning on a mobile robot.
300    $a 144 p.
500    $a Advisers: Benjamin Kuipers; Peter Stone.
500    $a Source: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6747.
502    $a Thesis (Ph.D.)--The University of Texas at Austin, 2007.
520    $a Keywords. Autonomous Color Learning, Illumination Invariance, Realtime Vision, Legged robots.
520    $a
Mobile robots are increasingly finding application in fields as diverse as medicine, surveillance and navigation. In order to operate in the real world, robots are primarily dependent on sensory information but the ability to accurately sense the real world is still missing. Though visual input in the form of color images from a camera is a rich source of information for mobile robots, until recently most people have focussed their attention on other sensors such as laser, sonar and tactile sensors. There are several reasons for this reliance on other relatively low-bandwidth sensors. Most sophisticated vision algorithms require substantial computational (and memory) resources and assume a stationary or slow moving camera, while many mobile robot systems and embedded systems are characterized by rapid camera motion and real-time operation within constrained computational resources. In addition, color cameras require time-consuming manual color calibration, which is sensitive to illumination changes, while mobile robots typically need to be deployed in a short period of time and often go into places with changing illumination.
520    $a
It is commonly asserted that in order to achieve autonomous behavior, an agent must learn to deal with unexpected environmental conditions. However, for true extended autonomy, an agent must be able to recognize when to abandon its current model in favor of learning a new one, how to learn in its current situation, and also what features or representation to learn. This thesis is a fully implemented example of such autonomy in the context of color learning and segmentation, which primarily leverages the fact that many mobile robot applications involve a structured environment consisting of objects of unique shape(s) and color(s) - information that can be exploited to overcome the challenges mentioned above. The main contributions of this thesis are as follows.
520    $a
First, the thesis presents a hybrid color representation that enables color learning both within constrained lab settings and in un-engineered indoor corridors, i.e. it enables the robot to decide what to learn. The second main contribution of the thesis is to enable a mobile robot to exploit the known structure of its environment to significantly reduce human involvement in the color calibration process. The known positions, shapes and color labels of the objects of interest are used by the robot to autonomously plan an action sequence to facilitate learning, i.e. it decides how to learn. The third main contribution is a novel representation for illumination, which enables the robot to detect and adapt smoothly to a range of illumination changes, without any prior knowledge of the different illuminations, i.e. the robot figures out when to learn. Fourth, as a means of testing the proposed algorithms, the thesis provides a real-time mobile robot vision system, which performs color segmentation, object recognition and line detection in the presence of rapid camera motion. In addition, a practical comparison is performed of the color spaces for robot vision -- YCbCr, RGB and LAB are considered. The baseline system initially requires manual color calibration and constant illumination, but with the proposed innovations, it provides a self-contained mobile robot vision system that enables a robot to exploit the inherent structure and plan a motion sequence for learning the desired colors, and to detect and adapt to illumination changes, with minimal human supervision.
590    $a School code: 0227.
650  4 $a Artificial Intelligence. $3 769149
650  4 $a Computer Science. $3 626642
650  4 $a Engineering, Robotics. $3 1018454
690    $a 0771
690    $a 0800
690    $a 0984
710 2  $a The University of Texas at Austin. $b Electrical and Computer Engineering. $3 1018445
773 0  $t Dissertation Abstracts International $g 68-10B.
790    $a 0227
790 10 $a Kuipers, Benjamin, $e advisor
790 10 $a Stone, Peter, $e advisor
791    $a Ph.D.
792    $a 2007
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3284681
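The 520 abstract fields above describe the approach only in prose. As a rough, hypothetical illustration of two of the ideas they mention -- classifying pixels through a learned color map over a quantized color space, and flagging illumination changes by comparing color distributions -- the following Python sketch is offered. It is not taken from the dissertation: the RGB quantization, the 32-bin resolution, the KL-divergence test, and every name and threshold in it are assumptions made purely for illustration.

# Illustrative sketch only -- not code from the thesis; names, bin counts,
# and thresholds are assumptions.
import numpy as np

BINS = 32  # bins per channel; the thesis' actual quantization is not assumed here

def quantize(image):
    """Map each 8-bit RGB pixel to a cell index in a BINS x BINS x BINS grid."""
    q = (image.astype(np.int32) * BINS) // 256
    return q[..., 0] * BINS * BINS + q[..., 1] * BINS + q[..., 2]

def segment(image, color_map):
    """Label every pixel via a learned lookup table (flat array of color labels)."""
    return color_map[quantize(image)]

def normalized_histogram(image):
    """Color distribution of an image over the quantized color space."""
    h = np.bincount(quantize(image).ravel(), minlength=BINS ** 3).astype(float)
    return h / h.sum()

def illumination_changed(image, reference_hist, threshold=0.5):
    """Treat a large KL divergence from the reference distribution as a possible
    illumination change; the threshold is purely illustrative."""
    p = normalized_histogram(image) + 1e-9
    q = reference_hist + 1e-9
    return float(np.sum(p * np.log(p / q))) > threshold

# Toy usage: a random "learned" color map, a random frame, and a brighter copy.
rng = np.random.default_rng(0)
color_map = rng.integers(0, 4, size=BINS ** 3)              # 4 symbolic colors
frame = rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8)
labels = segment(frame, color_map)                          # per-pixel labels
reference = normalized_histogram(frame)
brighter = np.clip(frame.astype(np.int32) + 80, 0, 255).astype(np.uint8)
print(labels.shape, illumination_changed(brighter, reference))

In the system the abstract describes, the color map would be learned from the known positions, shapes, and colors of objects in the structured environment rather than assigned at random, and a separate map could be maintained per illumination condition.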
Holdings
Barcode: W9122414
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB W9122414
Use type: Normal
Loan status: On shelf
Hold status: 0