Interaction between modules in learning systems for vision applications.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Interaction between modules in learning systems for vision applications.
Author: Sethi, Amit.
Description: 96 p.
Notes: Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4012.
Contained by: Dissertation Abstracts International, 67-07B.
Subject: Engineering, Electronics and Electrical.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3223715
ISBN: 9780542775611
Dissertation note: Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
Abstract: Complex vision tasks such as event detection in a surveillance video can be divided into subtasks such as human detection, tracking, and trajectory analysis. The video can be thought of as being composed of various features. These features can be roughly arranged in a hierarchy from low-level features to high-level features. Low-level features include edges and blobs, and high-level features include objects and events. Loosely, the low-level feature extraction is based on signal/image processing techniques, while the high-level feature extraction is based on machine learning techniques.
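The abstract above describes a feedforward feature hierarchy: low-level modules extract edges and blobs with signal/image-processing techniques, and high-level modules build objects and events on top of them with learned components. The Python sketch below illustrates only that pipeline shape; every function name, threshold, and the toy frames are illustrative assumptions and do not come from the dissertation.

```python
# Illustrative feedforward feature hierarchy (not code from the dissertation).
# Low-level stages use crude signal-processing operations; the high-level
# stages are placeholders for learned detectors.
import numpy as np

def extract_edges(frame: np.ndarray) -> np.ndarray:
    """Low-level feature: gradient-magnitude 'edge' map."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def extract_blobs(edges: np.ndarray, thresh: float = 1.0) -> np.ndarray:
    """Low-level feature: binary blob mask from thresholded edges."""
    return edges > thresh

def detect_objects(blobs: np.ndarray) -> list[tuple[int, int]]:
    """High-level feature: treat the centroid of the foreground pixels as one
    detected 'object' (stand-in for a learned object detector)."""
    ys, xs = np.nonzero(blobs)
    return [(int(ys.mean()), int(xs.mean()))] if len(xs) else []

def detect_events(tracks: list[list[tuple[int, int]]]) -> list[str]:
    """High-level feature: flag a trivial 'movement' event when an object
    centroid shifts between frames (stand-in for trajectory analysis)."""
    return ["movement" for t in tracks if len(t) > 1 and t[0] != t[-1]]

# Strictly feedforward: each stage consumes only the previous stage's output.
frames = [np.zeros((8, 8)), np.zeros((8, 8))]
frames[0][2, 2] = frames[1][5, 5] = 10.0               # a 'moving' bright spot
detections = [detect_objects(extract_blobs(extract_edges(f))) for f in frames]
track = [dets[0] for dets in detections if dets]       # single-object 'tracking'
print(detect_events([track]))                          # ['movement']
```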
MARC record:
LDR 03128nmm 2200301 4500
001 1833567
005 20071009090858.5
008 130610s2006 eng d
020   $a 9780542775611
035   $a (UMI)AAI3223715
035   $a AAI3223715
040   $a UMI $c UMI
100 1 $a Sethi, Amit. $3 1262970
245 1 0 $a Interaction between modules in learning systems for vision applications.
300   $a 96 p.
500   $a Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4012.
500   $a Adviser: Thomas S. Huang.
502   $a Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
520   $a Complex vision tasks such as event detection in a surveillance video can be divided into subtasks such as human detection, tracking, and trajectory analysis. The video can be thought of as being composed of various features. These features can be roughly arranged in a hierarchy from low-level features to high-level features. Low-level features include edges and blobs, and high-level features include objects and events. Loosely, the low-level feature extraction is based on signal/image processing techniques, while the high-level feature extraction is based on machine learning techniques.
520   $a Traditionally, vision systems extract features in a feedforward manner on the hierarchy; that is, certain modules extract low-level features and other modules make use of these low-level features to extract high-level features. Along with others in the research community, we have worked on this design approach. We briefly present our work on object recognition and multiperson tracking systems designed with this approach and highlight its advantages and shortcomings. However, our focus is on system design methods that allow tight feedback between the layers of the feature hierarchy, as well as among the high-level modules themselves. We present previous research on systems with feedback and discuss the strengths and limitations of these approaches. This analysis allows us to develop a new framework for designing complex vision systems that allows tight feedback in a hierarchy of features and modules that extract these features using a graphical representation. This new framework is based on factor graphs. It relaxes some of the constraints of traditional factor graphs and replaces their function nodes with modified versions of modules that have been developed for specific vision tasks. These modules can be easily formulated by slightly modifying modules developed for specific tasks in other vision systems, if we can match their input and output variables to variables in our graphical structure. It also draws inspiration from the product-of-experts model and the free-energy view of the EM algorithm. We present experimental results and discuss the path for future development.
590   $a School code: 0090.
650   4 $a Engineering, Electronics and Electrical. $3 626636
650   4 $a Artificial Intelligence. $3 769149
650   4 $a Computer Science. $3 626642
690   $a 0544
690   $a 0800
690   $a 0984
710 2 0 $a University of Illinois at Urbana-Champaign. $3 626646
773 0 $t Dissertation Abstracts International $g 67-07B.
790 1 0 $a Huang, Thomas S., $e advisor
790   $a 0090
791   $a Ph.D.
792   $a 2006
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3223715
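The second 520 field above describes a framework based on factor graphs in which task-specific modules play the role of function nodes, so that beliefs can flow both bottom-up and top-down between layers of the feature hierarchy. The toy Python sketch below shows the general sum-product idea on a two-variable graph (blob presence and object presence); the variables, factor tables, and numbers are invented for illustration and are not the thesis's actual model.

```python
# Toy factor graph with two binary variables: blob presence B and object
# presence O. A low-level "module" supplies evidence on B, a compatibility
# factor couples B and O, and a prior sits on O. Sum-product messages give
# a bottom-up marginal for O and a top-down refinement of the belief on B.
import numpy as np

f_blob = np.array([0.3, 0.7])            # low-level evidence on B (illustrative)
f_prior = np.array([0.6, 0.4])           # prior over O (illustrative)
f_compat = np.array([[0.8, 0.2],         # f(B, O): rows index B, columns index O
                     [0.3, 0.7]])

# Bottom-up message: from B through the compatibility factor to O.
msg_b_to_o = f_compat.T @ f_blob         # sum over B of f(B, O) * evidence(B)
belief_o = msg_b_to_o * f_prior
belief_o /= belief_o.sum()               # marginal of O given the blob evidence

# Top-down feedback: from O back through the factor to B, re-weighting
# the low-level evidence with high-level context.
msg_o_to_b = f_compat @ f_prior          # sum over O of f(B, O) * prior(O)
belief_b = f_blob * msg_o_to_b
belief_b /= belief_b.sum()               # low-level belief refined by context

print("belief over O:", belief_o)
print("belief over B:", belief_b)
```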
Holdings:
Barcode: W9224431
Location: Electronic Resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0