Search Strategies for Localization in Images and Videos.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Search Strategies for Localization in Images and Videos / Bency, Archith John.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Pagination: 128 p.
Note: Source: Dissertations Abstracts International, Volume: 80-09, Section: B.
Dissertation note: Thesis (Ph.D.)--University of California, Santa Barbara, 2018.
Restriction: This item must not be sold to any third party vendors.
Abstract: The emphasis of this thesis is on developing novel search-driven methods for image and video analysis tasks. In comparison to visual recognition tasks, the lack of large-scale annotated datasets for localization tasks can make training and generalizing comparably complex models challenging. In this context, we investigate whether search-driven methods can provide competitive approaches to localization tasks and, if so, which video and image representations are appropriate for an efficient search mechanism. Specifically, we explore search-driven methods for object tracking in videos, object localization in images, and temporal action detection in untrimmed videos. Most current methods for video object tracking fail under poor image quality and severe compression artifacts, which are commonplace in video recorded by large camera networks. Moreover, datasets with ground-truth object tracks have mainly been treated as a source for validating tracking performance rather than as a database of domain-relevant knowledge. We leverage pre-existing datasets to track objects in unseen videos using simple motion features that are robust to video artifacts. For every training video sequence, a document representing motion information is generated, and a searchable library of these documents is built from a training set of annotated videos. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. The associated library annotations provide coarse localization of objects in the unseen video. Retrieved object locations are further refined for the new video using an efficient warping scheme. We demonstrate improved tracking performance over trackers that model target appearance on video datasets with challenging visual artifacts. The next part of the thesis explores object localization in images. Current methods for image object detection need strong supervision in the form of object-extent bounding boxes, which require more effort to acquire than image labels, making the development of weakly supervised detection methods an important task. Local spatial and semantic patterns encoded in the convolutional layers of deep neural networks trained for image classification are utilized for object localization. Localization candidates are defined on a grid over deep feature-map activations and are organized in a search tree. An efficient beam-search-based strategy is used to prune and select promising localization candidates. Post-processing steps using the selected candidates lead to localization estimates for objects in images. We achieve improved location estimation of objects in images from benchmark datasets compared to state-of-the-art methods, and demonstrate comparable performance in object spatial-span estimation. In the final part of this thesis, we describe a novel method for temporal action detection that exploits mid-level descriptions generated over clusters of lower-level spatio-temporal features. The descriptors are structured to incorporate temporal context and to support efficient search using binary operations. Two temporal labeling strategies for these descriptors are explored: k-nearest neighbor classification and conditional random fields. We achieve performance comparable to a large portion of state-of-the-art algorithms with significantly reduced model complexity.
Contained by: Dissertations Abstracts International, 80-09B.
Subject: Computer Engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10935790
ISBN: 9780438896598
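The abstract above describes, for the weakly supervised image-localization part, candidates defined on a grid over deep feature-map activations, organized in a search tree, and pruned with beam search. The following is a minimal, generic sketch of that kind of beam search, not the thesis's actual algorithm: the mean-activation score, the "split into halves" child rule, and every name in the code are assumptions made only for illustration.

```python
# Minimal, generic beam-search sketch over box candidates on a coarse
# activation-map grid. The scoring rule and the child-generation rule
# are illustrative assumptions, not details taken from the thesis.
import numpy as np


def mean_activation(act, box):
    """Score a candidate box (r0, c0, r1, c1), end-exclusive, by its mean activation."""
    r0, c0, r1, c1 = box
    return float(act[r0:r1, c0:c1].mean())


def children(box):
    """Generate child candidates by halving the box vertically and horizontally."""
    r0, c0, r1, c1 = box
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    kids = []
    if r1 - r0 > 1:
        kids += [(r0, c0, rm, c1), (rm, c0, r1, c1)]   # top / bottom halves
    if c1 - c0 > 1:
        kids += [(r0, c0, r1, cm), (r0, cm, r1, c1)]   # left / right halves
    return kids


def beam_search_localize(act, beam_width=4, depth=3):
    """Explore the candidate tree, keeping only the best `beam_width` boxes per level."""
    h, w = act.shape
    beam = [(0, 0, h, w)]             # root candidate: the whole feature map
    visited = set(beam)
    for _ in range(depth):
        frontier = []
        for box in beam:
            for kid in children(box):
                if kid not in visited:
                    visited.add(kid)
                    frontier.append(kid)
        if not frontier:
            break
        frontier.sort(key=lambda b: mean_activation(act, b), reverse=True)
        beam = frontier[:beam_width]  # prune: discard unpromising candidates
    return sorted(visited, key=lambda b: mean_activation(act, b), reverse=True)


# Toy 8x8 "activation map" with one bright region standing in for an object.
act = np.zeros((8, 8))
act[2:5, 3:7] = 1.0
print(beam_search_localize(act)[:3])
```

On this toy map the top-ranked candidates concentrate around the bright region; the sketch is only meant to show how beam pruning limits how many candidate boxes are ever scored.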
MARC record:
LDR  04538nmm a2200325 4500
001  2207832
005  20190923114239.5
008  201008s2018 ||||||||||||||||| ||eng d
020    $a 9780438896598
035    $a (MiAaPQ)AAI10935790
035    $a (MiAaPQ)ucsb:14052
035    $a AAI10935790
040    $a MiAaPQ $c MiAaPQ
100 1  $a Bency, Archith John. $3 3434833
245 1 0 $a Search Strategies for Localization in Images and Videos.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 128 p.
500    $a Source: Dissertations Abstracts International, Volume: 80-09, Section: B.
500    $a Publisher info.: Dissertation/Thesis.
500    $a Advisor: Manjunath, B. S.
502    $a Thesis (Ph.D.)--University of California, Santa Barbara, 2018.
506    $a This item must not be sold to any third party vendors.
520    $a The emphasis of the thesis work is in developing novel search-driven methods for image and video analysis tasks. In comparison to visual recognition tasks, lack of large-scale annotated datasets for localization tasks can make training and generalizing analogous complex models challenging. In this context, we investigate whether search-driven methods can provide competitive approaches to localization tasks and if so, what are the video and image representations that are appropriate for an efficient search mechanism? Specifically, we explore search-driven methods for object tracking in videos, object localization in images and temporal action detection in untrimmed videos. Most current methods in video object tracking fail in conditions of poor image quality and severe compression artifacts, which are common-place in video recorded in large camera networks. Also, datasets with ground-truth object tracks have mainly been looked at as a source for validating tracking performance and not as a database of domain-relevant knowledge. Pre-existing datasets are leveraged to track objects in unseen videos using simple motion features which are robust to video artifacts. For every training video sequence, a document that represents motion information is generated and a searchable library of documents is generated from a training set of annotated videos. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. The associated library annotations provide coarse localization of objects in the unseen video. Retrieved object locations are further refined to the new video using an efficient warping scheme. We demonstrate improved tracking performance over trackers which model target appearance in video datasets with challenging visual artifacts. The next part of the thesis explores the problem of object localization in images. Current methods for image object detection need strong supervision in the form of object extent bounding boxes, which require more effort to acquire compared to image labels, making development of weakly supervised detection methods an important task. Local spatial and semantic patterns encoded in convolutional layers of deep neural networks, trained for the task of image classification, are utilized for object localization. Localization candidates are defined on a grid over deep feature map activations and are organized in a search tree. An efficient beam search based strategy is used to prune and select promising localization candidates. Post-processing steps using selected candidates lead to localization estimates for objects in images. We achieve improvement in location estimation of objects in images from benchmark datasets compared to state-of-the-art methods, and demonstrate comparable performance in object spatial span estimation. In the final part of this thesis, we describe a novel method in Temporal action detection that exploits mid-level descriptions generated over clusters of lower-level spatio-temporal features. The descriptors are structured to incorporate temporal context and be subject to efficient search using binary operations. Two temporal labeling strategies for these descriptors are explored, k-nearest neighbor classification and conditional random fields. We achieve comparable performance to a large portion of state-of-the-art algorithms with a method with significantly reduced model complexity.
590    $a School code: 0035.
650  4 $a Computer Engineering. $3 1567821
650  4 $a Electrical engineering. $3 649834
690    $a 0464
690    $a 0544
710 2  $a University of California, Santa Barbara. $b Electrical and Computer Engineering. $3 2095334
773 0  $t Dissertations Abstracts International $g 80-09B.
790    $a 0035
791    $a Ph.D.
792    $a 2018
793    $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10935790
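The 520 abstract above also mentions mid-level binary descriptors built to support efficient search with binary operations, labeled either by k-nearest neighbor classification or by conditional random fields. Below is a minimal sketch of the k-NN half of that idea, using XOR and popcount for Hamming distance; the toy descriptors, labels, and function names are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of Hamming-distance k-NN labeling of binary descriptors,
# using XOR + popcount as the "binary operations" search step.
from collections import Counter


def hamming(a: int, b: int) -> int:
    """XOR marks the bits where the two descriptors differ; count them."""
    return bin(a ^ b).count("1")


def knn_label(query: int, library, k: int = 3) -> str:
    """Majority vote over the k library descriptors closest to the query."""
    nearest = sorted(library, key=lambda item: hamming(query, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]


# Toy library of (binary descriptor, action label) pairs.
library = [
    (0b10110010, "run"), (0b10110011, "run"),
    (0b01001100, "jump"), (0b01001110, "jump"),
]
print(knn_label(0b10110110, library, k=3))   # -> run
```

Conditional random fields, the second labeling strategy named in the abstract, would replace this per-descriptor vote with a joint labeling over the temporal sequence; that is not sketched here.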
Holdings:
Barcode: W9384381
Location: Electronic resources (電子資源)
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0