1. Enhancing a Web Crawler with Arabic Search Capability
   In: DTIC (2010)
3. A Journey in Entity Related Retrieval for TREC 2009
   In: DTIC (2009)
5. Lucene for n-grams using the ClueWeb Collection
   In: DTIC (2009)
6. PARADISE Based Search Engine at TREC 2009 Web Track
   In: DTIC (2009)
8. Informedia at TRECVID 2003: Analyzing and Searching Broadcast News Video
   Hauptmann, A.; Baron, R. V.; Chen, M.; Christel, M.; Duygulu, P.; Huang, C.; Jin, R.; Lin, W.; Ng, T.; Moraveji, N.
   In: DTIC (2004)

   Abstract:
   A concentrated effort was made by the authors to develop an interface allowing a human to succeed with video topics as defined in TRECVID 2001. This interface was part of the TRECVID 2002 interactive query task, in which a person could issue multiple queries and refinements to the video corpus in formulating the shot answer set for the topic at hand. The interface was designed to present a visually rich set of thumbnail images to the user, tailored for expert control over the number, scale, and attributes of the images. Armed with this interface, an expert user completely familiar with the retrieval system and its features, but having no a priori knowledge of the TRECVID 2002 search test corpus, performed well on the search tasks. This exact system as used in the TRECVID 2002 interactive query task was again used for the TRECVID 2003 evaluation. To facilitate better visual browsing, we extended the storyboard idea to show keyframes across multiple video documents, where a document is automatically derived by segmenting a video production into story units through speech, silence, black frames, and other heuristics. The hierarchy of information units is frame, shot, document, and full production. A set of documents is returned by a query. The shots for these documents are presented in a single storyboard, i.e., an ordered set of keyframes presented simultaneously on the computer screen, one keyframe per shot. Without further filtering, most queries would overwhelm the user with too many images. Through the use of query context, the cardinality of the image set can be greatly reduced. The search engine for text queries makes use of the Okapi method. The multiple-document storyboard can be set to show only the shots containing matching words. This strategy of selecting a single thumbnail image to represent a video document based on query context resulted in more efficient information retrieval with greater user satisfaction.

   Notes: Sponsored in part by National Science Foundation Grant no. IRI-9817496. Presented at TRECVID 2003, held in Gaithersburg, MD, on 17-21 Nov 2003. The original document contains color images.

   Keywords: *COMPUTATIONAL LINGUISTICS; *INFORMATION RETRIEVAL; *VIDEO MAPPING; CHARACTER RECOGNITION; Cybernetics; Information Science; INTERACTIVE SEARCH; Linguistics; NATURAL LANGUAGE; Operations Research; Recording and Playback Devices; SCHNEIDERMAN'S FACE DETECTOR ALGORITHM; SEARCH ENGINES; SPEECH RECOGNITION; TEXT QUERIES; VIDEO FRAMES

   URL: http://www.dtic.mil/docs/citations/ADA456318
   URL: http://oai.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA456318
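The abstract above states only that text queries are scored with the Okapi method; the standard formulation of that family is Okapi BM25. The following is a minimal illustrative sketch of BM25 ranking over a toy corpus of shot transcripts — the function name, parameter defaults (k1=1.2, b=0.75), and example documents are assumptions for illustration, not details from the paper:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each document against the query with Okapi BM25."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / N  # average document length
    # document frequency of each query term
    df = {t: sum(1 for doc in tokenized if t in doc)
          for t in set(query.lower().split())}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term, n_t in df.items():
            if n_t == 0:
                continue  # term absent from the corpus contributes nothing
            idf = math.log(1 + (N - n_t + 0.5) / (n_t + 0.5))
            freq = tf[term]
            # BM25 term saturation with length normalization
            score += idf * freq * (k1 + 1) / (
                freq + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

# Toy shot transcripts (hypothetical, for illustration only)
docs = [
    "anchor reads news about the election results",
    "weather forecast for the weekend",
    "election night coverage from the studio",
]
scores = bm25_scores("election news", docs)
print(scores.index(max(scores)))  # prints 0: doc 0 matches both query terms
```

In the system described, such scores would pick which shots survive the query-context filter, so the storyboard shows only keyframes of shots whose transcripts match the query words.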
10. Integration of Language and Cognition at Pre-Conceptual Level
    In: DTIC (2003)
12. Searching and Search Engines: When is Current Research Going to Lead to Major Progress?
    In: School of Information Studies - Faculty Scholarship (2000)