TREC: Researching the Problem of Measuring Search
Ian Soboroff - National Institute of Standards and Technology
Wednesday, October 21, 2015, 11:00 am-12:00 pm
Abstract

Information retrieval as a research field is intensely focused on evaluation: measuring whether a change to an algorithm within the larger search process improves search, experimentally and for actual users.  IR research concentrates on those algorithms, and evaluation conferences like NIST's TREC support that by providing data for researchers to use in measuring their improvement.  In order to do that, we at NIST have our own research questions: how should we measure search, how do we build datasets that let us do that effectively, and how do we know when we're doing it better?
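To make "measuring search" concrete: a standard way evaluations like TREC score a system is to compare its ranked results for a topic against human relevance judgments (the "qrels" in a test collection), using a metric such as average precision. The sketch below is illustrative only, not taken from the talk; the document IDs and judgments are invented.

```python
def average_precision(ranking, relevant):
    """Mean of precision@k at each rank k where a relevant document appears.

    ranking: list of document IDs as returned by a search system.
    relevant: set of document IDs judged relevant for the topic (the qrels).
    """
    hits = 0
    precisions = []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Hypothetical system output and judgments for one topic:
ranking = ["d3", "d7", "d1", "d9", "d4"]
relevant = {"d3", "d9"}
print(average_precision(ranking, relevant))  # (1/1 + 2/4) / 2 = 0.75
```

Averaging this score over all topics in a collection gives mean average precision (MAP), one of the measures whose suitability is itself a subject of the evaluation research described above.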

If that sounds just too meta, don't panic.  Each TREC track has this dual problem: how to build systems that solve the task, and how to measure which systems are the most effective at doing so.  In this talk, I will detail some past, current, and future TREC efforts in order to show how they are driving both IR research and IR evaluation research, and how that cycle helps the entire research community.

Bio

Dr. Ian Soboroff is the head of the Retrieval Group at the National Institute of Standards and Technology (NIST).  The Retrieval Group organizes the Text REtrieval Conference (TREC), the Text Analysis Conference (TAC), and the TREC Video Retrieval Evaluation (TRECVID). These are all large, community-based research workshops that drive the state of the art in information retrieval, video search, web search, information extraction, text summarization, and other areas of information access. He has co-authored many publications in information retrieval evaluation, test collection building, text filtering, collaborative filtering, and intelligent software agents. His current research interests include building test collections for social media environments and nontraditional retrieval tasks.

This talk is organized by Naomi Feldman.