(Note that the time differs from the regular colloquium time)
In professional search tasks such as precision medicine literature search, a query often involves multiple aspects. To assess the relevance of a document, a searcher must painstakingly validate each aspect of the query and follow a task-specific logic to reach a relevance decision. In such scenarios, we say the searcher makes a structured relevance judgment, as opposed to the conventional univariate (binary or graded) relevance judgment. Ideally, a search engine would support the searcher’s workflow and follow the same steps to predict document relevance. This approach may not only yield highly effective retrieval models, but also open up opportunities for the model to explain its decisions in the same ‘lingo’ as the searcher.
In this talk, I will discuss our recent work on explainable retrieval models that emulate how medical experts make structured relevance judgments. Using data from the TREC Precision Medicine literature search track (2017–2019), we found that a simple, explainable, and label-efficient model can consistently perform as well as complex, black-box, and data-hungry learning-to-rank models. These results suggest that leveraging the structure of professional search queries is a promising direction toward building explainable search tools that support professional search tasks.
You may use the following Zoom link to join the talk:
https://umd.zoom.us/j/98806584197?pwd=SXBWOHE1cU9adFFKUmN2UVlwUEJXdz09