Query-biased search result summaries, or “snippets”, help users decide whether a result is relevant to their information need, and have become increasingly important for helping searchers with difficult or ambiguous search tasks. However, existing snippet evaluation methods focus on how well a snippet summarizes the document for the given query, and do not consider the user’s search task. We propose a methodology for task-based snippet evaluation, with the aim of directly evaluating how well snippets help users satisfy their information need. This includes an open-source infrastructure for collecting controlled yet realistic searcher behavior data, which allows analysis of search session success as a function of snippet generation quality. We also present preliminary results from applying this methodology, and identify some of the challenges that remain.