Designing Human-AI Partnerships for Information Search and Evaluation

Overview 

This research project designs and prototypes new ways to find, interpret, and evaluate online information, with the goal of combating rampant misinformation. These prototypes enable the research team to study how people evaluate and integrate information from disparate online sources, with a particular focus on fact-checking during the COVID-19 pandemic.

Methods and Findings 

The research team conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects users' dwell time, visual attention, and the mental resources involved in using the system. In a within-subject experiment, participants used an interactive and a non-interactive version of a mock AI fact-checking system and rated the perceived correctness of COVID-19-related claims. The data collected included web-page interactions, eye-tracking data, and mental workload measured with the NASA-TLX instrument. The researchers found that the affordance of interactively manipulating the AI system's prediction parameters affected users' dwell times and eye fixations on areas of interest (AOIs), but not their mental workload. In the interactive system, participants spent the most time evaluating claims' correctness, followed by reading news. This promising result suggests that interactivity can play a positive role in a mixed-initiative AI-powered system.
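The study's analysis code is not published with this summary, but the minimal sketch below illustrates the kind of processing such an experiment involves: summing fixation durations into per-AOI dwell times for each participant and condition, comparing the two system versions with a paired test, and scoring NASA-TLX workload. The file name, column names, and condition labels are illustrative assumptions, not the study's actual data format.

```python
# Illustrative sketch only: the file layout, column names, and condition
# labels below are assumptions, not the study's actual data format.
import pandas as pd
from scipy.stats import wilcoxon

# Hypothetical fixation log: one row per fixation, with its duration (ms),
# the AOI it landed on, the participant, and the system condition.
fixations = pd.read_csv("fixations.csv")
# assumed columns: participant, condition, aoi, duration_ms

# Dwell time per participant x condition x AOI = sum of fixation durations.
dwell = (
    fixations.groupby(["participant", "condition", "aoi"])["duration_ms"]
    .sum()
    .unstack("condition")  # one column per condition: interactive / non_interactive
)

# Within-subject comparison of dwell time on each AOI between the two
# system versions (Wilcoxon signed-rank as a non-parametric paired test).
for aoi, group in dwell.groupby(level="aoi"):
    stat, p = wilcoxon(group["interactive"], group["non_interactive"])
    print(f"{aoi}: W={stat:.1f}, p={p:.3f}")

def nasa_tlx_score(ratings: dict, weights: dict) -> float:
    """Overall NASA-TLX workload: ratings (0-100) on the six subscales,
    weighted by how often each subscale was chosen in the instrument's
    15 pairwise comparisons (so the weights sum to 15)."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15
```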

By studying human-AI interaction in the context of misinformation, the research team demonstrated how responsible AI technologies can help combat misinformation. This integrative test case can yield broader insights into designing socially responsible AI.

Prototype 

The current prototype offers new ways to assess the veracity of online content, helping people compare information from diverse sources and assess the reliability of those sources. It also serves as a research platform for studying how people evaluate and integrate information from disparate online sources. Try the demo system for yourself by entering a query about COVID-19.
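The demo's implementation is not described here, but the sketch below illustrates one plausible shape of the prototype's core idea under stated assumptions: gather per-source stances toward a claim and combine them, weighting each source by an estimate of its reliability. The Evidence structure, the stance and reliability scales, and the example sources are all hypothetical.

```python
# Hypothetical sketch of reliability-weighted evidence aggregation; the data
# model and scoring scheme are assumptions, not the project's actual system.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str          # e.g. a news outlet or fact-checking site
    stance: float        # -1.0 (refutes) .. +1.0 (supports) the claim
    reliability: float   # 0.0 .. 1.0, prior trust in the source

def aggregate_verdict(evidence: list[Evidence]) -> float:
    """Reliability-weighted average stance: >0 leans true, <0 leans false."""
    total_weight = sum(e.reliability for e in evidence)
    if total_weight == 0:
        return 0.0  # no trusted evidence: abstain
    return sum(e.stance * e.reliability for e in evidence) / total_weight

# Example: three sources disagree about a COVID-19 claim.
claim = "Vitamin C prevents COVID-19 infection"
evidence = [
    Evidence("health-agency.example", stance=-0.9, reliability=0.95),
    Evidence("news-site.example", stance=-0.4, reliability=0.7),
    Evidence("blog.example", stance=+0.8, reliability=0.2),
]
print(f"{claim!r}: verdict score = {aggregate_verdict(evidence):+.2f}")
```

In a real system, the stances would come from a claim-evidence matching model and the reliability priors from source-credibility data; the weighted average here is simply the most transparent aggregation to illustrate the idea.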

Select Publications


Anubrata Das and Matthew Lease. “A Conceptual Framework for Evaluating Fairness in Search.” Technical report, University of Texas at Austin. 2019.  

Anubrata Das, Kunjan Mehta, and Matthew Lease. “CobWeb: A Research Prototype for Exploring User Bias in Political Fact-Checking.” In ACM SIGIR Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). 2019.