While many models have been proposed for automated checking of news, even established news outlets and fact-checking services are often disputed today. How can we design AI models that are transparent and accountable, so that they earn human trust and build a foundation for effective human-AI partnerships? We envision AI as an assistive technology that enhances and augments human abilities through interaction, rather than providing black-box automation. I'll discuss our recent work integrating human-centered, front-end interface design with back-end language processing algorithms, using the specific problem of misinformation to ground our broader work toward FAT* (fairness, accountability, and transparency) design for enhanced search systems.

Time allowing, I'll discuss a second stream of research on effective task design and aggregation for collecting item ratings via crowdsourcing. Cost-benefit analysis over 10,000 ratings and rationales collected on Mechanical Turk suggests a win-win: experienced workers can provide rationales for their ratings with almost no increase in task completion time, while those rationales deliver more reliable ratings, greater transparency for assessing workers and ratings, reduced need for expert gold labels, dual supervision from ratings and rationales, and added value from the rationales themselves.
Matthew Lease (https://www.ischool.utexas.edu/~ml) is an Associate Professor in the School of Information at the University of Texas at Austin. He received his Ph.D. in Computer Science from Brown University and his B.Sc. in Computer Science from the University of Washington. He has received early career awards from the NSF, IMLS, and DARPA. Recent honors include Best Student Paper at the 2019 European Conference on Information Retrieval (ECIR) and Best Paper at the 2016 AAAI Conference on Human Computation and Crowdsourcing (HCOMP). Lease is currently helping lead Good Systems (http://goodsystems.utexas.edu).