Why Artificial Intelligence relies on Humans (& Humanists) in the Loop: Models of Information, Misinformation & Disinformation
IRB 4105 and Zoom: https://umd.zoom.us/j/97884449871?pwd=UFlReWh3ckpuMTI2ZXVEVXRoeGErdz09
Thursday, February 22, 2024, 2:00-3:15 pm
Abstract

We live in a rapidly changing knowledge ecosystem in which everyone increasingly relies on search engines, whether for mapping a route in an unfamiliar place or for seeking the context that helps us understand why things happen. Large language models offer powerful solutions to what had been complex problems of research and writing. Handwritten text recognition programs (as I will discuss) offer greater access than ever to the secrets of early modern manuscripts. Students rarely consult printed books and articles in the way they did decades ago; instead they confront information delivered by computer-generated algorithms and large language models. Many of us, even historians, have gotten used to the ease of finding our sources, both primary and secondary, via web search engines. AI has created powerful access mechanisms and tools, and yet it makes mistakes of all kinds and always relies on humans: for design, shaping, input, algorithms, corrections, suggestions, and context. It is what we make of it. In that sense it can be incredibly powerful, and increasingly so. But it will always contain flaws and always need guidance and correction. It will always require the wisdom provided by context: by humans who have read deeply and widely, including deeply in the handwriting of those who lived in the distant past. It needs the wisdom of those who know how to determine that something did not happen (that the large language model got it wrong), of those who have access to privileged sources, studies, and data, or of those who just know the local roads. Of even more concern is the ease with which all AI can be shaped, and reshaped, sometimes with malign intent. So while AI often helps us find information, it also provides misinformation, and even deliberate disinformation. How do we craft models that make allowance for the humans, and humanists, who will always need to be in the loop, if we really want to understand the world better?
Of course humans can also make mistakes, and mislead. The larger question is how we can consciously create models that acknowledge this reliance. How can we create models that acknowledge and respect the importance of scholars across the university to creating knowledge? How can we deliberately conceptualize and frame these search engines, LLMs, HTR tools, and the like to work together with humans, and with humanists? We need to translate the disciplinary training of generations of humanists into this process and keep them engaged interactively, as scholars. Otherwise we run the risk of creating increasingly useless "knowledge" that we cannot verify, that leads us astray not only on roads but in our medical, political, and social choices.

Bio

Holly Brewer is Burke Professor of American History and Associate Professor at the University of Maryland. She is a specialist in early American history and the early British empire, as well as early modern debates about justice. Relevant to this talk, she has been co-chair of PACT (Publishing Access & Contract Terms) for the past four years, which has been helping to craft open access policies across the university and within the larger scholarly ecosystem. She gave a paper at the American Philosophical Society in 2022 on using AI for Handwritten Text Recognition. More generally, she has been grappling with questions of knowledge and verification, and with how and why the traditional training of historians needs to become more transparent in how we use, access, and validate information. She is also project director and lead editor of a digital humanities website that uses creative tools to help us access the past (slaverylawpower.org), which has been supported by the NHPRC (the US National Archives) and the ASLH (the American Society for Legal History). To read more about her work, see earlymodernjustice.org.


She has been awarded more than eight national prizes for her published work, including the Order of the Coif Book Prize from the American Association of Law Schools. She is currently finishing a book that examines the origins of American slavery in larger political and ideological debates, tentatively entitled "The Kings' Slaves: Creating America's Plantation System," for which she was awarded a Guggenheim Fellowship in 2014 as well as fellowship support from the NEH, the National Humanities Center, and the Cromwell Foundation. She published part of it as "Slavery, Sovereignty and 'Inheritable Blood': Reconsidering John Locke and the Origins of American Slavery" in the American Historical Review (October 2017), which won the 2019 Srinivas Aravamudan Prize from the American Society for Eighteenth Century Studies. She published another part, "Creating a Common Law of Slavery for England and Its New World Empire," in the Law & History Review in 2021, which was awarded the Sutherland Prize for the best article in British legal history from the American Society for Legal History. Her first book, By Birth or Consent: Children, Law, and the Anglo-American Revolution in Authority, traced the origin and impact of "democratical" ideas across the empire by examining debates about who can consent in theory and legal practice. She is deeply interested in how this earlier history helps us understand the impact of the American Revolution and in interpreting the Constitution.

This talk is organized by Emily Dacquisto