Legal and Ethical Issues in AI
How should we think about AI? Is it fundamentally a copying technology that copies and creates new things? Or is it more like mining - an extractive process wherein a machine is able to remove resources or data, often at great environmental expense? Are there ways for AI to be useful, and are there ways to use it in the public interest? And regardless of what metaphor you use to understand machine learning and large language models, how do they intersect with copyright and with libraries?
In a wide-ranging conversation, data journalist and NYU professor Meredith Broussard, Re:Create Coalition Director Brandon Butler, and Nick Garcia of Public Knowledge addressed these and other questions in Legal and Ethical Issues in AI, part of Library Futures’ Machine Learning and Artificial Intelligence for Information Professionals series.
AI: Copying, Mining, or Something Else?
The discussion began with Brandon (speaking only for himself and not as a representative of Re:Create or its members) asserting that AI is fundamentally a copying technology, noting that the way AI learns is “a pretty straight line” to the way researchers learn things. Libraries have long used copying–from photocopiers to book scanners–to preserve and share the knowledge they have acquired. From a copyright perspective, “really what’s at stake…is not access to expression, it’s access to knowledge”–and who controls that access. Copyright, in his view, is not the real problem facing libraries and AI.
In Meredith’s view, AI companies are not copiers but miners: an AI model is a machine, and the AI companies are sucking up information on the internet, putting it into the machine, and spewing “new stuff” for which the original creators were not compensated. We have a model from mining: mineral rights. If you’re going to dig up minerals on someone’s land, you pay the landowner. If you’re going to use someone’s work to train a machine, you need to pay them. She did feel that AI (with human oversight) was useful for “boring projects,” including facial recognition in historical photographs and generating metadata.
AI and the Public Good
Nick Garcia spoke to an issue all the panelists agreed on: AI needs to serve the public good, and it cannot be controlled by companies whose primary motive is profit. Nick argued in favor of systems for “public AI,” from public AI datasets to legal language for public interest copyright exemptions (such as the text and data mining exceptions in the EU) to universal service and digital equity programs. Libraries, he noted, are poised to develop and house many such endeavors, as preserving and sharing knowledge for the public good are central to the library mission.
Brandon pushed back on Meredith’s mining metaphor, noting that mining leaves a landscape depleted, whereas copying preserves the originals. In his view AI training is not “taking” expression but deriving facts, and what comes out of a model is a new expression, not a copy. He maintained that using works for training doesn’t deprive authors of their works and that copyright shouldn’t hinder access to useful data for training.
The AI Narrative–And How to Step Back from the Brink
Nick noted that it is important to try AI in order to learn about it and understand it. We should approach new technology with humility, he said, and not immediately dismiss differing viewpoints as we develop narratives about what the technology is and what it can do.
Using metaphors and human-scale images to understand AI is helpful, Meredith agreed. It allows us to focus on what AI can do today and the real harms it causes in the present rather than speculating on the future. AI is “complicated, beautiful math” that produces more accurate outputs with more data, even if we don’t understand the inner workings.
Finally, the panelists shared their views on how to step back from the brink of AI’s negative societal effects.
According to Nick, we need to challenge powerful entities with money and influence, but locking down AI with restrictions isn’t the answer, as large companies would be better equipped to bypass those restrictions. He suggested drawing inspiration from libraries’ radical principles of providing information and access.
Brandon, meanwhile, reiterated that copyright is a distraction from the core issues of AI use and control, advocating for AI training to be considered fair use. He warned against “walled gardens” created by licensing fees.
And finally Meredith introduced “technochauvinism” to the conversation, questioning whether AI is always the right tool. Librarians have a valuable perspective on these issues, she noted. “Everybody should be listening to librarians more.”
Links from the Panelists
- National Artificial Intelligence Research Resource Pilot
- The Public Interest Corpus
- AI, Authorship, and the Public Interest Grant Recipients
- Seeking Reliable Election Information? Don’t Trust AI
- Worried About AI Monopoly? Embrace Copyright’s Limits
Links from the Chat
Our webinar attendees also had a lot to share!
- AI as Normal Technology
- OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws – Computerworld
- AI Search Has A Citation Problem
- Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory
- A data center that doesn’t even exist can raise your electricity bill
- Thirsty for power and water, AI-crunching data centers sprout across the West
- Inside the Data Centers That Train A.I. and Drain the Electrical Grid
- The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage
- Kenyan workers with AI jobs thought they had tickets to the future until the grim reality set in
- George R.R. Martin Is Carving Up OpenAI In Court, So Far
- OpenAI loses bid to dismiss part of US authors’ copyright lawsuit
- The multi-faceted challenge of powering AI
- Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors
- Behind the Curtain: How an AI jobs apocalypse unfolds
- How the Federal Reserve Could Inflate or Pop an AI Bubble
- AI Competencies for Academic Library Workers
- What We Talk About When We Talk About AI
- AI in Warfare and Military Applications
- Universal Music Group Settles Major AI Lawsuit With Udio After Song Theft Claims
- Chokepoint Capitalism
