As AI development races forward, a fierce debate has emerged over open-source AI models. So what does it mean to open-source AI? Are we opening Pandora's box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?
Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages - and more are in the works.
As the director of Palisade Research, Jeffrey specializes in assessing the offensive capabilities of current AI systems to understand the risks posed by AI misuse and loss of control scenarios. Jeffrey previously worked at Anthropic through his security consulting firm Gordian. Jeffrey has been instrumental in developing secure infrastructure for various tech companies, philanthropic organizations, and existential-risk projects. His research has covered cybersecurity and AI intersections, emerging biotechnological threats, and nuclear warfare risks. Jeffrey has advised state and federal government offices on AI and emerging technological risks. Outside of work, Jeffrey is passionate about rollerblading, skiing, snowboarding, and exploring remote, serene locations.
Dr. Elizabeth Seger is a researcher at the Centre for the Governance of AI (GovAI) where she investigates foundation model release strategy and policy. Her current projects focus on open-source risks and benefits and on methods of AI democratization. She also leads GovAI's research stream on epistemic security, investigating the impacts of emerging technologies on information ecosystems and democratic processes.
Elizabeth holds a PhD in Philosophy of Science and Technology from the University of Cambridge, where she remains a research affiliate with the AI: Futures and Responsibility (AI:FAR) project, and a BSc in molecular biology and bioethics from UCLA.
Numerous parties are calling for “the democratization of AI,” but the phrase is used to refer to various goals, the pursuit of which can sometimes conflict. This paper, co-authored by Elizabeth Seger, identifies four kinds of “AI democratization” that are commonly discussed.
This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of the risks and benefits of open-sourcing AI.
This paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning of Llama 2-Chat 13B for less than $200 while retaining the model’s general capabilities.
Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI
Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity
Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever