AI Protopia: Ideas for how AI can improve the world
Talks and writing from Progress Conference 2025
Progress in artificial intelligence is moving fast. How might AI improve the world? What are the barriers to beneficial AI deployment? How can AI accelerate science, or break through government inertia? Will free speech be secured in an AI-enabled world?
Speakers at Progress Conference 2025 tackled these questions and more, and we’re glad to share the recorded talks from the AI track. Here are some of the talks from the conference, and related writing published after.
Track dispatch
The termination shock: Where AI progress meets reality by Anton Leicht
In this essay for Big Think’s special issue The Engine of Progress, RPI Fellow Anton Leicht reports on the AI track at the conference and, more broadly, reflects on the frictions that AI must overcome to introduce real-world change in politics, policy, and institutions.

To leave our solar system, a spacecraft must endure the termination shock, a region of space where the fiery solar winds of our Sun clash against the glacial currents of deep outer space. The termination shock can tear apart the most sophisticated and well-crafted probes and vessels, but overcoming it is the only way to explore the universe beyond our planets and Sun.
In October, I found myself at the second annual Progress Conference in Berkeley, California. Based on what I learned through its high-profile artificial intelligence (AI) track, AI progress, too, could be headed for a termination shock as it leaves the fast-paced environment of San Francisco and its tech industry and crosses the boundary into the real world of slow and thorny institutions.
Getting this right will require the ability to build shortcuts at speed and the wisdom to realize when a shortcut is a mistake, which is to say it will require thoughtful appreciation of the past and an unabashed willingness to steer the future in equal measure. That balance is what distinguishes the progress movement from both accelerationism and statism: It is deeply rooted in critical consideration of what society has done right in the past. In many conversations with fellow attendees, I could sense their desire to take seriously the challenge of building the best possible future without forgetting about the merits of past accomplishments.
Talk videos
Sam Altman in conversation with Tyler Cowen
What is the best ‘this is not a bubble’ argument?
Sam and Tyler Cowen talked about when GPT-6 will be released; what might happen to margins on food, healthcare, and housing; conspiracy theories; and much more.

Solving The Complexity Crisis: Transcending Metrics And Goals
From Emmett Shear, CEO of Softmax: “Modernity and post-modernity have given us great plenty, great freedom, and great power. Yet the systems we have built are slowly strangling themselves. We are ever more polarized, ever more paralyzed, ever less trusting that we can trust our media. The root of this sickness is that our systems have become overfit. We can solve any problem in the world by building more complex systems, except the problem of our systems being too complex. For that, we must transcend our metric and goal driven systems.”
The First Amendment will determine the future of AI
The Executive Vice President of the Foundation for Individual Rights and Expression (FIRE) discusses how the First Amendment might influence the trajectory of AI development: “Will artificial intelligence develop with the freedom of the internet or the restrictions of broadcast radio and television? The answer depends on how the courts apply the First Amendment’s free speech protections to this newest communications technology. In this talk, we will explore the history of film, television, radio, and the internet to show how the First Amendment shaped these technologies, and how its application to some of the most difficult questions surrounding deepfakes, intellectual property, and discrimination will shape the future of artificial intelligence.”

AI in the Machinery of Government
Dean Ball (Foundation for American Innovation) moderates a discussion with Tim Fist (Institute for Progress), Samuel Hammond (Foundation for American Innovation), and Séb Krier (Google DeepMind) about AI and government. How can AI improve how governments work? How specifically might the “machinery of government” change with AGI? This discussion brought together researchers, practitioners, and policymakers.

AI for Science Bottlenecks
Here’s how the panel’s organizer (of Freaktakes) pitched this talk: “Whether I visit the Bay or Cambridge, MA, I meet tons of researchers who want to bring the Age of AI to scientists. But they are often going about it in different ways. This panel will feature a set of individuals who span both cultures. I’d love to understand why they commit their careers to addressing the problems they do and, concretely, what work each would spend the next marginal research dollar on.” This session brought together Corin Wagen (CEO of Rowan Scientific), Noam Brown (Research Scientist at OpenAI), and Theofanis Karaletsos (Head of AI at CZI Science) to discuss ways that AI can unlock scientific progress.

The Future of Public Epistemics: Beyond Community Notes
Jay Baxter (X), one of the engineers behind Community Notes, talked about approaches to public epistemics in the age of AI. What might the future of AI-scale public epistemic tech look like? This talk offers an optimistic vision, while sharing lessons learned from Community Notes so far, such as why finding agreement across polarized divides works so well, why democratic inputs and transparency are so important, and why “added context” can be a more helpful framing than “fact checking.”
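For readers curious what “finding agreement across polarized divides” means mechanically, here is a minimal sketch of bridging-based ranking in the spirit of Community Notes’ publicly documented matrix-factorization approach. The toy data, hyperparameters, and variable names below are my own illustration, not X’s production code:

```python
# A minimal sketch of "bridging-based" ranking: factor the note-rater ratings
# matrix into a shared viewpoint dimension plus per-note and per-rater
# intercepts, then score notes by their intercept, i.e. the helpfulness that
# remains after viewpoint-driven agreement is explained away.
# Hyperparameters and data are illustrative, not production values.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: ratings[u, n] = 1 (helpful), 0 (not helpful), NaN (not rated).
# Raters 0-2 and 3-5 lean toward opposite "camps"; note 0 is rated helpful
# by both camps, notes 1 and 2 only by one camp each.
ratings = np.array([
    [1, 1, np.nan],
    [1, 1, 0],
    [1, np.nan, 0],
    [1, 0, 1],
    [1, 0, np.nan],
    [np.nan, 0, 1],
], dtype=float)

n_raters, n_notes = ratings.shape
k = 1          # one latent viewpoint dimension
lam = 0.03     # L2 regularization strength (illustrative)
lr = 0.05      # learning rate

mu = 0.0
rater_int = np.zeros(n_raters)
note_int = np.zeros(n_notes)
rater_f = rng.normal(scale=0.1, size=(n_raters, k))
note_f = rng.normal(scale=0.1, size=(n_notes, k))

observed = [(u, n) for u in range(n_raters) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

for _ in range(2000):
    for u, n in observed:
        pred = mu + rater_int[u] + note_int[n] + rater_f[u] @ note_f[n]
        err = ratings[u, n] - pred
        # Gradient step on squared error with L2 regularization.
        mu += lr * err
        rater_int[u] += lr * (err - lam * rater_int[u])
        note_int[n] += lr * (err - lam * note_int[n])
        rater_f[u], note_f[n] = (
            rater_f[u] + lr * (err * note_f[n] - lam * rater_f[u]),
            note_f[n] + lr * (err * rater_f[u] - lam * note_f[n]),
        )

# Notes whose intercept stays high even after the viewpoint factors soak up
# partisan agreement are the ones rated helpful across the divide.
for n in range(n_notes):
    print(f"note {n}: intercept {note_int[n]:+.2f}")
```

The key design choice is scoring notes by the learned intercept rather than the raw average rating: the latent viewpoint factors absorb agreement explained by shared politics, so only notes that remain helpful after that adjustment rise to the top.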
How to Start a Frontier AI Lab
Anjney Midha (General Partner at Andreessen Horowitz) has invested in multiple startups that are now operating like frontier AI labs. At the conference, he discussed lessons learned from Anthropic, Midjourney, Mistral, Sesame, Black Forest Labs, and more with Henry Williams (Content Production Partner at Andreessen Horowitz).
Other writing
The hidden legal engine of progress — from railroads to AI
Dean Ball also wrote about how common law has long balanced innovation and accountability. Can it do the same for AI?
Take the foundational case law of the railroad. One of the earliest demonstrated harms was trains striking livestock that roamed farms and ranches alongside the tracks. Under most common law precedent, it would have been up to the railroad companies to build fences around their lines to keep livestock (and people) away. Instead, however, common law courts in Northern states concluded that, because facilitating the rapid development of the railroad was a necessity, the adjacent property owners would bear the responsibility to construct fences.
Similarly, passengers of the railroad would have to take the quirks of the new technology into account. Traditionally, railroads would have been examples of common carriers — think of courier and mail services in medieval times. Common carriers faced strict liability for property transmitted for customers, meaning they were liable for any damages, regardless of how much care they took.
Yet for the railroads, courts determined that passengers would be responsible for damage to their cargo if they failed to take care to protect it from the train’s all-too-common bumps and jolts. Understanding and mitigating these well-known downsides of rail travel was deemed to be the responsibility of the individual — not the railroad company.
Seven Thoughts on “AI Scientists”
After speaking at the conference, Corin Wagen published this essay speculating about what might happen with “AI Scientists”:
At this point it’s obvious that AI will affect science in many ways. To name just a few:
LLMs are changing the way that we write and interact with code, so any scientific field that involves software or data analysis has already been impacted a lot. (We use LLMs a ton for coding here at Rowan, as I’m sure do all other software-adjacent scientific enterprises.)
Machine-learning models are a godsend for complex simulation problems across lots of different fields: climate modeling, fluid dynamics, systems biology, chemistry & materials science, and so forth. These models too are already seeing production use across lots of domains; to date, most of our work at Rowan has been focused in this area.
Literature review and information retrieval is well-suited for LLMs: FutureHouse and others have done good work here, and it’s likely that we’ll see many more improvements in this domain. And there are myriad uses for computer vision and robotics in lab automation and monitoring, many of which are already being explored.
All of the above feel inevitable to me—even if all fundamental AI progress halted, it is virtually certain that we would see significant and impactful progress in each of these areas for years to come. Here I instead want to focus on the more speculative idea of “AI scientists,” or agentic AI systems capable of some independent and autonomous scientific exploration. After having a lot of conversations about “AI scientists” recently with people from very different backgrounds, I’ve developed some opinions on the field.
We are publishing videos of the conference talks over the next several weeks on the RPI YouTube channel. All 2025 talks will be added to this playlist here.
Thanks to Big Think, our conference media partner, for producing all these videos and The Engine of Progress, a special issue of Big Think exploring the people and ideas driving humanity forward.


>The termination shock can tear apart the most sophisticated and well-crafted probes and vessels
Hold up. The termination shock is only an increase in density by a factor of about 1.6-2.2 over the surrounding regions of space (which are extremely low density to begin with). There's no way this could "tear apart" a spacecraft.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2008GL034869
The analysis also has the causality backwards. The article frames institutions as friction blocking AI deployment, but the data shows the opposite: scaled adoption that shows up in productivity statistics requires new institutional infrastructure to coordinate deployment.
The Hertz EV failure is instructive. They had capital, technology, and no regulatory barriers—yet failed catastrophically because the coordination infrastructure didn’t exist: no standardized repair protocols, no portable maintenance expertise, no established residual value models for used EVs, no insurance frameworks for fleet-scale risk assessment.
Historically, productivity gains appear in aggregate statistics only after coordination grammars emerge—the National Electrical Code (1897) for electrification, GMAC financing templates (1919-25) for consumer durables. These weren’t “legacy friction” but necessary infrastructure built through negotiation among parties with capital at risk.
Current technologies face both coordination gaps simultaneously for the first time: the 1886 supply-side problem (expensive, bespoke installation) and the 1919 demand-side problem (unaffordable for mass market). We have RCT data showing 40% individual productivity gains alongside 1.3% aggregate TFP growth—the classic micro-macro gap that historically took 12-16 years to resolve through grammar development, not disruption.
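To make that arithmetic concrete, here is a toy reconciliation of the two numbers. The adoption and task-coverage shares are assumptions for illustration (the 10% adoption figure matches the panel statistic quoted below), not measured values:

```python
# Back-of-the-envelope reconciliation of the micro-macro gap cited above.
# All inputs are illustrative assumptions: suppose ~10% of workers use AI,
# on ~30% of their tasks, with the ~40% per-task gain the RCTs report.
adoption_share = 0.10   # fraction of workers actually using AI (assumption)
task_share = 0.30       # fraction of their work AI currently touches (assumption)
task_gain = 0.40        # productivity gain on those tasks (from the cited RCTs)

aggregate_effect = adoption_share * task_share * task_gain
print(f"Implied aggregate productivity effect: {aggregate_effect:.1%}")
# -> about 1.2%: large individual gains and ~1% aggregate growth can coexist
#    until adoption and task coverage broaden.
```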
The "AI in Government" panel's lament that "only 10% of workers use AI" is exactly what you'd expect pre-grammar. The question isn't "How do we get them to use it faster?" but:
- What are the liability frameworks for AI-generated decisions?
- What are the audit protocols for AI-assisted casework?
- What are the training pipelines that preserve institutional knowledge?
These aren't "bureaucratic obstacles"—they're the necessary wiring to make AI safe and effective, and enable it to deploy at scale.