Discussion about this post

Metacelsus:

>The termination shock can tear apart the most sophisticated and well-crafted probes and vessels

Hold up. The termination shock is only a density jump by a factor of about 1.6-2.2 over the surrounding regions of space, which are extremely low density to begin with. There's no way this could "tear apart" a spacecraft.

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2008GL034869
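For scale, here's a rough back-of-envelope in Python. The inputs are commonly cited round numbers, not values from the linked paper: solar wind density of ~5 protons/cm³ at 1 AU falling off as 1/r², the shock at ~90 AU, an upstream speed of ~400 km/s, and a compression factor of ~2.

```python
# Back-of-envelope ram pressure on a spacecraft at the termination shock.
# All inputs are rough, commonly cited round numbers; treat the output
# as an order-of-magnitude estimate only.

M_PROTON = 1.67e-27   # kg
N_1AU = 5e6           # solar wind density at 1 AU, protons/m^3 (~5 per cm^3)
R_SHOCK_AU = 90       # approximate heliocentric distance of the shock, AU
V_WIND = 4.0e5        # upstream solar wind speed, m/s (~400 km/s)
COMPRESSION = 2.0     # density jump across the shock (the 1.6-2.2 range above)

# Density falls off roughly as 1/r^2 out to the shock.
n_upstream = N_1AU / R_SHOCK_AU**2        # protons/m^3
n_downstream = COMPRESSION * n_upstream

# Ram (dynamic) pressure rho * v^2, using the upstream speed as a ceiling.
rho = n_downstream * M_PROTON             # kg/m^3
p_ram = rho * V_WIND**2                   # Pa

print(f"density at shock: {n_downstream:.1e} protons/m^3")
print(f"ram pressure:     {p_ram:.1e} Pa (vs ~1e5 Pa at sea level)")
```

That works out to roughly 3e-13 Pa, some eighteen orders of magnitude below atmospheric pressure. Nothing at that pressure is tearing apart a spacecraft.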

PEG:

The analysis also has the causality backwards. The article frames institutions as friction blocking AI deployment, but the data show the opposite: adoption at the scale that registers in productivity statistics requires new institutional infrastructure to coordinate deployment.

The Hertz EV failure is instructive. They had capital, technology, and no regulatory barriers—yet failed catastrophically because the coordination infrastructure didn’t exist: no standardized repair protocols, no portable maintenance expertise, no established residual value models for used EVs, no insurance frameworks for fleet-scale risk assessment.

Historically, productivity gains appear in aggregate statistics only after coordination grammars emerge—the National Electrical Code (1897) for electrification, GMAC financing templates (1919-25) for consumer durables. These weren’t “legacy friction” but necessary infrastructure built through negotiation among parties with capital at risk.

Current technologies face both coordination gaps simultaneously for the first time: the 1886 supply-side problem (expensive, bespoke installation) and the 1919 demand-side problem (unaffordable for mass market). We have RCT data showing 40% individual productivity gains alongside 1.3% aggregate TFP growth—the classic micro-macro gap that historically took 12-16 years to resolve through grammar development, not disruption.
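A toy calculation shows how those two figures coexist without contradiction. The adoption and task-coverage parameters below are illustrative assumptions, not measured values:

```python
# Toy micro-macro reconciliation: large individual gains coexist with
# small aggregate TFP growth when adoption and task coverage are low.
# adoption_share and task_coverage are assumed for illustration.

individual_gain = 0.40   # 40% gain on AI-assisted work (the RCT figure)
adoption_share  = 0.10   # ~10% of workers using AI (the panel's figure)
task_coverage   = 0.30   # assumed fraction of an adopter's work AI touches

aggregate_gain = individual_gain * adoption_share * task_coverage
print(f"implied aggregate gain: {aggregate_gain:.1%}")   # -> 1.2%
```

That lands right around the observed ~1.3% with none of the micro results being wrong. Closing the gap means raising adoption and task coverage, which is exactly the coordination problem the grammars solve.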

The "AI in Government" panel's lament that "only 10% of workers use AI" is exactly what you'd expect pre-grammar. The question isn't "How do we get them to use it faster?" but:

- What are the liability frameworks for AI-generated decisions?

- What are the audit protocols for AI-assisted casework?

- What are the training pipelines that preserve institutional knowledge?

These aren't "bureaucratic obstacles"; they're the necessary wiring that makes AI safe, effective, and deployable at scale.
