Links 4/30
Chaos edition
Richard Danzig’s 2018 CNAS report, written before LLMs and aging scarily well: “superiority is not synonymous with security,” the human-in-the-loop reassurance is weak and steadily eroding, and “early is imperative, late is too late” for establishing control over complex, opaque autonomous systems. Just because we can doesn’t mean we should. Technology Roulette — Richard Danzig
Kevin Kelly bets against the AGI-resolves-by-2029 vibe: AI keeps advancing in ways that expand our ignorance rather than reduce it, while a US-China duopoly, post-globalization social chaos, and an AI-induced collapse in media trust compound into a 10–15 year stretch of uncertainty about uncertainty itself. This seems plausible to me; the world feels more uncertain than it ever has in my life. Our Uncertain Uncertainties — Kevin Kelly
Or is that perception of everything becoming more uncertain a kind of illusion? Adam Mastroianni, in this article from 2024 (ah, a simpler time), argues that apocalyptic beliefs are so common across cultures because they feel reasonable, not because they feel good, extending his and Dan Gilbert’s “illusion of moral decline” work to explain why people persistently expect the world to end soon. Re-reading the piece, I’m struck that they never actually show that their proposed mechanisms (biased attention and biased memory) cause the illusion. The end is nigh and here’s why — Adam Mastroianni