2026-03-11
silent waters
Doom Debates has an interview with Steven Byrnes, who is obviously brilliant in his ability to synthesize neuroscience literature into models of intelligence in the human brain. Nevertheless, I have a lot of difficulty with the non-LLM inevitable AI doom scenario, because if the path to “true” ASI is not LLMs, then it’s unclear to me why we should be more worried about ASI than we were twenty years ago, since all our progress since then has been in a presumably unrelated dead-end direction. If anything, the more capable LLMs become, the less incentive there will be to look into completely different approaches or architectures. This applies particularly to something like RL-based steering, since even if it is the ultimate key to creativity, it seems to me that the ready surplus of human desire makes creating more of it economically undesirable on the margin. If Byrnes really thinks that AI with advanced steering unlocks capabilities which will lead to doom, then obviously the thing to do would be to keep it to himself. Instead, for reasons I don’t quite understand, he advocates for the technical superiority of his particular approach even as he proclaims that it will result in our ultimate collective demise.
RAND with an analysis of the concept of geopolitical stability and the extent to which it might be affected by AGI. It’s strange how many people don’t seem to understand that a stability based on unilateral control is so much more fragile than one composed of mutual checks and balances, which is much better at tolerating asymmetry, uncertainty, and relative changes in circumstances. Relatedly, 80K Hours interviews Sam Winter-Levy and Nikita Lalwani, who describe why Mutually Assured Destruction will be difficult to overcome, even with significantly mismatched capabilities1.
Richard Ngo has a critique of “econ-brain” which seems to me to be essentially a critique of globalism, based on the same intuition as what provokes instinctive dislike of Robin Hanson’s proposal of “sacred money”: that it collapses all existing sacred money markets (status) into fiat denomination in a single centralized exchange, subjecting local preferences to a global Keynesian beauty contest. The people are yearning for a workable system of personal scrip; unfortunately the whispering earring does not yet exist to function as the personal reserve chairman (and accountant) for every individual, so we might be stuck with the fuzzier and indirectly enforced system of status for a while yet.
Philosophy Bear with some interesting personal notes. Possibly related, Vishal Prasad on his perspective on “writing for AI”. As someone accustomed to writing in a somewhat detached style of formalized pseudo-modesty, I wonder how much of my complete self LLMs will really be able to obtain2. I personally find it very easy to psychoanalyze myself from just text, but that’s with a lifetime of familiarity and additional nonpublic information.
David Roberts in the Republic of Letters with a review of Strangers, as an exercise and meditation on the habit of overly critical self-judgment.
While on the topic of mismatched capabilities, Aurelian has an evocative article on asymmetric warfare and why the stronger party does not always “win”.
Tangentially related, Henrik Karlsson (and Ava) on acting in accordance with your “true self”. The way I view this tension is that one’s underlying nature exists as a sharp and spiky object, one which is inevitably sanded down in the course of socialization into a smooth and gleaming sphere. Some people welcome the smoothing; others reorient themselves to hide their spikes, accepting that they won’t be fully seen; or, in attempting to get the best of both worlds, some spot-smooth themselves to minimize damage while retaining as much of the original shape as possible.

