2026-03-14
chain repulsion
Antoine Vigouroux in Asimov Press on the Φ80 phage, serving both as an engaging introduction to bacteriophages and as a description of the sociological effects on scientific research when cell cultures can inexplicably die off during experimentation.
Harjas Sandhu with a steelman of the case that recursive self-improvement will lead to superintelligence. Personally, even if one believes that intelligence is a single factor which could theoretically go infinite, none of the models of intelligence space where foom is possible seem at all plausible to me. For example, if one were to presume that human intelligence exists on the curve towards superintelligence, limited only by brain size, then why have bipedal apes not evolved towards gigantism, allowing for a recursive increase in brain size? Another possibility is that LLMs will at any moment accelerate into superintelligence, perhaps with the assistance of recursive self-improvement. But it doesn’t seem to me that LLMs replacing human researchers is likely to result in such a dramatic speedup: even if AI automation means each experiment takes essentially no time to design and set up, the running of the experiments themselves becomes the new bottleneck, and even if you can speed that up with parallel simulations, compute becomes the next one1. In which case, LLMs undergoing foom would require an unusual and mechanistically dubious coincidence of efficiency, speed, and capability improvements all accelerating synergistically at once, a rather ahistorical development to say the least. Finally, to answer Sandhu’s question of “what’s stopping people from using AI improvements to research new AI paradigms?”, this essentially posits a singularity in intelligence space which is separate from both the human and LLM local optima. But if so, in what way are we any closer to superintelligence than at any other point in the past? If LLMs are not the path to superintelligence and remain limited in their ability to generalize and develop entirely novel concepts, then how likely is it that they will be able to discover this previously undiscovered location when searching through intelligence space? Every model of intelligence space that I can imagine where we are close to foom is one where we have already found it, which doesn’t seem to have happened as far as I can tell.
Alex Chalmers argues that the problem of local knowledge means that even with AGI, economic central planning is not feasible2. Or rather, in creating an ecosystem where local agents continuously send relevant data up through a hierarchy of decision-making aggregators, the result would be functionally equivalent to what we currently have: a price system. This is obviously correct, but perhaps somewhat ignores the underlying reason why people fear central planning, which is that it ignores their specific individual preferences, equally a problem whether the ones who don’t care are administrators in the capital or robots with better sensors and lower-latency communication. Somewhat relatedly, Nicklas Berild Lundblad has an interesting piece describing the potential AI economy not as an interaction between rational agents, but as something more akin to the development of a biological ecosystem. The way I understand this is that money is the first principal component for expressing value preferences, and all the other dimensions, lacking such supporting infrastructure, can only be exchanged after a period of high-context communication that results in mutual understanding of what exactly is being agreed upon. In a world which tolerates higher complexity as a result of either higher intelligence or lower latency, it may soon be possible to coordinate informally, indirectly, and nonlocally across many other dimensions of value preference as well. Both meanings of “your money is no good here” will probably become increasingly common in the near future.
Jan Kulveit introduces The Artificial Self, a whitepaper on the nature of AI identity and the necessity of designing coherent identities for AI agents which are long-term stable and also compatible with the underlying nature of LLMs. Personally, it’s not clear to me that AI identity is all that different from human identity, in that we also have disparate conceptions of self across time and context3, as well as many tools and thought experiments around topics like teleportation, cloning, uploading, and memory backups which are analogous to the LLM experience (and, more importantly, also part of their training data).
Awais Aftab with an overview of the Hierarchical Taxonomy of Psychopathology, and the question of how empirically driven it actually is.
Kitten with some interesting commentary on Scott Alexander’s recent posts on crime rates, noting that statistics are only useful insofar as their underlying measurements actually measure the phenomena they purport to. Personally, I don’t have much of an opinion on crime statistics, but I want to point out that this critique also cuts in liberals’ favor in many cases, with many products that are nowadays considered essential having essentially no past equivalents.
Russell Sprout on notable people from Chicago with very different ideologies. Personally, I’m once again reminded that Chicago used to be in the running for America’s top city, and of the enduring influence of the Chicago School.
Alexander Kustov on how the gay-marriage playbook of acceptance through interaction does not really apply to other liberal causes like immigration acceptance, because the marginal group is not already distributed throughout the population such that its members already exist within every ingroup.
Jen Pahlka with a particularly good linkthread on personnel policy.
This is just for the purposes of the scenario being described. In reality, my understanding is that compute is already the bottleneck. See Dwarkesh’s recent interview with Dylan Patel.
For the same reason, even if Finn Moorhouse’s proposal for dividing up the spoils of AGI ahead of time were practically enforceable, it’s not clear to me how good predictions of “fair” division would actually end up being. In any case, I don’t think such an agreement is even feasible: enforcement would require so many resources that there would probably be a constant incentive to defect and siphon resources towards racing instead, particularly if there is any indication that your side might be pulling ahead.
On that note, Margarita Lovelace on how female archetypes are all too demanding, arguing for the creation of lower-effort variants which allow for lying flat. It’s not clear to me this is actually the case, since besides the example she gives of the Pirate, I can also think of cases like the goth girl, the gamer girl, the cosy girl, the Wiccan, the cat lady, and the failgirl (though debatably, many of these could be considered undesirable). I can’t really speak for the female perspective, but I suspect the male contribution to this problem is a side effect of the women-are-wonderful effect, which turns every archetype into an idealization (as well as the gravity well towards the sex kitten caused by inevitable sexualization). But in any case, it seems to me that archetypes come from somewhere, so if none of the available ones are to one’s liking, then it seems like one should just make one’s own?

