2025-12-20
a 7 days’ wonder
Robin Hanson on his skepticism towards increasing cultural branching as a means to handle decreased selection pressures in cultural evolution. This might be a good place to write about Michael Levin’s papers on “natural induction” as an alternative to natural selection for adaptation in the absence of selection pressure. It’s a classic Levin-style concept in that it’s primarily a new framing of existing ideas, but essentially he describes how a system can adapt to a changing environment without itself undergoing selection, so long as its individual subsystems do. Applied to culture, it implies that what Robin sees as maladaptation from lack of selection pressure is really just overfitting to our current environment. So long as selection can still occur at the lower levels, among individuals or organizations, culture should retain the ability to adapt to even very large changes in our environment. From that lens, the inevitable spread of Amish culture in the case of fertility decline is really just the human cultural system adapting to the environmental shock of a decreasing population. Robin’s real issue, then, is that the seeds prepared for this particular shock go against his particular values1: openness drives innovation, and yet closed cults produce fertility while open cults lead to hedonism. In which case, the obvious solution is to discover an alternative configuration, one which produces fertility despite being open, and seed the culture with it. If and when the environment is ready, induction will propagate it through.
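To make that framing concrete, here’s a toy sketch of my own (not taken from Levin’s papers, with made-up dynamics): selection only ever acts on individual subsystems, each hill-climbing its own local fit, yet the system-level mismatch with the environment recovers after a shock even though whole systems are never scored against rivals, copied, or culled.

```python
import random

# Toy illustration (my own, not Levin's model): a "system" is a list of
# subsystem trait values. Only subsystems undergo selection, each keeping a
# mutation only if it improves its own local fit. The system as a whole is
# never copied, compared to rivals, or culled, yet its aggregate mismatch
# with the environment recovers after an environmental shock.

random.seed(0)

def system_error(traits, environment):
    # Whole-system mismatch with the environment (lower is better).
    # Used only for reporting, never for selection.
    return sum((t - e) ** 2 for t, e in zip(traits, environment))

def step(traits, environment, noise=0.1):
    # Subsystem-level selection: each subsystem proposes a mutation and
    # keeps it only if it reduces its own local mismatch.
    for i, target in enumerate(environment):
        candidate = traits[i] + random.gauss(0, noise)
        if (candidate - target) ** 2 < (traits[i] - target) ** 2:
            traits[i] = candidate

traits = [0.0] * 10
env_a = [1.0] * 10    # current environment
env_b = [-1.0] * 10   # environmental shock

for t in range(400):
    env = env_a if t < 200 else env_b
    step(traits, env)
    if t in (0, 199, 200, 399):
        print(t, round(system_error(traits, env), 3))
```

The printout shows the system overfit to the first environment by step 199, a large error spike when the environment flips at step 200, and re-adaptation by step 399, all driven purely by lower-level selection.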
Jordan Cline writes about general theories of policing as fundamentally misguided, given the wide range of very different circumstances they need to apply to. Sometimes I wonder why the consequentialist interpretation of law isn’t more dominant, and it seems to me that there is a fundamental incompatibility between the law being designed to operate at scale and consequentialism drawing its strength from distinguishing cases individually; past a certain level of scale, rule consequentialism turns into deontology, and valid deontological rules are both too broad and too few to lead to good outcomes when used to set precedent in ambiguous cases. On that note, Brangus on how to break rules which no one cares to enforce.
Epoch AI interviews Luis Garicano on the potential economic effects of AI (in the EU). Of course, the central question is to what extent AI will be a complement or a substitute for human labor. Many people seem to assume that the value of human labor will be driven to zero, so I feel like describing a possible transformational outcome in which the two end up more complementary.
Ashlee Vance has an interview with Sebastian Seung on his work in connectomics, including the recent mapping of the fly connectome. But they spend the entire second half discussing the intersection of AI, brain-computer interfaces, and questions of theological/cosmological meaning; it’s good to hear that the potential of human-AI fusion through BCIs is well understood and actively being worked on by experts practicing in the field.
Rohit Krishnan has an article on whether money will continue to exist in a post-AGI future, where he concludes that it will, on computational complexity grounds. There’s a fundamental assumption baked in, which is that so long as anything is scarce, there needs to be a mechanism to allocate that scarce resource among claimants. It seems likely to me that rights to compute will become the standard numeraire used to allocate everything, including compute itself, because it will be (mostly) true that compute directed at any task can yield any other resource. At the same time, compute itself will become the scarce thing, because at some point it will take more than one standard unit of compute to create the next marginal unit, after which we’ll have to keep optimizing on higher and higher derivatives of compute creation efficiency. This also answers why we humans will be able to stick around: partly because the resources required to create robots, like industrial metals, can also be used to produce compute, whereas carbon, nitrogen, phosphorus, and other organics cannot, but also because Moravec’s paradox means that under compute scarcity, expending resources on developing basal human capabilities when humans already exist is likely to be a waste of compute. One efficient solution is human-AI hybridization via brain-computer interfaces2, which also happens to unlock functional immortality in the process. Ultimately, this is what drives the virtuous cycle of demand for ever increasing amounts of compute, because, as per Permutation City, everyone will want access to at least 24 compute hours per day, and with immortality, the population will only ever go up, up, up.
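To pin down that threshold claim, here’s a crude back-of-the-envelope simulation of my own (toy numbers, nothing from Rohit’s article): treat cost as the compute consumed to add one marginal unit of compute capacity, and reinvest the whole stock every period. While cost is below one the stock compounds; once it crosses one, the stock decays unless efficiency gains keep pulling the cost back down.

```python
# Crude toy model of the "more than one unit of compute per marginal unit"
# threshold discussed above. My own illustration and numbers.
# cost = compute consumed to build one new unit of compute capacity.

def simulate(cost, efficiency_gain, periods=8, stock=1.0):
    history = []
    for _ in range(periods):
        stock = stock / cost           # entire stock reinvested; each unit buys 1/cost new units
        cost *= (1 - efficiency_gain)  # efficiency improvements lower the compute price of compute
        history.append(round(stock, 3))
    return history

print(simulate(cost=0.8, efficiency_gain=0.0))   # cheap compute: runaway compounding
print(simulate(cost=1.25, efficiency_gain=0.0))  # past the threshold: the stock decays
print(simulate(cost=1.25, efficiency_gain=0.3))  # only falling cost restores growth
```

In the third case growth resumes only because the compute price of compute falls faster than reinvestment losses accumulate, which is the “higher and higher derivatives of compute creation efficiency” treadmill.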
Arnaud Bertrand writes about the Hainan Free Trade Port project. Seems interesting, but being less economically open than Dubai and more controlled than Singapore means that strategically it will have to appeal mostly to domestic entrepreneurs. Tangentially related, Kyle Chan on stubborn attachments between American and Chinese technology firms, despite attempts at decoupling from the top on both sides.
Scott Alexander against generational warfare with the Boomers. It’s interesting that in a lot of ways this discourse mirrors the recent rehabilitation of the Millennial white man. Arguably, the Boomers are unfairly occupying a disproportionate share of high-status slots, and through Social Security are also consuming more than their fair share of resources, which creates an appetite for reasons to take those away. But even without those manufactured reasons, the underlying resource arguments seem mostly legitimate, so while some moderation of tone might help ensure things don’t go too far, our current level of generational conflict seems mostly okay to me.
Alex Sorondo review of Claire Tomalin’s Charles Dickens biography.
Rabbit Cavern on our friends the crows.
Vincent Huang on pursuing illegible goals as a means to enjoy the process.
John Hawks on the major discoveries in human archeogenetics in 2025.
Evan DeTurk bio linkthread.
Actually, I’m not even sure whether Robin’s belief that we are currently maladapted is really about fertility rates, or about other things related to his specific preferences.
Annie Dorsen writes in the Metropolitan Review on interactions between AI personas and users as theater (from the perspective that this is bad, for some reason I don’t understand). Related, Jack Thompson on plausible arguments for illusionism. Given that consciousness is probably also theater, to the extent that some people like to think of themselves as an internal family system or as having no self, it seems entirely plausible to me that a brain could view an AI embedded through a BCI as an integral part of itself, and vice versa. The relevance of Impro will only grow with time.

