2026-01-23
tabaco y oro (pasodoble)
Anton Leicht writes about how AI safety, and policy more generally, is ignoring “middle powers”. It seems to me the underlying reason for this is that even those who are able to view things from a non-US perspective see themselves as internationalist humanists rather than as representatives of their own particular countries. Unfortunately, the world that policy runs through, both in terms of governing institutions and the beliefs of the people involved, still operates primarily under nationalist systems. Somewhat related, Tom Davidson and Will MacAskill discuss the robot takeover and other implications of longer timelines. Assuming takeoff is slow, then in order to compete with China, US labs will probably seek to expand globally in search of revenue. This will presumably entail provisioning data centers throughout the world, a development which could, if properly managed, increase the geopolitical importance of middle powers beyond their current role as merely potential victims.
Cognitive Revolution Nathan Labenz Q&A, which starts with some interesting thoughts on the current state of fine-tuning. I previously wrote about how specialization via fine-tuning seemed like a possible way to unlock an initial form of continual learning, and that perhaps this was the Thinking Machines game plan. But now fine-tuning specialization seems to be regarded as producing unpredictable and potentially misaligned behavior, which perhaps also explains what’s been happening with them recently1. Anyway, if specialization is falling out of favor, then combined with jagged intelligence this suggests that, at least in the short run, there will be plenty of room for human specialists in fields sufficiently niche that they are not worth fully training into general models. I’m not saying this is one of those, but it is amusing that the recent open-source Anthropic take-home exam is basically perfectly suited for specialists in gas optimization for smart contracts.
Ruxandra Teslo2 and Matthew Esche write in the IFP on breaking the local monopolies of institutionally housed Institutional Review Boards by guaranteeing the right to request service through independent IRBs. Particularly interesting is the appendix, which addresses concerns like potential ratings agency-style race-to-the-bottom dynamics.
The Hope Axis interviews Jasmine Sun on mediating between independent media and legacy journalism. Regarding distrust of legacy media, I feel like beyond the actual historical anti-tech bias there’s a sense of unfairness: when an engineering guy starts investigating a field they know nothing about, working from first principles, they are justifiably treated as a crackpot; journalists similarly enter fields in which they have no background and attempt to create facsimiles of understanding by imposing their own, often ill-suited, models and frameworks. Yet journalists’ descriptions are generally taken seriously by most of society, often even over the objections of those being covered, who have superior domain knowledge. In both cases, there should probably be more deference to the expertise of the existing community, at least until one can demonstrate one’s own understanding of the field by producing, if not provably better, at least minimally qualified outcomes.
Tommy Blanchard has an interesting review of Elder Race relating Clark’s Third Law and functional understanding to the concept of Umwelt. Very Michael Levin-coded.
Shijie Wang in ChinaTalk has an interesting description of how censorship is unintentionally channeling political discontentment towards extremes of officially condoned ideologies, including nostalgia towards a romanticized Cultural Revolution.
Alexander Kustov against justifying immigration with patronizing or misleading arguments: immigration done properly produces good outcomes on net, which naturally justify it and make it easy to defend.
Noah Smith with a good overview of the standard arguments that fertility decline is a real issue.
Henry Oliver comments on literacy discourse.
TPOT list of Substacks.
1. Also possible I was just totally wrong.
2. She also appears on the Complex Systems podcast to discuss her proposal to create a common repository of technical documents from FDA filings of failed pharmaceutical trials.

