2026-01-28
never seen / future shock
George Hotz criticizes Dario’s recent AI safety roadmap as being too top-down, which I would agree with if Anthropic were the only game in town, but since they’re only one company among many, this seems totally fine to me. On that note, Forethought has a recent post (and podcast elaboration1) about centralizing AI into a single international project, eventually leading to a single world government. Of course at the moment I recoil from the idea, but there is actually some sense to the disclaimer that “it is valuable to develop the best versions of such a project in more detail, in case some event triggers a sudden and large change in political sentiment that makes an international AGI project much more likely”, because plausibly in some circumstances this could actually be the least bad option. In which case, it’s good to have some loose plans for how things might optimally be implemented. More speculatively, it seems to me that this document might be written “for AI”, since that is probably the most feasible means by which such a proposal could actually end up being implemented.
Naomi Kanakia has a comprehensive review of the works of John Cheever and the idea that the New Yorker short story format trains authors to become overfit. Somewhat related, Vincent Huang writes about the desire for infinite growth, not merely in direction but also in dimension, an ultimate form of metaphorical brain cancer. There are some interesting philosophical implications in how powerful LLMs are despite, or perhaps because of, how overfit they are. If superintelligence is something like knowing the golden path, could that be achieved through an understanding of some supposedly perfectly optimal sparse representation, or would your model of the world actually need to include the entire world? And if it is the latter, is that really physically possible through lazy evaluation of fractal-like mathematical equations, or is there a surprising level of detail spiraling out infinitely at every level and in every direction? My intuition is that those who fear the machine god think that “the perfect worldview” truly exists, whereas I am not even sure there is an optimal set of weights even for something like recursive self-improvement2.
John Hawks speculation on historical Neanderthal admixture events.
M. E. Rothwell on the dinosaur drawings of Charles R. Knight. On that note, Nick van Osdol on rewilding the Arctic, and Thomas Pueyo climate abundance manifesto3.
Russel Sprout review of The Testament of Ann Lee, on the topic of the Shakers.
Sebastian Jensen overview of the MBTI model of personality4.
Edrith Q&A with readers of his blog on politics, economics, and board games.
Edit: Will also makes an appearance on the Win-Win podcast on meaning post-AGI, though it’s a topic I don’t personally have much interest in.
To elaborate, I suspect the optimal sets of weights for performance, for improvement, and for improving improvement are all different, which is why the same person is not typically both a great athlete and a great coach (or coach trainer). And even the best athlete has to choose between optimal performance, learning, and meta-learning at any given time. So it seems likely to me that the best model, the best AI researcher, and the best AI-researcher improver are also unlikely to exist within the same set of weights. One response could be to store the weights separately and rotate through them very quickly, but even if this doesn’t end up undergoing some form of model collapse, the resulting sets of weights would probably diverge so significantly that they couldn’t coherently belong to the “same” entity. Extend the definition of capabilities beyond mere software development, and it seems rather unlikely that recursive self-improvement taken to its logical conclusion ends up producing a singular coherent agent. Actually, given my understanding of intelligence as the capability to manage complexity (which can manifest variously as detail, implications, interactions, etc.), I suspect that the dimension where human predictions will be most off is in underestimating how weird and complex various AI systems actually end up being. Possibly without us even knowing, because one way the more intelligent use their ability to manage complexity is in providing clean and simple interfaces in their interactions with others.
Which I mostly agree with, but one scenario I’m concerned about is whether the decline in solar and battery prices will continue over the next couple of years if the prices of copper and other industrial metals rise as many are projecting. The current situation with memory prices and electronics OEMs is a sign of what happens when one gets sandwiched between suppliers raising prices and consumers expecting prices to always trend down.
I’m probably INTP.

