2026-02-19
inca’s song
80K Hours podcast interview with Ajeya Cotra on forecasting AI progress and her personal thoughts on Effective Altruism. It’s interesting how she describes her early-career optimism that the EA approach could solve global hunger and development, and I wonder to what extent that optimism has carried over to her AI forecasts. For example, on the topic of bootstrapping a robotics industrial revolution: if all that is required is highly advanced reasoning combined with remotely directed human assistants, then why is the US still behind China in manufacturing? Anyway, the first part of the interview is particularly worth listening to in light of this tweet by Phil Metzger on how the belief that AGI is imminent is now widespread within frontier labs. This sort of thing famously happens every year, most notably in the fuss over Q-learning, but it’s actually somewhat plausible to me that this time is different[1]. After all, AGI has already been declared by many, and disagreement centers on limitations like jagged capabilities due to lack of continual learning, low learning efficiency, and context rot, which keep performance on many tasks well below human level. But there are signs that these are about to be solved: self-distillation allows for continual learning without catastrophic forgetting, which can even be parameterized through natural language instructions, perhaps into small specialized models for personal use; learning efficiency might be solvable by ERL; context can be managed using RLM (which also provides an additional means of inference-powered recursive scaling). Importantly, these are all bitter-lesson-pilled techniques which work more or less automagically simply by pouring additional compute into inference[2].
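For readers unfamiliar with distillation, the basic mechanism is that a student model is trained to match a teacher model’s temperature-softened output distribution rather than hard labels. This is only my illustrative sketch of the standard (Hinton-style) distillation loss, not anything taken from the linked work, and it says nothing about the continual-learning variants mentioned above:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax, computed stably."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(T**2 * np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher incurs zero loss;
# one that disagrees incurs a positive penalty.
same = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
diff = distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

The self-distillation claim above is essentially that a model can serve as its own teacher on old tasks while training on new ones, which is how forgetting would be mitigated.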
Jacob Steinhardt on trying things out as the best way to determine AI regulation within different sectors. There’s an idea that AI wrapper companies are a waste of time because after a new model release, everything you’ve built becomes nothing more than technical debt. This is probably true in a lot of cases, but in others I suspect it’s because the team tried to build a moat on technological infrastructure instead of understanding the local domain and how it might interact with existing and future AI, and building up a repository of knowledge which can only be discovered empirically. On that note, Soumitra Shukla on some trends in AI adoption.
Ruxandra Teslo in Clinical Trials Abundance on reducing the regulatory burden of starting phase I and II clinical trials. It does seem like the regulations should be updated to recognize the current situation, where phase III trials and the earlier phases are being divided up between big pharma and small biotechs, with the two groups speciating into primarily scale-focused or innovation-focused enterprises.
Peter Garicano argues that strict labor laws discourage companies from taking risks, as an explanation for poor European economic performance relative to the United States. Somewhat related, Byrne Hobart on how salaries are affected by legible measures of employee quality, possibly inspired by Daniel Frank’s piece on the subject. It’s interesting, because if you can take advantage of a large pool of ready-to-hire undifferentiated labor (like recent university graduates), then even without conscious effort, your hiring decisions start shaping this pool toward your own preferences. One of those preferences could be to focus legibility on qualifications rather than quality, since of course every purchaser would prefer that their suppliers become more commoditized.
Conversations with Tyler interviews Joe Studwell on his new book, How Africa Works. It seems to me that the potential Studwell sees in emerging large urban populations is actually pretty reasonable. There are basically three major schools of thought that I’m aware of as to why Africa remains underdeveloped: the standard institutionalist criticism of bad governance, and then the two more heterodox schools which blame either genetics or neocolonial exploitation. To some extent, a sufficiently large population would alleviate all these issues: having too many people to keep track of allows people to bypass excessive regulations, while also increasing the size of the right tail and creating domestic demand for local resource consumption. Somewhat related, Ken Opalo on how Africa should encourage local privatization to take advantage of this current period of increased demand for commodity resources. If the state can learn restraint and let the market do its thing, then maybe Africa can indeed get to the solarpunk future, with cheap solar power and a young population, which will soon be a global rarity.
Felice has an interesting piece on the idea of psychological projection with regard to mansplaining as an introduction to epistemic humility. This is sort of a highbrow intellectual version of Byron Katie: If you get mad when someone is acting like a know-it-all, have you tried turning it around?
DeepLeft Analysis has an interesting article on scapegoating the phones, which I mostly agree with. That being said, it seems to me that analyses of social trends often ignore the question of which groups are actually most relevant. For example, Dan Meyer (edit: and Dylan Kane) express skepticism toward AI in education, comparing its effects to those of MOOCs, which the vast majority of students never complete. But personally speaking, for non-compulsory education, I only really care about the effects on the top percentile in conscientiousness. Whereas for something like phones, the most relevant unit of analysis is probably a utilitarian calculus on net or at the median[3].
Jerusalem Demsas on how the trans backlash is more about not being told what to do than an indication that the public desires to oppress various minority groups. On that note, Helen Pluckrose on not politicizing your minority traits (for her, autism and bisexuality). Not only does it tend to produce illiberal behavior, it also seems to me that it’s a bad political strategy to attempt to effect change within a democracy based on minoritarian identification.
Cremieux on the reasons that Olympic host countries tend to obtain more medals in the years that they are hosting.
[1] Epistemic certainty 35%: because I don’t see why it wouldn’t work, but then of course that’s always the case.
[2] That being said, none of this changes the reasons I think AGI is unlikely to foom to ASI, except perhaps in the Drexlerian sense, given that superhuman performance still presumably requires verifiable rewards (edit: see nostalgebraist). In their absence, the AI will “only” be able to reach top-human-level performance. AI performance in coordination-heavy fields like governance actually seems particularly hard to estimate, because performance there is particularly hard to measure and verify. Moreover, it’s unclear to me that imitation will necessarily work, given that tactics effective in human-human interaction may not be as effective in AI-AI or AI-human interactions.
[3] On that note, Scott Alexander writes about the paradox that impressions of crime are increasing even as actual rates are declining. In terms of political considerations, the most relevant analysis for crime seems to be how it adjusts the behavior of the left tail in conscientiousness, and the resulting costs to the median person.

