2026-04-01
a truth worth lying for
Alex Kim with interesting aspects of the Claude Code codebase. Also, a funny tweet on the subject.
Forecasting Research Institute with an overview of predictions on likely economic impacts of AI, categorized across economists, AI forecasters, superforecasters, and the general public1.
Nat Purser in Asterisk Mag on how pausing AI is a misguided policy which is unlikely to achieve good outcomes given that, as Dean Ball argues, there does not appear to be any mechanism or plan for parlaying any such legislation into an international agreement. Anton Leicht has an even stronger article, arguing that in addition to being ineffective, a pause will likely significantly diminish the future influence of AI safety, since the pause will likely be taken over by political actors who share neither the goals nor the priors of the movement. Personally, it seems to me that a pause would be self-defeating even from within the worldview of AI safety: first, it increases the risk of centralization, because if companies cannot compete on capabilities, they will start competing on things like business partnerships and integrations instead, which are likely to lead to market consolidation; second, if we can no longer make progress by throwing more compute at LLMs, companies are much more likely to devote researchers to alternative architectures instead2, and if one thinks that superintelligence is possible, that should be one of their worst fears.
Daniel Muñoz against direct democracy, based on an argument from voter ignorance. At this point, it seems uncontroversial to say that the optimal level of democracy for a given polity depends on how united and rational its population is, which is why democracy works well in Switzerland, less well in America, and not at all in various dysfunctional states. But it seems to me the underlying issue isn’t so much that the population’s desires are wrong, but rather that voters can’t accurately estimate the likely outcomes of any particular policy proposal; given that, it’s worth noting that representatives are only one method among many for aggregating preferences into actions, and there doesn’t necessarily have to be a tradeoff between how fine-grained preference measurement is and the quality of policymaking.
Afra Wang with another interview with Du Lei and Han Hua on the American and Chinese AI technology ecosystems and their political implications. It’s very interesting to see the similarities between Chinese local governments and Silicon Valley venture capital networks being pointed out3, since it seems to me that one of the more plausible ways to build up state capacity in countries with poor institutions is to encourage venture capital and intertwine it with their governance structures, such that they tie together the capability to get things done with activities which are likely to produce economic growth as positive-sum externalities.
Nathan Goldwag with an entertaining rant about the structure of the government in the Star Kingdom of Manticore.
Helen Pluckrose defending the ability to disapprove of what people do while still defending their right to do it.
Tyler Alterman on the dance of mutual escalation as the protocol people use to initiate romantic relationships. It seems to me that the high skill barrier this requires is one of the reasons the debate over whether girls should ask guys out is always especially prevalent in tech-adjacent circles. Which is why I think that they should, since the downsides of this approach can be entirely mitigated if one does so in the form of a personal courting guide or some similar invitation to courtship.
Benjamin Breen with pictures of Iran from 2017.
Alexander Kustov linkthread.
Robert Long linkthread.
1. Surprising amounts of agreement between all involved parties. You can also take it yourself: here’s my off-the-cuff vibes-based forecast.
2. Stefan Schubert with an overview of a Twitter debate initiated by François Chollet over how much smarter LLMs will be able to get; for the most part, it seems to me that people are converging on the understanding that LLMs are unlikely to dramatically exceed human capabilities anytime soon (with a degree of certainty that seems, even to me, maybe a little too high). There’s a broader argument that even if they aren’t much smarter than humans, there are other methods by which they might obtain strategic dominance, to the point where humans are rendered unable to respond effectively to any of their actions. Fortunately, token output seems to be, if anything, slower than human thought (and even if it is of generally higher quality, this probably isn’t a real problem, precisely because RL works best on what is easily verifiable). The other possibility is that LLMs will be able to parallelize their actions, but it’s unclear to me to what extent this is possible while retaining the identity and goals of a single cohesive actor. Unless aggregation is performed through the architecture itself, it seems that any mechanism which would allow this could be applied equally well to humans, or between humans and AI.
3. In some sense, this feels like an analog of the joke that no matter what your hobby is, there will always be a Chinese kid who does it better; any governance idea you might have has already long been understood and implemented by some Chinese local government official.

