2025-04-05
predictions
Dwarkesh hosts Scott Alexander and Daniel Kokotajlo on AI 2027. Even if one agrees with the premises that alignment is a single thing and that an intelligence explosion is likely, the “Slowdown” scenario seems straightforwardly great to me, actually an argument for continuing on our current path and pausing only after self-improving AI hits superhuman levels. For some reason, they don’t seem to view it that way: a lot of the policy pushes from last year were to pause far below that level; the focus on theft by China seems to be an argument for nationalism, with the expectation that government interference will be equivalent to a pause; and since both scenarios are bad for the CCP, it also reads to me as a thinly veiled appeal to Xi Jinping to shut down TSMC. Anyway, the podcast interview is much more interesting than the paper itself, covering scenarios that back away from these premises a bit more. One thing worth mentioning: I’ve become increasingly pessimistic about basic income as a solution to AI-driven displacement, not because of the common objection that people find meaning in work, but because I doubt that, during periods of uncertainty, there will be any political appetite to distribute basic income beyond the United States, resulting in a permanent geographic aristocracy. Assuming bioterror and other PvP scenarios can be properly guarded against, some method of ensuring access, like open weights, might be necessary to solve this problem, as well as to provide escape valves against guild monopolization by entities like the ILA and the AMA.
Perhaps relevant, a conversation between Elizabeth van Nostrand and Austin Chen about the objectives of different branches of rationalism. This reminds me of the minor furor around magazines’ seeming inability to define post-rationalism properly in their profiles of the Zizians. I think this conversation does it pretty well: orthodox rationalists prioritize truth-seeking, Effective Altruists prioritize systematized winning, and post-rationalism tries to have the two concepts live together in harmony in some Dao that cannot be named.
Richard Hanania against populism. I don’t want to believe that my political opponents are all stupid, but it’s unclear to me whether they have actually thought any of this through. Their treatment of Mexico and Canada shows that bully dynamics apply, and that giving in just leads to further escalation. Assuming the targets don’t take the bait and start fighting among themselves, their optimal move is a coordinated response, so does the US really believe it can take on everyone else all at once? The scary thing is that it seems like Trump needs to win; he would rather go bust than concede. The other question is: what’s the story for automation bringing manufacturing back to the United States? If you force everything back all at once, while the technology is not yet mature, the demand shock will actually be counterproductive, raising costs for the would-be frontrunners. Anyway, comments on tariffs by Scott Sumner, Kyla Scanlon, Tanner Greer, and Nate Silver.
Yascha Mounk on the college admissions essay.
Stephanie Murray’s natalism study links.

