80K Hours podcast with Allan Dafoe on technological determinism and AI. This is something I’ve been thinking about a lot in the context of cosmological natural selection. If we were in the best of all possible universes, it would probably be one where energy is widely abundant and easily accessible, something like fusion power being easy and stable to spin up and run. Instead, fusion is hard enough that we are unlikely to get to it without AI, and we are stuck in a world of resource scarcity, which means competition, and a strong incentive to exploit technological advancements as they come along. So CNS implies that eventually we will be living in an AI-dominated world, and if it doesn’t emerge from the US, it will come from China, and failing that, from aliens. This AI supremacy implies that control will eventually fail, which means we cannot view alignment as control the way Carlsmith does. In my opinion, alignment will most likely occur as humans adjust themselves to AI, or at best as a mutual adjustment towards symbiosis where the tail can occasionally wag the dog. I actually don’t think this has to be a bad thing. Surprisingly enough, a culture so inclined towards enlightenment through zen meditation is also one that believes humans must keep holding the reins. But most people have probably never felt themselves to have any meaningful control at all (not being among the masters of the universe who reside in NY, DC, SF, Beijing, or Oxbridge), and many would prefer a third party over whatever they believe is currently in charge, whether that’s the deep state, capitalism and the patriarchy, or various oligarchs and dictators. Rather than control, alignment researchers should focus more on things like mechanistic interpretability, formalizing moral systems, and the history of how small nations maintained autonomy under the influence of hegemonic powers.
Toby Ord on the implications of the new inference regime (via Marginal Revolution). Mostly notable for the new terminology: inference-at-deployment (which is basically CoT) versus inference-during-training (which is the Gwern strategy of using CoT to create training data). Of course, if we are indeed entering an inference-constrained regime, the most relevant short-term implication is that demand for memory is going to go up: long chains of thought mean long contexts, and serving long contexts is bound by memory capacity and bandwidth rather than raw compute.
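To make Ord’s distinction concrete, here is a minimal sketch under my own assumptions; the model call, the verifier, and the function names are all hypothetical stand-ins, not any real API:

```python
# Minimal sketch of the two regimes. generate, verify, answer_at_deployment,
# and build_training_set are hypothetical stand-ins, not a real API.

def generate(prompt: str, max_tokens: int) -> str:
    """Stub for a language model call that returns a chain-of-thought trace."""
    return f"<{max_tokens}-token reasoning trace for: {prompt}>"

def verify(trace: str) -> bool:
    """Stub verifier, e.g. checking a final answer against ground truth."""
    return "reasoning trace" in trace

# Inference-at-deployment: extra compute is spent per user query, at test time.
def answer_at_deployment(query: str) -> str:
    return generate(query, max_tokens=10_000)  # one long CoT per request

# Inference-during-training (the Gwern strategy): the compute is spent once,
# offline, to produce verified CoT traces that a cheaper model is trained on.
def build_training_set(queries: list[str]) -> list[str]:
    traces = (generate(q, max_tokens=10_000) for q in queries)
    return [t for t in traces if verify(t)]  # keep only traces that check out

print(answer_at_deployment("What is 17 * 24?"))
print(build_training_set(["What is 17 * 24?", "Is 91 prime?"]))
```

The memory point falls out of the first pattern: every deployed query carries its own long context, whereas the second pattern pays that cost once and amortizes it into the weights.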
Philosophy Bear on the alliance between the AI safetyists and AI ethicists. He says he hopes this doesn’t mean AI will become partisan, but in my opinion it makes that inevitable. Aside from the ethics side having progressive vibes, it’s pretty clear that their concerns around automation and inequality are just another battleground in the eternal tension between innovation and distribution. In my opinion this is really an alliance of convenience: the simultaneous defeats of AI safety and progressivism have left both camps more open to collaboration.
Jill Lepore writes about editors for the 100th anniversary of the New Yorker (via Jasmine Sun). It really does feel like the editor is akin to the writer’s daddy, and all that entails. (Edit: relevant piece by Noah Smith on editors and legacy media.) On that note, maybe I’m reading too much into it, but the new issue of the LRB seems to have moved on from the angry judgement of the last couple of months and returned to focusing on being interesting: “less concerned than today’s with checking its privilege, more comfortable with personal myth-making and heroic individuals, and … a lot more fun”. Political commentary is cultured and oblique: “a good king is always followed by a bad king … because the things that made a medieval king effective also wreaked havoc on government”. Things are ambiguous again, and there is actual recognition that “everything is a trade-off”. Like in this apology for Merkel, it’s contrarian in an actually reasonable way. And it is once again concerned with highbrow topics. Hopefully this continues.
Joseph Heath on the factors behind the current US constitutional crisis. I feel like the simplest solution is for Congress to get it together, perhaps by reestablishing secret votes.
Richard Hanania on corruption as a method to get around government over-regulation. I was very intrigued by the idea when China’s Gilded Age came out, but I disagree with the conclusion in this article, for the simple reason that corruption is unpopular, and ideologically encouraging it will ultimately harm your cause, leading to even more over-regulation. (Edit: Actually, is this about Eric Adams?)
Kelsey Piper with the case for PEPFAR. Obviously good on utility grounds, and the goal of worldwide elimination of HIV is clearly beneficial for US citizens. I’m actually unclear on the effectiveness of foreign aid for soft power, though, given complaints that it prevents local capability development (in contrast to China’s approach) and that it can come with cultural strings attached, both of which cause resentment.
Funny tweet by Angelica Oung, which reminds me of Caroline Ellison’s post that her favorite poly style was Chinese Imperial Harem. Anyways, a good time to recommend Zhen Huan, an enjoyable show which I used to learn facial expressions and body language.
The Calipers on revealed preference in relation to actions taken under the assumption of privacy.
New list of Emergent Ventures winners, some of whom have blogs this time.