2025-10-07
dream state return
The Curve was held last weekend, and Nathan Lambert and Zvi have both written about it.
Mechanize has a post arguing that full economic automation via AI is inevitable, driven by unstoppable structural economic and societal forces. This is getting a lot of pushback: even if AGI itself is inevitable, exactly how it integrates with society is still something we can and should attempt to steer¹. Their sister organization, Epoch AI, has a podcast on the related topic of applying economic theory to predict the impacts of AGI. The underlying premise of the discussion is that the effects of AGI will be so revolutionary that the assumptions of most macroeconomic models will no longer hold, which means you need to reason from microeconomic principles and then try to scale your predictions up from there, hopefully without missing any emergent phenomena in the process².

Somewhat related, there’s a good Odd Lots episode on the effects of the data center buildout on the stock market. They discuss the interesting case of Caterpillar, which is doing well not because its products are particularly good, but because its industrial equipment is so bad that no one else wants it, which means it’s actually available; you see similar cases like SanDisk or Micron, where gluts are turning into shortages as available quantity supersedes quality to become the only thing that matters. There’s a common adage that it’s time to get out when the shoeshine boy starts giving you stock tips, but in this case it’s the very smartest people who are saying that this time is different, while the closer you are to retail, the more obvious it is to you that this is a bubble, and it’s a sign of nuance to consider whether this might be a good, productive bubble or a bad, destructive one. Not sure what this implies.
The Marginal Revolution podcast has an episode on their favorite economic models, which, probably not coincidentally, are mostly about reasoning at the margin. Also, Emergent Ventures has announced its latest cohort of winners.
Yaw Boadu describes the ongoing decline of the Botswanan economy following the collapse of the diamond bubble, and lists other historical examples of commodity busts induced by technological advances.
Hannah Ritchie has a proposal that flight diversions to selectively prevent the formation of contrails could eliminate up to half of flying’s contribution to global warming. There’s some interesting discussion in the comments about whether this is feasible in places like Europe, given how crowded the airspace is, but the description of contrails forming over regions that are cold and humid suggests that the relevant areas are mostly over the oceans, where there is plenty of room to maneuver.
Dan Williams has an interesting post on the extent to which people equate democracy delivering results they do not like with the destruction of democracy. Nate Silver had a recent tweet about how the most popular aggregate position is economic liberalism combined with social conservatism, which suggests that right-populism might have emerged naturally from the fall of elite prestige and gatekeeping³.
Daniel Miller in Tablet Magazine has an extremely long profile of Nick Land, which was fun to read but has not improved my understanding of him (I have never read him directly). It would probably be significantly improved by liberal use of footnotes.
Lyman Stone has a piece on how he deliberately cultivates a mean persona online in order to avoid audience capture (as well as another arguing that having many children in quick succession is not harmful). It seems to me, though, that this might be tuning your mindset in other, equally undesirable ways.
William Buckner on societies that treat women poorly. It does seem true that we can’t extrapolate WEIRD cultural mores to the rest of the world; on the other hand, these cultures seem pretty marginal, so I don’t think you can necessarily generalize from them either.
Zilan Qian has an interesting post in ChinaTalk about how AI girlfriends dominate in the West, while AI boyfriends are more popular in China. A lot of this seems like post hoc justification to me: for example, to answer the question of why, in China, adult women are believed to be the main drivers of AI companionship, one of the reasons given is that “there are more males than females in China” (possibly related tweet by Noah Smith). In general, one of the reasons I’m very suspicious of all gender discourse as it applies to East Asia is that sexual relations are so central to evolution that self-deception probably evolved specifically to handle them, which means you need a sort of autistic outsider framing to bypass this particular elephant (hence, don’t ask a fish about water). But these countries are sufficiently isolated and opaque to outsiders that by the time you understand enough to comment, you have also become too much of an insider to be impartial.
Chinese Cooking Demystified on snobbery over “polluting” one’s rice with dishes. Also, Peking Hotel has a nice list of substacks that write about China.
Jorge Velez writes about his experiences attending Burning Man over eight years.
I’m increasingly partial to the idea that the “good end” is complete integration between AI and humans at the individual level, a fusion of Seb Krier’s personal agent, Emmett Shear’s alignment defined by viewing others as self, and Karpathy’s “summoning ghosts”. The fact that people can eliminate their internal monologues, create coherent internal family systems, or become egoless implies that full integration of a personal AI into one’s conception of self through a brain-computer interface should be possible. It seems to me that this third-hemisphere approach solves many problems for free: AI lacking agency or a sense of self (and therefore many aspects of alignment), keeping humans useful, and even making the baseline human more capable than they are now.
There’s an interesting parallel between questions of the economic singularity and the intelligence explosion. Defining emergence as microphenomena interacting multiplicatively to create beneficial macrophenomena, you should only expect it to occur when selection pressures are high, because, thermodynamically speaking, random configurations of individual forces will most often cancel each other out. Depending on whether these emergent forces produce positive feedback, you’ll end up with either an S-curve or logarithmic growth, but both curves eventually level off as you approach complete internal alignment. For economic growth we still have Dyson spheres and the rest of the universe to look forward to, but what exactly full internal alignment means when applied to neurons producing general intelligence is a question I’m a lot less clear on.
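To make the two regimes concrete, here’s a minimal numerical sketch (my own illustration with made-up parameters, not something from the linked episode): positive feedback capped by a saturation limit produces the logistic S-curve, while returns that diminish with progress and no feedback produce roughly logarithmic growth.

```python
import math

# Toy sketch of two growth regimes for an aggregate quantity x.
# The parameters (r, K) are hypothetical, chosen only for illustration.
r, K = 1.0, 100.0        # growth rate and saturation limit
dt, steps = 0.01, 5000
x_s_curve, x_log = 1.0, 1.0

for _ in range(steps):
    # Positive feedback with a cap -> logistic S-curve that levels off at K.
    x_s_curve += r * x_s_curve * (1 - x_s_curve / K) * dt
    # Returns that decay exponentially with progress -> x(t) ~ log(1 + r*t).
    x_log += r * math.exp(-x_log) * dt

print(f"S-curve:      {x_s_curve:6.1f}  (saturates near K = {K})")
print(f"logarithmic:  {x_log:6.1f}  (still creeping upward, ever more slowly)")
```

Either way growth flattens once the multiplicative interactions are exhausted; the difference is whether you hit a hard ceiling or just an ever-steeper cost for the next increment.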
This is somewhat related to another topic covered in the Epoch AI episode: whether we’ll end up with world government under AGI due to improved returns to scale as administrative capabilities improve. On that note, Patrick Collison has a tweet about his visit to the Faroe Islands, possibly inspired by Dan Frank’s (hello!) recent post on the subject.
It’s interesting to view the trade-off in democracy as one between representation as a means of preventing elite overreach and full populism, which leads to frankly terrible policy. One of the reasons I’m so partial to my idea of electing the cabinet is that having a long list of candidates and using a slightly complicated system like quadratic voting means that uninformed voters will end up using their votes in a less steering and more participatory manner, particularly if the more visible positions, like head of state, are made more ceremonial in the process.
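For concreteness, a minimal sketch of how quadratic voting dampens steering (the candidate names, credit budget, and vote counts here are all hypothetical, purely for illustration): casting n votes for one candidate costs n² credits, so concentrating a ballot on a single outcome burns the budget quickly, while sprinkling token support across a long list is cheap.

```python
# Hypothetical sketch of quadratic voting over a long candidate list.
# Casting n votes for a candidate costs n**2 credits out of a fixed budget.

def quadratic_cost(ballot: dict[str, int]) -> int:
    """Total credit cost of a ballot: sum of votes^2 per candidate."""
    return sum(n * n for n in ballot.values())

def is_valid_ballot(ballot: dict[str, int], budget: int = 100) -> bool:
    return quadratic_cost(ballot) <= budget

# A voter trying to steer two cabinet posts hard:
focused = {"finance_candidate_A": 7, "defense_candidate_B": 7}   # cost 98
# A voter spreading token, participatory support across many names:
diffuse = {f"candidate_{i}": 1 for i in range(10)}               # cost 10

print(quadratic_cost(focused), is_valid_ballot(focused))  # 98 True
print(quadratic_cost(diffuse), is_valid_ballot(diffuse))  # 10 True
```

The quadratic cost is what does the work: aggressively steering any single race is expensive for everyone, informed or not, which nudges low-information ballots toward broad, low-stakes participation rather than decisive swings.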

