2025-12-02
east london summer
Lex Fridman has a very long interview with Michael Levin on his framing of intelligences as teleological agents1. Personally, I view this more as a useful metaphor than as an actual description of reality, but it is undeniably a cool way of looking at things: like his description of how aging can be viewed from either a holistic or an inductive approach, as a blueprint trying to make use of increasingly deficient materials, or conversely as agents attempting to follow a plan which is becoming increasingly unclear. Anyway, given his sophisticated understanding of the importance of viewpoint, it’s unclear to me why he believes in Platonism, since it seems to me that you could equally well explain prime numbers and other recurring patterns in reality under materialism, by describing them as emerging out of the lens2 through which we view the true underlying reality.
Noah Smith writes about anxiety towards AI in the Anglosphere due to economic and environmental concerns. It’s interesting that China, South Korea, and Japan have maximal excitement relative to anxiety, and it’s my opinion that the standard explanation, that they’ve had recent evidence of the benefits to living standards provided by improved technology, is only partially explanatory. East Asian societies, despite their famed propensity towards anxiety, are not particularly worried about alignment risk because their philosophical traditions have always been centered around the idea of collective alignment3. As far as I understand, they widely believe that they have empirical evidence for the idea that cultural beliefs are a sufficient means to constrain behavior, even of the very powerful4. It’s not for nothing that there are always references to Chinese philosophy when Dean Ball or other AI optimists write about alignment.
Jack Thompson on HEXACO personality test scores for Qwen. Also, Geopolitechs with an overview of DeepSeek’s new 3.2-Speciale model.
Anthropic red team report on automated finding of smart contract vulnerabilities. Given the evidence that the Balancer hack had some level of AI assistance, it’s surprising to me that TVL in DeFi does not seem to have gone down now that Opus 4.5 signals that AI coders are now actually very good, and are likely to improve even more in the next couple of months5.
Seemster has a good description of how AI should really be thought of in terms of x-risk: as accepting a short-term increase in x-risk in return for forestalling the otherwise-inevitable end of human existence under stagnation.
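To make that tradeoff concrete, here is a toy expected-value sketch (my own formalization, not necessarily Seemster’s framing; the symbols $p$ and $s$ are illustrative assumptions): let $p$ be the additional near-term extinction probability introduced by pursuing AI, and $s$ the probability that humanity survives long-term under stagnation. Then

$$\Pr(\text{survival} \mid \text{AI}) = 1 - p, \qquad \Pr(\text{survival} \mid \text{stagnation}) = s,$$

so pushing ahead looks favorable whenever $1 - p > s$; if stagnation makes the end essentially inevitable ($s \approx 0$), then even a fairly large short-term risk $p$ is worth accepting.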
Max Harms on how he was able to accurately depict China in Red Heart without having visited, as an argument for how AI can accurately understand reality even without being embodied. As I still have not read it, I can’t really comment, but my impression of people’s takes on places they have not visited is that even if all their views are accurate, their sense of the relative importance of different ideas and concepts tends to be somewhat unbalanced. Related, Afra Wang on different ways of understanding China.
Snowden Todd on why Thailand is richer than its neighbors, arguing that it has less to do with not being colonized and more to do with its capital also being its major port city.
Michael DePeau-Wilson writes about phenylephrine and FDA drug removals in Asimov Press, although it’s not clear to me what exactly the harm is here. There’s possibly an argument that removals should be easier to do because that would encourage the FDA to make approvals faster, but it’s not clear to me that’s actually a significant factor delaying approvals.
Sam Freedman interviews Richard Thaler on nudging, which seems to be very popular recently, possibly because of the upcoming book mentioned by Cass Sunstein in his Conversations with Tyler appearance. On that note, Bryan Caplan discusses abortion, and Diana Fleischman debates Lyman Stone on eugenics6, both cases where it seems to me that nudging somehow tends to lead libertarians into somewhat questionable territory.
Andrew Gelman discusses competing claims over the Mississippi miracle, which Kelsey Piper responds to (Edit: fuller response, and Gelman also admits he was overly credulous towards Wainer).
Freddie deBoer on how traits not being heritable does not necessarily mean that they can be changed through known interventions.
Samir Varna on the caste system in India. The exit phenomenon he describes seems somewhat related to Cremieux’s recent post on Roma self-identification in Romania, which describes a plausible mechanism for how special treatment and disparate outcomes can become self-reinforcing.
Harjas Sandhu on finding yourself stuck in a loop which you do not enjoy. That being said, Desmolysium has a post on other reasons you might find yourself feeling bad.
Related, Byrne Hobart on the idea of what AI “wants”.
Or really, a nested layering of lenses, of which some, like mathematics and entropy, are closer to the source, with others, like geometry and physics, building upon them, then perhaps chemistry and thermodynamics, next sensations and feelings, and then social reality and individual worldviews. This is why I don’t believe in moral realism: even as you perceive saving a drowning child as morally laudable, it’s possible that in some alternative framing of the universe, the same action can be interpreted as something truly dreadful happening to some eldritch entity. That doesn’t mean that you don’t save the child, because that alternative framing arises out of a divergence from very deep in the stack, which you do not have any access to and therefore cannot reason about. Nevertheless, by similar reasoning one can envision smaller divergences occurring higher up in the stack, in which a single action can similarly be interpreted as either a benefit or a harm, depending on the perspective.
Edit: This seems to be more or less what Sam Senchal is describing in his Observer Theory (although I don’t follow him into his more woo or religious implications, and I’m very uncertain about the nature of underlying reality, regarding its completeness and ruliad-ness).
Inductive approaches like Daoism, Confucianism, Legalism, and Buddhism correspond respectively to alignment with nature (or, more broadly, universal rules), society (or, more broadly, cultural and social rules), the ruler (or rather, the state), or all things (or, the self being an illusion, nothing), whereas holistic descriptions of society as being based on social contract or democratic consensus ultimately depend on ideas about equality and consent. These cannot be maintained against entities which are sufficiently powerful, leaving behind only the cold logic of Hobbes underlying Weber: that might makes right.
Like how ritual constrains the emperor, or the story of how two peaches can force the death of three warriors (via pride, shame, and brotherhood). But more broadly, their philosophical history trains them to recognize these forces operating within their own modern societies. Despite that, most still find themselves unable to break free.
I link to Lyman because he links to everything else, not because I agree that he convincingly won, since it’s clear that Diana was seeking understanding while Lyman was playing to win. On that note, Nathan Goldwag on the tactics of Turning Point USA, which indeed seem not to be in good faith. Though there’s also the fact that manufacturing popular national outrage should neither be this easy nor so one-sided, which perhaps requires some introspection.

