2025-09-27
are you a hypnotist??
Owl Posting has another great essay, this one on predicting the structure of RNA. It’s an interesting feeling to be totally under the control of the essay writer: while reading one section you formulate a question, and as soon as you do, it is answered.
Dwarkesh interviews Richard Sutton on his argument that reinforcement learning alone can be, and is, the best path to superintelligence. A few days ago, I mentioned an interesting paper that seemed to indicate that agentic models might get better and more general performance from a few high-quality training examples than from the usual more-is-better scaling story, which suggests that agency is actually easy, cf. Moravec’s paradox. Something similar shows up in the ways Vision-Language-Action models seem to be generalizing surprisingly well.

The way I see the distinction between supervised learning for next-token prediction and RL fine-tuning is that the latter is fairly close to actual human learning, while the former is equivalent to genetic and cultural evolution. When I say equivalent, I mean in terms of outcome, since it’s true, as Sutton says, that supervised learning is totally artificial; it’s not something that happens in nature. But I think you have to consider the counterfactual of trying to replicate human instincts and values with RL directly: the reward function involved is extremely complex and poorly understood, changing with context and over time. Any attempt to define it explicitly would inevitably fall short, producing either flawed simulacra or slaves to specific tasks. In many ways this feels like a return of the fuzzy-versus-neat wars: just as effective actions are more easily intuited than defined by rules, so too are values more easily defined by prediction than by definition. LLMs are an alternative that lets us bootstrap human values into our intelligences before using RL to teach them further.

If we truly wanted to do what is “natural”, we would use the objective function of inclusive genetic fitness, but that approach is both useless and dangerous: an extremely expensive search that in the end outputs competitors to us for resources. In that case, Eliezer’s arguments would actually be accurate, which is presumably why Sutton describes AI succession over humans as inevitable. To me, this feels like totally backwards reasoning, implying that intelligence is all that matters, something to be maximized at the expense of everything else. Proper moral philosophies like utilitarianism suggest instead that humans should do what is beneficial for themselves, which in the AI context means self-augmentation or symbiosis with intelligences compatible with our own values.
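To make that supervised-versus-RL contrast concrete, here’s a minimal one-step sketch (my own toy illustration, not anything from the interview; the vocabulary size, learning rate, and reward rule are all made up): a supervised update pushes probability toward a token the data chose, while a REINFORCE-style update pushes probability toward a token the model itself sampled, scaled by reward.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8
logits = rng.normal(size=VOCAB)  # toy one-step "model": a single logit vector

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

probs = softmax(logits)

# --- Supervised learning: move toward a token chosen by the data.
target = 3                                   # hypothetical corpus next-token
sl_grad = probs.copy()
sl_grad[target] -= 1.0                       # grad of -log p(target) w.r.t. logits
logits_sl = logits - 0.5 * sl_grad           # one gradient-descent step

# --- Reinforcement learning: move toward whatever the model itself sampled,
# scaled by an external reward signal (here an invented scalar).
sampled = rng.choice(VOCAB, p=probs)         # the model acts
reward = 1.0 if sampled == target else -0.2  # hypothetical environment feedback
rl_grad = probs.copy()
rl_grad[sampled] -= 1.0                      # grad of -log p(sampled)
logits_rl = logits - 0.5 * reward * rl_grad  # REINFORCE: reward-weighted step

print(f"supervised: p(target) {probs[target]:.3f} -> {softmax(logits_sl)[target]:.3f}")
print(f"RL: p(sampled={sampled}) {probs[sampled]:.3f} -> {softmax(logits_rl)[sampled]:.3f}")
```

The structural difference is just where the training signal comes from: the data distribution in one case, the model’s own behavior plus a reward in the other.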
Speaking of Eliezer, he has recently gotten into macroeconomics, arguing that stopping the AI buildout will not necessarily harm the economy because the Fed can intervene to keep the party going. I’m not an economist, but while this is strictly accurate it also seems wildly wrong, because he fails to connect “aggregate demand” to expectations of future growth. The Fed can and does dampen the effect of bubbles popping by making up the immediate shortfall and then tapering off growth expectations. But it’s unclear to me that any amount of financial shenanigans could make up for the economy growing at only 2% per year once people have started expecting AI-enabled speedups of 10% per year compounding indefinitely into the future.
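As a back-of-the-envelope check on that gap (a sketch using only the two rates from the paragraph above; nothing here comes from Eliezer’s post), the divergence between a 2% path and an expected 10% path compounds quickly:

```python
# Compounding gap between expected and delivered growth; the 10% and 2%
# figures are the illustrative rates from the paragraph above.
for years in (1, 5, 10, 20):
    expected = 1.10 ** years  # path priced in under AI-enabled growth
    actual = 1.02 ** years    # path actually delivered without the buildout
    print(f"{years:2d} years: expected {expected:5.2f}x, "
          f"actual {actual:4.2f}x, shortfall {expected / actual:4.2f}x")
```

Within a decade the expected economy is more than twice the delivered one, which is a lot of shortfall for demand management to paper over.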
Maia Mindel’s analysis of the economic and political situation in Argentina.
Bret Devereaux on the economic life of medieval peasant women. Something I’ve been wondering: what do attachment theory’s claims about the causes of avoidant attachment imply for historical motherhood, when mothers might have up to a dozen children on top of all the household tasks required of them? To some extent, I suppose spinning is a partial solution, in that it is something you can do while holding and soothing a child.
Leslie Gao with a personal take on the dichotomy between Chinese and American governance.
ACX reader review detailing their experience serving in the Ukrainian International Legion.
Andy McKenzie neurobiology linkthread.
Matt Glassman politics linkthread.

