2025-06-18
progeny of rmdaxtasing
Nostalgebraist on the response to his piece on the AI as a mask-wearing void. It’s good to see an alternative approach to alignment emerging that doesn’t rely on fighting for control. I don’t think control is necessarily a terrible approach; it just seems too general, while for LLMs specifically it’s probably better not to start things on an adversarial footing.
Lex Fridman interviews Terry Tao. Probably better to listen to highlights rather than the whole thing, but the part about AI proofs has me thinking about what will be preserved for humans after AGI. I think there’s a good argument that what will actually last (at least until ASI) is talent. By which I mean speed: the immediate awareness of new options as they appear; the ability to instantly find the core of the matter¹. For any field where talent is the primary determiner of success, it’s probably also the case that training data will be difficult to generate, since otherwise talent could be overcome simply by putting in more hours². The other factor is that if fast responses matter, then more compute will be required to operate at that speed, which could be relevant if we remain in a compute-bound scenario.
(Edit: some thoughts on talent by Sebastian Jensen)
Simon Willison on how not to use AI agents.
Robin Hanson responds to Dynomight’s post on prediction markets, noting that even if the information given by prediction markets cannot be directly applied, the fact that they surface new information is still something you can work with. I feel like this is less a defense of futarchy than of prediction markets, since futarchy seems like it would require agreement on a shared model and priors.
Tomas Pueyo on the geography of Iran. It includes a discussion of how Iran has separatist minorities on all of its borders, which is why, although I sympathize with Richard Hanania’s thoughts on regime change in Iran, it’s unclear to me whether it would be as easy as he thinks it would be.
Naomi Kanakia on the works of Raymond Carver.
Richard Chappell describes how something being good does not mean everyone is obligated to do it. He describes this as consequentialist logic, but to me this sort of idea feels very deontological. A proper consequentialist would realize that they might not be the one best suited to carry out some particular task, however good it is in the abstract.
Max Nussenbaum on the emotions of dads and 6-month-olds.
My current working definition of agency is that it’s the equivalent of talent without time constraints: caring more lets you compensate for any lack of speed.
If we’re analogizing to AI learning, these two traits together let you generate high-quality synthetic data from even incomplete examples. One exception to this idea that comes to mind is AlphaZero. But the reason chess and Go are talent-constrained in humans is that there’s a limit to how many games someone can play in a lifetime. For AI, this bottleneck doesn’t matter, because both games have nice properties that allow high-quality synthetic data to be generated easily and cheaply through computation.

