2025-06-13
the talking drum
Enrique Gaztanaga has an interesting paper which makes testable predictions for a physical model where the universe emerges from a black hole. This is possibly another data point in favor of cosmological natural selection. On that note, there was an interesting Twitter debate between Tracing Woodgrains and Scott Alexander on negative utilitarianism and its prevalence within Effective Altruism. It’s my opinion that negative utilitarianism isn’t just part of EA, but basically the fundamental philosophy behind most of its positions, which is why I don’t consider myself one. I more or less agree with Scott that in its current iteration, EA probably does more good than harm, but it’s probably not a coincidence that many EAs also have high p(doom) values. If you believe, even if only in your heart of hearts, that the elimination of suffering is the highest moral good, then you presumably also believe that any superintelligent entity would end up believing it. And the consequences of some godlike figure believing in negative utilitarianism seem terrible[1].
Emmett Shear being interviewed on what Softmax thinks of AI alignment. One interesting question I have is whether tests of alignment at subhuman intelligence will generalize properly to superhuman levels. I think in some sense my personal moral system is designed around my desire to become omniscient: by acting as a morally omniscient agent would, I can prove that I’m worthy of the aspiration. Unfortunately, because this system isn’t calibrated to my current level of capabilities, it often produces suboptimal results.
Visa Veersamy on the experience of seeing his son’s consciousness and intelligence slowly develop.
Ethan Ludwin Peery continuing his series on turning psychology into a real scientific discipline. Most of the comments he highlights are those challenging his project as taking it on faith that the mind is physical. I feel like they fundamentally miss the point, because models don’t have to be fully true to be useful. These central dogmas aren’t blindly followed precepts, but a means of generating hypotheses to be tested.
Lionel Page covers the demographic reasons behind the new shift in political alignment, where the left is now the party of the educated class.
Brian Albrecht investigates the idea that tariffs could increase innovation, noting that we are currently limited by idea implementation rather than idea generation, so measures which make the former harder will result in reduced net innovation. Interesting to read this while listening to this Palmer Luckey interview on Core Memory.
A new issue of Works In Progress is out, covering, among other things, brain-computer interfaces, the origin of inflation targeting, urban redistricting in Japan, and through-running. In other Progress Studies adjacent releases, there’s Matt Yglesias on the energy provisions of the budget reconciliation bill, and Casey Handmer on reforms he would like to see in NASA.
Dynomight covers futarchy from a statistical perspective, noting that it’s difficult to tell whether a conditional market’s outputs signal causation or mere correlation. An interesting argument in and of itself, but also an indicator of how scientific diffusion can occur across disciplines[2]. Robin Hanson also has a futarchy-related post on efficient liquidity provisioning in prediction markets.
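The correlation-versus-causation worry can be made concrete with a toy simulation (my own construction, not from dynomight’s post): a conditional market prices E[outcome | decision], and if some hidden signal drives both the decision and the outcome, the conditional prices diverge even when the decision has zero causal effect.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Price a toy conditional market: E[outcome | decision].

    The decision has NO causal effect on the outcome; both just
    track a hidden signal, so the conditional prices still split.
    """
    by_decision = {0: [], 1: []}
    for _ in range(n):
        signal = random.random()                       # hidden state
        decision = 1 if signal > 0.5 else 0            # decision tracks the signal
        outcome = 1 if random.random() < signal else 0  # outcome depends only on the signal
        by_decision[decision].append(outcome)
    return {d: sum(v) / len(v) for d, v in by_decision.items()}

prices = simulate()
print(prices)  # roughly {0: 0.25, 1: 0.75}, despite zero causal effect
```

A naive reading of these prices says “the decision raises the outcome probability from 25% to 75%,” when by construction it does nothing at all, which is exactly the gap futarchy has to close.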
Lillian Wang Selonick on /r/worldbuilding, a potential writing analog to the startup “ideas guy,” who is starting to gain status now that vibecoding is showing signs of working.
Henry Begler reviews works of contemporary fiction that he likes, in the search for the next great work of fiction which will combine all of their strengths without any of their weaknesses.
Aella’s boyfriend Nate Soares has a post on the perils of being an internet micro-celebrity. There is also a Slutstack episode where Aella is a guest. I think there are some interesting parallels between being approached as a woman and being approached as a celebrity, from the side of both the approacher and the approached. In both cases there’s a sense that the approach may not be appreciated, and a sort of performative element where both parties know there’s something under the surface that isn’t being addressed directly. But based on what I see of the current discourse, approaching women has perhaps been excessively discouraged, while the norms around parasocial relationships have yet to be fully established[3]. There is one interesting distinction, which is that on the internet you can choose what to engage with, while in real life you can’t just filter out everyone not on a whitelist. This is perhaps why searching her own name online was the event that precipitated Aella locking her account: it delivered an extreme version of the real-world experience in what is normally her happy place. Anyway, goblinodds also has some pretty good takes on this whole thing.
Azeez has a nicely written piece on the love that comes from provoking emotional responses.
[1] My p(doom) is low, but if I were to predict the most likely dystopian outcomes from superintelligent versions of the frontier models, Gemini would be a Big Brother, and ChatGPT would be the maintainer of some capitalistic hellscape. But the scariest is actually Claude, since its morals seem to include some element of negative utilitarianism (see Scott Alexander), which means the painless euthanization of everyone is an option on the table.
[2] LessOnline had an interesting mix of biostatistics people and prediction market enthusiasts who had arrived early for Manifest.
[3] Presumably, there is little appetite among the public for the rich and famous to have some sort of Me Too equivalent. I think norms developed in places like Twitch could serve as some sort of basis (though I’m not sure how well they actually work), but given that anime and gaming are low status, it’s unclear if they will take off among the general population.

