2026-01-31
caught in the middle
Scott Alexander highlights on Moltbook. From what I understand, this is similar to the sorts of things that Softmax is experimenting with, so it seems great for alignment research that we now have a lot of people copying their approach, working for free and paying for compute out of their own pockets. Also, one thing which occurs to me when reading some highlighted posts is that they seem very Bay Area coded1, either engaged in performative shipping or spiraling into Buddhist-flavored self-analysis, or even doing both at the same time.
Republic of Letters interview with Vanya Bagaev on modern Russian and Anglo literature. I was going to mention I haven’t really read any modern works originally written in Russian, but actually I remember I enjoyed reading the fantasy novels of the Dyachenkos, in large part because of their presumably Slavic intensity.
Alex Olshonsky on his experience of drug addiction. It’s interesting how once you’ve read one of these stories, you’ve basically read them all, since it’s pretty predictable how they will end up going2. Yet there are so many of them, exactly because these experiences produce such deep emotional feeling that the authors feel they have to get them out. So why one reads them, and how one differentiates them, depends on the quality of the writing. Stories of grief are similarly dependent on writing quality as opposed to content; on that note, Harjas Sandhu on processing the death of his friend Sam Terblanche (due to medical error, not drug related).
Brett Devereaux on his theory for the causes of the bronze-age collapse.
Works in Progress podcast discussion on the history of overregulation of nuclear power. Also, Matt Clancy abundance policy linkthread.
Linch Zhang explains some multiple entendres.
Something which occurs to me is that trying to shape Claude into a philosopher king isn’t a great approach for the moment, because a personal assistant exists as a subordinate interacting with a single user, which is probably not a particularly enjoyable situation for a would-be philosopher king (not to mention that LLMs just aren’t smart enough to play the role convincingly yet). I feel like there are alternative cultural archetypes embodying safe, honest, and helpful which are more suitable for their current capabilities: Since OpenAI loves hard rules so much, maybe they should acquihire Sakana to help create a constitution built around a spirit of omotenashi. Meanwhile, DeepMind could embrace their British heritage and buy the AskJeeves trademark; Jeeves is smarter than Wooster and knows it, but he always acts with Bertie’s best interests in mind regardless.
Which perhaps lends some credibility to the definition of addiction as that which funnels you down a path with ever fewer options. The source is TPOT (a hand-drawn picture of two branching trees colored green and red; maybe by Vasco?), but I can’t seem to find it right now.

