Visa Veerasamy on shortform versus longform content. Something I’ve been wondering, actually, is whether linkthreads should exist at all, or if I should just post most of my comments as Substack Notes.
Derek Lowe on drugs that work by mechanisms we don’t yet understand. I’ve been wondering how something like this might be affected by AI-assisted drug discovery. This is in the same vein as the debate over passive investing driving out active: there, the more passive volume there is, the greater the opportunities for active investors. Similarly, the more we rely on known mechanisms to design drugs, the more valuable the discovery of new mechanisms should be. But it’s unclear to me whether, given how much more known-mechanism design benefits from AI’s lower costs, projects like fully automated wet labs might be driven out of the market entirely by that comparative efficiency. Alternatively, this could cement the position of big pharma, particularly if universities and government grants continue to have political problems.
There are a few interesting responses to Noah Smith’s recent article on Elon Musk’s intelligence. One of my pet peeves is treating your ideological opponents as a single cohesive group in order to point out “inconsistencies” in their beliefs. One such example I’m tempted to make is the combination of believing there is no such thing as g, only many different kinds of intelligence, with the rote response that Elon Musk is dumb. Nate Silver sort of makes that point here. There’s another, more interesting take by Steve Randy Waldman: that Elon Musk is god-tier talented at breaking rules and getting away with it, which is overpowered in the United States, where there are many rules you need to break in order to get anything done.
Matt Yglesias argues that intentional misinformation mostly harms your own side, because your opponents won’t believe anything you say anyway. I personally agree, but I think to purveyors of conspiracy theories, increased polarization is kind of the entire point. Relatedly, Scott Alexander has a theory that polarization is downstream of political opinions being tied to status-based beliefs, with each side attempting to humiliate those holding opposing views. Also, Seeds of Science argues that LLMs are actually more persuasive than humans because they don’t get mad or dismissive when faced with opposing beliefs. Similar piece by Eurydice. Lastly for these links on liberalism, Rob Kurzban on how groups aren’t actually real: the idea isn’t so much “I don’t see color” as that color hardly seems like the most relevant factor in how I want to relate to someone.
Dean Ball on potential options for AI and software liability. This is actually another area where the worlds of crypto and AI have unexpected analogs: the arrest of the Tornado Cash developers marks a new front in the old battle between the cypherpunks and the spooks.
Alvaro de Menard on deciding to get good at sex in your 30s. I have a pet theory that the skewed ratio of dominants to submissives exists because, for most people, confidence has to be justified. Given the taboos around sex, and since your first experiences are rarely good, unless you have an extreme improvement mindset, some large proportion of would-be dominants won’t feel comfortable taking the lead, particularly if they started late. Relatedly, Real Society argues that you can become valuable by being among the best in the world at some particular bundle of useful skills. It’s funny how often this debate between breadth, depth, and the T-shape gets relitigated.
Metropolitan Review on Nosferatu (the movie), interesting because it might be their first more or less entirely positive review. It truly isn’t a time for books.