2026-04-02
chromatic aberration
Win-Win Podcast with a very good discussion between Daniel Kokotajlo and Dean Ball on AI regulation. Actually, it seems to me that the two basically already agree about everything, except that in my opinion Daniel may not have a particularly good understanding of how politics works, either generally or specifically for our current situation. Personally, I don’t think it really matters whether the labs or the government should be considered more trustworthy; what seems more significant to me is the observation that there are multiple frontier labs, but only one government. If one wants to talk about checks and balances, even the most well-crafted system is unlikely to perform well out of distribution, so the more transformative one thinks AI’s effects might be, the less one should be willing to trust the government to handle it properly. Market competition is a much more fundamental and therefore robust form of checks and balances, and thus the mechanism much more likely to provide means by which well-meaning actors can influence outcomes.
Forethought with a sketch on the potential for using AI to improve public epistemics. Actually, it seems likely to me that we will get this essentially for free, because AI itself benefits from being able to accurately assess the underlying motivations behind publicly available information. Rather than having to run such assessments every time, it is much more efficient at scale to just label everything as soon as it is created.
Joel Becker with some interesting thoughts on predicting the future with curve-fitting, which includes an interesting line that people tend to ‘make implicit guesses of how the future might go, then notice that “straight lines on graphs” predict the future better than their guesses’. Personally, although the outcomes are the same, it seems to me that there’s a significant difference between predicting entirely by extrapolating trends and deriving a trend from an actual understanding of the underlying mechanisms, because ultimately trends tend not to last forever. That being said, there are also better and worse reasons for thinking that a trend might not continue. Particularly bad are vibes-based arguments like reversion to the mean, or that if a trend continues, that means something will have happened, and nothing ever happens; the problem with such arguments is that, even if they are true, they excessively privilege the present by assuming not only that now is just as good a time as any, but that it absolutely has to be now that the trend stops working. Slightly better models identify some bottleneck which will need to be overcome in order for progress to continue, but this is also not particularly good, because even if one cannot oneself figure out how to overcome this bottleneck, it’s generally safe to assume that there are people working in AI who are more intelligent than oneself. Better reasons tend to involve things like hitting fundamental physical limits, or the existence of multiple mutually-reinforcing blockers for which there has been little or no progress (or even indications of promising directions) despite significant efforts. Even then, sometimes surprises can occur. On that note, Cognitive Revolution interview with Nathan Labenz on his current AI worldview, including a discussion on timeline predictions and bottlenecks.
Niko McCarty on identifying what you want but currently do not have as a good method for scientific discovery. I would certainly not make a good scientist, because even if something exists, if I don’t have access to it, I will typically forget entirely that it ever existed.
Dialectic Podcast interview with Celine Nguyen on various topics related to intellectual self-actualization, which includes an interesting anecdote that a Substack meetup was the catalyzing event for her starting her own blog. I would like to highlight an idea from Ana1 on how Substack could create a lot of value by adding a feature facilitating meetups and other social connections based on shared readership. This is something many writers and readers already do informally, and it would be cool if there were tools to help such communities grow, and potentially reach the same scale as the ACX network2.
Sympathetic Opposition and Georgia Ray with some obviously true innovative scientific theories3.
Nicholas Decker on beneficial spillover effects from cash transfers.
Jenn Pahlka with interesting AI-pilled speculations on civic technology development.
Scott Sumner movie reviews (partial paywall).
Cultural Romantic linkthread.
No Magic Pill linkthread.
friend of the blog
Unfortunately none of the time-city combinations for ACX match my current Spring itinerary. On that note, I will be in Toronto soon, from late April to late May. According to Substack I have 7 readers in Canada; if anyone wants to hang out, send me a message. I also currently plan to travel around North America this summer, and I’m open to suggestions for places to visit.
Also, is this the end of DeepLeftAnalysis?

