2024-03-30
red
Dana White on Lex Fridman. It’s interesting to me how degenerate gambling seems to be related to high agency. For me, I tend to only do things if they have positive EV and a mild worst-case scenario. But apparently high-agency people are fine with worst-case scenarios of −$300M and negative EV. Maybe this is some sort of strategy for adding variance to hill-climbing? Or perhaps it’s another case of survivorship bias.
Dwarkesh with some AI researchers. First off, this was way more interesting than the interview with Demis, both because they are actually willing to share candid thoughts and because you get the feeling they are actually in the muck. Secondly, I sort of miss the Dwarkesh interviews with relatively unknown experts in underrated fields; while still good, this one feels a little too chummy, more in the vein of buddies hanging out. Finally, what they say about compute being the bottleneck and capabilities being logarithmic fits with my rationale for why foom is unlikely; going to lower my pdoom accordingly.
Ruxandra follow-up on her policy vs. culture piece. It’s interesting, because it feels rare for an author to publish a follow-up acknowledging limitations in their original post. That said, it leaves the piece feeling kind of weak and without any real conclusion. Two interesting points: first, I also find it distasteful how celebrated it is to air other people’s dirty laundry, often over petty shit: I really don’t get the appeal of places like /r/amitheasshole, /r/mildlyinfuriating, or /r/relationships. Secondly, there is an intriguing analogy about obesity being caused by affluence but also affected by culture and biology. What would be the equivalent of GLP-1 agonists for fertility, though? Libido is the obvious answer, but by itself it might actually reduce stable pairing. Perhaps the best solution would be something that reduces anxiety, say a GABA agonist. But then, the anxiety is also there for a reason, so you would also want to increase conscientiousness: we’d want to boost dopamine too. You know, I credit at least 30% of my own deficiencies in this area to ALDH2*2.
Etienne Fortier-DuBois on dystopias. Feels sort of like a Straussian argument for the e/acc goal of AI increasing complexity/free energy.
Thomas Pueyo on the Texas Triangle.

