Econ 102 episode with Noah Smith and Vitalik Buterin on his article about the potential for centralized authoritarian systems to triumph in the 21st century. I read this as part of the idea that things tend to grow or shrink and rarely remain static, whether this is Menzies with his observation that unions are either centripetal or centrifugal (regarding ASEAN and the EU), or the famous Three Kingdoms quote that “The Empire, long divided, must unite; long united, must divide”. The key tension between authoritarianism and freedom is a trade between economic benefits and social freedoms, and people generally accept constraints on the latter if doing so unlocks the former. The purpose of technological progress is to unlock the ability to have it both ways, either by enabling citizens to force the state to allow freedoms, or by improving coordination such that the center is no longer required. If the problem is that IT has allowed statists to be more efficient at processing information, such that central planning is now competitive with market economies, then we also need to use this improved processing capability to make all this information legible to everyone. Rather than censoring, this is exactly why freedom of speech is so important: the new bottleneck is the inability of a centralized actor to direct attention across every possible opportunity (the reason firms stick to their “core competencies”, and why Google keeps killing new products), while independent agents acting individually can each continue to mine at the margin.
Ian Leslie and Lionel Page both attack utilitarianism as a basis for policy decisions. I’ve previously written my definition of utilitarianism here and here, but these blogs are a good excuse for a rewrite. Lionel Page says that “[utilitarianism] is founded on misguided intuitions about what happiness is. We do not live to feel happy, but we feel happy to live and be successful in life.” But the machinery of maximizing variables works regardless of what you are trying to maximize, so if you disagree that happiness is the most important thing, why not just define what actually matters and try to maximize that? As a selection algorithm, I see this as the idea that everyone would prefer to make the decision where their own values and preferences are maximized, as selected by an omniscient version of themselves. There are two key implications to this framing: the first is that you are not omniscient, which means that this system is necessarily probabilistic; the second is that this is defined in terms of individuals, which means that it is subjective: lacking “God”, there is no such thing as a “correct” value for the parameters which determine how to aggregate competing utility tradeoffs between different people, or even yourself at different points in your life. This is not ideal for seekers of an objective moral reality, but this postmodern version of utilitarianism, which attempts to maximize the expected value of subjectively aggregated utility functions, gives you a system which I believe can model any moral philosophy in a numerical way. For example, I think it’s pretty clear that the heuristics developed by most societies around uncertainty, higher-order effects, hyperbolic discounting, and loss aversion lead to particularism and what is generally referred to as commonsense morality. This also provides a lens by which you can view moral disagreements as merely differences in how you assign and weigh these parameters. Ian Leslie makes the claim that his position against euthanasia is actually one for liberty, because people can be coerced into euthanasia, but this would also be an argument for denying things like abortion, loans, or blood donation. More relevant is his claim that “our personal choices are always enmeshed in networks of social connection and obligation, whether we want them to be or not”, which is his actual objection: a belief that the second-order effects here are more harmful than the direct positive benefits. For myself, I’m always suspicious of such arguments, because in most cases, I don’t think anyone is capable of calculating accurately enough to decide how the higher-order effects will actually sum out in the end, which means they are usually a hand-wavy way of getting to a desired conclusion that is difficult to challenge. In light of this uncertainty, I generally err on the side of supporting the individual’s choice to decide for themselves what is best for their own utility function.
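To make this concrete, here is a minimal sketch of what an expected-value calculation over a subjectively parameterized utility function might look like. This is my own toy model, not anything from Page or Leslie; the names (`Outcome`, `loss_aversion`, `person_weight`) and all numbers are illustrative assumptions. It wires together the parameters mentioned above: subjective probability (you are not omniscient), hyperbolic discounting, loss aversion, and per-person aggregation weights.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of a choice: a utility change for someone, sometime."""
    probability: float    # subjective estimate, since you are not omniscient
    utility_delta: float  # change in that person's utility (can be negative)
    years_ahead: float    # how far in the future the effect lands
    person_weight: float  # subjective aggregation weight; no "correct" value exists

def expected_utility(outcomes, discount_rate=0.1, loss_aversion=2.0):
    """Score a choice under one agent's subjective parameters.

    discount_rate: hyperbolic discounting of distant effects
    loss_aversion: losses count this many times more than equal gains
    """
    total = 0.0
    for o in outcomes:
        value = o.utility_delta
        if value < 0:
            value *= loss_aversion                  # losses loom larger than gains
        value /= 1 + discount_rate * o.years_ahead  # hyperbolic discount
        total += o.probability * o.person_weight * value
    return total

# Two choices that differ only in who bears the risk:
risky = [Outcome(0.9, 10, 0, 1.0), Outcome(0.1, -50, 0, 1.0)]
safe  = [Outcome(1.0, 3, 0, 1.0)]
print(expected_utility(risky), expected_utility(safe))  # -1.0 3.0
```

With `loss_aversion=2.0` the safe option wins; set it to 1.0 and the ranking flips, which is the sense in which moral disagreement reduces to disagreement over parameters rather than over the machinery.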
Relevant: a Moral Mayhem episode that is supposedly a defense of finance, but the interesting part is the first half, on EA and utilitarianism.
Also relevant, Erik Hoel on authenticity. There was a tweet that I saw a while ago (that I can’t find), a poll asking which of the three you most view yourself as: a state, a set of values, or a process. In my current understanding, your full self is the area in possibility space that includes your past, extending out into all of the possible branches you have in your future. Here your current position is a state, your values are the slope in the currently open branches (velocity vectors), and the process describes how external forces will be translated into changes in position or slope. There is a loop to conscious existence: at every position, what are my values, which determine my current goal, how do I reach that destination, which may change my values.
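As a toy gloss on this framing (entirely my own; the `yielding_process` function and its `plasticity` parameter are made up for illustration), the loop reads like a simple dynamical system: move along your current values, then let the process translate external forces into a new slope.

```python
def step(position, values, force, process, dt=1.0):
    """One tick of the loop: move along the current values (velocity),
    then let the process translate the external force into a new slope."""
    new_position = [p + v * dt for p, v in zip(position, values)]
    new_values = process(new_position, values, force)
    return new_position, new_values

def yielding_process(position, values, force, plasticity=0.1):
    """A hypothetical process: external forces slowly pull values toward
    themselves, i.e. where you are pushed changes what you want."""
    return [v + plasticity * (f - v) for v, f in zip(values, force)]

position, values = [0.0, 0.0], [1.0, 0.0]  # current state, current slope
for t in range(3):
    position, values = step(position, values, force=[0.0, 1.0],
                            process=yielding_process)
    print(t, position, values)
```

Here different choices of `process` correspond to different kinds of people: a rigid process ignores the force and holds its values; a fully plastic one is indistinguishable from its environment.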
Scott Alexander with a utilitarian analysis of incarceration (also a tweet thread). It’s weird how much statistical debates on controversial topics are viewed only at an abstract level. Generally, I view trying to optimize the relation between two aggregate variables as a mistake when working with processes or systems, since the goal is to maximize efficiency, which you do by targeting the highest-leverage point. The story about shoplifting deterrence is a case in point: a slap on the wrist is indeed enough to deter most people, but applied generally it lets serial offenders loose, so clearly you can’t just apply a single policy to everyone and expect that to work. Similarly, if a ten-strike rule worked in the Netherlands, why didn’t three strikes work in California? Presumably because the bottleneck in California is not in the sentencing or incapacitation of repeatedly convicted criminals, but somewhere else, perhaps in enforcement or prosecution. And if prison space is a constraint, why not scale all sentences to match the expected utility gain from incapacitation, like scaling incarceration lengths by prior sentence count, rather than using seemingly arbitrary mechanisms like mandatory minimums or n-count cutoffs (this sort of exists via judicial discretion, but not systematically)?
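A toy contrast of the two mechanisms (my own sketch, not Scott Alexander’s model; the `multiplier` and the 25-year strike figure are illustrative assumptions): smooth scaling by prior count versus a discontinuous n-count cutoff.

```python
def scaled_sentence(base_years, prior_count, multiplier=0.5):
    """Smooth scaling: each prior sentence raises the new one by a fixed
    fraction of the base, tracking the rising incapacitation value."""
    return base_years * (1 + multiplier * prior_count)

def three_strikes_sentence(base_years, prior_count, strike_years=25):
    """Cutoff scaling: identical sentences until the nth offense, then a
    discontinuous jump (the mandatory-minimum style)."""
    return strike_years if prior_count >= 2 else base_years

for priors in range(5):
    print(priors, scaled_sentence(2, priors), three_strikes_sentence(2, priors))
```

The smooth version spends prison space roughly in proportion to expected reoffending; the cutoff version under-punishes everyone just below the threshold and over-punishes everyone just above it.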
Zvi outlines the case for repealing the Jones Act.
Kasra on leaving the Eternal September that is NYC.