#17: Expected value vs hit rate; The necessity of a positive failure rate; Winner's curse; Copenhagen interpretation of ethics; Searching for outliers
April-October 2022
Returning after a long hiatus. Hello again :)
Maximize expected value, not hit rate
A large part of our lives revolves around oversight systems, such as governance and regulation, where maximizing hit rate is what counts. In a power-law world, however, maximizing expected value (EV) is the optimal thing to do: you only need a few people or projects to achieve their maximum potential.
This is analogous to poker, where we should focus on making EV-positive bets rather than on the outcome of any single hand. Similarly, for social programs like Universal Basic Income (UBI), we should not worry about the X% of recipients who produce less; the program can be EV-positive as a whole if it gives a few people the opportunity to realize their "power law" talent.
Corollary: Failure rate should not be zero. If your failure rate is very low, you are not being ambitious enough.
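A toy Monte Carlo sketch of the difference (all numbers here are illustrative assumptions, not from the text): a bet that hits 90% of the time can be worth less than a longshot that hits only 5% of the time, because EV is hit rate times payoff.

```python
import random

random.seed(0)

def simulate(hit_rate, payoff, n=100_000, stake=1.0):
    """Average return per unit staked over n independent bets."""
    total = sum(payoff if random.random() < hit_rate else 0.0 for _ in range(n))
    return total / (n * stake)

# Hypothetical strategies: a "safe" high-hit-rate bet vs. a power-law longshot.
safe = simulate(hit_rate=0.90, payoff=1.5)     # analytic EV = 0.90 * 1.5 = 1.35
longshot = simulate(hit_rate=0.05, payoff=40)  # analytic EV = 0.05 * 40  = 2.00

print(f"safe:     {safe:.2f} per unit staked")
print(f"longshot: {longshot:.2f} per unit staked")
```

Despite failing 95% of the time, the longshot strategy returns more per unit staked, which is the corollary above in miniature.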
Failure is necessary for a thriving ecosystem
The reality is that the failure rate of startups is part of what makes the ecosystem work: the recycling of capital and talent, and the communication of important information about which markets are viable and which are not. Without that recycling and that signal effect, you can really damage the ecosystem.
So if the failure rate within, say, a two-year period of seed-funded startups went from 75% to 30%, because there's tons of government money flowing in, it's going to massively distort the market. It's going to keep good people working on bad things for longer, and it's going to make it much, much harder for people with good ideas to succeed.
Winner's curse
Winner's curse is a phenomenon that may occur in common value auctions, where all bidders have the same ex post value for an item but receive different private ex ante signals about that value. The winner is the bidder with the most optimistic evaluation of the asset, and will therefore tend to overestimate and overpay.
Accordingly, the winner will be "cursed" in one of two ways:
either the winning bid will exceed the value of the auctioned asset, making the winner worse off in absolute terms, or
the value of the asset will be less than the bidder anticipated, so the bidder may garner a net gain but will be worse off than anticipated.
Savvy bidders will avoid the winner's curse by bid shading, or placing a bid that is below their ex ante estimation of the value of the item.
The severity of the winner's curse increases with the number of bidders, since it becomes more likely that some of them have overestimated the auctioned item's value.
Examples of auctions where the winner's curse is significant include spectrum auctions, IPOs, pay-per-click online advertising, and federal offshore oil leases.
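The mechanics above can be sketched with a small simulation (all parameters are illustrative assumptions): each bidder receives an unbiased noisy signal of the common value and naively bids their signal; the winner's average overpayment is positive and grows with the number of bidders.

```python
import random

random.seed(1)

def winners_overpayment(n_bidders, true_value=100.0, noise=20.0, trials=10_000):
    """Average amount by which the winning (naive) bid exceeds the true value.

    Each bidder receives an unbiased noisy signal of the common value and
    naively bids that signal; the highest signal wins the auction.
    """
    total = 0.0
    for _ in range(trials):
        winning_bid = max(random.gauss(true_value, noise) for _ in range(n_bidders))
        total += winning_bid - true_value
    return total / trials

for n in (2, 5, 20):
    print(f"{n:2d} bidders: winner overpays by {winners_overpayment(n):.1f} on average")
```

Even though every individual signal is unbiased, selecting the maximum signal builds in a systematic overestimate, which is exactly why bid shading is rational.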
A better way to divide the pie
Barry Nalebuff suggests a fairer way to negotiate when the participants are starting from different power positions.
…most people end up being confused over what their negotiation is really about…They focus on the whole pizza pie, not the relevant negotiation pie. Once you frame the negotiation in terms of the relevant pie, the logical conclusion is that the relevant part of the pie should be divided evenly.
Insights from the Hacker News discussion:
I don't agree with the simplistic conclusion in the article. It only considers fairness within the context of a single transaction or negotiation; it does not consider what happens when this strategy is repeated between the same actors across multiple negotiations.
The base case, in which Alice and Bob don't reach an agreement, has a specific name: the BATNA (Best Alternative To a Negotiated Agreement).
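Nalebuff's split can be sketched in a few lines (the names and numbers below are hypothetical, not from the article): subtract both parties' BATNAs from the total deal value to get the relevant pie, then divide that pie evenly on top of each party's BATNA.

```python
def split_the_pie(total, batna_a, batna_b):
    """Nalebuff-style split: each side keeps its BATNA (what it would get
    with no deal), and the surplus created by agreeing is divided evenly."""
    pie = total - (batna_a + batna_b)  # the "relevant negotiation pie"
    if pie <= 0:
        return None  # no deal: agreeing creates no value over the BATNAs
    return batna_a + pie / 2, batna_b + pie / 2

# Hypothetical deal worth 12, where Alice's BATNA is 3 and Bob's is 1:
# the relevant pie is 8, so Alice gets 3 + 4 = 7 and Bob gets 1 + 4 = 5.
print(split_the_pie(12, 3, 1))
```

Note that the split of the *total* is unequal (7 vs 5), but the split of the *relevant pie* (the value the agreement itself creates) is even, which is the article's point.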
The Copenhagen Interpretation of Ethics
The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time – until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster.
This post offers a series of examples (original) where people were blamed for trying to help, even when they had a tangibly positive impact on a subset of other people.
Scott Alexander has also written in the past about Newtonian ethics:
…At a distance of a ten meters – the distance of his house to the nearest of their hovels – this is monstrous and abominable. Now imagine that same hundredth person living in New York City, some ten thousand kilometers away. It is no longer monstrous and abominable that he does not help the ninety-nine villagers left in the Congo. Indeed, it is entirely normal; any New Yorker who spared too much thought for the Congo would be thought a bit strange…
Searching for outliers
Light-tailed distributions most often occur when the outcome is the sum of many independent contributions, while heavy-tailed distributions often arise from processes that are multiplicative or self-reinforcing.
In a light-tailed distribution, outliers don’t matter much; in a heavy-tailed distribution, outliers matter a lot. Because of this, heavy-tailed distributions are much less intuitive to understand or predict.
A heavy-tailed distribution is one where the top few percent of outcomes are a large multiple of the typical or median outcome. A classic example would be Vilfredo Pareto’s finding that about 80% of Italy’s land was owned by 20% of the population.
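A minimal simulation of this contrast (the distributions and sample sizes are chosen purely for illustration): summing independent random steps yields a light tail, while multiplying the same steps yields a heavy one, as measured by the ratio of the top-1% mean to the median.

```python
import math
import random

random.seed(2)

def top1pct_over_median(samples):
    """Ratio of the mean of the top 1% to the median: a rough tail-heaviness measure."""
    s = sorted(samples)
    top = s[int(0.99 * len(s)):]
    return (sum(top) / len(top)) / s[len(s) // 2]

n, k = 20_000, 30
# Additive process: sum of 30 independent uniform steps (light-tailed, roughly normal).
additive = [sum(random.uniform(0, 2) for _ in range(k)) for _ in range(n)]
# Multiplicative process: product of 30 independent uniform growth factors
# (heavy-tailed, roughly lognormal).
multiplicative = [math.prod(random.uniform(0, 2) for _ in range(k)) for _ in range(n)]

print(f"additive       top-1% / median: {top1pct_over_median(additive):.1f}")
print(f"multiplicative top-1% / median: {top1pct_over_median(multiplicative):.1f}")
```

In the additive case the top 1% is only modestly above the median; in the multiplicative case it is orders of magnitude above it, which is why outliers dominate heavy-tailed outcomes.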
A subtlety here is that the traits that make a candidate a potential outlier are often very different from the traits that would make them “pretty good,” so improving your filtering process to produce more “pretty good” candidates won’t necessarily increase the rate of finding outliers, and might even decrease it. Because of this, it’s important to filter for “maybe amazing,” not “probably good.”
It’s very common for people sampling from heavy-tailed distributions to focus on “ruling out” candidates instead of “ruling in,” which is likely to be a bad approach…it’s generally true that it’s easier to filter for downsides than upsides, because downsides are more legible.
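A toy model of this point (the "solid"/"spark" traits and the outcome function are invented for illustration, not taken from the source): if outcomes are driven by a multiplicative breakout term, filtering candidates on the legible "probably good" trait surfaces far smaller outliers than filtering on the noisy "maybe amazing" trait.

```python
import math
import random

random.seed(3)

n, picks = 10_000, 100
# Each candidate has a legible "solid" trait and a noisy "spark" trait.
candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def outcome(solid, spark):
    # Most value comes from a multiplicative, spark-driven breakout term;
    # the solid trait contributes only a modest additive amount.
    return solid + math.exp(2 * spark)

# Filter for "probably good": take the top candidates by the solid trait.
by_solid = sorted(candidates, key=lambda c: c[0], reverse=True)[:picks]
# Filter for "maybe amazing": take the top candidates by the spark trait.
by_spark = sorted(candidates, key=lambda c: c[1], reverse=True)[:picks]

best_solid = max(outcome(*c) for c in by_solid)
best_spark = max(outcome(*c) for c in by_spark)
print(f"best outcome, filtering on 'probably good': {best_solid:.0f}")
print(f"best outcome, filtering on 'maybe amazing': {best_spark:.0f}")
```

Under these assumptions the "probably good" filter reliably produces decent candidates but misses the breakouts entirely, mirroring the warning about ruling out on legible downsides.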
Source: https://www.benkuhn.net/outliers/