#16: Inoculation effect; Determinants of long term returns; On happiness; Class 1 vs Class 2 problems
February 2022
Inoculation effect
If you give someone a weak argument and they are able to refute it, they will be more resistant to stronger arguments in the future.
What happens if you have a movement you care about and people are making terrible arguments for your side? Then you come along with a better argument…and it fails, because your audience is used to knocking down the bad arguments and doesn’t care to listen to you.
…let’s take an example that’s closer to home. Say you blame anti-vaccine advocates for causing a measles outbreak... Now imagine it turned out that you were completely wrong, and the lack of vaccination didn’t cause the outbreak. Suddenly a bunch of people might be less likely than ever to get vaccinated.
(Source)
Determinants of long term returns
Over the long term, the only thing that matters for an investment's return is growth in some metric of profits. And, for the vast majority of businesses, this means that over time returns are well approximated by growth in sales; since profit margins can't be above 100% except for one-off accounting reasons, ultimately the only source of upside is the top line.
But when you model a business, it's completely irresponsible to just extrapolate growth rates…the responsible thing is to figure out the size of the company's market and measure the convergence of growth by sketching out an S-curve, where the business hits some natural market share after which it's not profitable or possible to expand.
For a growing market, you can give the market its own S-curve. A top-down model of Spotify might start with global music spending, then look at an S-curve for streaming within that, then apply an S-curve for Spotify's growth within that. This is also why growth companies in new industries can be so volatile: a bad quarter forces investors to update not just the S-curve for the company’s share but the S-curve for what the company is gaining share in.
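The nested S-curve approach above can be sketched as a tiny model. Everything here is illustrative: the market size, ceilings, midpoints, and steepness values are invented assumptions for the sake of the example, not real Spotify or industry data.

```python
import math

def s_curve(t, ceiling, midpoint, rate):
    """Logistic S-curve: approaches `ceiling` as t grows, crosses
    ceiling/2 at `midpoint`; `rate` controls steepness."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def modeled_revenue(year):
    """Hypothetical top-down model with made-up parameters:
    total market x streaming's share of it x the company's share of streaming."""
    market = 30e9  # assumed global music spend ($), held flat for simplicity
    streaming_share = s_curve(year, ceiling=0.85, midpoint=2020, rate=0.4)
    company_share = s_curve(year, ceiling=0.35, midpoint=2022, rate=0.5)
    return market * streaming_share * company_share

for year in (2015, 2020, 2025, 2030):
    print(f"{year}: ${modeled_revenue(year) / 1e9:.1f}B")
```

Note how the model bakes in convergence: revenue can never exceed market × 0.85 × 0.35, so a bad quarter that forces you to lower either ceiling (not just the company's share, but streaming's share of music spend) compounds into a much bigger revision of the terminal value.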
(Source: Byrne Hobart’s newsletter)
On Happiness
From “The Subtle Art of Not Giving a Fuck”:
There is a premise that underlies a lot of our assumptions and beliefs…that happiness is algorithmic, that it can be worked for and earned and achieved, as if it were getting accepted to law school or building a really complicated Lego set. If I achieve X, then I can be happy. If I look like Y, then I can be happy. If I can be with a person like Z, then I can be happy.
This premise, though, is the problem. Happiness is not a solvable equation. Dissatisfaction and unease are inherent parts of human nature and, as we’ll see, necessary components to creating consistent happiness.
The above is also consistent with the “set point of happiness” theory:
The set point for happiness is a psychological term that describes our general level of happiness. Each of us has a different set point—some have a high set point, meaning we are mostly happy; some of us have a low set point, meaning we are mostly unhappy; while others fall somewhere in between. Our set point for happiness is based on our genetics and conditioning. While we may have emotional ups and downs throughout our lives, these are temporary. No matter what life throws at us, over time, our happiness bounces back to the same set point.
Long-term happiness is rooted in internal circumstances, not external ones.
If boosting your set point for happiness interests you, then doing so is an inside job. This is empowering because it puts your happiness in your control.
And from the Three Times Wiser newsletter:
True happiness occurs only when you find the problems you enjoy having.
I believe that beyond a minimal level of subsistence and safety, the set-point theory applies to [almost] everyone, and is one of the nicest sociological phenomena in the world. Without it, the rich/successful would always be happy and the poor always unhappy.
Class 1 vs Class 2 problems (vs Class 3?)
There are two classes of problems caused by new technology. Class 1 problems are due to it not working perfectly. Class 2 problems are due to it working perfectly.
One example: many of the current problems with facial recognition are due to the fact that it is far from perfect. It can have difficulty recognizing dark skin tones; it can be fooled by simple disguises; it can be biased in its gendering. All these are Class 1 problems because this is still a technology in its infancy. Much of the resistance to widely implementing facial recognition stems from its imperfections.

But what if it worked perfectly? What if the system were infallible in recognizing a person from just their face? A new set of problems emerges: Class 2 problems. If face recognition worked perfectly, there would be no escaping it, no way to duck out in public. You could be perfectly tracked in public, not only by the public, but by advertisers and governments. “Being in public” would come to have a different meaning than it does now. Perfect facial recognition would probably necessitate some new varieties of public commons, with different levels of disclosure. Furthermore, if someone could hack the system, its very trustworthiness would be detrimental. A faked ID could go far. We don’t question perfect tech; when was the last time you questioned the results of a calculator?
Class 1 problems arise early and they are easy to imagine. Usually market forces (entrepreneurial spirit and profit-motive) are perfectly capable of solving them.
Class 2 problems are much harder to solve because they require more than just the invisible hand of the market to overcome them…These kinds of systemic challenges require a suite of extra-market levers, such as emerging cultural norms, smart regulation, broad education, and reframing of the problem. (Source)
From the comments on the above article:
I would like to propose a third class of problem: the Class 3 problem is an unintended side-effect of the technology. Class 3 problems are often bad to start with, but become insidious (with the potential to become catastrophic) once a technology transitions from Class 1 to Class 2. Examples: Thalidomide, asbestos.
It could be argued that Class 3 problems are really Class 1 problems, since the technology is not functioning perfectly, but…the technology is working perfectly for its intended purpose, but contains an adverse side-effect which does not affect its intended functioning.
Class 3 problems demonstrate the importance of government regulation on a free market. eg: Thalidomide was not approved by the US FDA, preventing much of the tragedy that some other nations encountered when the drug was discovered to cause severe birth defects.

