#15: Nuanced communication is hard; On Seriousness; Rorschach; How to become the best in the world
January 2022
Nuanced communication is hard
Excerpts from a fascinating Twitter thread by Dan Luu describing why nuanced communication usually doesn't work at scale.
When I joined Azure, I asked people what the biggest risk to Azure was and the dominant answer was that if we had more global outages, major customers would lose trust in us and we'd lose them forever, permanently crippling the business. Meanwhile, the only message VPs communicated was the need for high velocity. When I asked why there was no communication about the thing considered the highest risk to the business, the answer was that if they sent out a mixed message that included reliability, nothing would get done.
The fear was that if they said that they needed to ship fast and improve reliability, reliability would be used as an excuse to not ship quickly and needing to ship quickly would be used as an excuse for poor reliability and they'd achieve none of their goals.
If I write a blog post and 5% of readers get it and 95% miss the point, I view that as a good outcome since it was useful for 5% of people. … But it's different if you run a large org. If you send out a nuanced message and 5% of people get it and 95% of people do contradictory things because they understood different parts of the message, that's a disaster.
This is why, despite being widely mocked, "move fast & break things" can be a good value. It conveys which side of the trade-off people should choose. A number of companies I know of have put velocity & reliability/safety/etc. into their values and it's failed every time.
MS leadership eventually changed the message from velocity to reliability. First one message, then the next. Not both at once.
On seriousness
A provocative essay by Katherine Boyle argues that America has lost its “seriousness”; it has been replaced by irony and mockery. The essay is not easy to summarize and is worth reading in full, but I’ll excerpt her excerpt of David Foster Wallace’s prescient insight (emphasis mine):
Irony and cynicism were just what the U.S. hypocrisy of the fifties and sixties called for. That’s what made the early postmodernists great artists. The great thing about irony is that it splits things apart, gets up above them so we can see the flaws and hypocrisies and duplicities. The virtuous always triumph? Ward Cleaver is the prototypical fifties father? "Sure." Sarcasm, parody, absurdism and irony are great ways to strip off stuff’s mask and show the unpleasant reality behind it. The problem is that once the rules of art are debunked, and once the unpleasant realities the irony diagnoses are revealed and diagnosed, "then" what do we do? Irony’s useful for debunking illusions, but most of the illusion-debunking in the U.S. has now been done and redone. Once everybody knows that equality of opportunity is bunk and Mike Brady’s bunk and Just Say No is bunk, now what do we do? All we seem to want to do is keep ridiculing the stuff. Postmodern irony and cynicism’s become an end in itself, a measure of hip sophistication and literary savvy. Few artists dare to try to talk about ways of working toward redeeming what’s wrong, because they’ll look sentimental and naive to all the weary ironists. Irony’s gone from liberating to enslaving. There’s some great essay somewhere that has a line about irony being the song of the prisoner who’s come to love his cage. -Conversations with David Foster Wallace, Stephen J. Burn
Katherine also argues that “immigrants often make the best Americans, taking the [American] experiment more seriously”. And yet, immigrants and entrepreneurs are being vilified in today’s America. The criticism that tech workers face in the Bay Area for increasing house prices or traffic or “changing the culture” is unabashedly hypocritical, ignoring the fact that the Bay Area’s prosperity was significantly created by those same entrepreneurs and tech workers.
Rorschach
Mike Solana argues that how we think about technology and its future applications, especially artificial intelligence (whether through a dystopian or an optimistic lens), is a Rorschach test: our opinions, hopes, and fears are a mirror of our base instincts, not a reasoned analysis.
…as we try and understand the difference between the most intelligent human who has ever lived and a hypothetical god-like intelligence born of the Singularity, let us set our difference in intelligence at a conservative ‘1000x.’
How does one even begin to conceive of a being this smart?
Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. To the question of “mind,” each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?
The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — can not, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.
I don’t fear artificial intelligence, I fear people who fear artificial intelligence.
I think this is an interesting take, but most people are probably more afraid of “the unknown” than of their projected selves. Further, even if we never reach the Singularity, sufficiently powerful AIs can lead to extreme inequality and authoritarian regimes. We don’t need to look in the mirror to fear the development of such regimes in the presence of super-centralized power; examples are numerous in our present and past.
How to become the best in the world
Becoming the best at any one skill takes enormous effort at a minimum (and likely also a healthy dose of luck, genetics, or other external factors you can’t control).
But Tomas Pueyo has a suggestion:
…trying to be the best at one thing isn’t the smartest path to success. Instead, you should put your effort into mastering a combination of skills. The solution is skill stacking, a concept popularized by Scott Adams.
It’s easier and more effective to be in the top 10% in several different skills—your “stack”—than it is to be in the top 1% in any one skill.
The best skills to choose are those that don’t tend to go together, but complement each other well. For example, engineers aren’t known to be great public speakers, so those who are have a huge professional advantage.