This analysis is relevant to understanding political fights online. Most activist groups are smart enough to realize that they need to handle tradeoffs in their personal lives. But when it comes to shaping society's priorities, activists push for one clear-cut top goal. That's why calls for nuance, or for pointing out that something is being overdone, don't sit well with them: such suggestions risk sending mixed signals and creating a permissive norm where neglecting one value is fine as long as you're attending to another. This framing rests on the assumption that political debates like these are mostly about setting the rules for ordinary people who can't handle nuanced thinking.
If this analysis really captures what each side in these fights objects to about the other, I wonder whether it would be helpful to address that directly. Often, when I get pulled into these debates, my underlying feeling is that the other side is overlooking a genuine tradeoff and making bad decisions because they're not thinking at the margin. But if they're actually assuming that society can only focus on one priority at a time and arguing for what they believe is the most important one, then I'm getting worked up over a totalizing worldview they don't really hold. Instead, I should be arguing about why I think some other value matters more.
It'd be great if people could and would prioritize handling nuance and tradeoffs in political discussions. Maybe we should make that society's top priority?
This is also relevant to understanding why off-the-cuff takes like "what if corporations are the real superintelligence" or "why can't we solve AGI alignment the same way we solved 'aligning corporations'?" are so wrong.
Corporations are not superintelligences. They are, in fact, extremely stupid, much stupider than the sum of their parts (a million corporate employees add up to far less than a single human who is a million times smarter), suffer from severe diseconomies of scale, and are subject to only the weakest forms of natural selection, because their inability to replicate themselves reliably leads to a permanent, very large dispersion in efficiency and quality between corporations. (You will never see a single especially well-run corporation take over most of the business world the way you repeatedly saw more-fit COVID variants drive lesser variants to extinction.) They are so stupid that they cannot walk and chew bubblegum at the same time, and must choose, because they can only have one top priority at a time - and CEOs exist mostly to repeat that top priority: "we do X".
Why, then, do we have corporations, and why do they have any real-world power at all? Because they are simply very large, parallel, and potentially immortal, and are the least-bad organizations human minds can reliably form at present, given the blackbox of human minds and the inability to copy them. Not because they are optimal or intelligent.
...the least-bad organizations human minds can reliably form at present, given the blackbox of human minds and the inability to copy them.
And given the SEC.
If you have a thousand organisations, each pushing in a different cardinal direction in some high-dimensional space, each gaining backing and making progress in proportion to how important its issue is to how many people, that looks a lot like some sort of gradient descent. Maybe this sort of single-issue focus isn't as inefficient as it might appear?
There are plenty of ways this analogy can break down, and also plenty of ways it can go wrong even within the analogy. A major victory in one direction can easily "overshoot" into a highly sub-optimal state (e.g. revolution), or various factors can consolidate a lot of update power into just two opposed directions (e.g. polarized two-party states).
Plus, of course, gradient descent is generally based on some error function that can be evaluated precisely and that doesn't change while you're trying to optimize; neither is true in politics, so the analogy is far from perfect.
It's a reasonable model. One problem with it as a predictive model, however, is that log-rolling happens across issues: a politician might give up on budget-cutting to kill an anti-business provision, or trade away an environmental rule to increase healthcare spending. So the gradients aren't actually single-valued; there's a complex correlation/tradeoff matrix between them.
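To make the analogy (and its limits) concrete, here is a toy sketch in Python. Everything in it, the weights, the targets, and the cross-issue coupling matrix, is invented purely for illustration; it is not a model of any real political system.

```python
# Toy model: many single-issue groups, each pushing along one axis of a
# high-dimensional "policy space". The weights, targets, and coupling
# matrix are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_issues = 1000                          # one axis per single-issue organisation
state = np.zeros(n_issues)               # society's current position
target = rng.normal(size=n_issues)       # where each group wants its own axis to be
weight = rng.exponential(size=n_issues)  # how many people care about each issue

# Log-rolling / tradeoffs: a push on one axis bleeds into the others,
# so the effective "gradient" per issue is not single-valued.
coupling = np.eye(n_issues) + 0.005 * rng.normal(size=(n_issues, n_issues))

def implicit_loss(x):
    # The weighted dissatisfaction the process behaves as if it were minimizing.
    return 0.5 * np.sum(weight * (x - target) ** 2)

lr = 0.05
for _ in range(200):
    # Each group pushes only along its own axis, proportional to its support...
    per_issue_push = weight * (target - state)
    # ...but the pushes get mixed together by cross-issue deals.
    state += lr * coupling @ per_issue_push

print(f"dissatisfaction: {implicit_loss(np.zeros(n_issues)):.1f} -> {implicit_loss(state):.1f}")
```

With the coupling matrix set to the identity, each group's push is just the partial derivative along its own axis, and adding them up gives plain gradient descent on the weighted loss; the off-diagonal noise is a crude stand-in for the log-rolling point above, where progress on one issue depends on what's being traded on the others.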
It seems like large organizations achieve structure through a combination of legislation and value-setting. They use policies and rules to legislate nuance, but rely on a single value to steer daily decision-making. This whole analysis really needs to be understood as being about the daily decision-making piece of the puzzle.
I think this ignores how decisions actually get made, but I think we're operating at too high a level of abstraction to actually disagree productively.
Arguably EA/Rationality needed much simpler and less nuanced messaging on how to deal with AI capabilities companies. We really should have gone with 'absolutely do not help or work for companies increasing AI capabilities; only work directly on safety.' Nuance is cool and all, but the nuanced messaging arguably just ended up enabling Anthropic and OpenAI.
I think internal communication in organizations like MS can be pretty bad. There are almost no attempts to build shared frames or to write documentation on object-level questions, let alone frames for understanding how the parts of the organization interact.
I think that forms an obstacle to communication. If people don't have a shared set of concepts, it seems inherently hard to communicate. I would be interested in whether more nuance could be achieved if more of that philosophical legwork were done.
Isn't this a bit tautological? After all, by definition 'top priority' implies a singular 'top'...
I do think there's a bit more lurking here; the basic implication of Dan Luu's tweets is that you can have only one priority, period: with two, it's already a mess and nothing gets done, and it gets worse as the number of priorities grows.
If the implication is that people can't have secondary priorities of lower importance, then that seems just false?
Have you read the post? It specifically says this is for big organizations, and not relevant to small ones (or by extension individuals).
The post lays out a decent argument for why organizations can't maintain two priorities of roughly the same importance.
But I don't see why that bars the possibility of secondary priorities that are clearly stated to be much less important?
I see what you’re saying - I thought you were referring to individual people. I’m pretty sure we all agree here and this is just a semantics thing.
h/t to rpglover64, who pointed me towards this Twitter thread in this comment.
Here's Dan Luu's take on what happens when orgs try to communicate nuanced priorities. (Related to my You Get About Five Words post)