This got deleted from 'The Dictatorship Problem
[https://www.lesswrong.com/posts/pFaLqTHqBtAYfzAgx/the-dictatorship-problem]',
which is catastrophically anxiety-brained, so here's the comment:
This is based in anxiety, not logic or facts. It's an extraordinarily weak
argument.
There's no evidence presented here which suggests rich Western countries are
backsliding. Even the examples in Germany don't have anything worse than the US
GOP produced ca. 2010. (And Germany is, due to its heavy censorship, worse at
resisting fascist ideology than anyone with free speech, because you can't
actually have those arguments in public.) If you want to present this case, take
all those statistics and do economic breakdowns, e.g. by deciles of per-capita
GDP. I expect you'll find that, for example, the Freedom House numbers show a
substantial drop in 'Free' in the 40%-70% range and essentially no drop in
80%-100%.
Of the seven points given for the US, all are a mix of maximally-anxious
interpretation and facts presented misleadingly. These are all arguments where
the bottom line ("Be Afraid") has been written first; none of this is reasonable
unbiased inference.
The case that mild fascism could be pretty bad is basically valid, I guess, but
without the actual reason to believe that's likely, it's irrelevant, so it's
mostly just misleading to dwell on it.
Going back to the US points, because this is where the underlying anxiety prior
is most visible:
1. Interpretation, not fact. We're still in early enough stages that the reality
   of Biden is being compared to an idealized version of Trump - the race isn't
   in full swing yet and won't be for a while. Check back in October when we see
   how the primary is shaping up and people are starting to pay attention.
2. This has been true for a while. Also, in assessing the consequences, it
   assumes that Trump will win, which is correlated but far from guaranteed.
3. Premise is a fact, conclusion is interpretation, and not at all a reliable
   one.
mako yass · 4d
There's something very creepy to me about the part of research consent forms
where it says "my participation was entirely voluntary."
1. Do they really think an involuntary participant wouldn't sign that? If they
   understand that they would, what purpose could this possibly serve other
   than, as is commonly the purpose of contracts, absolving themselves of blame
   and shifting it to the participant? Which would be downright monstrous.
   Probably they just aren't fucking consequentialists, but this is all they
   end up doing.
2. This is a minor thing, but it adds an additional creepy garnish: nothing is
   100% voluntary, because everything is a function of the involuntary base
   reality that other people command force and resources, and since we want to
   use them for things we have to go along with what other people want to some
   extent. I'm at peace with this, and I would prefer not to have to keep
   denying it; it feels like I'm being asked to participate in the addling of
   moral philosophy.
Johannes C. Mayer · 4d
I have a heuristic for evaluating potential writing topics: I especially look
for topics that people are usually averse to writing about. Topics that score
high on this heuristic may be good to write about, since they can yield content
with high utility relative to what is already available, simply because other
content of this kind (and especially good content of this kind) is rare.
Somebody told me that they had read some of my writing and liked it. They said
they liked how honest it was. Perhaps writing about topics selected with this
heuristic tends to evoke that feeling of honesty - maybe just by being about
something that people normally don't like to be honest about, or to talk about
at all. That might at least be part of the reason.
lc · 4d
"No need to invoke slippery slope fallacies, here. Let's just consider the
Czechoslovakian question in of itself" - Adolf Hitler
James Spencer · 4d
WILL INTERNATIONAL AI ALIGNMENT COOPERATION TRUMP THE RIGHTS OF WEAKER
COUNTRIES?
TLDR - REAL COOPERATION ON INTERNATIONAL AI REGULATION MAY ONLY BE POSSIBLE
THROUGH A MUCH MORE PEACEFUL BUT UNSENTIMENTAL FOREIGN POLICY
In 1987 President Reagan said to the United Nations "how quickly our differences
worldwide would vanish if we were facing an alien threat from outside this
world." Isn't an unaligned Artificial General Intelligence that alien threat?
It's easy - and perhaps overly obvious and comforting - to say that humanity
would unite, but now that we have this threat, what would that unity look like?
Here's one not necessarily comforting thought: the weak (nations) will get
trampled further by the strong (nations). If cooperation rather than
competition among powers is vital, then wouldn't we need to prioritise keeping
powerful and potentially powerful countries - at least in AI terms - onside
over other ideological concerns? To see what this looks like, let's consider
some of those powerful countries:
* China - the obvious one: would we need to annoy not only the national
  security hawks over Taiwan, but also decent, humane liberals over Tibet and
  Sichuan?
* Russia - Ukraine would annoy just about everybody
* Israel - Well this happens already because of domestic considerations, but it
might reverse domestic political calculations on:
* UK - the British are a big player in AI (and seemingly more important than
the EU) so would needling them about Northern Ireland really be worth ticking
off the one reliable ally the US has with clout?
This is before looking at the role of countries that may be important in
relation to AI and who the US wouldn't want going rogue on regulation but who
neighbour China - such as Japan, South Korea and the chip superpower Taiwan.