Other key things going on with Switzerland, at least according to my vague impressions:
All except 7 and 10 are fine ways to make a point; you don't need to puff them up through a straw. I think people should be ok with sharing sentence-length or comment-length ideas without expanding them into post-length ones (or book-length, ugh, there are so many books that could have been comments or posts).
7 falls to Betteridge's law. You should've known when you wrote that title :-) Anyway I live in Switzerland, it has very many good aspects but frontier-like freedom isn't one of them, neither in law nor in practice nor in vibes.
10 sounds like you actually have some cool info to share, so go ahead and share it :-)
Well, one thing I would like to figure out in the process of writing many of these is whether they are true!
My go-to example is cheese. I still have a vivid memory of going to a US supermarket and buying a packet of Kraft... something, then coming back to my hotel room, taking a bite of the thing and becoming horrified. In Switzerland every cheese-like thing in the supermarket is actually cheese.
Add to that universal compulsory health insurance, public transport everywhere, laws making it difficult to fire or evict a person, minimum capital of 20K CHF to start an LLC, and you'll see clearly whether Switzerland is libertarian or not.
(In case it's not clear, I think Switzerland's non-libertarian approach is a very good thing overall. With the exception of policies that make it harder to build more housing, which are as bad as everywhere else.)
Should you maybe just pretend that every conversation is good faith anyways?
Some communities have "assume good faith" as a rule/guideline and that seems to often work. I think if someone is defensive/triggered, responding in kind tends to make the person dig in more. If you instead assume the person is engaging in good faith, that may draw them into a mode of good faith engagement even if that wasn't their original posture.
Though this is potentially exploitable if the other person never does shift, so you need to make sure you don't stay in good-faith mode unconditionally.
If I see another person complaining that no one reads their post when it starts with a 15-sentence epistemic status meta-commentary that doesn't give me any reason to want to keep reading, I will explode into a violent burst of fury and laser beams.
Oh yeah, this one is definitely about me.
From Scott Alexander's recent post (13: Runway):
Your audience chose to read you for some reason. Maybe you had a catchy title. Maybe someone they liked recommended you. Maybe the algorithm placed your post in front of their eyes while they sat there drooling and immobile. They had some hope that reading you would be mildly more interesting than the alternative. That’s your runway. It will last a few sentences to a few paragraphs before they drift off. Don’t waste your first few paragraphs defining something everyone already knows the definition of, or telling a rambling story about why you decided to write this.
and I fucking told you so
But you didn't tell us...
some of my old colleagues at CEA went to found FTX which was a Very Bad Sign but I felt like saying something publicly was a big no no by EA norms
Courts are amazing
I'll say something stronger: rationality reckons with econ but badly needs a reckoning with law, the thing that tries to actually do CEV for real humans and societies.
In an ideal world, their intersection helps people price risk, and do exception handling on risk mismanagement. In the real world, it is coopted in order to obfuscate risk and increase internality/externality arbitrages.
I like your attitude. I have some thoughts about two of your mini-posts:
1: Indirectness and illegibility are both defences. This problem will get way worse as governments and the autism-cluster of people increase the legibility of society in order to track, measure and 'optimize' more. Boldness and honesty are punished, because one is increasingly (in modern times) guilty by default, so discourse becomes increasingly subtle and murky. Even though the preference for open information is the norm on LW, I consider it perverse. Indirectness is basically clothing. It's also taste (which is why we say 'I need to go to the bathroom' rather than 'I need to take a shit'). Information hiding is common even in engineering, and the desire to hide one's 'private variables' from outsiders is socially, aesthetically and strategically valid.
9: No, 'optimization' for appearance is mostly evil (the exception is good-natured deception, i.e. art). Good titles do not correlate with good posts strongly enough. The incentive towards clickbait is not a healthy one and I do not want to participate in it. If the algorithm sorts by quality of the title, then it does not sort by quality of the associated post (popularity is also not a good metric, though the populace will disagree). I also recognize this 'one has to prove oneself to the audience' stance as herd behaviour (e.g. it's common on Reddit), and the herd (the statistical average) is not a good judge of that which is uncommon (which good, original takes are).
Ben Hoffman's "Blatant Lies are the Best Kind!" is maybe the best post title followed by the least clarifying post I have ever encountered. The title is honestly amazing
Strong disagree. This is one of the worst post titles and a terrible slogan.
...Why?
Like, the title summarizes an at the very least interesting hypothesis to which I have not seen any great counter-arguments. Maybe it's wrong, but it surely doesn't seem obviously wrong? And "one of the worst post titles" seems clearly wrong. It communicates its central thesis quite well and snappily!
Maybe if there had been a clear argument it would've gotten better counter-arguments. I'm not even sure what specific claim you're identifying as the hypothesis, such that it could be true or false.
To sketch a basic "boo blatant lies" argument: It's challenging for a group of people to have epistemic standards because of ambiguity, fallibility (which makes it exceedingly rare for anyone to be perfectly honest), and the challenges of people at some distance from a situation correctly identifying what happened in that situation (especially when they may start off more inclined to trust some people than others, or fallible themselves). Blatant lies - the ones that are easily identifiable as intentional lies even from a distance - are ones that the group can most straightforwardly coordinate on recognizing and responding to. Which is a starting point to build from. (And even in a group that only catches the blatant lies, many repeat epistemic offenders will slip up eventually and commit a blatant lie that can be caught, and many of the ones who are careful not to slip up will at least be constrained to meaningful degree by the loose-but-still-present epistemic standards.)
So, why get so indignant about blatant lies? To coordinate against a subset of bad epistemic practices that it's feasible for us to coordinate against.
And the slogan "blatant lies are the best kind!" seems to be poking against that coordination for epistemic standards. It's like saying "the best kinds of thieves are the ones who get caught and punished" to interrupt someone in the process of apprehending a thief and ask them why they're so worked up about it.
Here's another angle: Donald Trump. He sure does tell a lot of blatant lies. Has that been better for the epistemic environment of US politics than the less direct epistemic shenanigans that other Presidents have pulled? Has it been better for the practice of US politics?
I am a busy man and will die knowing I have not said all I wanted to say. But maybe I can at least leave some IOUs behind.
1) Blatant conflicts are the best kind
Ben Hoffman's "Blatant Lies are the Best Kind!" is maybe the best post title followed by the least clarifying post I have ever encountered. The title is honestly amazing, but the text of the post, instead of the straightforward argument the title promises, is an extremely dense and almost meta-fictional dialogue about the title:
I think we probably should prosecute good lying more than bad lying, though of course that's tricky. I'd argue the same is true for other forms of conflict: passive aggression is worse than overt aggression, maybe, probably. I haven't written the post yet to figure it out, but it seems important to know.
2) Fire codes are the root of all evil
Fire accidents seem to have the unique combination of producing extremely strong emotional responses from people in a local community, while also often being traceable to an O-ring-like failure that you can over-index on. Also, fire marshals are the closest to war heroes that local municipalities have, so good luck going up against them if they are lobbying for a fire code change. This makes fire code decisions often uniquely insane.
For example, did you know that American streets are probably 30% wider than they would otherwise be, because American fire departments insist on needlessly large fire trucks that couldn't navigate narrower streets? And that this has probably non-trivially contributed to larger cars, which are the primary driver of greater road fatalities in the US compared to other countries, alone probably cancelling out practically all welfare gains from stricter fire codes in the last 20 years? At least that's what one fermi estimate I made suggested, but I would need to double-check it before I post this.
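The shape of that fermi is easy to sketch. Every number below is an illustrative placeholder I made up for this sketch, not my actual estimate; the real version would need sourced figures for each input:

```python
# Illustrative fermi sketch. All inputs are placeholder assumptions,
# not sourced figures.

# Assumption: total US road fatalities per year.
us_road_deaths = 40_000
# Assumption: fraction of those attributable to larger vehicles.
large_vehicle_share = 0.10
extra_road_deaths = us_road_deaths * large_vehicle_share  # deaths/year

# Assumption: total US fire deaths per year.
us_fire_deaths = 3_500
# Assumption: fraction averted by the stricter codes of the last 20 years.
code_effect = 0.20
fire_deaths_averted = us_fire_deaths * code_effect  # deaths/year

print(extra_road_deaths, fire_deaths_averted)
print(extra_road_deaths > fire_deaths_averted)  # True under these assumptions
```

Under these made-up numbers the vehicle-size cost dominates the fire-code benefit by a factor of several; the interesting question is whether that ordering survives with real figures plugged in.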
3) It is extremely easy to get people to vouch for you, this makes public character references not very helpful
In like 3 different high-stakes conflicts, I've watched people of questionable moral judgement successfully dispel suspicion by simply... asking someone they barely knew to vouch for them. Apparently you can just ask 10–20 well-respected people to vouch for you, or to say something that'll read to others like vouching for you, and one of them will say yes!
If you see someone publicly under attack, don't update too much if someone you know and respect says something vague like "I have only had good experiences with this person and think they are a high integrity kind of guy".
4) Public criticism need not pass the ITT of the people critiqued
People are basically never the villain of their own story. The concepts and frames that people use to view the world are practically always structured such that there is no simple logical argument or plausible empirical fact that would show that what they are doing is morally wrong, both because of selection effects, and because that would open them up to social attack.
This means if you try to just honestly answer the question of whether what someone is doing is bad for the world, you will almost certainly not do it in a way that makes sense from inside of their frame. Therefore, a discussion standard in which you consistently request that people characterize others in a way that passes their Intellectual Turing Test will systematically fail to notice when people are causing harm.
5) Courts are amazing
Courts are probably the most prevalent formal mechanism of social conflict resolution — it's almost hard to imagine a "legal" system without them. Practically all religions, municipalities, countries, professional communities, and even a substantial fraction of large companies have internal courts.
Unfortunately, there are a few things that make courts pretty tricky to implement in practice for the rationality, AI safety and EA communities. Badly implemented courts can also just make things worse by creating a clear target for attack and pressure. Seems very tricky, but probably we should have more courts (or maybe not, I would need to write the post to figure it out).
6) If your room still sucks after fixing your lights, put some plants in it
If a room feels off, the lighting is probably too "spiky" or too blue. Now, good lighting is not sufficient for good interior design. But do you know what is? Good lighting and live plants.
A room with good lighting and a bunch of plants practically cannot have bad vibes. Brutalist architecture has shown that even a dirty concrete block, wrapped in nice lighting filtered through plants, will look amazing:
[Image: prison cell]
[Image: immediate modern interior decoration design contest winner]
7) Is Switzerland the perfection of American Freedom?[1]
As we all agree after my previously completely uncontroversial post "Let goodness conquer all that it can defend", America invented freedom. Switzerland then took those ideals and tried to refine them. The obvious big difference: Switzerland built its system on civil rather than common law. Did Switzerland perfect freedom or corrupt it? A comparative study of two relatively libertarian successful economies.
8) Most arguments are not in good faith, of course
Look, I love good faith discourse (here meaning "conversation in which the primary goal of all participants is to help other participants and onlookers arrive at true beliefs"). I try really hard all day to create environments where people can have something closer to good faith discourse. But do you know what is a core requirement for creating environments that can sustain good faith discourse? The ability to notice if what is going on is obviously not good faith discourse.
Good faith discourse is rare! People are often afraid and triggered and trying to push towards their preferred policies in underhanded ways. Of course most discussion on Twitter, even among well-meaning participants, is not in good faith. And just because a discussion is not in good faith doesn't mean it isn't valuable, it just means that the conversation is a bit more like two lawyers arguing their case, instead of two trusted colleagues exploring the space of considerations together.
9) Please, for the love of god, optimize the title and first paragraph of your post
If I see another person complaining that no one reads their post when it starts with a 15-sentence epistemic status meta-commentary that doesn't give me any reason to want to keep reading, I will explode into a violent burst of fury and laser beams. Please, your title and your first paragraph need to give me a reason to keep reading. I have a shit ton of stuff to read, as does everyone else. If you haven't somehow communicated that I will get something valuable out of reading your post by the end of your first paragraph, I will not keep reading.
10) A story of my involvement in EA and AI Safety
I've been around for a while! I have pushed for various community policies and priorities. Sometimes I was dumb and wrong, sometimes I was right and prescient and you all owe me so many Bayes points. It would be helpful to have a post of my history with this broad ecosystem.
In short: Luke Muehlhauser told me to do EA community building instead of rationality community building, so I went to work at CEA, had a terrible time and got soured on Oxford EA[2], started LW 2.0, became friends with lots of Bay Area EAs, some of my old colleagues at CEA went to found FTX which was a Very Bad Sign but I felt like saying something publicly was a big no no by EA norms, then FTX exploded and I fucking told you so, then OpenAI and Anthropic seemed like really bad bets, then OpenAI exploded and I fucking told you so, then Anthropic's RSP seemed really dubious, and then that exploded and I fucking told you so, and now here we are.
Honorable mentions of even earlier stage post ideas that didn't make the list:
[1] Daniel Filan pitched me on this post, but I am intrigued enough that I would actually like to write it.
[2] And Leverage Research.