This is a special post for quick takes by Flaglandbase. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

So I was banned from commenting on LessWrong...

My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil is things that don't work but can't be avoided. A type of invincible stupidity.

For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, and unpredictable, and above all by the freezing and crashing.
The unusability of software is a kind of man-made implacability. It can't be persuaded or reasoned with. Omnimalevolence as an emergent property.
Software is just a microcosm of society.

The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I'm not allowed to link either).
There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.
When I was banned from LessWrong commenting (after two requests to be reinstated), they said such talk was "weird". Weird does NOT automatically mean wrong! 

Studying the evilness of human-designed interfaces might reveal why the world has always sucked.
Seemingly simple things (like easy interfaces) are still absolutely impossible today. Only the illusion exists, and not for me.
Does that mean that seemingly impossible things (like an intelligence explosion) will turn out to be simple reality tomorrow? 
Maybe. Heck PROBABLY. But maybe not.

The fact that it's so difficult to make even the simplest systems not suck may mean that much larger systems won't work either.
In fact, it's certain that many unexpected things will go wrong before then.
The only way to get transhuman AIs to work MAY be by connecting many existing smaller systems, perhaps even including groups of humans.
 

Raemon, 1y (Moderator Comment, 71 points):

Hey Flagland, I feel a bit bad about how this played out, but after thinking more and reading this, the mod team has decided to fully restrict your commenting permissions. I don't really expect you posting about your interests here on shortform to be productive for you or for LW.

We're also experimenting more with moderating in public so it's clearer to everyone where our boundaries are. (I expect this to feel a bit more intense for the person getting moderated, but to probably be better overall for transparency.)

> My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are. [...] The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I'm not allowed to link either).
>
> There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.

To be clear, I think your topics have been totally fine things to think about and discuss on LessWrong. The problem is that, well, ranting and hate-filled screeds just aren't very productive most of the time. If it seemed like you were here to think clearly and figure out solutions, that'd be a pretty different situation.

I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.