Comments

I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.

So I was banned from commenting on LessWrong . . .

My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil consists of things that don't work but can't be avoided. A type of invincible stupidity.

For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, and unpredictable; and above all by the freezing and crashing.
The unusability of software is a kind of man-made implacability. It can't be persuaded or reasoned with. Omnimalevolence as an emergent property.
Software is just a microcosm of society.

The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers, meaning either no reaction or an extremely negative one (I'm not allowed to link to either).
There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.
When I was banned from commenting on LessWrong (after two requests to be reinstated), they said such talk was "weird". Weird does NOT automatically mean wrong!

Studying the evilness of human-designed interfaces might reveal why the world has always sucked.
Seemingly simple things (like easy interfaces) are still absolutely impossible today. Only the illusion exists, and not for me.
Does that mean that seemingly impossible things (like an intelligence explosion) will turn out to be simple reality tomorrow? 
Maybe. Heck PROBABLY. But maybe not.

The fact that it's so difficult to make even the simplest systems not suck may mean that much larger systems won't work either.
In fact, it's certain that many unexpected things will go wrong before then.
The only way to get transhuman AIs to work MAY be by connecting many existing smaller systems, perhaps even including groups of humans.
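
A toy way to make that intuition concrete (my own sketch, not anything from the original comments): if a system only works when every one of its parts does, per-part reliability compounds multiplicatively, so even excellent components yield a hopeless whole at scale.

```python
# Toy serial-reliability model (illustrative assumption: n independent
# components, each working with probability p; the whole system works
# only if every component does).
def system_reliability(p: float, n: int) -> float:
    """Probability that all n independent components function."""
    return p ** n

if __name__ == "__main__":
    # Even 99.9%-reliable parts compound badly as the system grows.
    for n in (10, 100, 1_000, 10_000):
        print(f"n={n:>6}: P(system works) = {system_reliability(0.999, n):.6f}")
```

With p = 0.999 a ten-part chain still works 99% of the time, but at ten thousand parts the odds are effectively zero; under this (admittedly crude) model, big systems inherit and amplify every small failure, unless they are built from independently workable smaller pieces.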

For the past week my Windows 10 box has been almost unusable, as it spent days wasting kilowatts and processor cycles downloading worse-than-useless malware "updates" with no way to turn them off!

Evil is the most fundamental truth of the world. The Singularity cannot happen soon enough . . .

I just spent four hours trying to get a new cellphone to work (one that others insist I should have), and failed totally.

There is something fantastically wrong with this shitplanet, but completely different than anyone is willing to talk about. 

I didn't realize there was an automatic threshold of total retaliation the moment Russia nukes Ramstein air base.

I guess simple text-based browsers and websites that just show the minimal information you want, in a way the user can control, are not cool enough, and so we have all those EU regulations that "solve" a problem by making it worse.
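
For what it's worth, the "minimal text browser" idea really is small. Here is a toy sketch (my own illustration, using example.com as a stand-in URL) that fetches a page and prints only its visible text, skipping scripts, styles, and markup:

```python
# Minimal text-mode page fetch: print only the visible text of a page.
# A toy illustration, not a real browser.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextOnly(HTMLParser):
    SKIP = {"script", "style"}  # tags whose contents are never visible text

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting level inside skipped tags
        self.chunks = []    # collected visible text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def fetch_text(url: str) -> str:
    """Download a page and return its visible text, one fragment per line."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = TextOnly()
    parser.feed(html)
    return "\n".join(parser.chunks)

if __name__ == "__main__":
    print(fetch_text("https://example.com"))
```

The point of the sketch is only that the "minimal information, user-controlled" presentation the comment asks for is technically trivial; everything about the regulations stays as framed above.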

If whoever is running Russia is suicidal, sure, but if they still want to win, it might make sense to use strategic weapons tactically to force the other side to accept a stalemate right up to the end.

Answer by Flaglandbase, Oct 11, 2022

The highest-risk targets are probably the NATO airbases in Poland, Slovakia, and Romania used to supply and support Ukraine. There may also be nuclear retaliation against northern German naval bases. They're more likely to attack smaller American cities before escalating.

The only thing more difficult than getting readers for your blog is getting readers for your fiction (maybe not on here).

If the universe is really infinite, there should be an infinite number of possible rational minds. Any randomly selected mind from that list should statistically be infinite in size and capabilities. 
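
One way to formalize that argument (my reconstruction, not the commenter's, assuming minds can be indexed by finite descriptions over a finite alphabet):

```latex
% Counting sketch: for every finite bound N, only finitely many minds
% have description length at most N, while infinitely many exceed it:
\[
\left|\{\, m \;:\; \mathrm{size}(m) \le N \,\}\right| < \infty
\qquad\text{and}\qquad
\left|\{\, m \;:\; \mathrm{size}(m) > N \,\}\right| = \infty
\qquad\text{for every finite } N .
\]
% Caveat (my addition): there is no uniform probability measure on a
% countably infinite set, so "a randomly selected mind" is only defined
% once a distribution is chosen; under a light-tailed prior (e.g.
% geometric in description length) the expected size is finite.
```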
