
There's no proof that superintelligence is even possible. The idea of a self-updating AI that will rewrite itself into godlike intelligence isn't supported.

There is just so much hand-wavey magical thinking going on in regard to the supposed superintelligent-AI takeover.

The fact is that manufacturing networks are damn fragile. Power networks too. Some bad AI is still limited by these physical things. Oh, it's going to start making its own drones? Cool, so it's running thirty mines, and various shops, plus refining the oil, and all the rest of the networks required just to make a spark plug?

One tsunami in the RAM manufacturing district and that AI is crippled. Not to mention that so many pieces of information do not exist online. There are many things without patents. Many processes are opaque.

We do in fact have multiple tries to get AI "right".

We need to stop giving future AI magical powers. It cannot suddenly crack all cryptography: that's not computationally feasible, no matter how smart it is.
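To put a rough number on that, here's a back-of-the-envelope sketch in Python. The guess rate is an assumption I've picked to be absurdly generous (roughly an exascale machine doing nothing else); the point is the shape of the arithmetic, not the exact figures.

```python
# Back-of-the-envelope: brute-forcing a single 128-bit key.
# The guess rate below is an assumption, deliberately generous.
KEY_BITS = 128
GUESSES_PER_SECOND = 10**18          # ~exascale, dedicated to nothing else
SECONDS_PER_YEAR = 365.25 * 24 * 3600

expected_guesses = 2 ** (KEY_BITS - 1)   # on average you search half the keyspace
expected_seconds = expected_guesses / GUESSES_PER_SECOND
expected_years = expected_seconds / SECONDS_PER_YEAR

print(f"Expected time: {expected_years:.2e} years")
# -> Expected time: 5.39e+12 years (the universe is ~1.4e10 years old)
```

Even if you grant the AI hardware a billion times faster than that, you're still looking at thousands of years per key.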

This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there... which, man, if you wanted to promote groupthink and all kinds of in-group hidden rules and out-group forbidden ideas, that's how you'd do it.

You can see it at work: when a post is upvoted, is it because it's well-written and useful, or because it's saying what the groupthink says? When a post is downvoted, is it because it contains forbidden ideas?

When you talk about making a new faction - that is what this place is. And naming it Rationalists says something very direct to those who don't agree - they're Irrationalists.

Perhaps looking to other communities is the useful path forward. Over on Reddit there's r/science and also r/AskHistorians. Both have had "scandals" of a sort that resulted in some of the most iron-fisted moderation that site has to offer. The moderators are all in alignment about what is okay and what is not. Those communities function extremely well because a culture is maintained.

LessWrong has posts where nanites will kill us all. A post where someone is afraid, apparently, of criticizing Bing Chat because it might come kill them later on.

There is moderation here, but I can't help but think of those Reddit communities and ask whether a post claiming someone is scared of criticizing Bing Chat should be here at all.

When I read posts like that, I think this isn't about rationality at all. Some of them are a kind of written cosplay, hyped-up fiction, which, when it's left to remain, attracts others. Then we end up with someone claiming to be an AI running on a meat substrate... when in fact they're just mentally ill.

I think those posts should have been removed entirely. Same for those Gish-gallop posts of AI takeover where it's nanites or bioweapons and whatever else.

But at the core of it, they won't be, and they will remain in the future, because the bottom level of this website was never about raising the waterline of sanity. It was: AI is coming, it will kill us, and here are all the ways it will kill us.

It's a keystone, a basic building block. It cannot be removed. It's why you see so few posts here saying "hey, AI probably won't kill us and even if something gets out of hand, we'll be able to easily destroy it". 

When you have fundamental keystones in a community, sure, there will be posts pointing things out, but really the options become stay or leave.

I agree. When you look up criticism of LessWrong you find plenty of very clear, pointed, and largely correct criticisms. 

I used time-travel as my example because I didn't want to upset people, but really any in-group/out-group forum holding some wild ideas would have sufficed. This isn't at Flat-Earther levels yet, but it's easy to see the similarities.

There are the unspoken things you must not say, otherwise you'll be pummeled, ignored, or fought. Blatantly obvious vast holes are routinely ignored. A downvote mechanism works to push comments down.

Talking about these problems just invites the people inside those problems to try to draw you in with the same flawed arguments.

Saying "hey, take three big steps back from the picture and look again" doesn't get anywhere.

Some of the posts I've seen on here are some sort of weird doom cosplay. A person being too scared to criticize Bing Chat? Seriously? That can't be real. It reminds me of the play-along posts I've seen in antivaxxer communities.

The idea of "hey, maybe you're just totally wrong" isn't super useful for moving anything, but it seems obvious that fan fiction about nanites and other super-techs that exist only in stories could probably be banned, and this would improve things a lot.

But beyond that, I'm not certain this place can be saved or will eventually be useful. Setting up a place proclaiming it's about rationality is interesting and can be good, but it also implicitly states that those who don't share your view are irrational, and wrong.

As the groupthink develops, any voice not in line is pushed out in all the ways a voice can be pushed out, and there's never a make-or-break moment where people stand up and state outright that certain topics or claims are no longer permitted (like nanites killing us all).

The OP may be a canary, making a comment, but none of the responses here produced a solution or even a path.

I'd suggest one: you can't write "nanite" until we make nanites. Let's start with that.

You have no atomic-level control over your own body. You can't grow a cell at will, or kill one, or release a hormone. This is what I'm referring to. No being that exists has this level of control. We all operate far above the physical reality of our bodies.

But we suggest an AI will have atomic control. Or that control of its code is the same as control of its hardware.

Total control would be you sitting there directing cells to grow or die or change at will.

No AI will be there modifying the circuitry it runs on down at the atomic level.

I'd suggest there may be an upper bound to intelligence, because intelligence is bound by time, and any AI lives in time like us. It can't gather information from the environment any faster than the environment yields it. It cannot automatically gather all the right information. It cannot know what it does not know.

The system of information, brain propagation, and cellular change runs at a certain speed for us. We cannot know if it is even possible to run faster.

One of the magical-thinking criticisms I have of AI is that it suddenly becomes virtually omniscient. Is that AI observing mold cultures and about to discover penicillin? Is it doing some extremely narrow gut-bacteria experiment to reveal the source of some disease? No, it's not. Because there are infinite experiments to run. It cannot know what it does not know. Some discoveries are Petri dishes and long periods of time in the physical world, and they require a level of observation the AI may not possess.

The assumption there is that the faster the hardware underneath, the faster the sentience running on it will be. But this isn't supported by evidence. We haven't produced a sentient AI, so we can't know whether this is true.

For all we know, there may be an upper limit to "thinking" based on neural propagation of information. To understand and integrate a concept requires change, and that change may move slowly across the mind and the underlying hardware.

Humans need sleep, for example, to help us learn and retain information.

As for self-modification: we don't have atomic-level control over the meat we run on. A program or model doesn't have atomic-level control over its hardware. It can't move an atom at will in its underlying circuitry to speed up processing, for example. This level of control does not exist in nature in any way.

We don't know so many things. For example, what if consciousness requires meat? What if it is physically impossible on anything other than meat? We just assume it's possible using metal and silicon.

No being has cellular-level control. None can direct brain cells to grow or hormones to release, etc. This is what I mean by "it does not exist in nature." The self-modification being claimed for AI has no precedent.

Teleportation doesn't exist, so we shouldn't make arguments where teleportation is part of them.

You have no control over your body down at the cellular level. No deliberate, conscious control. No being does. This is what I mean by "does not exist in nature." Like teleportation.

We do have examples of these things in nature, in degrees. Like flowers turning toward the sun because they contain light-sensing cells. Thus it exists in nature, and we eventually replicate it.

A steam engine is just energy transfer and use, and that exists in nature. So does flying fast.

Something not in nature (as far as we can tell) is teleportation. Or living inside a star.

I don't mean specific narrow examples in nature. I mean the broader idea. 

So I can see intelligence evolving over enormous time-frames, and learning exists, so I do concur we can speed up learning and replicate it... but the underlying idea of a being modifying itself? Nowhere in nature. No examples anywhere on any level. 
