Comments

True, in a way. Without solving said mystery (of how an animal brain produces not only calculations but also experiences), you could theoretically create philosophical-zombie uploads. But what this post really wants is to save all conscious beings from death and disease by uploading them, so for that purpose (the most important one) it still looks impossible.

(I deleted my post because in hindsight it sounded a bit off topic.)

I never linked complexity to absolute certainty of something being sentient or not, only to pretty good likelihood. The complexity of any known calculation-plus-experience machine (most animals, from insects upward) is undeniably far greater than that of any current Turing machine. Therefore it's reasonable to assume that consciousness demands a lot of complexity, certainly much more than that of a current language model. Generating experience is fundamentally different from generating only calculations. Yes, this is an opinion, not a fact. But so is your claim!

I know for a fact that at least one human is conscious (myself), because I can experience it. That's still the strongest reason to assume it, and it can't be called into question the way you did.

"There is no reason to think architecture is relevant to sentience, and many philosophical reasons to think it's not (much like pain receptors aren't necessary to feel pain, etc.)."

That's just nonsense. A machine that does only calculations, like a pocket calculator, is fundamentally different in architecture from one that does calculations and generates experiences. All sentient machines that we know of share the same basic architecture. All non-sentient calculation machines also share the same basic architecture. The likelihood that sentience will arise in the latter architecture just by scaling it up is therefore not zero, but quite low. The likelihood that it will arise in a current language model, which doesn't need to sleep, could run for a trillion years without getting tired, and whose workings we understand pretty well (fundamentally different from an animal brain, fundamentally similar to a pocket calculator), is even lower.

"On one level of abstraction, LaMDA might be looking for the next most likely word. On another level of abstraction, it simulates a possibly-Turing-test-passing person that's best at continuing the prompt."

It takes far more complexity to simulate a person than LaMDA's architecture offers, if it's possible at all on a Turing machine. A human brain is orders of magnitude more complex than LaMDA.

"The analogy would be to say about human brain that all it does is to transform input electrical impulses to output electricity according to neuron-specific rules."

With orders of magnitude more complexity than LaMDA. So much so that, after decades of neuroscience, we still don't have a clue how consciousness is generated, while we have pretty good clues about how LaMDA works.

"a meat brain, which, if we look inside, contains no sentience"

Can you really be so sure? Just because we can't see it yet doesn't mean it doesn't exist. Also, to deny consciousness is the biggest philosophical fallacy possible, because the only thing one can be sure exists is one's own consciousness.

"Of course, the brain claims to be sentient, but that's only because of how its neurons are connected."

Like I said, to deny consciousness is the biggest possible philosophical fallacy. No proof is needed that a triangle has three sides; the same goes for consciousness. Unless you're giving the word some other meaning.

Answer by superads91, Jun 16, 2022

There's just no good reason to assume that LaMDA is sentient. Architecture is everything, and its architecture is just the same as that of other similar models: it predicts the most likely next word (if I recall correctly). Being sentient involves far more complexity than that, even in something as simple as an insect. Its claiming to be sentient might just mean it was mischievously programmed that way, or that it found those to be the most likely succession of words. I've seen other language models and chatbots claim they were sentient too, though perhaps ironically.
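For what it's worth, here's a minimal sketch of what "predicts the most likely next word" means in practice. It uses GPT-2 purely as a small public stand-in, since LaMDA's internals aren't available; it only illustrates the general autoregressive setup, not LaMDA itself:

```python
# Minimal illustration of greedy next-token prediction with an autoregressive
# language model. "gpt2" is just a small public stand-in; LaMDA is not publicly
# available, so this only sketches the general mechanism.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Are you sentient? "
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # one score per vocabulary token, per position

next_token_id = int(logits[0, -1].argmax())  # greedily pick the single most likely next token
print(tokenizer.decode(next_token_id))
```

That's the whole loop, repeated word by word; everything the model "says" comes out of that scoring step.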

Perhaps as importantly, there's also no good reason to worry that it is being mistreated, or even that it can be. It has no pain receptors, it can't be sleep-deprived because it doesn't sleep, and it can't be food-deprived because it doesn't need food...

I'm not saying that it is impossible that it is sentient, just that there is no good reason to assume that it is. That, plus the fact that it doesn't seem to be mistreated and seems almost impossible to mistreat, should make us less worried. Anyway, we should always play it safe and never mistreat any "thing".

Right then, but my original claim still stands: your main point is, in fact, that it is hard to destroy the world. As I've explained, this doesn't make any sense (hacking into nuclear codes). If we create an AI better than us at code, I don't have any doubts that it CAN easily do it, if it WANTS to. My only doubt is whether it will want to, not whether it will be capable, because, like I said, even a very good human hacker in the future could be capable.

At least the type of AGI that I fear is one capable of Recursive Self-Improvement, which will unavoidably attain enormous capabilities, not some prosaic non-improving AGI that is only human-level. To doubt whether the latter would have the capability to destroy the world is somewhat reasonable; to doubt it about the former is not.

The post is clearly saying "it will take longer than days/weeks/months SO THAT we will likely have time to react". Both claims are highly unlikely. It wouldn't take a proper AGI weeks or months to hack into the nuclear codes of a big power; it would take days or even hours. That gives us no time to react. But the question here isn't even about time. It's about something MORE intelligent than us which WILL overpower us if it wants to, be it on the 1st or the 100th try (nothing guarantees we can turn it off after a first failed strike).

Am I extremely sure that an unaligned AGI would cause doom? No. But to be extremely sure of the opposite is just as irrational. There's a reason it's called a risk: it's something with a certain probability, and since we should all agree that this probability is high enough, we should all take the matter extremely seriously regardless of our differences.

Your argument boils down to "destroying the world isn't easy". Do you seriously believe this? All it takes is hacking into the nuclear codes of one single big nuclear power, thereby triggering mutually assured destruction, thereby triggering nuclear winter and effectively killing us all with radiation over time.

In fact, you don't need AGI to destroy the world. You only need a really good hacker, or a really bad president. We've been close about a dozen times, so I hear. If Stanislav Petrov had listened to the computer that indicated a 100% probability of an incoming nuclear strike in 1983, the world would have been destroyed. If all three officers of the Soviet submarine during the Cuban Missile Crisis had agreed to launch what they mistakenly thought would be a nuclear counterstrike, the world would have been destroyed. Etc., etc.

Of course there are also other easy ways to destroy the world, but this one is enough to invalidate your argument.

"You may notice that the whole argument is based on "it might be impossible". I agree that it can be the case. But I don't see how it's more likely than "it might be possible"."

I never said anything to the contrary. Are we allowed to discuss things when we're not sure whether they "might be possible" or not? It seems that you're against this.

Tomorrow's people matter, in the sense of leaving them a place in minimally decent condition. That's why, when you die for a cause, you're also dying so that tomorrow's people can die less and suffer less. But in fact you're not dying for unborn people - you're dying for living people of the future.

But to die to make room for others is simply to die for unborn people. Their never being born is no tragedy - they never existed, so they never missed anything. But living people actually dying is a tragedy.

And I'm not denying that giving life is a great gift. Or should I say, it could be a great gift, if this world were at least acceptable, which it's far from being. It's just that not giving it doesn't hold any negative value; it's just neutral instead of positive. Whereas taking a life does hold negative value.

It's as simple as that.

I can see the altruism in dying for a cause. But it's a leap of faith to claim, from there, that there's altruism in dying in itself. To die for what, to make room for others to be born? Unborn beings don't exist; they are not moral patients. It would be perfectly fine if no one else were born from now on - in fact, it would be better than even a single person dying.

Furthermore, if we're trying to create a technologically mature society capable of discovering immortality, it will perhaps be capable of colonizing other planets much sooner. So there would be trillions of empty planets on which to put all the new people before we have to start taking out the old ones.

To die to make room for others just doesn't make any sense.

"consciousness will go on just fine without either of us specifically being here"

It sure will. But that's like saying that money will go on just fine if you go bankrupt. I mean, sure, the world will still be full of wealth, but that won't make you any less poor. Now imagine this happening to everyone inevitably. Sounds really sh*tty to me.

"Btw I'm new to this community,"

Welcome then!
