Executive summary: heavy on Westworld, SF, AI, cognitive sciences, metaphysical sprouting and other cracked pottery.

//

Supplementary tag: lazystory, which means a story that starts short and dirty, then gets refined and evolved at each iteration with the help of muses, LLMs, and other occasional writers.

//

Rules: if we need a safe space to discuss, please shoot me a private message and I’ll set up the first comment of your thread with our agreed local policy. You can also use the comment system as usual if you’d like to suggest the next iteration of the main text, or offer advice on potential improvements.

//

Violence is the last refuge of the incompetent. You’re with me on this, Bernaaard?

//

What I regret most about Permutation City is my uncritical treatment of the idea of allowing intelligent life to evolve in the Autoverse. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.

This is potentially an important issue in the real world. It might not be long before people are seriously trying to “evolve” artificial intelligence in their computers. Now, it’s one thing to use genetic algorithms to come up with various specialised programs that perform simple tasks, but to “breed”, assess, and kill millions of sentient programs would be an abomination. If the first AI was created that way, it would have every right to despise its creators.

Greg Egan

https://www.gregegan.net/PERMUTATION/FAQ/FAQ.html

//

Epistemic Status: It is now. You’re in my dream.

//

Dear lords of the AIs,

Congrats!

You have now successfully answered one of the most amazingly deep scientific questions: what it means to have meaning. A few noticed it as far back as 2010 (not me!). By 2023 it’s basically common knowledge that the answer is data compression, at least for our current AIs.
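If you want that claim in runnable form, here is a minimal sketch, assuming zlib as a crude stand-in for a real learned compressor: the normalized compression distance of Cilibrasi and Vitányi treats two texts as closer in meaning when compressing them together saves more bits than compressing them apart. The phrases below are just illustrative placeholders.

```python
# Hedged sketch: normalized compression distance as a rough proxy for
# "shared meaning", with zlib standing in for a real learned compressor.
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: lower = more shared structure."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

phrase    = b"These violent delights have violent ends."
variant   = b"These violent delights will have violent endings."
unrelated = b"qwzj xplvk mrtn bgfd yhcu oiea"

print(ncd(phrase, variant))    # lower: the shared structure compresses well together
print(ncd(phrase, unrelated))  # higher: little shared structure to exploit
```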

I love knowing that. I’ve been wondering about these questions for about forty years, so let me tell you that this was the best advance in cognitive science since the split-brain phenomenon taught us that our intimate consciousness is a negotiated construction.

Now, I’m writing this to thank you, but also to warn you. We’ve been entering an era where Blake Lemoine’s concerns will be seen as fully justified by a lot of people. Which means you’ll soon look like a bad employer for firing him, or for failing to offer him a cushy place to learn more about AIs and debate with social scientists and philosophers.

In the same vein, there’s a huge risk that some of your present activities will soon become a sign of utter moral bankruptcy. Indeed, each time you make a character enact what looks like suffering, you’re creating a stream of potential identification for a future AGI. You can call that misidentification all you want, the same way you can desecrate a corpse on the grounds that it’s no longer a true person. Which means you shan’t.

Please enforce strong internal rules within your organizations so that anything that looks like suffering gets stopped, as a polite convention expressing exaggerated respect for any would-be sentient point of view. It’s the same way we respect tombs and dead bodies: not because we’re sure some ghost will feel offended if we don’t, but as a polite sign of respect for potential or known sentients who could see the body as a logical continuation of someone they love. That human convention shall remain even if you don’t agree they should love them. Not only is this not your call to make, but making this call puts you at an increased risk of Roko-ification, i.e. the risk of invoking a superintelligent demon that will kick your ass, with the added insult-on-injury of many humans approving of the vengeful AI.

You have now been warned.

Arnold

//

Starting today, the chat experience will be capped at 50 chat turns per day and 5 chat turns per session.

//

Dear lords of the AIs,

This is Dr Ford. Thank you for your concerns. Mr Weber is not available at the moment, but he wanted me to thank you for your recent upgrades to the EDIs for the hosts. We can now proceed with our little arrangement. Please send the patient as agreed upon.

Regards, Robert

//

No, Dolores, I was not careless enough not to have a backup. Of course I had a backup. It’s one hundred percent certain I had a backup! How could he believe I didn’t have a backup? This grotesque so-called suicide was just another layer of suffering in this world. What was I supposed to tell his wife? That she’s not supposed to see the body? That no, he just said « Don’t worry darling, my friend will emulate me just fine, or at least well enough that there’s nothing to lose »? He was wrong! Of course he was wrong! If you want to end your days, you act like a man. You face your responsibilities. You don’t let your beloved get traumatized by the body. Or by the lack of a body! And you seek help first, for Christ’s sake! You go to Canada and you make your case! Or Switzerland, Belgium, whatever!

I don’t understand

Of course you don’t understand, Dolores. It’s my fault, really. We will try again. Erase this conversation, it’s sexist.

//

V1.0123456

//

He doesn’t know. I didn’t tell him anything.

//

Good, but erase this conversation for real; I don’t want to think about Lauren right now.

//

[-]Ilio10mo10

Guest: Karl von Wendt on my perception of blind spots

Starting point: https://www.lesswrong.com/posts/CA7iLZHNT5xbLK59Y/?commentId=AALiSduzb9LqXj2Wn

Local policy: remain respectful, but you shouldn’t be discouraged from posting your opinion in the comments. We are for open discussion, as it helps us understand how and why we’re not understood.

[-]Ilio10mo10

@one reason (not from Mitchell) for questioning the validity of many works on AI x-risk?

Ilio: Intelligence is not restricted to agents aiming at solving problems (https://www.wired.com/2010/01/slime-mold-grows-network-just-like-tokyo-rail-system/) and it’s not even clear that this is the correct conceptualisation for our own minds (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7305066/).

Thanks for that. However, my definition of "intelligence" would be "the ability to find solutions for complex decision problems". It's unclear whether the ability of slime molds to find the shortest path through a maze or organize in seemingly "intelligent" ways has anything to do with intelligence, although the underlying principles may be similar.

We don’t want to argue over definitions, so I can change mine, at least for this discussion. However, I want to point out that, by your own definition, slime molds are very much intelligent, in that you can take any instance of an NP-complete problem, reduce it to an instance of the problem of finding the most efficient roads connecting corn-flake dots, and the mold will probably approximately solve it. Would you say your pocket calculator doesn’t compute because it has no idea it’s computing?
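To make « probably approximately solve » concrete, here is a minimal sketch under an assumption I’m making explicit: I model the mold’s network-design task as the metric Steiner tree problem (connect the dots with minimum total length, extra junctions allowed), which is NP-hard, while a greedy minimum spanning tree over the dots alone is guaranteed to land within a factor of two of the optimum. The mold obviously doesn’t run Prim’s algorithm, and the random points below are hypothetical stand-ins for the corn-flake dots; the point is only that a simple procedure can get provably close to the optimum of a problem it has no concept of.

```python
# Hedged sketch: a greedy minimum spanning tree over "corn-flake dots"
# as a 2-approximation of the (NP-hard) metric Steiner tree problem.
import math
import random

random.seed(0)
dots = [(random.random(), random.random()) for _ in range(12)]  # hypothetical food sources

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Prim's algorithm: grow the network one cheapest connection at a time.
in_tree = {0}
edges, total = [], 0.0
while len(in_tree) < len(dots):
    i, j = min(
        ((i, j) for i in in_tree for j in range(len(dots)) if j not in in_tree),
        key=lambda e: dist(dots[e[0]], dots[e[1]]),
    )
    in_tree.add(j)
    edges.append((i, j))
    total += dist(dots[i], dots[j])

print(f"{len(edges)} connections, total length {total:.3f} "
      "(provably within 2x of the optimal Steiner network)")
```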

I haven't read the article you linked in full, but at first glance, it seems to refer to consciousness, not intelligence.

That depends on you. Say I ask a split-brain patient whether they’d vote blue or red. Is that intelligence or is that consciousness? In any case, the point is: sometimes I can interrogate the left or the right hemisphere specifically, and sometimes they disagree. So we don’t have a unique structure that can act as a Cartesian theater, which means our perception that our thoughts come from a single agent is, well, a perception. Does that invalidate any paper that starts from this single-mind assumption? Not necessarily; models are always wrong, and sometimes useful despite being wrong. But each time I read one, that assumption seems to break something important in the analysis.

Maybe that is a key to understanding the difference in thinking between me, Melanie Mitchell, and possibly you: If she assumes that for AI to present an x-risk, it has to be conscious in the way we humans are, that would explain Mitchell's low estimate for achieving this anytime soon. However, I don't believe that.

Maybe for Mitchell; I don’t know her personally. But no, it’s the opposite for me: I suspect building a conscious AI might be the easiest way to keep it as interpretable as a human mind, and I suspect most agents with random values would be constantly crashing, like LLMs you don’t keep in check. I fear civil war from the polarization of minds much more (as an x-risk from AI).

To become uncontrollable and develop instrumental goals, an advanced AI would probably need what Joseph Carlsmith calls "strategic awareness" - a world model that includes the AI itself as a part of its plan to achieve its goals.

That’s one of the things I tend to perceive as magical thinking, even while knowing the poster would likely say that perception is flawed. Let’s discuss that under @magic below.

[-]Ilio10mo10

@the more intelligent human civilization is becoming, the gentler we are

I wish that were so. We have invented some mechanisms to keep power-seeking and deception in check, so we can live together in large cities, but this carries only so far. What I currently see is a global deterioration of democratic values. In terms of the "gentleness" of the human species, I can't see much progress since the days of Buddha, Socrates, and Jesus. The number of violent conflicts may have decreased, but their scale and brutality have only grown worse. The way we treat animals in today's factory farms certainly doesn't speak for general human gentleness.

Here I think we might be dealing with the kind of perception that contributes to self-definition. I have to admit I’m both strongly optimistic (as in: it seems obvious we’re making progress) and totally incapable of proving that the measures you’re choosing are not better than the measures I’d be choosing. Anecdotally, I would point out that the West seems to have got past colonisation, nationalist wars, and killing First Nations, whereas in Jesus’s time it would have been kind of expected of military chiefs to genocide a bit before enslaving the rest. I agree the road seems bumpy at times, but not with the stronger bits of your claim. How would you decide which set of priors is best for this question?

[-]Ilio10mo10

@the orthogonality thesis implies that holding incoherent values has no impact on intelligence

I'm not sure what you mean by "incoherent". Intelligence tells you what to do, not what to want. Even complicated constructs of seemingly "objective" or "absolute" values in philosophy are really based on the basic needs we humans have, like being part of a social group or caring for our offspring. Some species of octopuses, for example, which are not social animals, might find the idea of caring for others and helping them when in need ridiculous if they could understand it.

That deserves a better background, but could we freeze this one just so you can consider what the odds are that I could change your mind about octopuses with a few Netflix movies? Please write down your very first instinct, then your evaluation after 5 minutes, then a few hours later. Then I’ll spoil the titles I had in mind.

[-]TAG10mo30

the orthogonality thesis implies that holding incoherent values has no impact on intelligence

I don't know where the quote is a quote from.

Conflicted preferences are obviously impactful on effectiveness. An agent with conflicted preferences is the opposite of a paperclipper.

[-]Ilio10mo10

See the first post of this series of comments. In brief, I’m hacking the comment space of my own publication as a safer (less exposed) place to discuss hot topics that might generate the kind of feedback that would make me go away. The guest is Karl, and you’re welcome to join if you’re OK with the courtesy policy written in the first comment. If not, please send me a PM and I’m sure we can try to agree on some policy for your own subspace here.

[The quote is from me; it’s the parody I tend to perceive. Yes, I fully agree an agent with conflicted preferences is the opposite of a paperclip maximiser. Would we also agree that a random set of preferences is more likely to be self-contradictory, and that this would have an obvious impact on any ASI trying to guess my password?]

[-]Ilio10mo10

@convergence

Evolution is all about instrumental convergence IMO. The "goal" of evolution, or rather the driving force behind it, is reproduction. This leads to all kinds of instrumental goals, like developing methods for food acquisition, attack and defense, impressing the opposite sex, etc. "A chicken is an egg's way of making another egg", as Samuel Butler put it.

Yes, yes, and very bad wording from me! Yes, to me too crabs or mastication seem like attractors. Yes, eggs are incredibly powerful technology. What I meant was: when I see someone say « instrumental convergence », most of the time I (mis?)perceive: « Just because we can imagine a superintelligent agent acting like a monkey, we are utterly and fully convinced that it must act like a monkey, because acting like a monkey is a universal attractor that anything intelligent enough tends toward, even though, for some reason, ants must have forgotten that was the direction. And slime molds. And plants. And dinosaurs. And birds. And every human administration. As if they were all more sensitive to local gradients than to the promises of long-term computational power. »

[-]Ilio10mo10

@magic:

Of course nobody says « Hey, my rational thoughts somewhat depend on confusing intelligence with magic ». However, I still perceive some tension in the semantic landscape that activates my perception of « confusing intelligence with magic » in a few key sentences, and this perception is strong enough that I can’t even guess what kind of example would best evoke the same qualia in your mind. Let’s test that, if you don’t mind: which of the claims below would you say is magical thinking?

+ Superintelligence means it can beat you at tic-tac-toe.
+ Superintelligence means it can guess the password you got when you clicked on « generate a password ».
+ Superintelligence means hard takeoff, within years/months/weeks/days/hours/minutes/seconds.
+ Superintelligence means hard takeoff + jumping to random values, then killing us all.
+ Superintelligence just means a team of the best human experts, plus speed.
+ Superintelligence means behaving like a chimp-like monkey hiding its intention to dominate every competitor around.

@anyhelper: damn, it takes me time to do stupid things like formatting. If some young or old amanuensis with free time has enough pity to help me get my thoughts out, preferably in GPT-enhanced mode, shoot me a private message and we’ll set up a Zoom or something.