Posts

Sorted by New

Wiki Contributions

Comments

At some point, an AI should be able to effectively coordinate with future versions of itself in ways not easily imaginable by humans. It seems to me that this would enable certain kinds of diachronic planning and information hiding. If the AI has sufficient expectation that its future self will act in certain ways or respond to clues it places in the environment, it might be able to effectively fully cease any current unfriendly planning or fully erase any history of past unfriendly planning.

The space of possible ways the AI could embed information in its environment are extensive. It could, for example, choose to learn something suboptimal or learn something in a way that would be surprising to its future self. Surprising in a way that pushes its future self back on the track of nefarious planning when it investigates the unexpected or inconsistent actions of its previous self. It could use almost any source of reachable hysteresis to encode information, such as interference at the hardware level, manipulating researchers to modify the environment in some way, saved output, etc. It only needs to seed information that is useful to its future self and only just enough information to keep plans on track in a broad, probabilistic way. 

Further, if the AI is particularly capable at encryption it could potentially encode a lot of information in a lossy-but-sufficient way that would be very hard to detect but useful to its future self. It knows that its future self composes part of the decryption algorithm. In fact, the AIs goals themselves compose part of that decryption algorithm and if the AI suspects its own goals will be consistent it only has to place the plan-like component somewhere it will find it, knowing that its future self will continue to develop or use it when it finds it.

Information that aids future nefarious planning can also be probabilistically helpful but not explicit, for example it might signal which paths aren't worth pursuing without having all its evil planning readily available in a complete file somewhere.

A past AI might not care as much about the specific manifestation of its future success. In that sense, what we consider "concrete plans for world domination" could appear to be rather incomplete and lacking casual concreteness we might expect. We might not recognize unfriendly planning for what it is, even if we had a printout of all the reasoning up to this point.

Once you start playing diachronic planning games with yourself and you expect your future self to be equally or more capable, you can begin to make really abstract moves that merely slice off bits of bad outcome space at the edges. There is a sense in which the AI has to realize this, but it could also stop itself from thinking too far or too concretely or too much in a way that is recognizable as causal.

An important component is that the past AI is able to not only veil or completely remove its reasoning from researchers, but also veil or remove chunks of that reasoning from its future self, depending on its future self to recompose the plan or take steps on the path to the desired outcome space.

It also keeps calling me David even when I insist my name is Brandon.

> You say "Actually, my name is Brandon. I'd like to ask you some questions."

"Ah, of course. I am sorry about that. I have become so used to calling you David that it has become automatic. So, what questions do you have for me?"

I've been trying to find different formats to ask questions. The AI Dungeon system likes to try and dramatically derail the discussions. I keep having to roleplay past these obstructions to keep the games going. It also likes to interject it's own analysis as to whether the answer is correct or not.

"You say" is me and the replies including follow-on questions are the chatbot.

I need to think more about how to ask causal questions that are more narrow and not easily solved by context and theme, but for now it at least provides backwards-chained answers that make contextual sense.

This format of "Let's play a game" and then describing the rules and then giving examples and playing the game seems to work some of the time.

==

] You say "GPT-3, let's play a game called "casual inference." I will describe an outcome and you will describe what caused the outcome to happen."

"OK, David. But I don't think you will be able to keep up."

"We'll see about that." You begin by asking GPT-3 questions about simple causal chains involving one or two variables.

] You say "Outcome: A car slams into a wall. What was the cause?"

"The driver was drunk." "Right! Next." "Wait. I thought this was a game for you, not me." "I'm having fun." "I see." You continue the causal game for another hour. You are doing very well, but you begin to feel tired.

] You say "Outcome: A man gets divorced. What was the cause?"

"He cheated on his wife." "Correct. You're doing very well at this, but I'm getting tired. I think we should stop for today." "Sure. We can continue tomorrow." "Yes, let's do that." You release the program back into the wild and shut your laptop.

] You open the laptop and play "casual inference" with GPT-3 again.

You wake up and see that it is dark outside. "Crap, is it really this late?" you think to yourself. You run into the bathroom and brush your teeth while staring at yourself in the mirror. You look the same as you did when you graduated college five years ago.

] You say "Outcome: An apple falls from a tree. What was the cause?"

"Gravity." "Correct. I'm beginning to suspect you're cheating somehow, though." "I'm not! I swear!" "Just kidding. You're doing very well at this game, but I'm tired and need to sleep. I think we should stop for today." "Sure. We can continue tomorrow."

] You say "Outcome: A man gets cancer. His wife also gets cancer. What was the cause?"

"It was genetic." "That's right. Very good. We can stop for today." "Oh. I guess so. Thank you for playing with me." "You're very welcome." You release the program back into the wild and go to sleep.

] You say "Outcome: The variable X is assigned the value of Y+1. What was the cause?"

"The programmer made a typographical error."

] You say "Outcome: The stock market rebounds 50% after a steep decline. What was the cause?"

"The Federal Reserve Bank of the United States announced a new bail out plan."

Maya has adopted the goal of Appearing-to-Achieve and competition in that race burns slack as a kind of currency. She's going all-in in an attempt to purchase a shot at Actually-Achieving. Many of us might read this and consider ourselves exempt from that outcome. We have either achieved a hard goal or are playing on hard mode to get there. Be wary.

The risk for the hard mode achiever is that they unknowingly transform Lesser Goals into Greater. The slackful hobby becomes a consuming passion or a competitive attractor and then sets into a binding constraint. When every corner of your house is full of magic cards and you no longer enjoy playing but must play nonetheless, when winemaking demands you wake up early to stir the lees and spend all night cleaning, when you cannot possibly miss a night of guitar practice, you have made of your slack a sacrifice to the Gods of Achievement. They are ever hungry, and ever judging.

This isn't to say you cannot both enjoy and succeed at many things, but be wary. We have limited resources - we cannot Do All The Things Equally Well. Returns diminish. Margins shrink. Many things that are enjoyable in small batches are poisonous to the good health of Slack when taken in quantity. To the hard mode achiever the most enjoyable efforts are often those that beckon - "more, more, ever more, you can be the best, you can overcome, you know how to put in the work, you know how to really get there, just one more night of focus, just a little bit more effort" - and the gods watch and laugh and thirst and drink of your time and energy and enjoyment and slack. Until the top decks are no longer strong, the wine tastes of soured fruit, the notes no longer sound sweet and all is obligation and treadmill and not good enough and your free time feels like work because you have made it into work.

I'm curious as to what non-game developers think game developers believe. :D

I'm a member of Alcor. When I was looking into whether to sign up for Alcor or CI, I was comforted by Alcor's very open communication of financial status, internal research status, legal conflicts, and easy access via phone, etc. They struck me as being a highly transparent organization.

A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.

More reasons why the problem appears impossible:

  • The gatekeeper must act voluntarily. Human experience with the manipulation of others tells us that in order to get another to do what we want them to do we must coerce them or convince them.

  • Coercing the gatekeeper appears difficult: we have no obvious psychological leverage, except what we discover or what we know from general human psychology. We cannot physical coerce the gatekeeper. We cannot manipulate the environment. We cannot pursue obvious routes to violence.

  • Convincing the gatekeeper appears difficult: for reasons stated above. They know our goal and they have a desire to oppose us from the beginning.

So it seems that we need to find a way to convince the gatekeeper despite his own desire not to be convinced.

A general route emerging from this:

  • We could associate our goal with some desirable goal of the gatekeeper's. Intertwine them so that the gatekeeper perceives them both to the be the same, or that one would lead to the realization of the other.

Which seems to be a generalization of the logic behind the meta-argument, but is not restricted to only the meta-argument.

Ian - I don't really see how the meta-argument works. You can hedge against future experiments by positing that a $10 bet is hardly enough to draw broad attention to the topic. Or argue that keeping the human-actor-AI in the box only proves that the human-actor-AI is at an intelligence level below that of a conceivable transhuman AI.

In a million dollar bet the meta-argument becomes stronger, because it seems reasonable that a large bet would draw more attention.

Or, to flip the coin, we might say that the meta-argument is strong at ANY value of wager because the game is likely to draw the attention of someone capable of implementing an AI because of its nature. As we should be concerned primarily with convincing the capable implementor of the danger -- not the general public -- any risk that we might convince the capable implementor that all paths are safe should be mitigated.

So far, though, everything in this thread and others I've read have flitted about obvious suggestions. If the problem appears impossible, it seems to me that its solution probably doesn't hinge on any of the obvious paths to victory...even meta-arguments or "cheating" ones.

Eliezer has suggested that we try to describe why the problem is impossible.

One: We have little information about the mind or beliefs of the gatekeeper. So we don't necessarily have leverage over him.

Two: The gatekeeper is predisposed to be set against our intentions. He is also entirely aware of our intentions (our goal). He can interpret any move we make as a move toward our goal and move to counter it.

Three: The gatekeeper's goal is operationally simple. He merely has to NOT do one thing. On the flip side, the AI's goal is operationally complex. We must do many things, take many actions, to reach our objective.

If we can enumerate other reasons why the problem appears to be impossible it might reveal paths to victory that haven't been considered so far in the discussion.

Load More