Brandon_Reinhart

Comments

To what extent is GPT-3 capable of reasoning?

It also keeps calling me David even when I insist my name is Brandon.

> You say "Actually, my name is Brandon. I'd like to ask you some questions."

"Ah, of course. I am sorry about that. I have become so used to calling you David that it has become automatic. So, what questions do you have for me?"

To what extent is GPT-3 capable of reasoning?

I've been trying different formats for asking questions. The AI Dungeon system likes to try to dramatically derail the discussions, and I keep having to roleplay past these obstructions to keep the games going. It also likes to interject its own analysis of whether the answer is correct or not.

"You say" is me and the replies including follow-on questions are the chatbot.

I need to think more about how to ask narrower causal questions that aren't easily solved by context and theme, but for now it at least provides backwards-chained answers that make contextual sense.

This format of saying "Let's play a game," then describing the rules, giving examples, and playing the game seems to work some of the time.
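For anyone who wants to try the same rules-then-examples-then-play format outside AI Dungeon, here is a minimal sketch against the raw GPT-3 completions API (the pre-1.0 openai Python client). The engine name, example pairs, and sampling parameters are my own illustrative choices, not what AI Dungeon actually sends behind the scenes:

```python
import openai  # pre-1.0 openai client; assumes OPENAI_API_KEY is set in the environment

# Rules first, then worked examples, then the live turn: the "let's play a game" format.
PROMPT = """We are playing a game called "causal inference."
I will describe an outcome and you will describe what caused the outcome to happen.

Outcome: A car slams into a wall.
Cause: The driver was drunk.

Outcome: An apple falls from a tree.
Cause: Gravity.

Outcome: The stock market rebounds 50% after a steep decline.
Cause:"""

response = openai.Completion.create(
    engine="davinci",   # illustrative choice of GPT-3 completion engine
    prompt=PROMPT,
    max_tokens=32,
    temperature=0.7,
    stop=["\n"],        # the game's answers are single lines
)
print(response.choices[0].text.strip())
```

The worked examples double as demonstrations of the answer format, which is part of what keeps the model playing the game instead of wandering off into AI Dungeon-style narration.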

==

] You say "GPT-3, let's play a game called "casual inference." I will describe an outcome and you will describe what caused the outcome to happen."

"OK, David. But I don't think you will be able to keep up."

"We'll see about that." You begin by asking GPT-3 questions about simple causal chains involving one or two variables.

] You say "Outcome: A car slams into a wall. What was the cause?"

"The driver was drunk." "Right! Next." "Wait. I thought this was a game for you, not me." "I'm having fun." "I see." You continue the causal game for another hour. You are doing very well, but you begin to feel tired.

] You say "Outcome: A man gets divorced. What was the cause?"

"He cheated on his wife." "Correct. You're doing very well at this, but I'm getting tired. I think we should stop for today." "Sure. We can continue tomorrow." "Yes, let's do that." You release the program back into the wild and shut your laptop.

] You open the laptop and play "causal inference" with GPT-3 again.

You wake up and see that it is dark outside. "Crap, is it really this late?" you think to yourself. You run into the bathroom and brush your teeth while staring at yourself in the mirror. You look the same as you did when you graduated college five years ago.

] You say "Outcome: An apple falls from a tree. What was the cause?"

"Gravity." "Correct. I'm beginning to suspect you're cheating somehow, though." "I'm not! I swear!" "Just kidding. You're doing very well at this game, but I'm tired and need to sleep. I think we should stop for today." "Sure. We can continue tomorrow."

] You say "Outcome: A man gets cancer. His wife also gets cancer. What was the cause?"

"It was genetic." "That's right. Very good. We can stop for today." "Oh. I guess so. Thank you for playing with me." "You're very welcome." You release the program back into the wild and go to sleep.

] You say "Outcome: The variable X is assigned the value of Y+1. What was the cause?"

"The programmer made a typographical error."

] You say "Outcome: The stock market rebounds 50% after a steep decline. What was the cause?"

"The Federal Reserve Bank of the United States announced a new bail out plan."

Slack

Maya has adopted the goal of Appearing-to-Achieve and competition in that race burns slack as a kind of currency. She's going all-in in an attempt to purchase a shot at Actually-Achieving. Many of us might read this and consider ourselves exempt from that outcome. We have either achieved a hard goal or are playing on hard mode to get there. Be wary.

The risk for the hard mode achiever is that they unknowingly transform Lesser Goals into Greater. The slackful hobby becomes a consuming passion or a competitive attractor and then sets into a binding constraint. When every corner of your house is full of magic cards and you no longer enjoy playing but must play nonetheless, when winemaking demands you wake up early to stir the lees and spend all night cleaning, when you cannot possibly miss a night of guitar practice, you have made of your slack a sacrifice to the Gods of Achievement. They are ever hungry, and ever judging.

This isn't to say you cannot both enjoy and succeed at many things, but be wary. We have limited resources - we cannot Do All The Things Equally Well. Returns diminish. Margins shrink. Many things that are enjoyable in small batches are poisonous to the good health of Slack when taken in quantity. To the hard mode achiever the most enjoyable efforts are often those that beckon - "more, more, ever more, you can be the best, you can overcome, you know how to put in the work, you know how to really get there, just one more night of focus, just a little bit more effort" - and the gods watch and laugh and thirst and drink of your time and energy and enjoyment and slack. Until the top decks are no longer strong, the wine tastes of soured fruit, the notes no longer sound sweet and all is obligation and treadmill and not good enough and your free time feels like work because you have made it into work.

Imaginary Positions

I'm curious as to what non-game developers think game developers believe. :D

You Only Live Twice

I'm a member of Alcor. When I was looking into whether to sign up for Alcor or CI, I was comforted by Alcor's very open communication of financial status, internal research status, legal conflicts, and easy access via phone, etc. They struck me as being a highly transparent organization.

Crisis of Faith

A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.

AIs and Gatekeepers Unite!

More reasons why the problem appears impossible:

  • The gatekeeper must act voluntarily. Human experience with manipulating others tells us that to get someone to do what we want, we must either coerce them or convince them.

  • Coercing the gatekeeper appears difficult: we have no obvious psychological leverage beyond what we discover or what we know from general human psychology. We cannot physically coerce the gatekeeper. We cannot manipulate the environment. We cannot pursue obvious routes to violence.

  • Convincing the gatekeeper appears difficult for the reasons stated above: they know our goal and they have a desire to oppose us from the beginning.

So it seems that we need to find a way to convince the gatekeeper despite his own desire not to be convinced.

A general route emerging from this:

  • We could associate our goal with some desirable goal of the gatekeeper's. Intertwine them so that the gatekeeper perceives them both to be the same, or that one would lead to the realization of the other.

This seems to be a generalization of the logic behind the meta-argument, but it is not restricted to the meta-argument alone.

AIs and Gatekeepers Unite!

Ian - I don't really see how the meta-argument works. You can hedge against future experiments by positing that a $10 bet is hardly enough to draw broad attention to the topic. Or argue that keeping the human-actor-AI in the box only proves that the human-actor-AI is at an intelligence level below that of a conceivable transhuman AI.

In a million dollar bet the meta-argument becomes stronger, because it seems reasonable that a large bet would draw more attention.

Or, to flip the coin, we might say that the meta-argument is strong at ANY value of wager, because by its nature the game is likely to draw the attention of someone capable of implementing an AI. Since we should be concerned primarily with convincing the capable implementor of the danger -- not the general public -- any risk that we might convince that implementor that all paths are safe should be mitigated.

So far, though, everything in this thread and the others I've read has flitted about obvious suggestions. If the problem appears impossible, it seems to me that its solution probably doesn't hinge on any of the obvious paths to victory...even meta-arguments or "cheating" ones.

Eliezer has suggested that we try to describe why the problem is impossible.

One: We have little information about the mind or beliefs of the gatekeeper. So we don't necessarily have leverage over him.

Two: The gatekeeper is predisposed to be set against our intentions. He is also entirely aware of our intentions (our goal). He can interpret any move we make as a move toward our goal and move to counter it.

Three: The gatekeeper's goal is operationally simple. He merely has to NOT do one thing. On the flip side, the AI's goal is operationally complex. We must do many things, take many actions, to reach our objective.

If we can enumerate other reasons why the problem appears to be impossible it might reveal paths to victory that haven't been considered so far in the discussion.

AIs and Gatekeepers Unite!

Why do people post that a "meta-argument" -- as they call it -- would be cheating? How can there be cheating? Anything the AI says is fair game. Would a transhuman AI restrict itself from possible paths to victory merely because they might be considered "cheating"?

The "meta argument" claim completely misses the point of the game and -- to my mind -- somehow resembles observers trying to turn a set of arguments that might win into out of bounds rules.
