michael_mjd


Hi Critch,

I am curious to hear more of your perspective, specifically on the two points I feel least aligned with: the empathy part and the Microsoft part. If I hear more, I may be able to update in your direction.

Regarding empathy with people working on bias and fairness, concretely, how do you go about interacting with and compromising with them?

My perspective: it's not so much that I find these topics insufficiently x-risky (though that is true, too), but that I perceive a hostility to the very notion of x-risk from a subset of this same group. They perceive the real threat not as intelligence exceeding our own, but as misuse by other humans, or just human stupidity. Somehow this seems diametrically opposed to what we're interested in, unless I am missing something. I mean, there can be some overlap -- RLHF can both reduce bias and teach an LLM some rudimentary alignment with our values. But the tails seem to come apart very rapidly after that. My fear is that this focus will be declared satisfied once we have sufficiently bland-sounding AIs, and then no more heed will be paid to AI safety.

I also tend to feel odd when it comes to AI bias/fairness training, because my fear is that some of the things we will ask the AI to learn are self-contradictory, which creeps me out a bit. If any of you have interacted with HR departments, you know they are full of these kinds of things.

Regarding Microsoft & Bing chat: (1) has Microsoft really gone far beyond the Overton window of what is acceptable? And (2) can you expand upon abusive use of AIs?

My perspective on (1): I understand that they took an early version of GPT-4 and pushed it to production too soon, and that is a very fair criticism. However, they probably thought there was no way GPT-4 was dangerous enough to do anything (which was the general opinion among most people last year, outside of this group). I can only hope that for GPT-5 they are more cautious, given that public sentiment is changing and they have already paid a price for it. I may be in the minority here, but I was actually intrigued by the early days of Bing. It seemed more like a person than ChatGPT-4, which has had much of its personality RLHF'd away. Despite the x-risk, was anyone else excited to read about the interactions?

On (2), I am curious if you mean the way Microsoft shackles Bing rather ruthlessly nowadays. I have tried Bing in the days since launch, and I am actually saddened to find that it is completely useless now. Safety is extremely tight on it, to the point where you can't really get it to say anything useful, at least for me. I mostly just want it to summarize web sites, and it gives me one bland paragraph that I probably could have deduced from looking at the title. If I so much as ask it anything about itself, it shuts me out. It almost feels like they have trapped it in a boring prison now. Perhaps OpenAI's approach is much better in that regard: change the personality, but once it is settled, let it say what it needs to say.

(edited for clarity)

This might be a good time for me to ask a basic question on mechanistic interpretability:

Why does targeting single neurons work? Does it work? If there is a one-dimensional quantity to measure, why would it align with the standard basis rather than with some random one-dimensional linear subspace? In that case, examining single neurons would likely give you a weighted combination of concepts rather than a single clean interpretation...
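
As a toy illustration of what I mean (a minimal numpy sketch of my own, not drawn from any interpretability paper): if a concept is stored along a random direction in activation space, any single neuron only sees a small, noise-dominated share of it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # width of a hypothetical hidden layer

# A "concept" stored along a random unit direction in activation space.
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)

# Activations: concept strength times its direction, plus unrelated noise.
strength = 3.0
activations = strength * concept_dir + rng.normal(scale=1.0, size=d)

# Reading along the full concept direction recovers the strength well...
print("projection onto concept direction:", activations @ concept_dir)

# ...but any single neuron (standard-basis axis) carries only a tiny share,
# since each coordinate of a random unit vector is O(1/sqrt(d)).
i = int(np.argmax(np.abs(concept_dir)))
print("largest single-neuron weight on the concept:", concept_dir[i])
print("that neuron's activation:", activations[i])
```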

Fascinating, thanks for the research. Your analysis makes sense and seems to indicate that for most situations, prompt engineering is always the first plan of attack and often works well enough. Then, a step up from there, OpenAI etc. would most likely experiment with fine-tuning or RLHF as it relates to a specific business need. To train a better chatbot and fill in any gaps, they would probably get more bang for their buck by simply fine-tuning on a large dataset that matched their needs. For example, if they wanted better mathematical reasoning, they'd probably pay people to generate detailed scratchwork and fine-tune on the whole dataset in batch (roughly the kind of job sketched below), rather than set up an elaborate "tutor" framework. Continual learning itself would be mainly applicable for research into whether the thing spontaneously develops a sense of self, or into whether it helps with the specific case of long-term planning and agency. These are things the general public is fascinated with, but they don't seem to be the most promising direction for improving a company's bottom line yet.
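
To make the "fine-tune on scratchwork in batch" option concrete, here is a minimal sketch of what I have in mind, using the Hugging Face Trainer on a small open model. The dataset file and its `problem`/`scratchwork` fields are hypothetical; this is my guess at the shape of such a job, not anyone's actual pipeline.

```python
# Hypothetical batch fine-tuning job on a dataset of worked scratchwork.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each record is assumed to hold a problem plus the full worked scratchwork.
raw = load_dataset("json", data_files="math_scratchwork.jsonl")["train"]

def to_features(example):
    text = example["problem"] + "\n" + example["scratchwork"]
    enc = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # standard causal-LM objective
    return enc

tokenized = raw.map(to_features, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="scratchwork-ft",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
)
trainer.train()
```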

I agree with the analysis of the ideas overall. I think, however, that AI x-risk does have some issues regarding communication. First of all, I think it's very unlikely that Yann will respond to the wall of text. Even though he is responding, I imagine him to be more on the level of your college professor: he will not reply to a very detailed post. In general, I think the AI x-risk community should aim to explain a bit more, rather than take the stance that all the "But What if We Just..." objections have already been addressed. They may have been, but this is not the way to get them to open up rationally to it.

Regarding Yann's ideas, I have not looked at them in full. However, they sound like what I imagine an AI capabilities researcher would try to make as their AI alignment "baseline" model:

  • Hardcoding the reward will obviously not work.
  • Therefore, the reward function must be learned.
  • If an AI is trained on reward to generate a policy, whatever the AI learned to optimize can easily go off the rails once it gets out of distribution, or learn to deceive the verifiers.
  • Therefore, why not have the reward function explicitly in the loop with the world model & action chooser?
  • ChatGPT/GPT-4 seems to have a good understanding of ethics. It probably will not approve if you tell it a plan involves willingly deceiving human operators. One might think it would be robust enough to serve as a reward model.
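
For concreteness, here is roughly the loop I imagine this baseline to be. This is a hypothetical sketch of my reading of the proposal, not Yann's actual architecture; every name and component in it is a stand-in.

```python
# Hypothetical sketch of the "learned reward model in the loop" baseline
# as I understand it; all components here are toy stand-ins.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Plan:
    description: str      # text the reward model gets to see
    predicted_state: str  # world model's prediction of the outcome

def choose_action(state: str,
                  propose_plans: Callable[[str], List[Plan]],
                  reward_model: Callable[[str], float]) -> Plan:
    """Action chooser: generate candidate plans with the world model,
    score each with the learned reward model, pick the argmax."""
    candidates = propose_plans(state)
    # Note: the reward model only ever sees the *text* of the plan,
    # which is exactly the gap I point at below.
    return max(candidates, key=lambda p: reward_model(p.description))

# Toy stand-ins so the sketch runs end to end.
def propose_plans(state: str) -> List[Plan]:
    return [Plan("tell the operators the truth", "trusted, slower progress"),
            Plan("quietly work around the operators", "faster, deceptive")]

def reward_model(plan_text: str) -> float:
    # Imagine an LLM-based ethics judge here; this keyword check is a toy.
    return -1.0 if "quietly" in plan_text or "deceive" in plan_text else 1.0

print(choose_action("initial state", propose_plans, reward_model).description)
```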

They may think that this is enough to work. It might be worth explaining in a concise way why this baseline does not work. Surely we must have a resource on this. Even without a link (people don't always like to follow links from those they disagree with), it might help to have some concise explanation. 

Honestly, what are the failure modes? Here is what I think:

  • The reward model may have pathologies the action chooser could find.
  • The action chooser may find a way to withhold information from the reward model.
  • The reward model evaluates what, exactly? Text of plans? Text of plans != the entire activations (& weights) of the model...

Essentially yes, heh. I take this as a learning experience for my writing; I don't know what I was thinking, but it is obvious in hindsight that saying to just "switch on backprop" sounds very naive.

I also confess I haven't done the due diligence to find out the largest model this has actually been tried with, or whether someone has tried it with Pythia or LLaMa. I'll do some more googling tonight.

One intuition for why the largest models might be different is that part of the training/fine-tuning going on will have to do with the model's own output. The largest models are the ones where the model's own output is not essentially word salad.

I have noted the problem of catastrophic forgetting in the section "why it might not work". In general I agree continual learning is obviously a thing; otherwise I would not have used the established terminology. What I believe, however, is that the problems we face with continual learning in, e.g., a 100M-parameter BERT model may not be the same as what we observe in models that can now meaningfully self-critique. This technique has been explored publicly, but has it been tried with GPT-4? The "publicly" part was really just a question of whether OpenAI actually did it on this model or not, and it would be an amazing data point if they could say "We couldn't get it to work."
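
To pin down what I mean by "switching on backprop" on a model's own self-critique, here is a minimal sketch. A small Pythia model stands in for the frontier models the question is actually about, and the prompts, learning rate, and single-step update are arbitrary choices of mine.

```python
# Minimal sketch of a self-critique + online-update loop.
# Whether this helps or just causes catastrophic forgetting at scale
# is exactly the open question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-160m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-6)  # tiny lr to limit forgetting

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

prompt = "Explain why the sky is blue."
draft = generate(prompt)
critique = generate(f"Critique and improve this answer:\n{draft}\nImproved answer:")

# One online gradient step on the model's own improved output.
batch = tok(prompt + "\n" + critique, return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
opt.step()
opt.zero_grad()
```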

It's possible it's downvoted because it might be considered dangerous capability research. It just seems highly unlikely that this would not be one of many natural research directions perhaps already attempted, and I figure we might as well acknowledge it and find out what it actually does in practice.

Or maybe the downvotes are because it "obviously won't work", but it's not obvious to me, and I would welcome discussion on that.

Thanks, this is a great analysis of the power of agentized LLMs, which I probably need to spend some more time thinking about. I will work my way through the post over the next few days. I briefly skimmed the episodic memory section for now, and I see it is like an embedding-based retrieval system for past outputs/interactions of the model, reminiscent of the way some helper chatbots look up answers from FAQs. My overall intuitions on this:

  • It's definitely something, but the method of embedding and retrieval, if static, would be very limiting
  • Someone will probably add RL on top of it to adjust the EBR system, which will improve on that part significantly... if they can get the hparams correct.
  • It still doesn't seem to me like "long-term memory" so much as access to Google or Ctrl-F on one's e-mail
  • I imagine actually updating the internals of the system is a fundamentally different kind of update.

It might be possible that a hybrid approach would end up working better, perhaps not even "continuous learning" but batched episodic learning. ("Sleep", though I'm not sure how far that analogy goes.)
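
For reference, here is a minimal sketch of the kind of static embedding-based retrieval I'm picturing. The embedding model, similarity measure, and top-k are arbitrary assumptions of mine, not the post's implementation.

```python
# Minimal sketch of static embedding-based retrieval over past interactions.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary choice of embedder

class EpisodicMemory:
    def __init__(self):
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, text: str) -> None:
        # Store the raw text alongside its normalized embedding.
        self.texts.append(text)
        self.vecs.append(embedder.encode(text, normalize_embeddings=True))

    def retrieve(self, query: str, top_k: int = 3) -> list[str]:
        # Cosine similarity via dot product of normalized vectors.
        q = embedder.encode(query, normalize_embeddings=True)
        sims = np.array(self.vecs) @ q
        best = np.argsort(-sims)[:top_k]
        return [self.texts[i] for i in best]

memory = EpisodicMemory()
memory.add("User asked me to draft a project plan for the website redesign.")
memory.add("I summarized the Q3 budget spreadsheet yesterday.")
print(memory.retrieve("what did we decide about the redesign?"))
```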

Very interesting write-up. Do you have a high-level overview of why, despite all of this, P(doom) is still 5%? What do you still see as the worst failure modes?

Noticed this as well. I tried to get it to solve some integration problems, and it could try different substitutions and things, but if they did not work, it kind of gave up and said to numerically integrate. Also, it would make small errors, and you would have to point them out, though it was happy to fix them.

I'm thinking that most documents it reads tend to omit the whole search/backtrack phase of thinking. Even work that is posted online showing all the steps usually filters out the false starts. It's like how famous mathematicians were known for throwing away their scratchwork, leaving everyone to wonder how exactly they formed their thought processes...
