I'm not sure what point this post is trying to make exactly. Yes, it's function approximation; I think we all know that.

When we talk about inner and outer alignment, outer alignment is "picking the correct function to learn." (When we say "loss," we mean the loss on a particular task, not an abstract loss function like RMSE.)

Inner alignment is about training a model that generalizes to situations outside the training data.
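
To make that distinction concrete, here's a minimal toy sketch (my own construction, not anything from the post): the model minimizes the chosen loss essentially perfectly on the training task, but it latches onto a spurious proxy feature and misgeneralizes once the data shifts.

```python
# Toy illustration (assumed setup): a model that nails the training objective
# but relies on a spurious proxy feature and fails off-distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Training distribution: `cause` predicts the label 90% of the time,
# `proxy` predicts it 100% of the time (a spurious shortcut).
y_train = rng.integers(0, 2, n)
cause = np.where(rng.random(n) < 0.9, y_train, 1 - y_train)
proxy = y_train.copy()
X_train = np.column_stack([cause, proxy]).astype(float)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # ~1.0

# Deployment distribution: the shortcut decorrelates, the real cause doesn't.
y_test = rng.integers(0, 2, n)
cause = np.where(rng.random(n) < 0.9, y_test, 1 - y_test)
proxy = rng.integers(0, 2, n)
X_test = np.column_stack([cause, proxy]).astype(float)
print("test accuracy:", model.score(X_test, y_test))     # collapses toward chance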

(It would be convenient if so, but this would feel surprising - otherwise you could just start a corporation, not pay your taxes the first year, dissolve it, start an identical corporation the second year, and so on.)

This (a consistent pattern of doing the same thing) would get you prosecuted, because courts are allowed to pierce the corporate veil, which is lawyer-speak for "call you out on your bullshit." If it's obvious that you're creating corporations as a legal fiction to avoid taxes, the court will go after the shareholders directly (so long as the prosecution can prove the corporation exists in name only).

Because GPT-3.5 is a fine-tuned version of GPT-3, which is known to be a vanilla dense transformer.

GPT-4 is probably, in a very funny turn of events, a few dozen fine-tuned GPT-3.5 clones glued together (as a MoE).
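
For what it's worth, here's a minimal PyTorch sketch of what "several fine-tuned copies glued together as a MoE" could mean mechanically. The expert count, the top-1 routing rule, and any resemblance to GPT-4's actual architecture are pure assumptions on my part.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8):
        super().__init__()
        # Each "expert" stands in for one fine-tuned copy of the base model's FFN.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)  # learned gating network

    def forward(self, x):  # x: (batch, seq, d_model)
        gate = self.router(x).softmax(dim=-1)  # routing probabilities per token
        top = gate.argmax(dim=-1)              # top-1 expert index per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top == i                    # tokens routed to expert i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

x = torch.randn(2, 5, 64)
print(TinyMoE()(x).shape)  # torch.Size([2, 5, 64])
```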

Whether the couple is capable of having preferences probably depends on your definition of “preferences.” The more standard term for the preferences of a group of people is “social choice function.” The main problem we run into is that social choice functions don’t behave like individual preferences. By Gibbard’s theorem, any social choice function is either Pareto-inefficient or not incentive-compatible, and in the latter case it’s effectively unobservable, because people have no reason to report their true preferences.

Sometimes Pareto inefficiency is the price we must pay to get people to volunteer information (e.g., random dictatorship is Pareto-inefficient if we’re risk-averse, but it encourages everyone to state their true preferences). But I don’t see what information we’re getting here. Everyone’s preferences were already known ahead of time, so there was no need to choose the inefficient option.
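
To spell out the random-dictatorship point with a toy example (the numbers are mine, chosen so the compromise option beats the coin flip for both agents):

```python
# Two agents, three options. Each agent has a distinct favorite, but both
# prefer the compromise z to a 50/50 lottery over the two favorites.
utilities = {
    "A": {"x": 1.0, "y": 0.0, "z": 0.6},   # agent A's favorite is x
    "B": {"x": 0.0, "y": 1.0, "z": 0.6},   # agent B's favorite is y
}

# Random dictatorship: flip a fair coin, let the winner pick their favorite.
# Truth-telling is optimal (your report only matters when you're the dictator),
# so the mechanism is incentive-compatible.
expected = {agent: 0.5 * u["x"] + 0.5 * u["y"] for agent, u in utilities.items()}
print(expected)                                 # {'A': 0.5, 'B': 0.5}

# But the deterministic compromise z gives both agents 0.6 > 0.5,
# so the random-dictatorship lottery is Pareto-dominated.
print({agent: u["z"] for agent, u in utilities.items()})
```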

One elephant in the room throughout my geometric rationality sequence is that it sometimes advocates randomizing between actions, and so geometrically rational agents cannot possibly satisfy the von Neumann–Morgenstern axioms.

It's not just VNM; it doesn't even make logical sense. Probabilities are about your knowledge, not the state of the world: barring bizarre fringe cases (Cromwell's law), I can always say that whatever I'm currently doing has probability 1, because I'm doing it, which means it's physically impossible to randomize your own actions. I can certainly assign a probability other than 0 or 1 to doing something if that action depends on information I haven't received yet. But as soon as I've received all the information involved in making my decision and updated on it, I can't have a 50% chance of doing something. Trying to randomize your own actions amounts to refusing to update on the information you have, which is a violation of Bayes' theorem.

The problem is that they don't want to switch to Boston; they're happy moving to Atlanta.

In this world, the one that actually exists, Bob still wants to move to Boston. The fact that Bob made a promise and would now face additional costs for breaking the contract (i.e., upsetting Alice) doesn't change the fact that he'd be happier in Boston; it just means that the contract, and the act of revealing this information, changed the options available. The choices are no longer "Boston" vs. "Atlanta"; they're "Boston and upset Alice" vs. "Atlanta and don't upset Alice."

Moreover, holding to this contract after the information is revealed also rejects the possibility of a Pareto improvement (equivalent to a Dutch book). Say Alice and Bob agree to randomize their choice as you suggest. In that case, both of them are strictly worse off than if they had agreed on an insurance policy instead: a contract that has Bob more than compensate Alice for the cost of moving to Boston if the California option fails would leave both of them strictly better off.
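
As a made-up numerical version of that point (the dollar figures are mine, and I'm ignoring the California branch for simplicity):

```python
# Utilities in $k, relative to "move to Atlanta, nobody compensated."
alice = {"atlanta": 0.0, "boston": -4.0}   # Alice would pay up to 4 to avoid Boston
bob   = {"atlanta": 0.0, "boston": 8.0}    # Bob values Boston at 8 over Atlanta

# Option 1: honor the agreed randomization (a fair coin flip).
alice_flip = 0.5 * alice["atlanta"] + 0.5 * alice["boston"]   # -2.0
bob_flip   = 0.5 * bob["atlanta"]   + 0.5 * bob["boston"]     #  4.0

# Option 2: the insurance-style contract -- move to Boston for sure, and Bob
# transfers t to Alice. Any t between 2 and 4 makes *both* strictly better off.
t = 3.0
alice_deal = alice["boston"] + t    # -1.0 > -2.0
bob_deal   = bob["boston"] - t      #  5.0 >  4.0
print((alice_flip, bob_flip), (alice_deal, bob_deal))
```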

You forgot to include a sixth counterargument: you might successfully accomplish everything you set out to do, producing dozens of examples of misalignment, but as soon as you present them, everyone working on capabilities will excuse them away as "not real misalignment" for one reason or another.

I have seen more "toy examples" of misalignment than I can count (e.g. goal misgeneralization in the CoinRun example, deception here, and the not-so-toy example of GPT-4 failing badly as soon as it was deployed out of distribution, with the only things needed to break it being a less-than-perfect prompt and the name Sydney). We've successfully shown that AIs can be misaligned in several ways theory predicted ahead of time. Nobody cares, and nobody has used this information to advance alignment research. At this point I've concluded that AI companies, even ones claiming otherwise, will not care until somebody dies.

Using RL(AI)F may offer a solution to all the points in this section: By starting with a set of established principles, AI can generate and revise a large number of prompts, selecting the best answers through a chain-of-thought process that adheres to these principles. Then, a reward model can be trained and the process can continue as in RLHF. This approach is potentially better than RLHF as it does not require human feedback.

I'd like to say that I fervently disagree with this. Giving an unaligned AI the opportunity to modify its own weights (by categorizing its own responses to questions), then politely asking it to align itself, is quite possibly the worst alignment plan I've ever heard; it's penny-wise, pound-foolish. (Assuming it even is penny-wise; I can think of several ways to generate a self-consistent AI that would cost less.)
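
For concreteness, the loop being proposed looks roughly like this. Every function below is a toy stub of my own (no real model, API, or constitution); it only shows the shape of the pipeline: generate, critique against principles, revise, then collect AI preference labels to train a reward model.

```python
import random

PRINCIPLES = ["be honest", "be harmless"]          # stand-in "constitution"

def generate(prompt):                              # stub for the base model
    return [f"draft {i} for {prompt!r}" for i in range(4)]

def critique(response, principle):                 # stub chain-of-thought critique
    return f"checking {response!r} against '{principle}'"

def revise(response, critiques):                   # stub revision step
    return response + " (revised)"

def ai_preference(a, b):                           # stub AI preference label
    return a if random.random() < 0.5 else b

prompts = ["prompt 1", "prompt 2"]
preference_data = []
for prompt in prompts:
    drafts = generate(prompt)
    revised = [revise(d, [critique(d, p) for p in PRINCIPLES]) for d in drafts]
    a, b = random.sample(revised, 2)
    preference_data.append((prompt, ai_preference(a, b), a, b))

# `preference_data` would then train a reward model, and RL against that
# reward model proceeds as in ordinary RLHF.
print(preference_data[0])
```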

I think there's a fundamental asymmetry in the case you mentioned: it's not verifying whether a particular program halts that's difficult, it's writing an algorithm that can verify whether any program halts. In other words, the problem is adversarial inputs. To keep the symmetry, we'd need to say that the generation problem is "generate all computer programs that halt," which is also not possible.

I think a better example would be: how hard is it to generate a semiprime? Not hard at all: just generate two primes and multiply them. How hard is it to verify that a number is semiprime? Very hard; you'd have to factor it.
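
A quick sketch of that asymmetry (toy sizes only; the semiprimes used in cryptography are far beyond trial division):

```python
import random

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def generate_semiprime(bits=16):
    # Easy direction: pick two primes and multiply them.
    primes = [p for p in range(2, 1 << (bits // 2)) if is_prime(p)]
    return random.choice(primes) * random.choice(primes)

def is_semiprime(n):
    # Hard direction (in general): we have to find a factorization.
    # Trial division works for toy sizes, but is hopeless for cryptographic ones.
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return is_prime(p) and is_prime(n // p)
    return False

n = generate_semiprime()
print(n, is_semiprime(n))
```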

That's correct, but that just makes this a worse (less intuitive) version of the stag hunt.

Really? I think a tiny bit of effort will do exactly nothing, or at best further entrench their beliefs ("See? Even the rationalists think we have valid points!"). The best response, as with most trolls, is just to ignore them.
