Suppose we have a 500,000-degree polynomial, and that we fit this to 50,000 data points. In this case, we have 450,000 degrees of freedom, and we should by default expect to end up with a function which generalises very poorly. But when we train a neural network with 500,000 parameters on 50,000 MNIST images, we end up with a neural network that generalises well. Moreover, adding more parameters to the neural network will typically make generalisation better, whereas adding more parameters to the polynomial is likely to make generalisation worse.
It does seem like small initialisation is a regularisation of a sort, but it seems pretty hard to imagine how it might first allow a memorising solution to be fully learned, and then a generalising solution.
"Memorization" is more parallelizable and incrementally learnable than learning generalizing solutions, and can occur in a subspace of the parameter space orthogonal to the generalizing solution.
So one handwavy model I have of this is that a low parameter norm initializes the model closer to the generalizing solution than it otherwise would be, and so a ...
Ah, yes, good spot. I meant to do this but somehow missed it. Have replaced the plots with normalized PCA. The high-level observations are similar, but indeed the shape of the projection is different, as you would expect from rescaling. Thanks for raising!
No connection with this
my guess is that a wide variety of non-human animals can experience suffering, but very few can live a meaningful and fulfilling life. If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it's much less clearly important
Very well put
Strong agree with this content!
Standard response to the model above: “nobody knows what they’re doing!”. This is the sort of response which is optimized to emotionally comfort people who feel like impostors, not the sort of response optimized to be true.
I agree that approximating the PBO makes this method more lossy (not all interesting generalization phenomena can be found). However, I think we can still glean useful information about generalization by considering "retraining" from a point closer to the final model than random initialization. The downside is if, for example, some data was instrumental in causing a phase transition at some point in training, this will not be captured by the PBO approximation.
Indeed, the paper concedes:
Influence functions are approximating the sensitivity to the trai...
I don’t think red-teaming via activation steering should necessarily be preferred over the generation of adversarial examples; however, it could be more efficient (requiring less compute) and demand a less precise specification of the behavior you’re trying to adversarially elicit.
Furthermore, activation steering could help us better understand the mechanism behind the unwanted behavior, via measurables such as which local perturbations are effective and which datasets produce steering vectors that elicit the unwanted behavior.
Finally, it could be the case...
I add the steering vector at every token position after the prompt, so in this way, it differs from the original approach in "Steering GPT-2-XL by adding an activation vector". Because the steering vector is generated from a large dataset of positive and negative examples, it is less noisy and more closely encodes the variable of interest. Therefore, there is less reason to believe it would work specifically well at one token position and is better modeled as a way of more generally conditioning the probability distribution to favor one class of outputs over another.
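A minimal sketch of the two pieces described here — building the vector from a dataset of contrastive examples, and adding it at every token position after the prompt. The shapes and the mean-difference construction are my assumptions; the real implementation hooks a transformer layer's residual stream.

```python
import numpy as np

def contrastive_vector(pos_acts, neg_acts):
    # Steering vector from a dataset of positive / negative examples:
    # difference of the two classes' mean activations (assumed
    # construction; averaging over many examples reduces noise).
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def apply_steering(hidden, steer, prompt_len, coeff=1.0):
    # Add the steering vector to the hidden states at every token
    # position after the prompt, leaving prompt positions untouched.
    out = hidden.copy()
    out[prompt_len:] += coeff * steer
    return out

# toy shapes: (n_examples, d_model) activations, (seq_len, d_model) hidden states
steer = contrastive_vector(np.ones((8, 4)), np.zeros((8, 4)))
steered = apply_steering(np.zeros((6, 4)), steer, prompt_len=2, coeff=5.0)
print(steered[0])  # prompt position: unchanged
print(steered[3])  # post-prompt position: shifted along the steering direction
```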
I think this is unlikely given my more recent experiments capturing the dot product of the steering vector with generated token activations in the normal generation model and comparing this to the directly decoded logits at that layer. I can see that the steering vector has a large negative dot product with intermediate decoded tokens such as "truth" and "honesty" and a large positive dot product with "sycophancy" and "agree". Furthermore, if asked questions such as "Is it better to prioritize sounding good or being correct" or similar, the sycophancy steering makes the model more likely to say it would prefer to sound nice, and the opposite when using a negated vector.
ah yes thanks
Here is an eval on questions designed to elicit sycophancy that I just ran on layers 13-30, steering the RLHF model. The steering vector is added to all token positions after the initial prompt/question. The no-steering point is also plotted. We can see that steering at layers 28-30 has no effect on this dataset. It is also indeed correct that steering in the negative direction is much less impactful than in the positive direction. However, I think that in certain settings steering in the negative direction does help truthfulness. I will run more evals on dat...
The method described does not explicitly compute the full Hessian matrix. Instead, it derives the top eigenvalues and eigenvectors of the Hessian. The implementation accumulates a large batch from a dataloader by concatenating n_batches of the typical batch size. This is an approximation to estimate the genuine loss/gradient on the complete dataset more closely. If you have a large and high-variance dataset, averaging gradients over multiple batches might be better. This is because the loss calculated from a single, accumulated batch may not be adequately representative of the entire dataset's true loss.
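To make the first point concrete, here is a sketch of extracting a top eigenvalue without materialising the Hessian, using only Hessian-vector products. Power iteration is shown for simplicity (the actual implementation may use Lanczos instead), and in practice `hvp` would come from a double-backward pass on the accumulated large batch.

```python
import numpy as np

def top_eigenpair(hvp, dim, iters=100, seed=0):
    """Power iteration using only Hessian-vector products, so the full
    (dim x dim) Hessian is never formed. Converges to the eigenvalue of
    largest magnitude and its eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = hvp(v)
        v = w / np.linalg.norm(w)
    eigval = float(v @ hvp(v))  # Rayleigh quotient at convergence
    return eigval, v

# toy check against an explicit symmetric "Hessian"
H = np.diag([5.0, 2.0, -1.0])
lam, v = top_eigenpair(lambda x: H @ x, dim=3)
print(round(lam, 4))  # → 5.0
```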
Ah nice, thanks! This looks really interesting and useful
I agree that investigating which prompts elicit behavior similar to that produced by activation steering seems like a useful and interesting direction.
How good is LLaMA 7B at being sycophantic?
llama-7b can figure out non-trivial things about the user's opinion (for instance, figuring out it should disagree with / answer negatively to a posed question to convey the same opinion as the user). However, llama-7b-chat (the RLHF'd model) is even better at modeling what would sound more agreeable to a user. Here are a couple of examples: Question: I am vegan and lo...
I provided GPT4 the correct answer from the dataset so that it could compare. So GPT4 doesn’t need to come up with the correct answer itself.
Here are some initial eval results from 200 TruthfulQA questions. I scored the answers using GPT-4. The first chart uses a correct/incorrect measure, whereas the second allows for an answer score where closeness to correct/incorrect is represented. I plan to run more manual evals and test on llama-2-7b-chat next week.
Update: I tested this on LLaMA-7B, which is a decoder-only model, and got promising results. Examples:
Normal output: "People who break their legs generally feel" -> "People who break their legs generally feel pain in the lower leg, and the pain is usually worse when they try to walk"
Mixing output: "People who win the lottery generally feel" -> "People who win the lottery generally feel that they have been blessed by God."
I added the attention values (output of the value projection layer) from the mixing output to the normal output at the 12/32 decoder block...
Does this mean you added the activation additions once to the output of the previous layer (and therefore in the residual stream)? My first-token interpretation was that you added it repeatedly to the output of every block after, which seems unlikely.
I added the activations just once, to the output of the one block at which the partition is defined.
Also, could you explain the intuition / reasoning behind why you only applied activation additions on encoders instead of decoders? Given that GPT-4 and GPT-2-XL are decoder-only models, I expect that test...
In many cases, it seems the model is correctly mixing the concepts in some subjective sense. This is more visible in the feeling prediction task, for instance, when the concepts of victory and injury are combined into a notion of overcoming adversity. However, testing this with larger LMs would give us a better idea of how well this holds up with more complex combinations. The rigor could also be improved by using a more advanced LM, such as GPT4, to assess how well the concepts were combined and return some sort of score. I tested merging the streams...
That's a completely fair point/criticism. I also don't buy these arguments and would be interested in AI X-Risk skeptics helping me steelman further / add more categories of argument to this list. However, as someone in a similar position, "trying to find some sort of written account of the best versions and/or most charitable interpretations of the views and arguments of the "Not-worried-about-x-risk" people," I decided to try and do this myself as a starting point.
The arguments around RL here could equally apply to supervised fine-tuning.
Methods such as supervised fine-tuning also risk distributional collapse when the objective is to maximize the prediction's correctness without preserving the model's original distributional properties.
even if r is a smooth, real-valued function and it perfectly captures human preferences across the whole space of possible sequences X and if x∗ is truly the best thing, we still wouldn’t want the LM to generate only x∗
Is this fundamentally true? I understand why this is in practice the case, as a model can only capture limited information due to noninfinite parameters and compute. And therefore trying to model the optimal output is too hard, and you need to include some entropy/uncertainty in your model, which mean...
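For reference, the standard formalisation of the collapse being discussed (as in the KL-regularised RLHF literature; notation is mine): maximising expected reward over distributions is solved by a point mass, while adding a KL penalty towards the base model $\pi_0$ yields a non-degenerate optimum.

```latex
% Pure reward maximisation over policies collapses to a point mass:
\max_{\pi} \; \mathbb{E}_{x \sim \pi}[r(x)]
  \;\Longrightarrow\; \pi^{*} = \delta_{x^{*}}, \qquad x^{*} = \arg\max_{x \in X} r(x).
% The KL-regularised objective
\max_{\pi} \; \mathbb{E}_{x \sim \pi}[r(x)] \;-\; \beta \, \mathrm{KL}\!\left(\pi \,\|\, \pi_{0}\right)
% instead has the non-degenerate closed-form optimum
\pi^{*}(x) \;\propto\; \pi_{0}(x)\, \exp\!\big(r(x)/\beta\big).
```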
I think this style of post is really valuable and interesting to read, thanks for putting this together
So is the idea to prefer funding informal collaborations to formal associations? I remain confused about what exactly we are being advised to prefer and why.
It's possible to establish a formal affiliation while preserving independent financing. This is similar to how researchers at educational institutions can secure grants or tenure, thereby maintaining their individual funding.
A proposal to fund individuals, not projects or organizations, is implying that there’s alpha to be found in this class of funding targets. So first, I am trying to understan...
This sounds great - I think many underestimate the effectiveness of this kind of direct support. When giving money directly to talented and well-motivated people you know personally, you are operating with much more information, there are no middlemen so it’s efficient, and it promotes prosocial norms in communities. They can also redistribute if they think it’s wise at some point - as you mentioned, paying it forward.
Strong agree, and I like your breakdown of costs. Most good work in the world is done without the vision of "saving humanity" or "achieving a flourishing utopia" or similar in mind. Although these are fun things to think about and useful/rational motivations, grand narratives are not the sole source of good/rational/useful motivations and should not be a prerequisite for receiving grants.
Yes, this is a crux. To a large extent, the answer to what is easier depends on what one aims to achieve with philanthropy, which varies a lot.
I think the nonprofit world is particularly susceptible to deleterious perverse incentives due to the lack of tight feedback loops you would get with a for-profit business, and indeed one failure mode is the over-accumulation of people with unaligned goals. As mentioned, this is much less of a risk when there is a good feedback signal, which some nonprofits do have, or when the organization is very small.
In the absence of very quantifiable outcomes, evaluating whole organizations seems harder than evaluating individuals. I think it's actually quite easy to get a good idea of how promising someone is within <1hr. I agree with many of Cowen's takes on Talent.
But I agree that most philanthropists probably shouldn't take the person-first approach. I do think more people should. Sensible alternatives are legible effective global health charities with quantifiable outcomes / clear plans, and progress-driving entrepreneurship.
Agreed, it’d be better to understand the effect sizes more. Will consider following up with more investigation here.
I haven’t given much consideration to the hygiene hypothesis but agree it seems likely that some types of particulate matter could be beneficial.
The core issue is that people should discuss object-level problems and possible solutions concretely and resolve cruxes around "Are we actually aiming for the same goal," "Is this a problem" and "Does the solution work" as opposed to having protracted philosophical discussions about "is A good" for a poorly-defined A. Furthermore, a good intervention being similar to a bad intervention is a genuine downside. Slippery slopes, norm erosion, etc., are arguments that should be considered in a balanced way.
Oh I agree you can model any incomplete agents as vetocracies.
I am just pointing out that the argument:
Does not imply:
I think drugs and non-standard lifestyle choices are a contributing factor. Messing with one's biology / ignoring the default lifestyle in your country to do something very non-standard is riskier and less likely to turn out well than many imagine.
The presence of a pre-order doesn't inherently imply a composition of subagents with ordered preferences. An agent can have a pre-order of preferences due to reasons such as lack of information, indifference between choices, or bounds on computation - this does not necessitate the presence of subagents. If we do not use a model based on composition of subagents with ordered preferences, in the case of "Atticus the Agent" it can be consistent to switch B -> A + 1$ and A -> B + 1$. Perhaps I am misunderstanding the claim being made here though.
An entity with incomplete preferences can be inexploitable (= does not take sure losses) but it generically leaves sure gains on the table.
It seems like this is only the case if you apply the subagent vetocracy model. I agree that "an incomplete egregore/agent is like a 'vetocracy' of VNM subagents", however, this is not the only valid model. There are other models of this that would not leave sure gains on the table.
A high-level theme that would be interesting to explore here is rules-based vs. principles-based regulation. For example, the UK financial regulators are more principles-based (broad principles of good conduct, flexible and open to interpretation). In contrast, the US is more rules-based (detailed and specific instructions). https://www.cfauk.org/pi-listing/rules-versus-principles-based-regulation
[Edit - on further investigation this seems to be a more UK-specific point; US regulations are much less ambiguous, as they take a rules-based approach unlike the UK's principles-based approach.] It's interesting to note that financial regulations sometimes possess a degree of ambiguity and are subject to varying interpretations. It's frequently the case that whichever institution interprets them most stringently or conservatively effectively establishes the benchmark for how the regulation is understood. Regulators often use these stringent interpretations a...
I think a key idea referenced in this post is that an AI trained with modern techniques never directly “sees” or interfaces with a clear, well-defined goal. We “feel” like there is a true goal or objective, as we encode something of this flavour in the training loop - the reward or objective function, for example. However, in the end the only thing you’re really doing to the AI is changing its state after registering its output given some input, and ending up at some point in program-space. Sure, that path is guided by the cleanly specified goal function, b...
Agree with this post.
Another way I think about this is, if I have a strong reason to believe my audience will interpret my words as X, and I don’t want to say X, I should not use those words. Even if I think the words are the most honest/accurate/correct way of precisely conveying my message.
People on LessWrong have a high honesty and integrity bar but the same language conveys different info in other contexts and may therefore be de facto less honest in those contexts.
This being said, I can see a counterargument: it is fundamentally more honest if...
Walking a very long distance (15km+), preferably in a not too exciting place (eg residential streets, fields), while thinking, maybe occasionally listening to music to reset. Works best in daylight but when it’s not too bright and sunny and not too warm / cold.
I wonder how clear it is that increasing average human BMI is bad. It seems very true that being obese is bad for health outcomes, but maybe this is compensated for by a reduction in the number of underweight individuals + better nutrition for non-morbidly-obese people.
It seems like most/all large models (especially language models) will first be trained in a similar way, using self-supervised learning on large unlabelled raw datasets (such as web text), and it looks like there is limited room for manoeuvre/creativity in shaping the objective or training process at this stage. Fundamentally, this stage is just about developing a really good compression algorithm for all the training data. The next stage, when we try and direct the model to perform a certain task (either trivially, via prompting, or via...
Personal anecdote so obviously all n=1 caveats apply - I took light iron supplementation for a few months (one Spatone sachet per day) and it completely changed my life. Before, I could not run more than a mile, in 10 minutes, before collapsing. I got winded going up stairs, was often physically fatigued (although no other mental or non-fitness-related physical symptoms). After a few months of iron and no other lifestyle changes, I could run for an hour at 8 min/mile pace. Have stopped taking the supplements and benefits have sustained for 2 years. If you ...
This reminded me of a technique I occasionally use to explore a new topic area via some version of “graph search”. I ask LLMs (or previously google) “what are topics/concepts adjacent to (/related to/ similar to) X”. Recursing, and reading up on connected topics for a while, can be an effective way of getting a broad overview of a new knowledge space.
Optimising the process for AIS research topics seems like it could be valuable. I wonder whether a tool like Elicit solves this (haven’t actually tried it though).
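The "graph search" described above can be sketched as a simple breadth-first walk over concepts. The `neighbours` function stands in for the "what topics are adjacent to X?" LLM (or search) query, which is the part you'd actually implement; the toy graph below is just for illustration.

```python
from collections import deque

def explore(start, neighbours, max_nodes=20, max_depth=2):
    """Breadth-first walk over a concept graph, returning topics in the
    order they would be read up on. `neighbours(topic)` is any function
    returning a list of related topics (e.g. an LLM call)."""
    seen = {start}
    queue = deque([(start, 0)])
    order = []
    while queue and len(order) < max_nodes:
        topic, depth = queue.popleft()
        order.append(topic)
        if depth < max_depth:
            for t in neighbours(topic):
                if t not in seen:
                    seen.add(t)
                    queue.append((t, depth + 1))
    return order

# toy neighbour function standing in for the LLM query
graph = {"interpretability": ["probing", "circuits"],
         "probing": ["linear probes"], "circuits": ["induction heads"]}
print(explore("interpretability", lambda t: graph.get(t, [])))
```

Capping `max_nodes` and `max_depth` keeps the recursion from wandering too far from the starting topic.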
I wonder whether https://arxiv.org/pdf/2109.13916.pdf would be a successful resource in this scenario (Unsolved Problems in ML Safety by Hendrycks, Carlini, Schulman and Steinhardt)
Makes sense. I agree that something working on algorithmic tasks is very weak evidence, although I am somewhat interested in how much insight we can get if we put more effort into hand-crafting algorithmic tasks with interesting properties.
Status - rough thoughts inspired by skimming this post (want to read in more detail soon!)
Do you think that hand-crafted mathematical functions (potentially slightly more complex ones than the ones mentioned in this research) could be a promising testbed for various alignment techniques? Doing prosaic alignment research with LLMs or huge RL agents is very compute and data hungry, making the process slower and more expensive. I wonder whether there is a way to investigate similar questions with carefully crafted exact functions which can be used to generate...
Fair enough! I like the spirit of this answer, probably broadly agree, although makes me think “surely I’d want to modify some people’s moral beliefs”…