L Rudolf L

My blog is here. You can contact me using this form.

Wiki Contributions

Comments

On a meta level, I think there's a difference in "model style" between your comment, some of which seems to treat future advances as a grab-bag of desirable things, and our post, which tries to talk more about the general "gears" that might drive the future world and its goodness. There will be a real shift in how progress happens when humans are no longer in the loop, as we argue in this section. Coordination costs going down will be important for the entire economy, as we argue here (though we don't discuss things as galaxy-brained as e.g. Wei Dai's related post). The question of whether humans are happy self-actualising without unbounded adversity cuts across every specific cool thing that we might get to do in the glorious transhumanist utopia.

Thinking about the general gears here matters. First, because they're, well, general (e.g. if humans were not happy self-actualising without unbounded adversity, suddenly the entire glorious transhumanist utopia seems less promising). Second, because I expect that incentives, feedback loops, resources, etc. will continue mattering. The world today is much wealthier and better off than before industrialisation, but the incentives / economics / politics / structures of the industrial world let you predict the effects of it better than if you just modelled it as "everything gets better" (even though that actually is a very good 3-word summary). Of course, all the things that directly make industrialisation good really are a grab-bag list of desirable things (antibiotics! birth control! LessWrong!). But there's structure behind that that is good to understand (mechanisation! economies of scale! science!). A lot of our post is meant to have the vibe of "here are some structural considerations, with near-future examples", and less "here is the list of concrete things we'll end up with". Honestly, a lot of the reason we didn't do the latter more is because it's hard.

Your last paragraph, though, is very much in this more gears-level-y style, and a good point. It reminds me of Eliezer Yudkowsky's recent mini-essay on scarcity.

Regarding:

In my opinion you are still shying away from discussing radical (although quite plausible) visions. I expect the median good outcome from superintelligence involves everyone being mind uploaded / living in simulations experiencing things that are hard to imagine currently. [emphasis added]

I agree there's a high chance things end up very wild. I think there's a lot of uncertainty about what timelines that would happen under; I think Dyson spheres are >10% likely by 2040, but I wouldn't put them >90% likely by 2100 even conditioning on no radical stagnation scenario (which I'd say are >10% likely on their own). (I mention Dyson spheres because they seem more a raw Kardashev scale progress metric, vs mind uploads which seem more contingent on tech details & choices & economics for whether they happen)

I do think there's value in discussing the intermediate steps between today and the more radical things. I generally expect progress to be not-ridiculously-unsmooth, so even if the intermediate steps are speedrun fairly quickly in calendar time, I expect us to go through a lot of them.

I think a lot of the things we discuss, like lowered coordination costs, AI being used to improve AI, and humans self-actualising, will continue to be important dynamics even into the very radical futures.

Re your specific list items:

  • Listen to new types of music, perfectly designed to sound good to you.
  • Design the biggest roller coaster ever and have AI build it.
  • Visit ancient Greece or view all the most important events of history based on superhuman AI archeology and historical reconstruction.
  • Bring back Dinosaurs and create new creatures.
  • Genetically modify cats to play catch.
  • Design buildings in new architectural styles and have AI build them.
  • Use brain computer interfaces to play videogames / simulations that feel 100% real to all senses, but which are not constrained by physics.
  • Go to Hogwarts (in a 100% realistic simulation) and learn magic and make real (AI) friends with Ron and Hermione.

These examples all seem to be about entertainment or aesthetics. Entertainment and aesthetics things are important to get right and interesting. I wouldn't be moved by any description of a future that centred around entertainment though, and if the world is otherwise fine, I'm fairly sure there will be good entertainment.

To me, the one with the most important-seeming implications is the last one, because that might have implications for what social relationships exist and whether they are mostly human-human or AI-human or AI-AI. We discuss why changes there are maybe risky in this section

  • Use AI as the best teacher ever to learn maths, physics and every subject and language and musical instruments to super-expert level.

We discuss this, though very briefly, in this section.

  • Take medication that makes you always feel wide awake, focused etc. with no side effects.
  • Engineer your body / use cybernetics to make yourself never have to eat, sleep, wash, etc. and be able to jump very high, run very fast, climb up walls, etc.
  • Modify your brain to have better short term memory, eidetic memory, be able to calculate any arithmetic super fast, be super charismatic.

I think these are interesting and important! I think there isn't yet a concrete story for why AI in particular enables these, apart from the general principle that sufficiently good AI will accelerate all technology. I think there's unfortunately a chance that direct benefits to human biology lag other AI effects by a lot, because they might face big hurdles due to regulation and/or getting the real-world data the AI needs. (Though also, humans are willing to pay a lot for health, and rationally should pay a lot for cognitive benefits, so high demand might make up for this).

  • Ask AI for way better ideas for this list.

I think the general theme of having the AIs help us make more use of AIs is important! We talk about it in general terms in the "AI is the ultimate meta-technology" section.

But then, if the model were to correctly do this, it would score 0 in your test, right? Because it would generate a different word pair for every random seed, and what you are scoring is "generating only two words across all random seeds, and furthermore ensuring they have these probabilities".

I think this is where the misunderstanding is. We have many questions, each question containing a random seed, and a prompt to pick two words and have e.g. a 70/30 split of the logits over those two words. So there are two "levels" here:

  1. The question level, at which the random seed varies from question to question. We have 200 questions total.
  2. The probability-estimating level, run for each question, at which the random seed is fixed. For models where we have logits, we run the question once and look at the logits to see if it had the right split. When we don't have logits (e.g. Anthropic models), we run the question many times to approximate the probability distribution.

Now, as Kaivu noted above, this means one way to "hack" this task is that the LLM has some default pair of words - e.g. when asked to pick a random pair of words, it always picks "situational" & "awareness" - and it does not change this based on the random seed. In this case, the task would be easier, since it only needs to do the output control part in a single forward pass (assigning 70% to "situational" and 30% to "awareness"), not the combination of word selection and output control (which we think is the real situational awareness -related ability here). However, empirically LLMs just don't have such a hardcoded pair, so we're not currently worried about this.

I was wondering the same thing as I originally read this post on Beren's blog, where it still says this. I think it's pretty clearly a mistake, and seems to have been fixed in the LW post since your comment.

I raise other confusions about the maths in my comment here.

I was very happy to find this post - it clarifies & names a concept I've been thinking about for a long time. However, I have confusions about the maths here:

Mathematically, direct optimization is your standard AIXI-like optimization process. For instance, suppose we are doing direct variational inference optimization to find a Bayesian posterior parameter  from a data-point , the mathematical representation of this is:

By contrast, the amortized objective optimizes some other set of parameters $\phi$ over a function approximator  which directly maps from the data-point to an estimate of the posterior parameters  We then optimize the parameters of the function approximator  across a whole dataset  of data-point and parameter examples.

First of all, I don't see how the given equation for direct optimization makes sense.  is comparing a distribution over  over a joint distribution over . Should this be for variational inference (where  is whatever we're using to parametrize the variational family), and  in general?

Secondly, why the focus on variational inference for defining direct optimization in the first place? Direct optimization is introduced as (emphasis mine):

Direct optimization occurs when optimization power is applied immediately and directly when engaged with a new situation to explicitly compute an on-the-fly optimal response – for instance, when directly optimizing against some kind of reward function. The classic example of this is planning and Monte-Carlo-Tree-Search (MCTS) algorithms [...]

This does not sound like we're talking about algorithms that update parameters. If I had to put the above in maths, it just sounds like an argmin:

 

where  is your AI system,  is whatever action space it can explore (you can make  vary based on how much compute you're wiling to spend, like with MCTS depth),  is some loss function (it could be a reward function with a flipped sign, but I'm trying to keep it comparable to the direct optimization equation.

Also, the amortized optimization equation RHS is about defining a , i.e. the parameters in your function approximator , but then the LHS calls it , which is confusing to me. I also don't understand why the loss function is taking in parameters , or why the dataset contains parameters (is  being used throughout to stand for outputs rather than model parameters?).

To me, the natural way to phrase this concept would instead be as

where  is your AI system, and , with the dataset .

I'd be curious to hear any expansion of the motivation behind the exact maths in the post, or any way in which my version is misleading.

For the output control task, we graded models as correct if they were within a certain total variation distance of the target distribution. Half the samples had a requirement of being within 10%, the other of being within 20%. This gets us a binary success (0 or 1) from each sample.

Since models practically never got points from the full task, half the samples were also an easier version, testing only their ability to hit the target distribution when they're already given the two words (rather than the full task, where they have to both decide the two words themselves, and match the specified distribution).

Did you explain to GPT-4 what temperature is? GPT-4, especially before November, knew very little about LLMs due to training data cut-offs (e.g. the pre-November GPT-4 didn't even know that the acronym "LLM" stood for "Large Language Model").

Either way, it's interesting that there is a signal. This feels similar in spirit to the self-recognition tasks in SAD (since in both cases the model has to pick up on subtle cues in the text to make some inference about the AI that generated it).

We thought about this quite a lot, and decided to make the dataset almost entirely public.

It's not clear to us who would monomaniacally try to maximise SAD score. It's a dangerous capabilities eval. What we were more worried about is people training for low SAD score in order to make their model seem safer, and such training maybe overfitting to the benchmark and not reducing actual situational awareness by as much as claimed.

It's also unclear what the sharing policy that we could enforce would be that mitigates these concerns while allowing benefits. For example, we would want top labs to use SAD to measure SA in their models (a lot of the theory of change runs through this). But then we're already giving the benchmark to the top labs, and they're the ones doing most of the capabilities work.

More generally, if we don't have good evals, we are flying blind and don't know what the LLMs can do. If the cost of having a good understanding of dangerous model capabilities and their prerequisites is that, in theory, someone might be slightly helped in giving models a specific capability (especially when that capability is both emerging by default already, and where there are very limited reasons for anyone to specifically want to boost this ability), then I'm happy to pay that cost. This is especially the case since SAD lets you measure a cluster of dangerous capability prerequisites and therefore for example test things like out-of-context reasoning, unlearning techniques, or activation steering techniques on something that is directly relevant for safety.

Another concern we've had is the dataset leaking onto the public internet and being accidentally used in training data. We've taken many steps to mitigate this happening. We've also kept 20% of the SAD-influence task private, which will hopefully let us detect at least obvious forms of memorisation of SAD (whether through dataset leakage or deliberate fine-tuning).

I agree that building-based methods (startups) are possibly neglected compared to research-based approaches. I'm therefore exploring some things in this space; you can contact me here

Load More