All of DragonGod's Comments + Replies

i.e. if each forecaster  has a first-order belief , and  is your second-order belief about which forecaster is correct, then  should be your first-order belief about the election.

I think there might be a typo here. Did you instead mean to write: "" for the second-order beliefs about the forecasters?
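For concreteness, here is a sketch of the aggregation rule the quoted sentence seems to describe; the original notation was stripped in extraction, so the symbols below are my own placeholders, not the post's.

```latex
% Placeholder notation (mine, not the original post's):
%   p_i = forecaster i's first-order probability that the candidate wins
%   q_i = your second-order credence that forecaster i is the correct one, with \sum_i q_i = 1
p(\text{win}) = \sum_i q_i \, p_i
% e.g. p_1 = 0.6, p_2 = 0.9 with weights q_1 = 0.75, q_2 = 0.25
% gives p(win) = 0.75(0.6) + 0.25(0.9) = 0.675.
```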

The claim is that given the presence of differential adversarial examples, the optimisation process would adjust the parameters of the model such that its optimisation target is the base goal.

Probably sometime last year, I posted on Twitter something like: "agent values are defined on agent world models" (or similar) with a link to a LessWrong post (I think the author was John Wentworth).

I'm now looking for that LessWrong post.

My Twitter account is private and search is broken for private accounts, so I haven't been able to track down the tweet. If anyone has guesses for what the post I may have been referring to was, do please send it my way.

3Dalcy7mo
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables

Most of the catastrophic risk from AI still lies in superhuman agentic systems.

Current frontier systems are not that (and IMO not poised to become that in the very immediate future).

I think AI risk advocates should be clear that they're not saying GPT-5/Claude Next is an existential threat to humanity.

[Unless they actually believe that. But if they don't, I'm a bit concerned that their message is being rounded up to that, and when such systems don't reveal themselves to be catastrophically dangerous, it might erode their credibility.]

Immigration is such a tight constraint for me.

My next career steps after I'm done with my TCS Masters are primarily bottlenecked by "what allows me to remain in the UK" and then "keeps me on track to contribute to technical AI safety research".

What I would like to do for the next 1 - 2 years ("independent research"/ "further upskilling to get into a top ML PhD program") is not all that viable a path given my visa constraints.

Above all, I want to avoid wasting N more years by taking a detour through software engineering again so I can get visa sponsorship.

[... (read more)

Specifically, the experiments by Morrison and Berridge demonstrated that by intervening on the hypothalamic valuation circuits, it is possible to adjust policies zero-shot such that the animal has never experienced a previously repulsive stimulus as pleasurable.

I find this a bit confusing as worded; is something missing?

Does anyone know a ChatGPT plugin for browsing documents/webpages that can read LaTeX?

The plugin I currently use (Link Reader) strips out the LaTeX in its payload, and so GPT-4 ends up hallucinating the LaTeX content of the pages I'm feeding it.

How frequent are moderation actions? Is this discussion about saving moderator effort (by banning someone before you have to remove the rate-limited quantity of their bad posts), or something else? I really worry about "quality improvement by prior restraint" - both because low-value posts aren't that harmful, they get downvoted and ignored pretty easily, and because it can take YEARS of trial-and-error for someone to become a good participant in LW-style discussions, and I don't want to make it impossible for the true newbies (young people discovering

... (read more)

I find noticing surprise more valuable than noticing confusion.

Hindsight bias and post hoc rationalisations make it easy for us to gloss over events that were a priori unexpected.

5Raemon7mo
My take on this is that noticing surprise is easier than noticing confusion, and surprise often correlates with confusion, so a useful thing to do is have a habit of: 1. practice noticing surprise; 2. when you notice surprise, check if you have a reason to be confused. (Where surprise is "something unexpected happened" and confused is "something is happening that I can't explain, or my explanation of it doesn't make sense".)

I think the model of "a composition of subagents with total orders on their preferences" is a descriptive model of inexploitable incomplete preferences, and not a mechanistic model. At least, that was how I interpreted "Why Subagents?".

I read @johnswentworth as making the claim that such preferences could be modelled as a vetocracy of VNM rational agents, not as claiming that humans (or other objects of study) are mechanistically composed of discrete parts that are themselves VNM rational.

 

I'd be more interested/excited by a refutation on the grounds ... (read more)

5Nina Rimsky8mo
The presence of a pre-order doesn't inherently imply a composition of subagents with ordered preferences. An agent can have a pre-order of preferences due to reasons such as lack of information, indifference between choices, or bounds on computation - this does not necessitate the presence of subagents.  If we do not use a model based on composition of subagents with ordered preferences, in the case of "Atticus the Agent" it can be consistent to switch B -> A + 1$ and A -> B + 1$.  Perhaps I am misunderstanding the claim being made here though.
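To make the "vetocracy of VNM agents" reading concrete, here is a minimal toy sketch (my own construction, not code from "Why Subagents?"): a composite agent that accepts a swap only when no subagent objects, which yields incomplete but inexploitable preferences.

```python
# Toy model: a committee of subagents, each with its own complete utility
# function over outcomes. The composite agent accepts a swap only if no
# subagent is made worse off (a "vetocracy" / Pareto rule).

subagent_utilities = [
    {"A": 2, "B": 1, "A+$1": 3, "B+$1": 2},  # subagent 1 prefers A-ish outcomes
    {"A": 1, "B": 2, "A+$1": 2, "B+$1": 3},  # subagent 2 prefers B-ish outcomes
]

def committee_prefers(x: str, y: str) -> bool:
    """Composite strictly prefers x over y iff every subagent weakly prefers x
    and at least one strictly prefers it (Pareto dominance)."""
    weakly = all(u[x] >= u[y] for u in subagent_utilities)
    strictly = any(u[x] > u[y] for u in subagent_utilities)
    return weakly and strictly

# Incompleteness: the committee has no preference between A and B...
print(committee_prefers("A", "B"), committee_prefers("B", "A"))  # False False
# ...so it refuses both "trade A for B plus a fee" and the reverse, and cannot
# be money-pumped around an A -> B -> A cycle. But sure gains are still taken:
print(committee_prefers("A+$1", "A"))  # True: A+$1 Pareto-dominates A
```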

Suppose it is offered (by a third party) to switch  and then 

Seems incomplete (pun acknowledged). I feel like there's something missing after "to switch" (e.g. "to switch from A to B" or similar).

Another example is an agent through time where as in the Steward of Myselves

This links to Scott Garrabrant's page, not to any particular post. Perhaps you want to review that?

I think you meant to link to: Tyranny of the Epistemic Majority.

We aren’t offering these criteria as necessary for “knowledge”—we could imagine a breaker proposing a counterexample where all of these properties are satisfied but where intuitively M didn’t really know that A′ was a better answer. In that case the builder will try to make a convincing argument to that effect.

The bolded word ("necessary") should be "sufficient".

In fact, I'm pretty sure that's how humans work most of the time. We use the general-intelligence machinery to "steer" ourselves at a high level, and most of the time, we operate on autopilot.

Yeah, I agree with this. But I don't think the human system aggregates into any kind of coherent total optimiser. Humans don't have an objective function (not even approximately?).

A human is not well modelled as a wrapper mind; do you disagree?

2Thane Ruthenis9mo
Certainly agree. That said, I feel the need to lay out my broader model here. The way I see it, a "wrapper-mind" is a general-purpose problem-solving algorithm hooked up to a static value function. As such:

  • Are humans proper wrapper-minds? No, certainly not.
  • Do humans have the fundamental machinery to be wrapper-minds? Yes.
  • Is any individual run of a human general-purpose problem-solving algorithm essentially equivalent to wrapper-mind-style reasoning? Yes.
  • Can humans choose to act as wrapper-minds on longer time scales? Yes, approximately, subject to constraints like force of will.
  • Do most humans, in practice, choose to act as wrapper-minds? No, we switch our targets all the time; value drift is ubiquitous.
  • Is it desirable for a human to act as a wrapper-mind? That's complicated.
    • On the one hand, yes, because consistent pursuit of instrumentally convergent goals would lead to you having more resources to spend on whatever values you have.
    • On the other hand, no, because we terminally value this sort of value-drift and self-inconsistency; it's part of "being human".
    • In sum, for humans, there's a sort of tradeoff between approximating a wrapper-mind and being an incoherent human, and different people weight it differently in different contexts. E.g., if you really want to achieve something (earning your first million dollars, averting extinction), and you value it more than having fun being a human, you may choose to act as a wrapper-mind in the relevant context/at the relevant scale.

As such: humans aren't wrapper-minds, but they can act like them, and it's sometimes useful to act as one.

Thus, any greedy optimization algorithm would convergently shape its agent to not only pursue , but to maximize for 's pursuit — at the expense of everything else.

Conditional on:

  1. Such a system being reachable/accessible to our local/greedy optimisation process
  2. Such a system being actually performant according to the selection metric of our optimisation process 

 

I'm pretty sceptical of #2. I'm sceptical that systems that perform inference via direct optimisation over their outputs are competitive in rich/complex environments. 

Such o... (read more)

4Thane Ruthenis9mo
It's not a binary. You can perform explicit optimization over high-level plan features, then hand off detailed execution to learned heuristics. "Make coffee" may be part of an optimized stratagem computed via consequentialism, but you don't have to consciously optimize every single muscle movement once you've decided on that goal. Essentially, what counts as "outputs" or "direct actions" relative to the consequentialist-planner is flexible, and every sufficiently-reliable (chain of) learned heuristics can be put in that category, with choosing to execute one of them available to the planner algorithm as a basic output. In fact, I'm pretty sure that's how humans work most of the time. We use the general-intelligence machinery to "steer" ourselves at a high level, and most of the time, we operate on autopilot.
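A minimal sketch of the hierarchy described above; the names and numbers here are entirely my own illustrative stand-ins. A planner explicitly optimises over a handful of high-level options, and execution of the chosen option is handed off to cached, learned routines rather than optimised step by step.

```python
# Sketch: explicit optimisation at the level of plans, learned heuristics at
# the level of execution. The planner scores a few high-level options by
# predicted consequences; the chosen option then runs on "autopilot".

def predicted_value(option: str) -> float:
    # Stand-in for a consequentialist evaluation of the option's outcome.
    return {"make_coffee": 0.8, "take_nap": 0.5, "keep_working": 0.3}[option]

LEARNED_ROUTINES = {
    # Stand-ins for cached skills: no per-step optimisation happens here.
    "make_coffee": lambda: ["walk to kitchen", "boil water", "pour", "drink"],
    "take_nap": lambda: ["lie down", "close eyes"],
    "keep_working": lambda: ["stare at screen"],
}

def act():
    # Explicit optimisation over high-level options only.
    plan = max(LEARNED_ROUTINES, key=predicted_value)
    # Detailed execution is delegated to the learned heuristic.
    return plan, LEARNED_ROUTINES[plan]()

print(act())  # ('make_coffee', ['walk to kitchen', 'boil water', 'pour', 'drink'])
```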

Do please read the post. Being able to predict human text requires vastly superhuman capabilities, because predicting human text requires predicting the processes that generated said text. And large tracts of text are just reporting on empirical features of the world.

Alternatively, just read the post I linked.

4cubefox9mo
I did read your post. The fact that something like predicting text requires superhuman capabilities of some sort does not mean that the task itself will result in superhuman capabilities. That's the crucial point. It is much harder to imitate human text than to write while being a human, but that doesn't mean the imitated human itself is any more capable than the original. An analogy. The fact that building fusion power plants is much harder than building fission power plants doesn't at all mean that the former are better. They could even be worse. There is a fundamental disconnect between the difficulty of a task and the usefulness of that task.
2Blueberry8mo
Maybe you're an LLM.

In what sense are they "not trying their hardest"?

4tailcalled9mo
I think you inserted an extra "not".
1cubefox9mo
Being able to perfectly imitate a Chimpanzee would probably also require superhuman intelligence. But such a system would still only be able to imitate chimpanzees. Effectively, it would be much less intelligent than a human. Same for imitating human text. It's very hard, but the result wouldn't yield large capabilities.

which is indifferent to the simplicify of the architecture the insight lets you find.

The bolded should be "simplicity". 

Sorry, where can I get access to the curriculum (including the reading material and exercises) if I want to study it independently?

The chapter pages on the website don't seem to list full curricula.

If you define your utility function over histories, then every behaviour is maximising an expected utility function no?

Even behaviour that is money pumped?

I mean you can't money pump any preference over histories anyway without time travel.

The Dutchbook arguments apply when your utility function is defined over your current state with respect to some resource?

I feel like once you define utility function over histories, you lose the force of the coherence arguments?

What would it look like to not behave as if maximising an expected utility function, for a utility function defined over histories?

My contention is that I don't think the preconditions hold.

Agents don't fail to be VNM coherent by having incoherent preferences given the axioms of VNM. They fail to be VNM coherent by violating the axioms themselves.

Completeness is wrong for humans, and with incomplete preferences you can be non-exploitable even without admitting a single fixed utility function over world states.

8niplav9mo
I notice I am confused. How do you violate an axiom (completeness) without behaving in a way that violates completeness? I don't think you need an internal representation. Elaborating more, I am not sure how you even display a behavior that violates completeness. If you're given a choice between only universe-histories a and b, and your preferences are incomplete over them, what do you do? As soon as you reliably act to choose one over the other, for any such pair, you have algorithmically-revealed complete preferences. If you don't reliably choose one over the other, what do you do then?

  • Choose randomly? But then I'd guess you are again Dutch-bookable. And according to which distribution?
  • Your choice is undefined? That seems both kinda bad and also Dutch-bookable to me tbh. Also, I don't see the difference between this and random choice (short of going up in flames, which would constitute a third, hitherto unassumed option).
  • Go away/refuse the trade &c? But this is denying the premise! You only have universe-histories a and b to choose between!

I think what happens with humans is that they are often incomplete over very low-ranking worlds and are instead searching for policies to find high-ranking worlds while not choosing. I think incompleteness might be fine if there are two options you can guarantee to avoid, but with adversarial dynamics that becomes more and more difficult.
4Alexander Gietelink Oldenziel9mo
Agree. There are three stages:

  1. Selection for inexploitability.
  2. The interesting part is how systems/pre-agents/egregores/whatever become complete. If it already satisfies the other VNM axioms we can analyse the situation as follows: recall that an inexploitable but incomplete VNM agent acts like a vetocracy of VNM agents. The exact decomposition is underspecified by just the preference order and is another piece of data (hidden state). However, given sure-gain offers from the environment, there is selection pressure for the internal complete VNM subagents to make trade agreements to obtain a Pareto improvement. If you analyze this it looks like a simple prisoner's-dilemma-type case which can be analyzed the usual way in game theory. For instance, in repeated offers with uncertain horizon the subagents may be able to cooperate.
  3. Once they are (approximately) complete they will be under selection pressure to satisfy the other axioms. You could say this is the beginning of the 'emergence of expected utility maximizers'.

As you can see, the key here is that we really should be talking about Selection Theorems, not the highly simplified Coherence Theorems. Coherence theorems are about ideal agents. Selection theorems are about how more and more coherent and goal-directed agents may emerge.

Yeah, I think the preconditions of VNM straightforwardly just don't apply to generally intelligent systems.

2Dagon9mo
As I say, open question.  We have only one example of a generally intelligent system, and that's not even very intelligent.  We have no clue how to extend or compare that to other types. It does seem like VNM-rational agents will be better than non-rational agents at achieving their goals.  It's unclear if that's a nudge to make agents move toward VNM-rationality as they get more capable, or a filter to advantage VNM-rational agents in competition to power.  Or a non-causal observation, because goals are orthogonal to power.

Not at all convinced that "strong agents pursuing a coherent goal" is a viable form for generally capable systems that operate in the real world, and the assumption that it is hasn't been sufficiently motivated.

What are the best arguments that expected utility maximisers are adequate (descriptive if not mechanistic) models of powerful AI systems?

[I want to address them in my piece arguing the contrary position.]

4Garrett Baker9mo
I like Utility Maximization = Description Length Minimization.
9Linda Linsefors9mo
The boring technical answer is that any policy can be described as a utility maximiser given a contrived enough utility function. The counterargument to that is that if the utility function is as complicated as the policy, then this is not a useful description.
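A toy sketch of that "boring technical answer" (my own illustration, not anything from the thread): assign utility 1 to exactly the histories the policy would produce and 0 to everything else, and the policy trivially maximises that utility function, but only because the utility function embeds the whole policy.

```python
# Any policy maximises *some* utility function over histories: assign utility 1
# to histories consistent with the policy and 0 to everything else.

def some_policy(history: tuple) -> str:
    # An arbitrary, even money-pump-looking policy.
    return "left" if len(history) % 2 == 0 else "right"

def contrived_utility(history: tuple) -> int:
    """1 if every action in the history is what some_policy would have chosen
    given the preceding actions, else 0."""
    return int(all(a == some_policy(history[:i]) for i, a in enumerate(history)))

print(contrived_utility(("left", "right", "left")))   # 1: exactly what the policy does
print(contrived_utility(("right", "left", "left")))   # 0: deviates at the first step
```

The description is vacuous precisely because contrived_utility has to embed the whole policy, which is the counterargument above.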

If you're not vNM-coherent you will get Dutch-booked if there are Dutch-bookers around.

This especially applies to multipolar scenarios with AI systems in competition.

I have an intuition that this also applies in degrees: if you are more vNM-coherent than I am (which I think I can define), then I'd guess that you can Dutch-book me pretty easily.
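A toy illustration of the Dutch-booking claim (my own construction): an agent with cyclic preferences that will pay a small fee for each "upgrade" can be walked in circles for unbounded losses.

```python
# Money-pump demo: cyclic preferences A < B < C < A, with the agent willing to
# pay a small fee to trade up to something it prefers.

prefers = {("B", "A"), ("C", "B"), ("A", "C")}  # (x, y) means x is preferred to y
FEE = 0.01

def run_pump(start: str, rounds: int) -> float:
    holding, paid = start, 0.0
    for offer in ["B", "C", "A"] * rounds:
        if (offer, holding) in prefers:        # agent accepts any preferred swap...
            holding, paid = offer, paid + FEE  # ...and pays the fee each time
    return paid

print(round(run_pump("A", rounds=10), 2))  # 0.3: back where it started, 30 fees poorer
```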

4Dagon9mo
I don't know of any formal arguments that predict that all or most future AI systems are purely expected utility maximizers.  I suspect most don't believe that to be the case in any simple way.   I do know of a very powerful argument (a proof, in fact) that if an agent's goal structure is complete, transitively consistent, continuous, and independent of irrelevant alternatives, then it will be consistent with an expected-utility-maximizing model.  See https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem The open question remains, since humans do not meet these criteria, whether more powerful forms of intelligence are more likely to do so.  

Caveat to the caveat:

The solution is IMO just to consider the number of computations performed per generated token as some function of the model size, and once we've identified a suitable asymptotic order on the function, we can say intelligent things like "the smallest network capable of solving a problem in complexity class C of size N is X".

Or if our asymptotic bounds are not tight enough:

"No economically feasible LLM can solve problems in complexity class C of size >= N".

(Where economically feasible may be something defined by aggregate global eco

... (read more)

The solution is IMO just to consider the number of computations performed per generated token as some function of the model size, and once we've identified a suitable asymptotic order on the function, we can say intelligent things like "the smallest network capable of solving a problem in complexity class C of size N is X".

Or if our asymptotic bounds are not tight enough:

"No economically feasible LLM can solve problems in complexity class C of size >= N".

(Where economically feasible may be something defined by aggregate global economic resources or similar, depending on how tight you want the bound to be.)

Regardless, we can still obtain meaningful impossibility results.

Very big caveat: the LLM doesn't actually perform O(1) computations per generated token.

The number of computational steps performed per generated token scales with network size: https://www.lesswrong.com/posts/XNBZPbxyYhmoqD87F/llms-and-computation-complexity?commentId=QWEwFcMLFQ678y5Jp

2DragonGod10mo
Caveat to the caveat:

Strongly upvoted.

Short but powerful.

Tl;Dr: LLMs perform O(1) computational steps per generated token and this is true regardless of the generated token.

The LLM sees each token in its context window when generating the next token, so it can compute problems in O(n^2) [where n is the context window size].

LLMs can get around the computational requirements by "showing their working" and simulating a mechanical computer (one without backtracking, so not Turing complete) in their context window.

This only works if the context window is large enough to contain the work... (read more)

2DragonGod10mo
Very big caveat: the LLM doesn't actually perform O(1) computations per generated token. The number of computational steps performed per generated token scales with network size: https://www.lesswrong.com/posts/XNBZPbxyYhmoqD87F/llms-and-computation-complexity?commentId=QWEwFcMLFQ678y5Jp
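A rough back-of-the-envelope sketch of the scaling claim in the linked comment, using the standard approximation that a decoder-only forward pass costs on the order of 2 × (number of parameters) FLOPs per token plus an attention term that grows with context position; the constants and the GPT-3-ish shape below are illustrative, not measured.

```python
# Approximate FLOPs to generate one token at context position n for a
# decoder-only transformer. Rough model: ~2 * n_params for the dense matmul
# work, plus ~2 * n_layers * d_model * n for attention over the existing
# context. Order-of-magnitude only.

def flops_per_token(n_params: float, n_layers: int, d_model: int, n_ctx: int) -> float:
    return 2 * n_params + 2 * n_layers * d_model * n_ctx

def flops_for_generation(n_params: float, n_layers: int, d_model: int, n_tokens: int) -> float:
    # Summing the attention term over positions 1..n gives the O(n^2) behaviour
    # discussed above; the parameter term is what scales with model size.
    return sum(flops_per_token(n_params, n_layers, d_model, t) for t in range(1, n_tokens + 1))

# Illustrative, GPT-3-ish shape: 175e9 params, 96 layers, d_model = 12288.
print(f"{flops_per_token(175e9, 96, 12288, 2048):.3e}")       # ~3.5e11 per token
print(f"{flops_for_generation(175e9, 96, 12288, 2048):.3e}")  # ~7.2e14 for 2048 tokens
```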

A reason I mood affiliate with shard theory so much is that like...

I'll have some contention with the orthodox ontology for technical AI safety and be struggling to adequately communicate it, and then I'll later listen to a post/podcast/talk by Quintin Pope/Alex Turner, or someone else trying to distill shard theory and then see the exact same contention I was trying to present expressed more eloquently/with more justification.

One example is that like I had independently concluded that "finding an objective function that was existentially safe when optimis... (read more)

4Chris_Leong10mo
My main critique of shard theory is that I expect one of the shards to end up dominating the others as the most likely outcome.

"All you need is to delay doom by one more year per year and then you're in business" — Paul Christiano.

Took this to drafts for a few days with the intention of refining it and polishing the ontology behind the post.

I ended up not doing that as much, because the improvements I was making to the underlying ontology felt better presented as a standalone post, so I mostly factored them out of this one.

I'm not satisfied with this post as is, but there's some kernel of insight here that I think is valuable, and I'd want to be able to refer to the basic thrust of this post/some arguments made in it elsewhere.

I may make further edits to it in future.

It should be noted, however, that while inner alignment is a robustness problem, the occurrence of unintended mesa-optimization is not. If the base optimizer's objective is not a perfect measure of the human's goals, then preventing mesa-optimizers from arising at all might be the preferred outcome. In such a case, it might be desirable to create a system that is strongly optimized for the base objective within some limited domain without that system engaging in open-ended optimization in new environments.(11) One possible way to accomplish this might be t

... (read more)

Is this a correct representation of corrigible alignment:

  1. The mesa-optimizer (MO) has a proxy of the base objective that it's optimising for.
  2. As more information about the base objective is received, MO updates the proxy.
  3. With sufficient information, the proxy may converge to a proper representation of the base objective.
  4. Example: a model-free RL algorithm whose policy is argmax over actions with respect to its state-action value function 
    1. The base objective is the reward signal
    2. The value function serves as a proxy for the base objective.
    3. The value function
... (read more)
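Re the RL example in point 4 above, a minimal toy sketch (my own, tabular and bandit-like rather than full Q-learning): the value function starts as an arbitrary proxy and is pulled toward the reward-defined base objective as reward information comes in.

```python
import random

# Toy tabular value learning on a one-state, two-action problem. The Q table is
# the "proxy"; the reward signal is the "base objective". With enough updates
# the proxy's argmax converges to the action the base objective rewards.

ACTIONS = ["a0", "a1"]
TRUE_REWARD = {"a0": 0.0, "a1": 1.0}   # base objective
Q = {"a0": 0.9, "a1": 0.1}             # initially misaligned proxy
ALPHA, EPSILON = 0.1, 0.2

def policy() -> str:
    # argmax over actions w.r.t. the proxy, with a little exploration.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=Q.get)

random.seed(0)
for _ in range(500):
    a = policy()
    Q[a] += ALPHA * (TRUE_REWARD[a] - Q[a])   # proxy updated toward base objective

print(Q)                        # roughly {'a0': ~0.0, 'a1': ~1.0}
print(max(ACTIONS, key=Q.get))  # 'a1': the proxy now picks what the base objective rewards
```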

March 22nd is when my first exam starts.

It finishes June 2nd.

Is it possible for me to delay my start a bit?

1CallumMcDougall10mo
Yeah, I think this would be possible. In theory, you could do something like: * Study relevant parts of the week 0 material before the program starts (we might end up creating a virtual group to accommodate this, which also contains people who either don't get an offer or can't attend but still want to study the material.) * Join at the start of the 3rd week - at that point there will be 3 days left of the transformers chapter (which is 8 days long and has 4 days of core content), so you could study (most of) the core content and then transition to RL with the rest of the group (and there would be opportunities to return to the transformers & mech interp material during the bonus parts of later chapters / capstone projects, if you wanted.) How feasible this is would depend on your prereqs and past experience I imagine. Either way, you're definitely welcome to apply!

I'm gestating on this post. I suggest part of my original framing was confused, and so I'll just let the ideas ferment some more.

Yeah for humans in particular, I think the statement is not true of solely biological evolution.

But also, I'm not sure you're looking at it on the right level. Any animal presumably does many bits worth of selection in a given day, but the durable/macroscale effects are better explained by evolutionary forces acting on the population than by the actions of different animals within their lifetimes.

Or maybe this is just a confused way to think/talk about it.

4tailcalled10mo
Can you list some examples of durable/macroscale effects you have in mind?

I could change that. I was thinking of work done in terms of bits of selection.

Though I don't think that statement is true of humans unless you also include cultural memetic evolution (which I think you should).

4tailcalled10mo
I might be wrong but I think evolution only does a smallish number of bits worth of selection per generation? Whereas I think I could easily do orders of magnitude more in a day.

Yeah, I'm aware.

I would edit the post once I have better naming/terminology for the distinction I was trying to draw.

It happened as something like "humans optimise for local objectives/specific tasks" which eventually collapsed to "local optimisation".

[Do please suggest better adjectives!]

Hmm, the etymology was that I was using "local optimisation" to refer to the kind of task specific optimisation humans do.

And global was the natural term to refer to the kind of optimisation I was claiming humans don't do but which an expected utility maximiser does.

6abramdemski10mo
In the context of optimization, the meaning of "local" vs "global" is very well established; local means taking steps in the right direction based on a neighborhood, like hillclimbing, while global means trying to find the actual optimal point.

The "global" here means that all actions/outputs are optimising towards the same fixed goal(s):

Local Optimisation

  • Involves deploying optimisation (search, planning, etc.) to accomplish specific tasks (e.g., making a good move in chess, winning a chess game, planning a trip, solving a puzzle).
  • The choice of local tasks is not determined as part of this framework; local tasks could be subproblems of another optimisation problem (e.g., picking a good next move as part of winning a chess game), generated via learned heuristics, etc.

 

Global Optimisation

  • Entai
... (read more)
4Gordon Seidoh Worley10mo
This doesn't seem especially "global" to me then. Maybe another term would be better? Maybe this is a proximate/ultimate distinction?

Consequentialism is in the Stars not Ourselves?

Still thinking about consequentialism and optimisation. I've argued that global optimisation for an objective function is so computationally intractable as to be prohibited by the laws of physics of our universe. Yet it's clearly the case that e.g. evolution is globally optimising for inclusive genetic fitness (or perhaps patterns that more successfully propagate themselves if you're taking a broader view). I think examining why evolution is able to successfully globally optimise for its objective function wou... (read more)
