Spencer Becker-Kahn

Independent AI Safety Researcher. Previously SERI MATS scholar and FHI Senior Research Scholar. Before that, pure math in academia at Cambridge, UW, MIT.

Comments

I for one would find it helpful if you included a link to at least one place where Eliezer has made this claim, just so we can be sure we're on the same page.

Roughly speaking, what I have in mind is that there are at least two possible claims. One is that 'we can't get AI to do our alignment homework' because, by the time we have an AI powerful enough to solve the alignment homework, it is already too dangerous to rely on that ability as a safety plan. The other is the claim that there is some sort of 'intrinsic' reason why an AI built by humans could never solve the alignment homework.

You refer a couple of times to the fact that evals are often used with the aim of upper-bounding capabilities. To my mind this is an essential difficulty that acts as a point of disanalogy with things like aviation. I’m obviously no expert, but in the case of aviation I would have thought that you want to give positive answers to questions like “can this plane safely fly X thousand miles?”, i.e. produce absolutely guaranteed lower bounds on ‘capabilities’. You don’t need to find something like approximately the smallest number Y such that it could never, under any circumstances, fly more than Y million miles.

Hmm, it might be questionable to suggest that it is "non-AI" though? It's based on symbolic and algebraic deduction engines, and afaict it sounds like it might be the sort of thing that used to be very much mainstream "AI", i.e. symbolic AI plus some hard-coded human heuristics?

FWIW I did not interpret Thane as necessarily having "high confidence" in the "architecture / internal composition" of AGI. It seemed to me that they were merely (and ~accurately) describing what the canonical views were most worried about. (And I think a discussion about whether or not being able to "model the world" counts as a statement about "internal composition" is sort of beside the point / beyond the scope of what's really being said.)

It's fair enough if you would say things differently(!), but in some sense isn't that just pointing out 'I would emphasize different aspects of the same underlying basic point'? And I'm not sure that really advances the discussion. I.e. it's not like Thane Ruthenis actually claims that "scarily powerful artificial agents" currently exist. It is indeed true that they don't exist and may never exist. But that's just not really the point they are making, so it seems reasonable to me that they are not emphasizing it.

----


> I'd like to see justification of "under what conditions does speculation about 'superintelligent consequentialism' merit research attention at all?" and "why do we think 'future architectures' will have property X, or whatever?!".


I think I would also like to see more thought about this. In some ways, after first getting into the general area of AI risk, I was disappointed that the alignment/safety community was not more focussed on questions like this. Like a lot of people, I'd originally been inspired by Superintelligence - significant parts of which relate to these questions imo - only to be told that the community had 'kinda moved away from that book now'. And so I sort of sympathize with the vibe of Thane's post (and worry that there has been a sort of mission creep).

> Newtonian mechanics was systematized as a special case of general relativity.

One of the things I found confusing early on in this post was that systematization is said to be about representing the previous thing as an example or special case of some other thing that is both simpler and more broadly-scoped.

In my opinion, it's easy to give examples where the 'other thing' is more broadly-scoped, and this is because 'increasing scope' corresponds to the usual way we think of generalisation, i.e. the latter thing applies to more settings or is 'about a wider class of things' in some sense. But in many cases, the more general thing is not simultaneously 'simpler' or more economical. I don't think anyone would really say that general relativity is actually simpler. However, to be clear, I do think that there probably are some good examples of this, particularly in mathematics, though I haven't got one to hand.

OK, I think this will be my last message in this exchange, but I'm still confused. I'll try one more time to explain what I'm getting at.

I'm interested in what your precise definition of subjective probability is. 

One relevant thing I saw was the following sentence:

> If I say that a coin is 50% likely to come up heads, that's me saying that I don't know the exact initial conditions of the coin well enough to have any meaningful knowledge of how it's going to land, and I can't distinguish between the two options.

It seems to give something like a definition of what it means to say that something has a 50% chance, i.e. I interpret your sentence as claiming that a statement like 'The probability of A is 1/2' means, or is somehow the same as, a statement a bit like:

[*]  'I don't know the exact conditions and don't have enough meaningful/relevant knowledge to distinguish between the possible occurrence of (A) and (not A)'

My reaction was: This can't possibly be a good definition. 

The airplane puzzle was supposed to be a situation where there is a clear 'difference' between the outcomes: either the last person is in the one seat that matches their ticket number, or they're not (they're in one of the other 99 seats). It's not as if it's a clearly symmetric situation from the point of view of the outcomes. So it was supposed to be an example where statement [*] does not hold, but where the probability is nevertheless 1/2. It seems you don't accept that; it seems to me like you think that statement [*] does in fact hold in this case.

But tbh it feels sort of like you're saying you can't distinguish between the outcomes because you already know the answer is 1/2! And even if I accept that the outcomes are somehow indistinguishable, the example is sufficiently complicated on a first reading that there's no way you'd just look at it and go "hmm, I guess I can't distinguish, so it's 1/2". I.e. if your definition were OK, it could be used to justify the answer to the puzzle, but that doesn't seem right to me either.

So my point is still: what is that thing? I think, yes, I actually am trying to push proponents of this view down to the metaphysics: if they say "there's a 40% chance that it will rain tomorrow", I want to know things like what it is that they are attributing 40%-ness to, and what it means to say that that thing "has probability 40%". That's why I fixated on that sentence in particular: it's the closest thing I could find to an actual definition of subjective probability in this post.

 

I have in mind very simple examples. Suppose that first I roll a die. If it doesn't land on a 6, I then flip a biased coin that lands on heads 3/5 of the time. If it does land on a 6, I just record the result as 'tails'. What is the probability that I get heads?

This is contrived so that the probability of heads is 

5/6 x 3/5 = 1/2.

But do you think that, in saying this, I mean something like "I don't know the exact initial conditions... well enough to have any meaningful knowledge of how it's going to land, and I can't distinguish between the two options." ?
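
(For what it's worth, here's a quick sanity check of the arithmetic - a minimal sketch in Python using only the standard library, with an illustrative helper name of my own choosing.)

```python
# Minimal sanity check of the die-then-biased-coin example above.
import random

def one_trial() -> bool:
    """Return True if the recorded result of one run is 'heads'."""
    if random.randint(1, 6) == 6:
        return False                 # a 6 is recorded straight away as 'tails'
    return random.random() < 3 / 5   # otherwise flip the biased coin: heads with probability 3/5

n = 1_000_000
print(sum(one_trial() for _ in range(n)) / n)  # ~0.5, matching 5/6 x 3/5 = 1/2
```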

Another example: Have you heard of the puzzle about the people randomly taking seats on the airplane? It's a well-known probability brainteaser to which the answer is 1/2, but I don't think many people would agree that saying the answer is 1/2 actually means something like "I don't know the exact initial conditions... well enough to have any meaningful knowledge of how it's going to land, and I can't distinguish between the two options."
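
(In case it's useful, here's a quick simulation of that puzzle - a minimal sketch in Python, assuming the usual setup: 100 passengers, the first sits in a uniformly random seat, and everyone after takes their own seat if it's free and a uniformly random free seat otherwise. The helper name is just illustrative.)

```python
# Minimal simulation of the airplane-seat puzzle (standard 100-seat version).
import random

def last_passenger_gets_own_seat(n_seats: int = 100) -> bool:
    free = set(range(n_seats))
    free.remove(random.randrange(n_seats))          # passenger 0 sits in a random seat
    for p in range(1, n_seats - 1):                 # passengers 1 .. n-2 board in order
        if p in free:
            free.remove(p)                          # own seat is free: take it
        else:
            free.remove(random.choice(list(free)))  # otherwise take a random free seat
    return (n_seats - 1) in free                    # is the last passenger's own seat the one left?

trials = 100_000
print(sum(last_passenger_gets_own_seat() for _ in range(trials)) / trials)  # ~0.5
```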

There needn't be any 'indistinguishability of outcomes' or 'lack of information' for something to have probability 0.5; it can just... well... be the actual result of a calculation involving two distinguishable, complementary outcomes.

We might be using "meaning" differently then!

I'm fine with something being subjective, but what I'm getting at is more like: Is there something we can agree on about which we are expressing a subjective view? 

I'm kind of confused about what you're asking me - like which bit is "accurate", etc. Sorry, I'll try to state my question one more time:

- Do you think that when someone says something has "a 50% probability", they are saying that they do not have any meaningful knowledge that allows them to distinguish between two options?

I'm suggesting that you can't possibly think that, because there are obviously other ways things can end up 50/50. E.g. maybe it's just a very specific calculation, using lots of specific information, that happens to end up with the value 0.5 at the end. This is a different situation from having 'symmetry' and no distinguishing information.

Then I'm saying: OK, assuming you indeed don't mean the above thing, what exactly does one mean in general when saying something is 50% likely?

 
