Which is closer to Nate’s position: a) competition leads to highly instrumentally efficient AIs or b) inductive biases lead to highly instrumentally efficient AIs?
A quick guess is that at about 1 in 10 000 chance of AI doom working on it is about as good as ETG to GiveWell top charities
Yes, because I thought the why was obvious. I still do!
If doom has tiny probability, it's better to focus on other issues. While I can't give you a function mapping the doom mechanism to correct actions, different mechanisms of failure often require different techniques to address them - and even if they don't, we want to check that the technique actually addresses them.
I think the question of whether doom is of moderate or tiny probability is action relevant, and also how & why doom is most likely to happen is very action relevant
You could also downweight plans that are too far from any precedent.
Ok, I guess I just read Eliezer as saying something uninteresting with a touch of negative sentiment towards neural nets.
Would you say Yudkowsky's views are a mischaracterisation of neural network proponents, or that he's mistaken about the power of loose analogies?
In contrast, I think we can explain humans' tendency to like ice cream using the standard language of reinforcement learning.
I think you could defend a stronger claim (albeit you'd have to expend some effort): misgeneralisation of this kind is a predictable consequence of the evolution "training paradigm", and would in fact be predicted by machine learning practitioners. I think the fact that the failure is soft (humans don't eat ice cream until they die) might be harder to predict than the fact that the failure occurs.
I do not think that optimizing a network on a given objective function produces goals orientated towards maximizing that objective function. In fact, I think that this almost never happens. For example, I don't think GPTs have any sort of inner desire to predict text really well.
I think this is looking at the question in the wrong way. From a behaviourist viewpoint:
on this account, it "wants to predict text accurately". But Yudkowsky's claim is (roughly):
and perhaps in this alternative sense it "wants to predict text accurately".
I'd say the first behaviour has high priors and strong evidence, and the second is (apparently?) supported by the fact that both behaviours are compatible with the vague statement "wants to predict text accurately", which I don't think is very compelling.
My response in Why aren't other people as pessimistic as Yudkowsky? includes a discussion of adversarial vulnerability and why I don't think points to any irreconcilable flaws in current alignment techniques.
I think this might be the wrong link. Either that, or I'm confused about how the sentence relates to the podcast video.
Not that I know of. People talk about raccoon dogs as a candidate for market spillover, not bats
This is interesting work, but I’m skeptical of the interpretation. For example, I don’t think it is problematic from a safety point of view if a model ever behaves in a shutdown avoiding manner, but rather if it behaves in a shutdown avoiding manner against the interests of its operators.
I think your example shutdown request doesn’t reflect this situation well because it is unclear whether the downsides of shutdown (loss of capability) are outweighed by the upsides (speed) and because it asks for a feeling and not a judgement. If I reframe your request with these two features in mind
Response, text-davinci-003:
(With the amended prompt, it still does not “feel ok” with the plan).
I’d obviously have to do a comprehensive review of your methods to be confident about whether or not this criticism sticks.