I think DG is making a more nitpicky point and just claiming that that specific definition isn't feasible, rather than using this as a claim that foom is not feasible, at least in this post. He also claims that elsewhere, but with a different argument about humans being able to make narrow AI for things like strategy (which I think is also wrong). At least that's what I've understood from our previous discussions.
yeah, totally, i'm also just using that post as a jumping-off point for a more in-depth long-form discussion about dragon god's beliefs.
So it seems that a lot of people applied to the Understanding Search in Transformers project to do mechanistic interpretability research, and probably a lot of them won't get in.
I think there are a lot of similar projects and potential low-hanging fruit people could work on, and we could probably organize more teams working on similar things.
I’m willing to organize at least one such project myself (specifically, working on trying to figure out how algorithm distillation, https://arxiv.org/pdf/2210.14215.pdf, works) and will talk with Linda about it in 2 we...
About the sampling thing: I think a better way to do it, one that would also work for other kinds of models, would be to train a few different models that do better or worse on the task and use different policies, and then make a dataset out of trajectories sampled from several of them. That should be cleaner in terms of knowing what is going on in the training set than collecting the data as the model trains (which, on the other hand, is actually better for doing AD).
That also has the benefit of letting you study how which agents you use to generate the training da...
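Here's a minimal sketch of the kind of data-generation setup I mean (the environment, policy names, and episode counts are just placeholders, and it assumes gymnasium-style envs rather than whatever setup the AD paper actually uses):

```python
import gymnasium as gym

def collect_trajectories(policies, env_id="CartPole-v1", episodes_per_policy=100):
    """policies: callables obs -> action, e.g. separately trained models of varying skill."""
    env = gym.make(env_id)
    dataset = []
    for policy_id, policy in enumerate(policies):
        for _ in range(episodes_per_policy):
            obs, _ = env.reset()
            steps = []
            done = False
            while not done:
                action = policy(obs)
                next_obs, reward, terminated, truncated, _ = env.step(action)
                steps.append((obs, action, reward))
                obs = next_obs
                done = terminated or truncated
            # tagging each trajectory with the policy that generated it lets you
            # later study how the mix of agents affects what gets learned
            dataset.append({"policy_id": policy_id, "steps": steps})
    env.close()
    return dataset

# e.g. dataset = collect_trajectories([random_policy, mediocre_policy, expert_policy])
```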
Oh nice, I was interested in doing mechanistic interpretability on decision transformers myself and had gotten started during SERI MATS, but then became more interested in looking into algorithm distillation and the decision transformers stuff fell by the wayside (plus I haven't been very productive during the last few weeks, unfortunately). It's too late to read the post in detail today, but I'll probably read it carefully and look at the repo tomorrow. I'm interested in helping with this and I'm likely going to be working on some related research in the near fut...
Another possible update is towards shorter timelines, if you think that humans might not be trained with the optimal amount of data (since we can't, for example, just read the entire internet), and so it might be possible to get better performance with fewer parameters, if you assume the brain has similar scaling laws.
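Rough numerical illustration of the shape of that argument, using the Chinchilla-style parametric loss from Hoffmann et al. 2022 (the constants are their fitted values for language models; whether the brain follows anything like this curve is exactly the assumption being made, and the parameter/token counts below are arbitrary):

```python
# L(N, D) = E + A / N**alpha + B / D**beta, fitted constants from Hoffmann et al. 2022
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

# a big but data-limited "model" vs. a 10x smaller one trained on 100x more data
print(loss(1e12, 1e11))  # ~2.07
print(loss(1e11, 1e13))  # ~1.86 -> fewer parameters, better loss
```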
Not a response to your actual point, but I think that hypothetical example probably doesn't make sense (as in, making the AI not "care" doesn't prevent it from including mind-hacks in its plan). If you have a plan that is "superintelligently optimized" for some misaligned goal, then that plan will have to take into account the effect of outputting the plan itself, and will by default contain deception or mind-hacks even if the AI doesn't in some sense "care" about executing plans. (or if you set up some complicated scheme with counterfactuals so the model ignores the ef...
So, first, it is really unclear what you would actually get from gpt6 in this situation.
(As an aside I tried with gptj and it outputted an index with some chapter names).
You might just get the rest of your own comment or something similar....
Or maybe you get some article about Eliezer's book, some joke book written now, the actual book but containing subtle errors Eliezer might make, a fake article written by an AGI that gpt6 predicts would likely have taken over the world by then... etc.
Since in general gpt6 would be optimized to predict (in ...
Well, if Mary does learn something new (how it feels "from the inside" to see red, or whatever), she would notice, and her brain state would reflect that, plus whatever information she learned. Otherwise it doesn't make sense to say she learned anything.
And just the fact that she learned something, and might have thought something like "neat, so that's what red looks like", would be relevant to predictions of her behavior, even ignoring the possible information content of qualia.
So it seems distinguishable to me.
Not sure what you mean.
If some action is a risk to the world but Harry doesn't know it, the vow doesn't prevent him from doing it.
If, after taking some action, Harry realizes it risked the world, nothing happens, except maybe him not being able to repeat the decision if it comes up again.
If not taking some action (for example, defeating someone about to obliviate him) would cause him to forget about a risk to the world, the vow doesn't actually force him to take it.
And if Harry is forced to decide between ignorance and a risk to the world, he will choose whichever he thinks is less likely to destroy the world.
The thing about ignorance also seems to apply to abandoning intelligence buffs.
This post doesn't actually make me optimistic about Conjecture pulling this off, because for that I would have to see details, but it does at least look like you understand why this is hard and why the easy versions, like just telling gpt5 to imitate a nice human, won't work. And I like that this actually looks like a plan. Maybe it will turn out not to be a good plan, but at least it's better than OpenAI's plan of
"well figure out from trial and error how to make the Magic safe somehow".