I'm interested in doing in-depth dialogues to find cruxes. Message me if you'd like to do this.
I do alignment research, mostly stuff that is vaguely agent foundations. Currently doing independent alignment research on ontology identification. Formerly on Vivek's team at MIRI.
I'm writing a post about this at the moment. I'm confused about how you're thinking about the space of agents, such that "maybe we don't need to make big changes"?
When you make a bigger change, you just have to be really careful that you land in the basin again.
How can you see whether you're in the basin? What actions help you land in the basin?
The AI reasons more competently in corrigible ways as it becomes smarter, falling deeper into the basin.
The AI doesn't fall deeper into the basin by itself, it only happens because of humans fixing problems.
(Ryan is correct about what I'm referring to, and I don't know any details).
I want to say publicly, since my comment above is a bit cruel in singling out MATS specifically: I think MATS is the most impressively well-run organisation that I've encountered, and overall supports good research. Ryan has engaged at length with my criticisms (both now and when I've raised them before), as have others on the MATS team, and I appreciate this a lot.
Ultimately most of our disagreements are about things that I think a majority of "the alignment field" is getting wrong. I think most people don't consider it Ryan's responsibility to do better at research prioritization than the field as a whole. But I do. It's easy to shirk responsibility by deferring to committees, so I don't consider that a good excuse.
A good excuse is defending the object-level research prioritization decisions, which Ryan and other MATS employees happily do. I appreciate them for this, and we agree to disagree for now.
Tying back to the OP, I maintain that multiplier effects are often overrated because of people "slipping off the real problem", and this is a particularly large problem with founders of new orgs.
I want to register disagreement. Multiplier effects are difficult to get and easy to overestimate. It's very difficult to get other people working on the right problem, rather than slipping off and working on an easier but ultimately useless problem. From my perspective, it looks like MATS fell into this exact trap. MATS has kicked out ~all the mentors who were focused on real problems (in technical alignment) and has a large stack of new mentors working on useless but easy problems.
[Edit 5hrs later: I think this has too much karma because it's political and aggressive. It's a very low effort criticism without argument.]
By the way, there seems to be an issue where sympy silently drops precision under some circumstances. Definitely a bug. A couple of times it's caused non-trivial errors in my KLs. It's pretty rare, but I don't know any way to completely avoid it. Thinking of switching to a different library.
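For concreteness, here's one way sympy can silently end up with less precision than you asked for. I don't know whether this is the exact path that bit me, and this particular case is documented behaviour rather than a bug, but it produces the same kind of silent error:

```python
import sympy as sp

# Building a Float from a Python float inherits the ~16 significant digits of
# the binary double, even though 50 digits were requested; building it from a
# string gives the full 50 digits. Mixing the two silently injects an error of
# roughly 1e-17 into downstream arithmetic (e.g. the log terms of a KL).
a = sp.Float(0.1, 50)    # from the binary double nearest to 0.1
b = sp.Float("0.1", 50)  # from the decimal string "0.1"

print(a)      # 0.10000000000000000555... (artifacts of the double)
print(b)      # 0.10000000000000000000...
print(a - b)  # roughly 5.6e-18, far above the 1e-50 you might expect
```

If the real culprit looks like this, routing every literal through strings (or `Rational`) before it reaches `Float` would avoid it; if not, it's at least worth ruling out before switching libraries.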
Relevant comment on reddit from someone working on Leela Odds:
Why would models start out aligned by default?
This is the best I've got so far. I estimated the rating using the midpoint of a logistic regression fit to the games. The first few especially seem inflated because there weren't enough high-rated players in the data, so the fit had to extrapolate. And they all seem inflated by (I'd guess) a couple of hundred points due to the effects I mentioned in the post. (Edit: Please don't share the graph alone without this context).
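For concreteness, here's a stripped-down sketch of the kind of fit I mean, on synthetic data rather than the real games (the variable names and numbers are made up for illustration): fit a logistic regression of win probability against opponent rating, then read off the rating where the fitted probability crosses 0.5.

```python
# Sketch of the "midpoint of a logistic regression" estimate, on synthetic
# data; real games would supply opponent_rating and bot_won instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
opponent_rating = rng.uniform(1800, 3000, size=500)
true_rating = 2450
p_win = 1 / (1 + 10 ** ((opponent_rating - true_rating) / 400))  # Elo win prob
bot_won = rng.random(500) < p_win

# Rescale the rating so the solver is well-conditioned, fit, then map the
# 50% point back to the Elo scale. C is large so regularization is negligible.
x = ((opponent_rating - 2400.0) / 400.0).reshape(-1, 1)
model = LogisticRegression(C=1e6).fit(x, bot_won)
midpoint = -model.intercept_[0] / model.coef_[0][0]
estimated_rating = 2400.0 + 400.0 * midpoint
print(estimated_rating)  # lands near 2450 on this synthetic data
```

When the data is thin above the fitted midpoint, that 50% crossing is effectively an extrapolation from the tail of the curve, which is where I'd expect the inflation in the first few estimates to come from.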
The NN rating in the Blitz data highlights the flaw in this method of estimating the rating.
I haven't found a way to get similar data on human vs human games.
Took a while to download all this. I'm curious: what's your blitz rating?
Does that sound right?
Can't give a confident yes, because I'm pretty confused about this topic, and I'm currently pretty unhappy with the way the leverage prior mixes up action and epistemics. The issue of discounting theories of physics because they imply high leverage seems really bad? I don't understand whether the UDASSA thing fixes this. But yes.
That avoids the "how do we encode numbers" question that naturally arises.
I'm not sure how natural the encoding question is; there's probably an AIT answer to this kind of question that I don't know.
I think the "follow the spirit of the rule" thing is more like rule utilitarianism than like deontology. When I try to follow the spirit of a rule, the way I do it is by understanding why the rule was put in place. In other words, I switch to consequentialism. For an agent that doesn't fully trust itself, it's worth following rules, but the reason you keep following them is that you understand why putting the rules in place makes the world better overall from a consequentialist standpoint.
So I have a hypothesis: it's important for an agent to understand the consequentialist reasons for a rule if you want its deontological respect for that rule to remain stable as it considers how to improve itself.