Have you tried emailing the authors of that paper and asking if they think you're missing any important details? Imo there's 3 kinds of papers:
I'm pro more safety work being replicated, and would be down to fund a good effort here, but I'm concerned about 2 and 3 getting confused
Interesting. Thanks for writing up the post. I'm reasonably persuaded that this might work, though I am concerned that long chains of RL are just sufficiently fucked functions with enough butterfly effects that this wouldn't be well approximated by this process. I would be interested to see someone try though
Sure, but there's a big difference between engaging in PR damage control mode and actually seriously engaging. I don't take their choosing the former as significant evidence of wrongdoing
Thanks for adding the context!
Separately, I think an issue is that they're incredibly non-transparent about what they're doing, have been somewhat misleading in their responses to my tweets, and haven't answered any of the questions.
I can't really fault them for not answering or being fully honest; from their perspective, you're a random dude who's attacking them publicly and trying to get them lots of bad PR. I think it's often very reasonable to just not engage in situations like that. Though I would judge them for outright lying
Thanks for sharing, this is extremely important context - I'm way more ok with dual-use threats from a company actively trying to reduce bio risk from AI who seem to have vaguely reasonable threat models, than from just reckless gain-of-function people with insane threat models. It's much less clear to me how much risk is ok to accept from projects actively doing reasonable things to make it better, but it's clearly non-zero (I don't know if this place is actually doing reasonable things, but Mikhail provides no evidence against)
I think it was pretty misleading for Mikhail not to include this context in the original post.
I think that being a good founder in AI safety is very hard, and generally only recommend doing it after having some experience in the field - this strongly applies to research orgs, but also to eg field building. If you're founding something, you need to constantly make judgements about what is best, and don't really have mentors to defer to, unlike many entry-level safety roles, and often won't get clear feedback from reality if you get them wrong. And these are very hard questions, and if you don't get them right, there's a good chance your org is mediocre. I think this applies even to orgs within an existing research agenda (most attempts to found mech interp orgs seem doomed to me). Field building is a bit less dicey, but even then, you want strong community connections and a sense for what will and will not work.
I'm very excited for there to be more good founders in AI Safety, but don't think loudly signal boosting this to junior people is a good way to achieve this. And imo "founding an org" is already pretty high status, at least if you're perceived to have some momentum behind you?
I'm also fine with people without a lot of AI safety expertise partnering with those who do have it as co-founders, but I struggle to think of orgs that I think have gone well that didn't have at least one highly experienced and competent co-founder
40 people attending a one-hour meeting spends 40 hours of total employee time. 20 people attending a one-hour meeting spends 20 hours of total employee time. Total employee time is the main metric that matters
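(For concreteness, the trivial cost model behind that claim, sketched with a function name of my own invention:)

```python
def total_employee_hours(attendees: int, meeting_hours: float) -> float:
    # Every attendee spends the full meeting length, so cost scales linearly
    # with headcount.
    return attendees * meeting_hours

print(total_employee_hours(40, 1.0))  # 40.0 hours
print(total_employee_hours(20, 1.0))  # 20.0 hours
```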
Good point, I buy it
It doesn't seem that surprising, conditioned on everything being linear
Eg Wood Labs + eval prompt is 1.0 eval-ness; Wood Labs + deployment prompt is 0.7 eval-ness
A steering vector should be able to amplify this, if the noise isn't too bad, to get it below 0
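To make that concrete, here's a minimal sketch (all names and numbers are mine, assuming activations really are roughly linear in an "eval-ness" direction plus modest noise):

```python
import torch

def evalness(act: torch.Tensor, direction: torch.Tensor) -> float:
    """Projection of an activation vector onto the unit eval-ness direction."""
    d = direction / direction.norm()
    return (act @ d).item()

def steer(act: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
    """Subtract alpha * unit direction, i.e. steer away from eval-ness."""
    d = direction / direction.norm()
    return act - alpha * d

direction = torch.randn(512)
unit = direction / direction.norm()

# Deployment-prompt activation sitting at ~0.7 eval-ness, plus a little noise.
act = 0.7 * unit + 0.05 * torch.randn(512)

print(evalness(act, direction))                          # ~0.7
print(evalness(steer(act, direction, 1.0), direction))   # ~-0.3, i.e. below 0
```

The point is just that if the deployment case really does sit at ~0.7 along the direction with small noise, a large enough coefficient on the steering vector pushes it below zero.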
I continue to feel like we're talking past each other, so let me start again. We both agree that causing human extinction is extremely bad. If I understand you correctly, you are arguing that it makes sense to follow deontological rules, even if there's a really good reason breaking them seems locally beneficial, because on average, the decision theory that's willing to do harmful things for complex reasons performs badly.
The goal of my various analogies was to point out that this is not actually a fully correct statement about common sense morality. Common sense morality has several exceptions, for things like having someone's consent to take on a risk, someone doing bad things to you, and innocent people being forced to do terrible things.
Given that exceptions exist for cases where we believe the general policy is bad, I am arguing that there should be an additional exception: if there is a realistic chance that the bad outcome happens anyway, and you believe you can reduce the probability of that bad outcome (even after accounting for cognitive biases, sources of overconfidence, etc.), then it can be ethically permissible to take actions whose side effects increase the probability of the bad outcome in other ways.
When analysing the reasons I broadly buy the deontological framework for "don't commit murder", I think there are some clear lines in the sand, such as maintaining a valuable social contract, and the fact that if you do nothing, the outcomes will be broadly good. Further, society has never really had to deal with something as extreme as doomsday machines, which makes me hesitant to appeal to common sense morality at all. To me, the point where things break down with standard deontological reasoning is that this is just very far outside the context where such priors were developed and have proven to be robust. I am not comfortable naively assuming they will generalize, and I think this is an incredibly high stakes thing where far and away the only thing I care about is taking the actions that will actually, in practice, lead to a lower probability of extinction.
Regarding your examples, I'm completely ethically comfortable with someone making a third political party in a country where the population has two groups who both strongly want to commit genocide against the other. I think there are many ways that such a third political party could reduce the probability of genocide, even if it ultimately draws on a political base that wants negative outcomes.
Another example is nuclear weapons. From a certain perspective, holding nuclear weapons is highly unethical as it risks nuclear winter, whether by provoking someone else or via a false alarm on your own side. While I'm strongly in favour of countries unilaterally switching to a no-first-use policy and pursuing mutual disarmament, I am not in favour of countries unilaterally disarming themselves. By my interpretation, your proposed ethical rules suggest that countries should unilaterally disarm. Do you agree with that? If not, what's disanalogous?
COVID-19 would be another example. Biology is not my area of expertise, but as I understand it, governments took actions that were probably good but risked some negative effects that could have made things worse. For example, widespread use of vaccines or antivirals, especially via the first-doses-first approach, plausibly made it more likely that resistant strains would spread, potentially affecting everyone else. In my opinion, these were clearly net-positive actions because the good done far outweighed the potential harm.
You could raise the objection that governments are democratically elected while Anthropic is not, but there were many other actors in these scenarios, like uranium miners, vaccine manufacturers, etc., who were also complicit.
Again, I'm purely defending the abstract point of "plans that could result in increased human extinction, even if by building the doomsday machine yourself, are not automatically ethically forbidden". You're welcome to critique Anthropic's actual actions as much as you like. But you seem to be making a much more general claim.