Most of this is interesting but unsurprising. Having reflected on it for a bit, I do find one thing surprising: it is very strange that Ilya doesn't know who is paying his lawyers. He simply assumes it is OpenAI and is apparently fine with that. I'm surprised he isn't concerned about a conflict of interest. I assume he has enough money that he could hire his own lawyers if he wanted. I would expect him to hire at least one lawyer himself to ensure that his own interests are represented and to check the work of the others.
I signed the statement. My concern, which you don't address, is that I think the statement should call for a prohibition on AGI, not just ASI. I don't think there is any meaningful sense in which we can claim that particular developments are likely to lead to AGI but definitely won't lead to ASI. History has shown that any time a narrow AI reaches human level, it is already superhuman. Indeed, if one imagines that tomorrow one had a true AGI (I won't define AGI here, but imagine an uploaded human that never needs to sleep or rest), then all one would need to do to make ASI is add more hardware to accelerate its thinking or run parallel copies.
As a professional software developer with 20+ years of experience who has repeatedly tried to use AI coding assistants and gotten consistently poor results, I am skeptical of even your statement that "The average over all of Anthropic for lines of merged code written by AI is much less than 90%, more like 50%." 50% seems way too high. Or, if it is accurate, most of that code consists of extraneous changes that aren't part of the core code that actually executes. For example, I've seen what I believe to be AI-generated code where 3/4 of the API endpoints are unused. They exist only because the AI assumed that the REST endpoint for each entity ought to support every action, even though that didn't make sense in that case.
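To illustrate the kind of thing I mean, here is a made-up sketch (not the actual code I saw; the Flask app and the `widgets` resource are hypothetical): the full CRUD set generated by convention, with only the read routes ever called by the application.

```python
# Hypothetical illustration: a full CRUD surface generated by convention,
# even though the application only ever calls the two GET routes.
from flask import Flask, jsonify, request

app = Flask(__name__)
WIDGETS = {1: {"id": 1, "name": "example"}}

@app.route("/widgets", methods=["GET"])
def list_widgets():                       # used by the frontend
    return jsonify(list(WIDGETS.values()))

@app.route("/widgets/<int:widget_id>", methods=["GET"])
def get_widget(widget_id):                # used by the frontend
    return jsonify(WIDGETS.get(widget_id, {}))

@app.route("/widgets", methods=["POST"])
def create_widget():                      # never called anywhere
    data = request.get_json()
    new_id = max(WIDGETS, default=0) + 1
    WIDGETS[new_id] = {"id": new_id, **data}
    return jsonify(WIDGETS[new_id]), 201

@app.route("/widgets/<int:widget_id>", methods=["PUT"])
def update_widget(widget_id):             # never called anywhere
    WIDGETS[widget_id] = {"id": widget_id, **request.get_json()}
    return jsonify(WIDGETS[widget_id])

@app.route("/widgets/<int:widget_id>", methods=["DELETE"])
def delete_widget(widget_id):             # never called anywhere
    WIDGETS.pop(widget_id, None)
    return "", 204
```

Every one of those routes would count toward "lines of merged code written by AI," even though three of them add nothing but surface area to maintain.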
I think there is a natural tendency for AI enthusiasts to overestimate the amount of useful code they are getting from AI. If we were going to make any statements about how much code was generated by AI at any organization, I think we would need much better data than I have ever seen.
Have you considered that the policies are working correctly for most people with a "normie" communication style? I agree that they should be clearer. However, when I read your description of what they are saying, I think the rule makes sense. It isn't that everything must be entirely fragrance-free; the intended rule seems to be that nothing should be strongly scented. For example, I've met women who use scented shampoo but whose scent you don't notice even when you are close to them. I've also met women whose shampoo you can smell from three feet away. They are basically asking that people use reasonable judgment. That may not be sufficient for extremely sensitive people, but it will address a lot of the problem. By having it in their code of conduct, they can ask people to leave if it is a problem.
Your Beantown Stomp statement seems to be the proper way to communicate the actually intended policy.
If your goal is to discourage violence, I feel you've missed a number of key considerations that you would need to address to actually dissuade people. Specifically, I find myself confused by several things:
So, by all means: we at LessWrong condemn any attempt to use violence to stop the race to an ASI that kills everyone. But if you were attempting to prevent some group from splintering off to seek violent means of resistance, I think you've somewhat missed the mark.
I did watch this interview, but not his other videos. It does start with the intro from that trailer. However, I did not see it as reflecting a personality cult. Rather, it seemed to me that it was trying to establish Eliezer's credibility and authority to speak on the subject for people who don't know who he is. You have to remember that most people aren't tracking the politics of the rationality community, and they are very used to an introduction that hypes up the guest. Yes, it may have been a bit more hyperbolic than I would like, but given how much podcast and interview guests are hyped on other channels, and given that Eliezer really is an expert on the subject (much more so than many book authors who get interviewed), it was necessary to lay it on thick.
I've been a long-time user of book darts and highly recommend them.
The one other downside is that if a dart catches on something while it is on the page and rotates, the clip can cut into the page edge. This can generally be avoided by seating each dart all the way onto the page edge and by taking care not to let anything drag along the page edges of a book that has darts in it.
Do co-ops scale? I would guess they may not. If many firms are larger than the size to which co-ops can effectively scale, then we would expect to see more traditional firms than co-ops.
This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control by the end of 2027.
I don't think this is a strong enough statement. Indeed, the median narrative is longer. However, even the modal narrative ought to include at least one unspecified obstacle occurring: in a three-year plan, the most frequent scenarios are ones where something goes wrong.
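To make that concrete, here is a toy calculation with numbers I've made up purely for illustration: suppose there are ten independent kinds of obstacles and each has only a 15% chance of occurring at some point during the plan.

```python
# Toy illustration with invented numbers: probability that a multi-year plan
# encounters at least one obstacle, assuming independent obstacle types.
n_obstacle_types = 10   # distinct things that could go wrong (assumed)
p_each = 0.15           # chance each one occurs during the plan (assumed)

p_none = (1 - p_each) ** n_obstacle_types
p_at_least_one = 1 - p_none

print(f"P(no obstacles)          = {p_none:.2f}")          # ~0.20
print(f"P(at least one obstacle) = {p_at_least_one:.2f}")  # ~0.80
```

Under those invented numbers, roughly four out of five runs of the plan hit at least one obstacle, so a story in which nothing goes wrong describes an unusually lucky world rather than the most frequent one.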
I worry that this paper and article seem to be safety-washing. They imply that existing safety techniques for LLMs are appropriate for more powerful systems, and they apply a safety mindset from other domains to AI in an inappropriate way. I do think AI safety can learn from other fields, but those need to be fields that deal with an intelligent adversarial agent. Studying whether failure modes are correlated doesn't matter when you have an intelligent adversary who can cause failure modes that would not normally be correlated to happen at the same time. If one is thinking only about current systems, then perhaps such an analysis would be helpful, but neither the paper nor the article calls that out.