What did Sam Altman do? Like, what are the top two things he did that make you say that? This might sound absolutely wild to you, but I'm still not convinced Sam Altman did anything bad. It's possible, sure, but based on the depositions, it's far from clear IMO.
Edit: unless you mean the for-profit conversion, but I understood you to be talking about the board drama specifically.
I'd be pretty unhappy if things stayed the same for me. I'd want at least some radical things, like curing aging. But still, I'd definitely want to be able to decide for myself and not be forced into anything in particular (unless really necessary for some reason, like to save my life, but hopefully it won't come to that).
Yup, but the AIs are massively less likely to help with creating cruel content. There will be a huge asymmetry in what they will be willing to generate.
Imagine an Internet where half the population is Grant Sanderson (the creator of 3Blue1Brown). That'd be awesome. Grant Sanderson has the same incentives as anyone else to create cruel and false content, but he just doesn't.
People are very worried about a future in which a lot of the Internet is AI-generated. I'm kinda not. So far, AIs are more truth-tracking and kinder than humans. I think the default (conditional on OK alignment) is that an Internet with a much larger population of AIs is a much better experience for humans than the current Internet, which is full of bullying and lies.
All such discussions hinge on AI being relatively aligned, though. Of course, an Internet full of misaligned AIs would be bad for humans, but the reason would be human disempowerment, not any of the usual reasons people give for why such an Internet would be terrible.
It's good to see more funding entering the field.
Is the funding coming from new sources?
I still mourn a life without AI
Honestly, if AI goes well, I really won't. I will mourn the people who have died too early. The current situation is quite bad. My main feeling at first will probably be extreme relief.
I think my brain was trying to figure out why I felt inexplicably bad upon hearing that Joe Carlsmith was joining Anthropic to work on alignment
...
Perhaps the most important strategic insight resulting from this line of thought is that making illegible safety problems more legible is of the highest importance
Well, one way to make illegible problems more legible is to think about them and then go work at Anthropic to make them legible to employees there, too.
It would be helpful for the discussion (and for me) if you gave an example of a legible problem vs. an illegible problem. I expect people might disagree on the specifics, even if they seem to agree with the abstract argument.
I think we have enough evidence to say that, in practice, this turns out to be either very easy or moot. Values tend to cluster in LLMs (good with good and bad with bad; see the emergent misalignment results), so value fragility isn't a hard problem.