Their alignment team gets busy using the early transformative AI to solve the alignment problems of superintelligence. The early transformative AI spits out some slop, as AI does. Alas, one of the core challenges of slop is that it looks fine at first glance, and one of the core problems of aligning superintelligence is that it's hard to verify…
Ok, but wouldn't we also be testing our AIs on problems that are easy to verify?
Like, when the cutting-edge AIs are releasing papers that are elegantly solving long-standing puzzles in physics or biology, and making surprising testable predictions along the way, we'll know that they're capable of producing more than slop.
Are you proposing that...
...or something else?
Fuck yeah. This is inspiring. It makes me feel proud and want to get to work.
Yeah, I saw.
I have 60% probability that you intentionally structured the post to mirror the pattern of how you felt reading the book.
I'll take that bet. 1:1, $100?
Is the media attention of publishing a book through standard publishers worth putting the authors' motives in question?
Yes. It's approximately the whole point. The authors have already produced massive amounts of free online content raising the alarm about AI risk. Those materials have had substantial impact, persuading the type of person who tends to read and be interested in long blog posts of that kind. But that is a limited audience.
The point of publishing a proper book is precisely to reach a larger audience, and to shift the Overton window of which views are known to be respectable.
Books released by standard publishers, sold at bookstores, get much more media attention and readership than free e-books.
I'd pay at least $100 to someone who could tell me where to buy a mask like that, or how to easily assemble the pieces.
I found an advance copy. :)
How? I thought MIRI was trying to be very careful about copies getting around before launch day.
Getting more experience that might inform what you want sounds like a generally sound idea, but isn't the "baby" stage only like 5% of the whole process of raising a child? If you don't like taking care of babies, that doesn't mean you overall don't want kids, right?
Is this that bad?
I think most Christians probably live pretty happy, humane lives. And the ways in which their lives are not happy seem likely to be improved a lot by trustworthy superintelligence.
Like, if a guy is gay, growing up trapped in an intensely Christian environment that is intent on indoctrinating him that homosexuality is sinful seems pretty bad. But in the year 3000, it seems like the Christian superintelligence will have effective techniques for removing his homosexual urges.
It does seem bad if you're trapped in an equilibrium where everyone knows that being gay is sinful, and also that removing homosexual urges is sinful, and also there are enormous superintelligence resources propping up those beliefs, such that it's not plausible for one to escape the memetic traps. Is that what you anticipate?