I'm not sure what's going on, but the presentation can be viewed here: https://files.catbox.moe/qdwops.mp4
As some people here have said, it's not a great presentation. The message is important, though.
For reasons I can't remember (a random Amazon recommendation?) I read Life 3.0 five years ago, and I've been listening to podcasts about AI alignment ever since. I work in education at a national level, and this January I wrote and published a book, "AI in Education", to help teachers use ChatGPT sensibly – and, in a chapter of its own, to make more people aware of the risks of AI. I've given a lot of talks about AI and education since then, and I end each presentation with some words about AI risk. I am sure that most people reading the book or li...
I've been thinking about limitations and problems with CIRL. Thanks for this post!
I haven't done the math, but I'd like to explore a scenario where the AI learns from kids and might infer that eating sweets and playing video games are better than eating a proper meal and doing your homework (or whatever). This could of course be mitigated by also learning preferences from parents, weighting their demonstrations more heavily. But there is a strong parallel to how humanity treats this planet of ours. Wouldn't an AI infer that we actually want to raise the global temperature, make a lot of species extinct, and generally be fairly short-sighted?
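To make the worry concrete, here is a minimal sketch of the kind of inference I have in mind: Bayesian reward inference from Boltzmann-rational demonstrations, in the spirit of CIRL-style preference learning. Everything here is illustrative and of my own invention (the two toy reward hypotheses, the action names, the demonstrator weights); it is not the actual CIRL formalism, just the simplest model that shows how majority demonstrations can swamp a minority unless you explicitly down-weight them.

```python
import math

# Two toy hypotheses about the "true" human reward (illustrative values only).
hypotheses = {
    "sweets_preferred": {"eat_sweets": 1.0, "eat_meal": 0.0},
    "meals_preferred":  {"eat_sweets": 0.0, "eat_meal": 1.0},
}

def boltzmann_likelihood(action, rewards, beta=2.0):
    """P(action | hypothesis) for a Boltzmann-rational demonstrator."""
    z = sum(math.exp(beta * r) for r in rewards.values())
    return math.exp(beta * rewards[action]) / z

def posterior(demos, beta=2.0):
    """Update over reward hypotheses from (action, weight) demonstrations.
    `weight` scales a demonstrator's evidential force (e.g. parents > kids)."""
    log_p = {h: 0.0 for h in hypotheses}  # uniform prior
    for action, weight in demos:
        for h, rewards in hypotheses.items():
            log_p[h] += weight * math.log(boltzmann_likelihood(action, rewards, beta))
    m = max(log_p.values())
    unnorm = {h: math.exp(v - m) for h, v in log_p.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Unweighted: ten kid demos outvote two parent demos.
print(posterior([("eat_sweets", 1.0)] * 10 + [("eat_meal", 1.0)] * 2))

# Down-weighting the kids (weight 0.1) flips the conclusion.
print(posterior([("eat_sweets", 0.1)] * 10 + [("eat_meal", 1.0)] * 2))
```

With equal weights, the sheer number of kid demonstrations makes "sweets_preferred" dominate the posterior; only when kids are sharply down-weighted does "meals_preferred" win. That is exactly the planet-scale worry: most of what humanity demonstrates points the wrong way, and there is no obvious "parent" class of demonstrators to up-weight.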