Yesterday, I posted 23 ideas for possible blog posts for the remaining 23 days of Inkhaven. They were all AI-adjacent. Today I’ll post 22 non-AI ones. I probably won’t write any of these, because priorities, but ya know… maybe if people are particularly keen?? 1. My experience recording...
I’ve written 7 blog posts for Inkhaven so far. That leaves 23 to go. If you’ve been annoyed by the sudden influx of daily blog posts, good news! I’ve decided to stop sending all of these to my subscribers. I’ll plan to limit it to the ones I think people will...
There are a few different ways the term “alignment” is used by people working on AI safety. This leads to some important confusions, which are the main point of this post. But there’s some background to cover first, so some readers may want to skip ahead to the “alignment vs. safety” section. As...
If you’re already familiar with the history of the field, you might wanna skip this one… I like to imagine future historians trying to follow the discourse around AI during the time I’ve been in the field… “Wait, so the AI ethics people think that the AI safety people are...
On a sunny Saturday afternoon two weeks ago, I was sitting in Dolores Park, watching a man get turned into a cake. It was, I gather, his birthday, and for reasons (Maybe something to do with Scandinavia?) his friends had decided to celebrate by taping him to a tree and...
About a year ago, we wrote a paper that coined the term “Gradual Disempowerment.” It proved to be a great success, which is terrific. A friend and colleague told me it was the most-discussed paper at DeepMind last year (selection bias, grain of salt, etc.). It spawned articles...
A few years ago, I listened to a fascinating podcast interview featuring former Democratic presidential candidates Andrew Yang and Marianne Williamson. They agreed that politics is a mess and that politicians are constantly doing bad things that harm the people they are supposed to serve. But they couldn’t agree on how...