The First Circle



Epistemic Status: Squared

The following took place at an unconference about AGI in February 2017, under the Chatham House Rule, so I won’t be using any identifiers for the people involved.

Something that mattered was happening.

I had just arrived in San Francisco from New York. I was having a conversation. It wasn’t a bad conversation, but it wasn’t important.

To my right on the floor was a circle of four people. They were having an important conversation. A real conversation. A sacred conversation.

This was impossible to miss. People I deeply respected spoke truth. Their truth. Important truth, about the most important question – whether and when AGI would be developed, and what we could do to change that date, or to ensure a good outcome. Primarily about the timeline. And that truth was quite an update from my answer, and from my model of what their answers would be.

I knew we were, by default, quite doomed. But not doomed so quickly!

They were unmistakably freaking out. Not in a superficial way. In a deep, calm, we are all doomed and we don’t know what to do about it kind of way. This freaked me out too.

They talked in a different way. Deliberate, careful. Words chosen carefully.

I did not know the generation mechanism. I did know that to disturb the goings on would be profane. So I sat on a nearby couch and listened for about an hour. I said nothing.

At that point, a decision was made to move to another room. I followed, and during the walk was invited to join. Two participants left, two stayed, and I joined.

The space remained sacred. I knew it had different rules, and did my best to follow them. When I was incorrect, they explained. Use ‘I’ statements. Things about your own beliefs, your models, your feelings. Things you know to be true. Pay attention to your body, and how it is feeling, where things come from, what they are like. Report it. At one point one participant said they were freaking out. I observed I was freaking out. Someone else said they were not freaking out. I said I thought they were. The first reassured me they thought there was some possibility we’d survive. Based on their prior statements, that was an update. It helped a little.

I left exhausted by the discussion, the late hour and the three hour time zone shift, and slept on it. Was this people just now waking up, perhaps not even fully? Or were people reacting too much to AlphaGo? Was this a west coast zeitgeist run amok? An information cascade?

Was this because people who understood that there was Impending Ancient Doom and we really should be freaking out about it were used to freaking out vastly more than everyone else? So when Elon Musk freaked out and put a billion dollars into OpenAI without thinking it through, and other prominent people freaked out, they instinctively kept their relative freakout level a constant amount higher than the public’s freakout level, resulting in an overshoot?

Was this actually people freaking out about Donald Trump, which gave them a sense we were doomed, and finding a way to express that?

Most importantly, what about the models and logic? Did they make sense? The rest of the unconference contained many conversations on the same topic, and many other topics. There was an amazing AI timeline double crux, teaching me both how to double crux and much about timelines and AI development. But nothing else felt like that first circle.

As several days went by, and all the data points came together, I got a better understanding of both how much people had updated, and why people had updated. I stopped freaking out. Yes, the events of the previous year had been freakout worthy, and shorter timelines should result from them. And yes, people’s prior timelines had been a combination of too long and based on heuristics not much correlated to actual future events. But this was an over-reaction, largely an information cascade, for a confluence of reasons.

I left super invigorated by the unconference, and started writing again.

Meta-note: This should not convince anyone of anything regarding AI safety, AI timelines or related topics, but I do urge all to treat these questions with the importance they deserve. The links in this post by Raymond Arnold are a good place to start if you wish to learn more.
