Israeli Prime Minister, Musk, Tegmark and Brockman discuss existential risk from AI.

Nothing truly revolutionary was said. The most interesting takeaways are that the Prime Minister seems to take AI risk seriously, thinks in terms of exponential progress, wants to prevent monopolies, and believes we have roughly six years before things change drastically.

Some quotes from the Prime Minister:

I had a conversation with one of your colleagues, Peter Thiel, and he said to me, "Oh, it's all scale advantages. It's all monopolies." I said, well, yeah, I believe that, but we have to stop it at a certain point because we don't want to depress competition.

AI is producing, you know, this wall [gestures an exponential curve with his hands]. And you have these trillion dollar companies that are produced what, overnight? And they concentrate enormous wealth and power with smaller and smaller number of people. And the question is, what do you do about that monopoly power?

With such power comes enormous responsibility. That's the crux of what we're talking about here, is how do we inject a measure of responsibility and ethics into this, into this exponentially changing development?

Max [Tegmark]'s book takes you to the existential question of whether, you know, you project basically machine intelligence or human intelligence into the cosmos. Human intelligence turned into machine intelligence, into the cosmos and so on. That's a big philosophical question. I'd like to think we have about six years for that.

I think we have to conduct a robust discussion with the other powers of the world based on their self-interest as you began to do. And I think that's a pioneering work. And I think we have a shot maybe at getting to some degree of control over our future, which could be amazing.



So I listened to the conversation.


  • He seems most worried about concentration of power
  • He seemed to understand the argument that it is dangerous to have powerful beings with goals different from ours, but his position was unclear to me
  • He was worried that our social and political institutions aren’t adapted “like inventing nuclear in the Stone Age”
  • He’s read Life 3.0

I can’t help but read this simply as a politician worrying about his future hold on power. (I’d be curious to know how leaders discuss AI behind closed doors.)