I wonder if the initial 67% in favor of x-risk was less a reflection of the audience's opinion on AI specifically than a general application of the heuristic "<X fancy new technology> = scary, needs regulation."

(That is, if you replaced AI with any other technology that general audiences are vaguely aware of but don't have a strong opinion on, such as CRISPR or nanotech, would they default to about the same number?)

Also, I would guess that hearing two groups of roughly equally smart-sounding people debate a topic one has no strong opinion on tends to pull one's initial opinion toward "looks like there's a lot of complicated disagreement, so idk, maybe it's 50/50 lol," regardless of the actual specifics of the arguments made.

There seems to be a lack of emphasis in this market on outcomes where alignment is not solved, yet humanity turns out fine anyway. From an Outside View perspective (one where we ignore any specific arguments about AI and just treat it like any other technology with a lot of hype), wouldn't one expect this to be the default outcome?

Take the following general heuristics:

  • If a problem is hard, it probably won't be solved on the first try.
  • If a technology gets a lot of hype, people will think it's the most important thing in the world even if it isn't; at most, it will be about as important as previous major technological advancements.
  • People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
  • If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.

These heuristics, applied to AGI, lead to the following conclusions:

  1. Nobody manages to completely solve alignment.
  2. This isn't a big deal, as AGI turns out to be disappointingly not that powerful anyway (or is at most "creation of the internet" level influential, but not "disassemble the planet's atoms" level influential).

I would expect the average person outside of AI circles to default to this kind of assumption.

The only option that seems fully compatible with this perspective is

G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.

which is one of the lowest probabilities on the market. I'm guessing this is because participants in such a market are heavily selected from those who already have strong opinions on AI risk?