Jeff Rose

Comments

Reframing the AI Risk

In addition to being misleading, this just makes AI one more (small) facet of security. But security is broadly underinvested in and there is limited government pushback. Moreover, there is already a security community which prioritizes other issues and thinks differently. So this would place AI in the wrong metaphorical box.

While I'm not a fan of the proposed solution, I do want to note that it's good that people are beginning to look at the problem.

wrapper-minds are the enemy

One line of reasoning is as follows:

  1. We don't know what goal(s) the AGI will ultimately have. (We cannot reliably ensure what those goals will be.)
  2. There is no strong reason to believe it will have any particular goal.
  3. Looking at all the possible goals that it might have, goals of explicitly benefiting or harming human beings are not particularly likely.
  4. On the other hand, because human beings use resources which the AGI might want for its own goals and/or might pose a threat to the AGI (by, e.g., creating other AGIs), there are reasons why an AGI not dedicated to harming or benefiting humanity might destroy humanity anyway. (This is an example or corollary of "instrumental convergence".)
  5. Because of 3, minds being tortured for eternity is highly unlikely.
  6. Because of 4, humanity being ended in the service of some alien goal which has zero utility from the perspective of humanity is far more likely. 

To what extent have ideas and scientific discoveries gotten harder to find?

Exactly this. The rest, those little irregularities, didn't matter at the time, because we didn't know what we didn't know.

Against Active Shooter Drills

It is a separate and entirely different problem.   

First, do no harm.

A claim that Google's LaMDA is sentient

One in a hundred likely won't be enough if the organization doing the boxing is sufficiently security-conscious. (And if not, there will likely be other issues.)

Why has no person / group ever taken over the world?

China is currently an effective peer competitor of the US, among other issues.  2010 is a rough estimate of when that condition started to obtain.

Godzilla Strategies

I think people here are uncomfortable advocating for political solutions, either because of their views of politics or because of their comfort level with engaging in it.

You don't have to believe that alignment is impossible to conclude that you should advocate for a political/governmental solution. All you have to believe is that the probability of x-risk from AGI is reasonably high and the probability of alignment working to prevent it is not reasonably high. That seems to describe the beliefs of most of those on LessWrong.

Why has no person / group ever taken over the world?

I suspect you will not accept this answer, but for many practical definitions the United States had control over the world starting in 1991 and ending around 2010.

Godzilla Strategies

It suggests putting more weight on a plan to get AI research globally banned. I am skeptical that this will work (though if burning all GPUs would be a pivotal act, the chances of success are significantly higher), but it seems very unlikely that there is a technical solution either.

In addition, at least some purported technical solutions to AI risk seem to meaningfully increase the risk to humanity. If you have someone creating an AGI to exercise sufficient control over the world to execute a pivotal act, that raises the stakes of being first enormously, which incentivizes cutting corners. It also makes it more likely that the AGI will destroy humanity, and will be quicker to do so.
