What's the point of speculating about what something literally defined as having more knowledge than us would believe?

Precisely. No point. But people have been speculating a lot about how it would behave, talking themselves into certainty that it would eat us. Those people need a speculative antidote. If you speculate too much about one thing, but not about anything else, you start taking speculation as weak evidence and deluding yourself.

edit: Also, try eating random unknown chemicals if you truly believe you should not worry about unknowns. One absolutely SHOULD worry about changing the status quo.

To respond to the edit, I simply don't see the analogy.

Your wording makes it sound analogous because you could describe what I'm saying as "don't worry about unknowns" (i.e., you have no evidence for whether God exists or not, so don't worry about it), and you could describe your reductio the same way (i.e., you have no evidence for whether some random chemical is safe, so don't worry about it). But when I try to visualize the situation, I don't see the connection.

A better analogy would be being forced to take one of five different medications...

Crux · 8y

It may be because I haven't slept in 30 hours, but I'm having a hard time interpreting your writing. I've seen you make some important insights elsewhere, and I occasionally see exactly what you're saying, but my general impression of you is that you're not very good at judging your audience and properly managing the inferential distance.

You seem to agree with me to some extent in this discussion, or at least we don't seem to have a crucial disagreement, and this topic doesn't seem very important anyway, so I'm not necessarily asking you to explain yourself if that would take a long time, but perhaps this can serve as some constructive criticism thrown at you in a dark corner of a random thread.

As a meta question, would this sort of reply do better as a PM? What are the social considerations (signaling etc.) with this sort of response? I don't know where to even start in that train of thought.

Scenario analysis: semi-general AIs

by Will_Newsome · 1 min read · 22nd Mar 2012 · 66 comments



Are there any essays anywhere that go into depth on scenarios where AIs become somewhat recursive/general, in that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion attainable by the AIs? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g. have rudimentary communication/negotiation abilities and some level of ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction or wireheading or the like.

At first blush, this scenario strikes me as Bad; AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without being able to reach general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist, anywhere?

I have various questions about this scenario, including:

  • How quickly should one expect temetic selective sweeps to reach ~99% fixation?
  • To what extent should SGAIs be expected to cooperate with humans in such a scenario? Would SGAIs be able to make plans that involve exchange of currency, even if they don't understand what currency is or how exactly it works? What do humans have to offer SGAIs?
  • How confident can we be that SGAIs will or won't have enough oomph to FOOM once they saturate and optimize/corrupt all existing computing hardware?
  • Assuming such a scenario doesn't immediately lead to a FOOM scenario, how bad is it? To what extent is its badness contingent on the capability/willingness of SGAIs to play nice with humans?
Those are the questions that immediately spring to mind, but I'd like to see who else has thought about this and what they've already considered before I cover too much ground.
My intuition says that thinking about SGAIs in terms of population genetics and microeconomics will somewhat counteract automatic tendencies to imagine cool stories rather than engage in dispassionate analysis. I'd like other suggestions for how to achieve that goal.
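To make the population-genetics framing concrete: here is a minimal sketch, not from the post, of the textbook deterministic logistic selective-sweep model, which is one way to start on the "how quickly would a temetic sweep reach ~99% fixation" question above. The selection coefficient s, the host count, and the 99% target are all hypothetical placeholders, not estimates about real SGAIs.

```python
import math


def sweep_time(s: float, n_hosts: int, target: float = 0.99) -> float:
    """Generations for a variant with per-generation selective advantage s
    to rise from frequency 1/n_hosts to `target` under the deterministic
    logistic model dp/dt = s * p * (1 - p)."""
    def logit(p: float) -> float:
        return math.log(p / (1.0 - p))

    p0 = 1.0 / n_hosts
    # Closed-form solution of the logistic ODE:
    #   t = (logit(target) - logit(p0)) / s
    return (logit(target) - logit(p0)) / s


if __name__ == "__main__":
    # Hypothetical numbers: a variant spreading among 10^9 compromised hosts.
    for s in (0.01, 0.05, 0.5):
        gens = sweep_time(s, 10**9)
        print(f"selective advantage {s:4.2f}: ~{gens:6.0f} generations to 99% fixation")
```

The only point of the toy model is that fixation time scales like ln(N)/s, so even modest per-generation advantages sweep quickly relative to population size; what a "generation" means in wall-clock time for SGAIs is exactly the kind of open empirical question the post is asking about.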
I'm confused that I don't see people talking about this scenario very much; why is that? Why isn't it the default expected scenario among futurologists? Or have I just not paid close enough attention? Is there already a name for this class of AIs, and if so, is it better than "semi-general AIs"?
Thanks for any suggestions/thoughts, and my apologies if this has already been discussed at length on LessWrong.
