Other people's beliefs are evidence. Many people believe in God. No one believes that disc golf causes eternal torture. The two hypotheses should not be assigned equal probability.

That's only because of how many people have believed in religion in the past.

So you do not believe that others' beliefs are evidence?

A belief can be evidence for its stipulated meaning (and often is), but it can also be counterevidence, or simply irrelevant. What a belief is evidence for is not automatically its stipulated meaning.

It's sometimes (or even very often) evidence, but not when (1) there's not even a shred of evidence elsewhere, and (2) there's a convincing, systematic explanation for how a particular cluster of epistemic vulnerabilities in human brain hardware led to its widespread adoption. In other words, a large portion of society believing something is evidence only if the memetic market test for the adoption of the idea at hand is intact. But our hardware and factory settings are so ridiculously maladapted to the epistemic environment of the modern world that this market test is extremely often utterly broken and useless.

If you want to make use of society's thoughts on an issue, you must first appraise the health of the market test for the adoption of the ideas. Is it likely that competition in this area of the memetic environment will lead to ever more sound beliefs, or is there a wrench in the system that is bound to lead to a systematic spiral toward ever more ridiculous or counterproductive dogmas? Our hardware is so riddled with epistemic problems that it would be a huge mistake to take societal conclusions at face value.

If the market test for meme propagation were intact, and the trial-and-error system for weeding out less useful beliefs in favor of more useful ones ran smoothly, large-scale acceptance of a position would of course be plenty of evidence, no further questions asked. But we live in a different world, one where this trial-and-error system is in utter disrepair in a staggering number of cases. In such a world, one must always start with the question, "Is the memetic market test intact in this case, or must I make this epistemic journey myself?"

Of course the market test is better or worse from one place to the next, and I hang out here because the Less Wrong community certainly has one of the best belief-propagation systems out there. If everybody on here seems to believe something with a lot of conviction, that to me is strong evidence.
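To make the intact-vs-broken market test distinction concrete, here is a minimal toy simulation. It is purely illustrative and entirely my own construction: the two meme names, the truth/catchiness split, and every parameter are assumptions, not anything taken from the comment above. Adoption is modeled as people copying whichever of two randomly sampled neighbors holds the "fitter" meme, where fitness is a weighted mix of truth and catchiness.

```python
# Toy model of a "memetic market test" (illustrative only; all names and
# parameters are assumptions). When the weight on truth is high, the market
# test is "intact" and the true meme sweeps; when the weight is near zero,
# the catchy falsehood sweeps instead.
import random

def simulate(truth_weight, generations=200, pop_size=1000, seed=0):
    rng = random.Random(seed)
    # Two competing memes: (truth value, catchiness).
    memes = {"true_but_dull": (1.0, 0.2), "false_but_catchy": (0.0, 1.0)}
    beliefs = [rng.choice(list(memes)) for _ in range(pop_size)]

    def fitness(name):
        truth, catchy = memes[name]
        return truth_weight * truth + (1 - truth_weight) * catchy

    for _ in range(generations):
        # Each person copies the fitter of two randomly sampled peers.
        beliefs = [max(rng.choice(beliefs), rng.choice(beliefs), key=fitness)
                   for _ in range(pop_size)]
    return beliefs.count("true_but_dull") / pop_size

print(simulate(truth_weight=0.9))   # intact test: the true-but-dull meme fixes
print(simulate(truth_weight=0.05))  # broken test: the catchy falsehood fixes
```

The point is only directional: the same copying dynamics produce near-unanimous consensus in both regimes, so consensus by itself doesn't tell you which regime you are in.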

Scenario analysis: semi-general AIs

by Will_Newsome · 1 min read · 22nd Mar 2012 · 66 comments


Are there any essays anywhere that go in depth about scenarios where AIs become somewhat recursive/general in that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion attainable by the AIs? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g. have rudimentary communication/negotiation abilities and some level of ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction or wireheading or the like.

At first blush, this scenario strikes me as Bad; AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without being able to reach general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist, anywhere?

I have various questions about this scenario, including:

  • How quickly should one expect temetic selective sweeps to reach ~99% fixation?
  • To what extent should SGAIs be expected to cooperate with humans in such a scenario? Would SGAIs be able to make plans that involve exchange of currency, even if they don't understand what currency is or how exactly it works? What do humans have to offer SGAIs?
  • How confident can we be that SGAIs will or won't have enough oomph to FOOM once they saturate and optimize/corrupt all existing computing hardware?
  • Assuming such a scenario doesn't immediately lead to a FOOM scenario, how bad is it? To what extent is its badness contingent on the capability/willingness of SGAIs to play nice with humans?
Those are the questions that immediately spring to mind, but I'd like to see who else has thought about this and what they've already considered before I cover too much ground.

My intuition says that thinking about SGAIs in terms of population genetics and microeconomics will somewhat counteract automatic tendencies to imagine cool stories rather than engage in dispassionate analysis. I'd like other suggestions for how to achieve that goal.
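As one starting point for the fixation question in the list above, here is a back-of-the-envelope sketch using the standard deterministic logistic sweep from population genetics. The selection coefficients and the starting frequency are placeholder assumptions, and nothing here pins down what a "generation" means for code spreading over networks, so treat the numbers as purely illustrative.

```python
# Generations for a deterministic logistic sweep, dp/dt = s*p*(1-p),
# to carry a variant from frequency p0 up to p1 (~99% fixation).
# The values of s and p0 below are placeholder assumptions.
import math

def sweep_time(s, p0=1e-6, p1=0.99):
    """Time (in generations) to go from frequency p0 to p1 under dp/dt = s*p*(1-p)."""
    return (1.0 / s) * math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))

for s in (0.01, 0.1, 1.0):
    print(f"s = {s:>4}: ~{sweep_time(s):.0f} generations to 99% fixation")
```

Under this model the sweep time scales roughly as 1/s, so the question largely reduces to how large the effective selection coefficient and the effective generation time are for self-propagating code.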
I'm confused that I don't see people talking about this scenario very much; why is that? Why isn't it the default expected scenario among futurologists? Or have I just not paid close enough attention? Is there already a name for this class of AIs? Is the name better than "semi-general AIs"?

Thanks for any suggestions/thoughts, and my apologies if this has already been discussed at length on LessWrong.

 
