Connecting Your Beliefs (a call for help)

by lukeprog · 2 min read · 20th Nov 2011 · 73 comments


Tags: Taking Ideas Seriously · Personal Blog

A couple weeks after meeting me, Will Newsome gave me one of the best compliments I’ve ever received. He said: “Luke seems to have two copies of the Take Ideas Seriously gene.”

What did Will mean? To take an idea seriously is “to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded,” as in a Bayesian belief network (see right).
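The mechanics can be made concrete with a toy two-node network (a minimal sketch; the structure and all numbers are invented for illustration):

```python
# Toy two-node Bayesian network: A -> B.
# Updating our belief in A should propagate to our belief in B.
p_b_given_a = 0.9      # P(B | A)
p_b_given_not_a = 0.1  # P(B | not-A)

def p_b(p_a):
    """Marginal P(B), computed from the current degree of belief in A."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

print(p_b(0.2))  # ~0.26 -- belief in B before the update
# New evidence raises P(A) from 0.2 to 0.8; a consistent reasoner
# propagates the change through everything downstream of A.
print(p_b(0.8))  # ~0.74 -- belief in B after propagation
```

In a real web of beliefs the graph is much larger, but the principle is the same: changing P(A) is only half the work; every belief connected to A must be recomputed.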

Belief propagation is what happened, for example, when I first encountered that thundering paragraph from I.J. Good (1965):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make.

Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed; I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.

I spent a week looking for counterarguments, to check whether I was missing something, and then accepted that intelligence explosion was likely (so long as scientific progress continued). And though I hadn’t read Eliezer on the complexity of value, I had read David Hume and Joshua Greene. So I already understood that an arbitrary artificial intelligence would almost certainly not share our values.

Accepting my belief update about intelligence explosion, I propagated its implications throughout my web of beliefs. I realized that:

  • Things can go very wrong, for we live in a world beyond the reach of God.
  • Scientific progress can destroy the world.
  • Strong technological determinism is true; purely social factors will be swamped by technology.
  • Writing about philosophy of religion was not important enough to consume any more of my time.
  • My highest-utility actions are either those that work toward reducing AI risk, or those that work toward making lots of money so I can donate to AI risk reduction.
  • Moral theory is not idle speculation but an urgent engineering problem.
  • Technological utopia is possible, but unlikely.
  • The value of information concerning intelligence explosion scenarios is extremely high.
  • Rationality is even more important than I already believed it was.
  • and more.

I had encountered the I.J. Good paragraph on Less Wrong, so I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Nick Bostrom and Steve Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to Singularity Institute’s Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.

My story surprises people because it is unusual. Human brains don’t usually propagate new beliefs so thoroughly.

But this isn’t just another post on taking ideas seriously. Will already offered some ideas on how to propagate beliefs. He also listed some ideas that most people probably aren’t taking seriously enough. My purpose here is to examine one prerequisite of successful belief propagation: actually making sure your beliefs are connected to each other in the first place.

If your beliefs aren’t connected to each other, there may be no paths along which you can propagate a new belief update.

I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences. The trouble is that even proper beliefs can be inadequately connected to other proper beliefs inside the human mind.

I wrote this post because I'm not sure what the "making sure your beliefs are actually connected in the first place" skill looks like when broken down to the 5-second level.

I was chatting about this with atucker, who told me he noticed that successful businessmen may have this trait more often than others. But what are they doing, at the 5-second level? What are people like Eliezer and Carl doing? How does one engage in the purposeful decompartmentalization of one's own mind?


73 comments

Also keep in mind that it's more important to make your beliefs as correct as possible than to make them as consistent as possible. Of course the ultimate truth is both correct and consistent; however, it's perfectly possible to make your beliefs less correct by trying to make them more consistent. If you have two beliefs that do a decent job of modeling separate aspects of reality, it's probably a good idea to keep both around, even if they seem to contradict each other. For example, both General Relativity and Quantum Mechanics do a good job modeling (parts of) reality despite being inconsistent, and we want to keep both of them. Now think about what happens when a similar situation arises in a field, e.g., biology, psychology, your personal life, where evidence is messier than it is in physics.

6amcknight9yThis comment, if expanded, would make for a great main post. Especially if you can come up with a second good example.
4atucker9yIt's also worth noting that they are actually inconsistent though, rather than just not thinking about that issue. I think having connected beliefs is helpful for noticing that.

Something bothers me about this post. Querying my mind for "what bothers me about this post", the items that come up are:

  • the post starts with a pretty picture of a "web of beliefs", but is itself a long string of words
  • the picture is decorative, but certainly not illustrative; it is not a picture of your beliefs
  • the post offers some beliefs that you claim are connected, but none of the reasoning behind the claims
  • the post offers anecdotes, e.g. "successful businessmen may have this trait more often than others", but does not represent them as beliefs in a web of beliefs

The general theme appears to be "this post doesn't practice what it preaches".

If you did represent the beliefs you mention in this post in the form of a connected web, what would that look like?

(On a very few prior occasions I've tried such explicit representations, and was not motivated to keep using them.)

5thomblake9yIndeed, I believe the point of this post was partly to solve that sort of problem. Note it is a request for solutions.
4Morendil9yAgreed; I wasn't saying that as an objection to the underlying idea. I only wanted to state a few things that were bothering me about the post. Quite a few of the replies were off-topic, so possibly others have been bothered by some aspects of it too. My idea is that future requests for help along the same lines could be more effective if presented in a less troublesome form. The part where I do my best to respond to the request is the last couple lines.

I hadn’t noticed that my worldview already implied intelligence explosion.

I'd like to see a post on that worldview. The possibility of an intelligence explosion seems to be an extraordinary belief. What evidence justified a prior strong enough that it could be updated on a single paragraph, written in natural language, to the extent that you would afterwards devote your whole life to that possibility?

I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences.

How do you anticipate your beliefs to pay rent? What kind of evidence could possibly convince you that an intelligence explosion is unlikely? How could your beliefs be surprised by data?

What evidence justified a prior strong enough that it could be updated on a single paragraph

I can't speak for lukeprog, but I believe that "update" is the wrong word to use here. If we acted like Bayesian updaters then compartmentalization wouldn't be an issue in the first place. I.J. Good's paragraph, rather than providing evidence, seems to have been more like a big sign saying "Look here! This is a place where you're not being very Bayesian!". Such a trigger doesn't need to be written in any kind of formal language - it could have been an offhand comment someone made on a completely different subject. It's simply that (to an honest mind), once attention is drawn to an inconsistency in your own logic, you can't turn back.

That said, lukeprog hasn't actually explained why his existing beliefs strongly implied an intelligence explosion. That wasn't the point of this post, but like you, that's a post I'd very much like to see. I'm interested in trying to build a Bayesian case for or against the intelligence explosion (and other singularity-ish outcomes).

You're right that there's a problem obtaining evidence for or against beliefs about the future. I can think of thre... (read more)

0XiXiDu9yThanks, I have yet to learn the relevant math. I only have a very vague idea about Bayesian probability at this point so my use of "update" might very well be wrong, as you indicated. There are some things I am particularly confused about when it comes to probability. I am looking forward to the time when I start learning probability theory (currently doing Calculus). Just two examples: I don't understand why you can't just ignore some possible outcomes. We are computationally limited agents after all. Thus if someone comes along and tells me about a certain possibility that involves a huge amount of utility but doesn't provide enough (how much would be enough anyway?) evidence, my intuition is to just ignore that possibility rather than to assign 50% probability to it. After all, if humanity was forced to throw a fair coin and heads would imply doom then I'd be in favor of doing everything to ensure that the coin doesn't land heads (e.g. a research project that figures out how to throw it in a certain way that maximizes the probability of it landing tails). But that would be crazy because anyone could come along and make unjustified predictions that involve huge utility stakes and I would have to ignore all lower utility possibilities even if there is a lot of empirical evidence in favor of them. Something else I am confused about is how probability is grounded. What rules are used to decide how much certain evidence can influence my probability estimates? I mean, if using probability theory only shifts my use of intuition towards the intuitive assignment of numerical probability estimates of evidence, in favor of a possibility, then it might as well distort my estimates more than trusting my intuition about the possibility alone. Because 1.) if using my intuition to decide how much each piece of evidence changes the overall probability, I can be wrong on each occasion rather than just one (accumulation of error) 2.)
humans are really bad with numbers [http:
4Giles9yI think most people use "update" colloquially, i.e. something along the lines of "what you just said appears to constitute evidence that I wasn't previously aware of, and I should change my beliefs accordingly". I don't know how often rationalists actually plug these things into a formula. This is the problem of Pascal's Mugging [http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/] - I think it's something that people are still confused about. In general, if someone tells you about a really weird possibility you should assign it a probability of a lot less than 50%, as it would essentially be a conjunction of a lot of unlikely events. The problem is that the utility might be so high (or low) that when you multiply it by this tiny probability you still get something huge. I'm still waiting for an answer to that one, but in the meantime it seems worth attacking the problems that lie in the grey area between "easily tractable expected utility calculations" and "classic Pascal's mugging". For me, AI risk mitigation is still in that grey area.
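The arithmetic Giles describes is easy to exhibit (a toy calculation; the probabilities and utilities are made up):

```python
# Expected-utility arithmetic behind Pascal's mugging: a tiny
# probability times a vast utility can still dominate the calculation.
mundane = {"p": 0.99, "utility": 100}     # well-evidenced, modest payoff
mugging = {"p": 1e-10, "utility": 1e15}   # wildly unlikely, vast payoff

def expected_utility(option):
    return option["p"] * option["utility"]

print(expected_utility(mundane))  # ~99
print(expected_utility(mugging))  # ~1e5 -- dominates despite p = 1e-10
```

The puzzle is that rejecting the second option outright seems right, yet naive expected-utility maximization endorses it.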

The problem is that the utility might be so high (or low) that when you multiply it by this tiny probability you still get something huge.

Don't worry about it; if you decline a Pascal's mugging I'll cause positive utility equal to twice the amount of negative utility you were threatened with, and if you accept one I'll cause negative utility equal to twice what you were threatened with.

Trust me.

1orthonormal9yExcuse me, I'm going to go and use this on all my friends.
0endoself9yWhy did I never think of this? I mean I have thought of very similar things in a thought experiment sense and I've even used it to explain to people that paying the mugging cannot be something correct but unpalatable, but it never occurred to me to use it on someone.
0dlthomas9yBayes' Theorem [http://yudkowsky.net/rational/bayes] is precisely that rule.
4XiXiDu9yThat's not what I meant; I have been too vague. It is clear to me how to update on evidence given concrete data or goats behind doors in game shows. What I meant is how one could possibly update on evidence like the victory of IBM Watson at Jeopardy regarding risks of AI. It seems to me that assigning numerical probability estimates to such evidence, which is then used to update on the overall probability of risks from AI, is a very shaky affair that might distort the end result, as one has just shifted the use of intuition towards the interpretation of evidence in favor of an outcome rather than the outcome itself.
6[anonymous]9yCausal analysis is probably closer to what you're looking for. It displays stability under (small) perturbation of relative probabilities, and it's probably closer to what humans do under the hood than Bayes' theorem. Pearl often observes that humans work with cause and effect with more facility than numerical probabilities.
0pnrjulius9yNumerical stability is definitely something we need in our epistemology. If small errors make the whole thing blow up, it's not any good to us, because we know we make small errors all the time.
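One way to see the stability concern (a sketch; the likelihood ratios are invented): in odds form, a small systematic error in each estimated likelihood ratio compounds multiplicatively across many pieces of evidence.

```python
# Odds-form Bayesian updating over many pieces of evidence. Each true
# likelihood ratio is 2.0, but the estimator overstates each one by 10%.
def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

n = 20
true_odds = posterior_odds(1.0, [2.0] * n)
noisy_odds = posterior_odds(1.0, [2.0 * 1.1] * n)

print(true_odds)               # 2^20, about 1.05e6
print(noisy_odds / true_odds)  # 1.1^20, about a 6.7x overestimate
```

Twenty updates, each overstated by 10%, leave the posterior odds off by a factor of about 6.7, which is the kind of accumulated error XiXiDu worries about.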
2lessdazed9yThere is no reason to believe intelligence stops being useful for problem solving as one gets more of it, but I can easily imagine evidence that would suggest that. A non-AI intelligence above human level, such as a human with computer processors integrated into his or her brain, a biologically enhanced human, etc. might prove to be no more usefully intelligent than Newton, Von Neumann, etc. despite being an order of magnitude smarter by practical measures. Leveraging makes little sense according to many reasonable utility functions. If one is guessing the color of random cards, and 70% of the cards are red, and 30% are blue, and red and blue pay out equally, one should always guess red each turn. Where utility is related to the ln of money, it makes sense to diversify, but that is different from a life-impact sort of case where one seeks to maximize the utility of others. The outside view is to avoid total commitment lest one be sucked into a happy death spiral and suffer from the sunk costs fallacy, but if those and similar fallacies can be avoided, total commitment makes sense.
1SimonF9yExtraordinary compared to what? We already know that most people are insane, so that belief not being shared by almost everybody doesn't make it unlikely a priori. In some ways the intelligence explosion is a straightforward extrapolation of what we know at the moment, so I don't think your criticism is valid here. I think one could tell a reasonably competent physicist 50 years prior to Schrödinger how to derive quantum mechanics in one paragraph of natural language. Human language can contain lots of information, especially if speaker and listener already share a lot of concepts. I'm not sure why you've written your comment; are you just using the opportunity to bring up this old topic again? I find myself irritated by this, even though I probably agree with you :)
2XiXiDu9yFor me there are a lot of things that sound completely sane but might be just bunk. Antimatter weapons, grey goo, the creation of planet eating black holes by particle accelerators, aliens or string theory. I don't have the necessary education to discern those ideas from an intelligence explosion. They all sound like possibilities to me that might or might not be true. All I can do is to recognize their extraordinary status and subsequently demand the peer-review and assessment of those ideas by a larger and educated audience. Otherwise I run the risk of being swayed by the huge utility associated with those ideas, I run the risk of falling prey to a Pascalian mugger. Is Luke Muehlhauser that competent when it comes to all the fields associated with artificial intelligence research? It's not old, it becomes more relevant each day. Since I first voiced skepticism about the topic they expanded to the point of having world-wide meetings. At least a few people playing devil's advocate is a healthy exercise in my opinion :-)
4AlexSchell9yAre you playing devil's advocate?
0Nisan9yFor what it's worth, I consider these things to have very different levels and kinds of implausibility.

This post made me realize just how important it is to completely integrate the new things you learn.

I have been reading a lot of books and blogs on the subject of students who finish school with honors but don't seem to work very hard while doing so. I also met one of those people in person (he finished an entire 4-year curriculum with honors in just 3 months and is now a professor in that field).

It all boils down to the same thing: Whatever the textbook is trying to tell you, make sure you integrate that in your life. Only then will you see if you really un... (read more)

2Technoguyrob9yI am one such person. I finished college at the age of 16, and I knew I was merely very good at guessing the teacher's password. People's remarks about my intelligence were dismissed by me internally, because I was aware of my own ignorance. However, what you say can be difficult to apply in practice during a semester. I see formal education as a method for gathering data, which you can use for Bayesian propagation after the fact. This is why it can feel like you learn much more thinking between semesters, rather than during. Your notion of the necessity of integration is uncorrelated with outlier students. Given an outlier student, I would be surprised if active integration of textbook data was lower than given a non-outlier student. In both cases this conditional probability is, sadly, very small.
0tetsuo559yI too am very good at guessing the teacher's password in addition to really learning the textbook contents. I am talking specifically about those students that do not use guessing the teacher's password as a way to finish with honors. I always do the propagation during the learning itself and improve upon it after the fact (I'll suddenly realize that something is off or changed days later). I said I had a hard time explaining it, and your comment makes extra clear that I failed. I will use your feedback to improve the text I have in mind.
1[anonymous]9yI would find such a post very useful.
0tetsuo559yThis video tries to explain what I mean; I hope the inferential distance is not too far: http://www.youtube.com/watch?feature=player_embedded&v=1F3vmNeyOvU [http://www.youtube.com/watch?feature=player_embedded&v=1F3vmNeyOvU]
0[anonymous]9yI have stumbled on links to his books and blogs; many on the IRC channel were rather sceptical of the usefulness of his advice. My own prior was rather low. Nevertheless I would very much like LWers to share their impressions on this, since there is something there that looks almost like it could work.
0Metus9yI would also be interested and upvoted both this post and the parent for encouragement.
0tetsuo559yI found a video that explains what I mean at a very basic level: http://www.youtube.com/watch?feature=player_embedded&v=1F3vmNeyOvU [http://www.youtube.com/watch?feature=player_embedded&v=1F3vmNeyOvU]

I spent a week looking for counterarguments, to check whether I was missing something

What did you find? Had you missed anything?

Part of my day job involves doing design reviews for alterations to relatively complex software systems, often systems that I don't actually understand all that well to begin with. Mostly, what I catch are the integration failures; places where an assumption in one module doesn't line up quite right with the end-to-end system flow, or where an expected capability isn't actually being supported by the system design, etc.

Which isn't quite the same thing as what you're talking about, but has some similarities; being able to push through from an understanding... (read more)

0thomblake9yThat's an interesting metaphor. I wonder if it doesn't actually support compartmentalization. In software, you don't want a lot of different links between the internals of modules, and so it seems you might not want lots of links between different belief clusters. Just make sure they're connected with a clean API. Of course, that's not really compartmentalization, that's just not drawing any more arrows on your Bayes net than you need to. If your entire religious belief network really hangs on one empirical claim, that might be a sufficient connection between it and the rest of your beliefs, at least for efficiency.
1TheOtherDave9yI agree that this approach likely works well for software precisely because software tends to be built modularly to begin with... it probably would work far less well for analyzing evolved rather than constructed systems.
0pnrjulius9yI wouldn't be so sure. Tooby and Cosmides keep finding evidence that our brains are organized to some degree in terms of adaptive modules, like "cheater detection" and "status seeking" which work spectacularly well in the domain for which they evolved, and then fail miserably when expanded to other domains. If we had a "modus tollens" module or a "Bayesian update" module, we'd be a lot smarter than we are; but apparently we don't, because most people are atrociously bad at modus tollens and Bayesian updating, but remarkably good at cheater detection and status seeking. (If you haven't seen it already, look up the Wason task experiments. They clearly show that human beings fail miserably at modus tollens---unless it's formulated in terms of cheater detection, in which case we are almost perfect.)
0TheOtherDave9yYup, I'm familiar with the cheater stuff, and I agree that the brain has subsystems which work better in some domains than others. The thing about a well-designed modular architecture, though, is not just that it has task-optimized subsystems, but that those subsystems are isolated from one another and communicate through interfaces that support treating them more or less independently. That's what makes the kind of compartmentalization thomblake is talking about feasible. If, instead, I have a bunch of subsystems that share each other's code and data structures, I may still be able to identify "modules" that perform certain functions, but if I try to analyze (let alone optimize or upgrade) those modules independently I will quickly get bogged down in interactions that require me to understand the whole system.

This is interesting to me in a sort of tangential way. It seems like studying philosophy exercises this tendency to propagate your beliefs in order to make them coherent. In fact logical belief propagation seems to embody a large aspect of traditional philosophy, so I would expect that on average someone who studies philosophy would have this tendency to a greater degree than someone who doesn't.

It would be interesting to me if anyone has seen any data related to this, because it feels intuitively true that studying philosophy changed my way of thinking, but it's of course difficult to pinpoint exactly how. This seems like a big part of it.

1thomblake9yThis seems right to me. I know a fair number of philosophers, and at least some varieties of them seem to naturally disfavor compartmentalization. For example, I was originally an atheist for purely methodological reasons. Religion invites dead dogma, and dead dogma kills your ability to discover true beliefs (for exactly the same reason "0 and 1 are not probabilities"). I felt therefore that adhering to a religion while being a philosopher was professionally irresponsible.
0jmmcd9yI agree, but more generally, philosophy takes more or less as an axiom the idea that a person's beliefs should not be self-contradictory. Many people operate without any such axiom, or (more importantly) without awareness that such an idea exists. Probably any type of scientific training contributes to raising consciousness from this very low base level.
1Zetetic9yWith that view, another issue is to what extent philosophy and scientific training are more attractive to people that have a tendency to avoid compartmentalization and to what extent studying philosophy and science assists in/amplifies the ability to propagate beliefs, if at all. It seems like the sort of study that goes into these areas of study would provide the learner with heuristics for belief propagation, though each area might come equipped with unique heuristics.
[anonymous]9y 5

This reminds me of a blog post by Julia Galef I read a couple weeks ago: The Penrose Triangle of Belief.

0Nisan9yBefore I clicked on the link I thought it was going to be about this [http://books.google.com/books?id=gDbOAK89tmcC&pg=PA414&lpg=PA414].

That's actually a good question. Let me rephrase it to something hopefully clearer:

Compartmentalization is an essential safety mechanism in the human mind; it prevents erroneous far mode beliefs (which we all adopt from time to time) from having disastrous consequences. A man believes he'll go to heaven when he dies. Suicide is prohibited in a patch for the obvious problem, but there's no requirement to make an all-out proactive effort to stay alive. Yet when he gets pneumonia, he gets a prescription for penicillin. Compartmentalization literally saves his... (read more)

7drethelin9yCompartmentalization may make ridiculous far beliefs have less of an impact on the world, but it also allows those beliefs to exist in the first place. If your beliefs about religion depended on the same sort of evidence that underpins your beliefs about whether your car is running, then you could no more be convinced of religion than you could be convinced by a mechanic that your car "works" even though it does not start.
5rwallace9ySo your suggestion is that we should de-compartmentalize, but in the reverse direction to that suggested by the OP, i.e. instead of propagating forward from ridiculous far beliefs, become better at back-propagating and deleting same? There is certainly merit in that suggestion if it can be accomplished. Any thoughts on how?
4drethelin9yYou don't understand. Decompartmentalization doesn't have a direction. You don't go forwards towards a belief or backwards from a belief, or whatever. If your beliefs are decompartmentalized that means that the things you believe will impact your other beliefs reliably. This means that you don't get to CHOOSE what you believe. If you think the singularity is all important and worth working for, it's BECAUSE all of your beliefs align that way, not because you've forced your mind to align itself with that belief after having it.
4rwallace9yI understand perfectly well how a hypothetical perfectly logical system would work (leaving aside issues of computational tractability etc.). But then, such a hypothetical perfectly logical system wouldn't entertain such far mode beliefs in the first place. What I'm discussing is the human mind, and the failure modes it actually exhibits.
3marchdown9yWhat is that evidence against singularity which you're alluding to?
8rwallace9yI discuss some of it at length here: http://lesswrong.com/lw/312/the_curve_of_capability/ [http://lesswrong.com/lw/312/the_curve_of_capability/] I'll also ask the converse question: given that you can't typically prove a negative (I can't prove the nonexistence of psychic powers or flying saucers either), if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?

if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?

I'm not marchdown, but:

Estimating the probability of a Singularity requires looking at various possible advantages of digital minds and asking what would constitute evidence against such advantages being possible. Some possibilities:

  • Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.
  • Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
  • Superior parallel power: Evidence against would be an indication of extra parallel power not being useful for a mind that already has human-equivalent (whatever that means) parallel power.
  • Improved algorithms: Evidence against would be the human brain's algorithms already being perfectly optimized and with no further room for improvement.
  • Designing new mental modules: Evidence against would be evidence that the human brain's existing mental modules are already sufficient for any cognitive task with any real-world relevance.
  • Modifiable motivation systems: Evidence against would be evidence that human
... (read more)
4rwallace9yOkay, to look at some of the specifics: The linked article is amusing but misleading; the described 'ultimate laptop' would essentially be a nuclear explosion. The relevant physical limit is ln(2)kT energy dissipated per bit erased; in SI units at room temperature this is about 3e-21 joules. We don't know exactly how much computation the human brain performs; middle-of-the-road estimates put it in the ballpark of 1e18 several-bit operations per second for 20 watts, which is not very many orders of magnitude short of even the theoretical limit imposed by thermodynamics, let alone whatever practical limits may arise once we take into account issues like error correction, communication latency and bandwidth, and the need for reprogrammability. Indeed we hit this some years ago. Of course as you observe, it is impossible to prove serial speed won't start increasing again in the future; that's inherent in the problem of proving a negative. If such proof is required, then no sequence of observations whatsoever could possibly count as evidence against the Singularity. Of course uses can always be found for more parallel power. That's why we humans make use of it all the time, both by assigning multiple humans to a task, and increasingly by placing multiple CPU cores at the disposal of individual humans. Finding these is (assuming P!=NP) intrinsically difficult; humans and computers can both do it, but neither will ever be able to do it easily. As for improved algorithms. An advantage when they reduce akrasia, a disadvantage when they make you more vulnerable to wireheading. Indeed there won't, at least initially; supercomputers don't grow on trees. Of course, computing power tends to become cheaper over time, but that does take time, so no support for hard takeoff here. Matt Mahoney argues [http://mattmahoney.net/agi2.html] that this will indeed happen because an irreducible fraction of the knowledge of how to do a job is specific to that job. 
Some of the more interesti...
Giles (+5, 9y): Just out of interest... assume my far beliefs take the form of a probability distribution over possible future outcomes. How can I be "skeptical" of that? Given that something will happen in the future, all I can do is update in the direction of a different probability distribution. In other words, which direction am I likely to be biased in?
Eugine_Nier (+9, 9y): In the direction of overconfidence, i.e., assigning too much probability mass to your highest-probability theory.
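Eugine_Nier's point can be illustrated with a toy calculation (all numbers invented): piling extra probability mass onto your favorite theory raises the surprise you suffer, on average, under the true distribution.

```python
import math

true_dist     = [0.5, 0.3, 0.2]    # how outcomes actually fall out (invented)
calibrated    = [0.5, 0.3, 0.2]    # honest beliefs matching the truth
overconfident = [0.8, 0.12, 0.08]  # extra mass piled on the favorite theory

def expected_log_loss(truth, belief):
    """Average surprise (in nats) a believer suffers under the true distribution."""
    return -sum(t * math.log(b) for t, b in zip(truth, belief))

loss_cal = expected_log_loss(true_dist, calibrated)
loss_over = expected_log_loss(true_dist, overconfident)
print(loss_cal, loss_over)  # the overconfident believer is more surprised on average
```

Any distortion away from the calibrated distribution increases expected log loss; overconfidence is simply the distortion humans most reliably make.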
rwallace (+5, 9y): We should update away from beliefs that the future will resemble a story, particularly a story whose primary danger will be fought by superheroes (most particularly for those of us who would personally be among the superheroes!), and towards beliefs that the future will resemble the past and the primary dangers will be drearily mundane [http://lesswrong.com/lw/10n/why_safety_is_not_safe/].
Nornagest (+7, 9y): The future will certainly resemble a story -- or, more accurately, will be capable of being placed into several plausible narrative frames, just as the past has. The bias you're probably trying to point to is in interpreting any particular plausible story as evidence for its individual components -- or, for that matter, against.

The conjunction fallacy [http://lesswrong.com/lw/ji/conjunction_fallacy/] implies that any particular vision of a Singularity-like outcome is less likely than our untrained intuitions would lead us to believe. It's an excellent reason to be skeptical of any highly derived theories of the future -- the specifics of Ray Kurzweil's singularity timeline, for example, or Robin Hanson's Malthusian emverse. But I don't think it's a good reason to be skeptical of any of the dominant singularity models in general form. Those don't work back from a compelling image to first principles; most of them don't even present specific consequential predictions, for fairly straightforward reasons. All the complexity is right there on the surface, and attempts to narrativize it inevitably run up against limits of imagination. (As evidence, the strong Singularity has been fairly poor at producing fiction when compared to most future histories of comparable generality; there's no equivalent of Heinlein writing stories about nuclear-powered space colonization, although there's quite a volume of stories about weak or partial singularities.)

So yes, there's not going to be a singleton AI bent on turning us all into paperclips. But that's a deliberately absurd instantiation of a much more general pattern. I can conceive of a number of ways in which the general pattern too might be wrong, but the conjunction fallacy doesn't fly; a number of attempted debunkings, meanwhile, do suffer from narrative fixation issues. Superhero bias is a more interesting question -- but it's also a more specific one.
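The conjunction-fallacy point can be made concrete with a toy sketch (all probabilities invented for illustration): every detail added to a scenario can only lower its joint probability, even when each detail looks individually plausible.

```python
# Each clause looks plausible on its own, but the joint probability of the
# full story shrinks with every added detail (probabilities invented).
details = {
    "machine intelligence is feasible": 0.9,
    "it arrives this century": 0.6,
    "takeoff is fast": 0.4,
    "a single project pulls far ahead": 0.3,
}

p = 1.0
for claim, prob in details.items():
    p *= prob  # assumes independence, which only flatters the scenario
    print(f"...and {claim}: joint probability now {p:.3f}")

# The full conjunction is never more likely than its weakest component:
assert p <= min(details.values())
```

This is why the fallacy bites hardest on "highly derived" visions of the future: each vivid detail makes the story feel more plausible while making it strictly less probable.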
rwallace (+3, 9y): Well, any sequence of events can be placed in a narrative frame with enough of a stretch, but the fact remains that different sequences of events differ in their amenability to this; fiction is not a random sampling from the space of possible things we could imagine happening, and the Singularity is narratively far stronger than most imaginable futures, to a degree that indicates a bias we should correct for. I've seen a fair bit of strong Singularity fiction at this stage, though, being, well, singular, it tends not to be amenable to repeated stories by the same author the way Heinlein's vision of nuclear-powered space colonization was.
lessdazed (+1, 9y): The best way to colonize Alpha Centauri has always been to wait for technology to improve rather than to launch an expedition, but it's impossible for that to remain true indefinitely. Short of direct mind-to-mind communication or something similar, with a concurrent halt to AI progress, AI advances will probably outpace advances in human communication in the near to medium term. It seems unreasonable to believe that human minds, optimized for considerations such as politicking in addition to communication, will be able to communicate just as well as designed AIs. Human mind development was constrained by ancestral energy availability, head size, and so on, so it's unlikely that we represent the optimal mind size for forming a group of minds, even assuming an AI isn't able to reap huge efficiencies by operating essentially as a single mind, regardless of scale.
rwallace (+4, 9y): Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl's law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.
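The Amdahl's-law point can be written out directly; a minimal sketch (function name mine):

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of total cost is sped up by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Communication at 10% of total cost: even a near-infinite improvement to it
# caps the overall gain at 1 / (1 - 0.10), about 1.11x.
print(amdahl_speedup(0.10, 1e12))  # ~1.111
print(1.0 / (1.0 - 0.10))          # the limit as s grows without bound
```

The untouched 90% of the cost dominates no matter how good the improved component becomes, which is the sense in which "good enough to no longer be a major bottleneck" ends the argument.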
XiXiDu (-1, 9y): It is often cited how much faster expert systems are in their narrow areas of expertise. But does that mean that the human brain is actually slower, or that it can't focus its resources on certain tasks? Take, for example, my ability to simulate some fantasy environment, off the top of my head, in front of my mind's eye. Or the ability of humans to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. Our best computers don't even come close to that. Chip manufacturers are already earning most of their money by making their chips more energy-efficient and more parallel. We simply don't know how efficient the human brain's algorithms are. You can't just compare artificial algorithms with the human ability to accomplish tasks that were never selected for by evolution.

This is an actual feature. It is not clear that you can have a general intelligence with a huge amount of plasticity that would work at all rather than messing itself up.

This is an actual feature; see dysfunctional autism.

You don't really anticipate being surprised by evidence on this point, because your definition of "minds" doesn't even exist and therefore can't be shown not to be copyable. And regarding brains, show me some neuroscientists who think that minds are effectively copyable.

Cooperation is a delicate quality. Too much and you get frozen; too little and you can't accomplish much. Human science is a great example of a balance between cooperation and useful rivalry. How is a collective intellect of AGIs going to preserve the right balance without mugging itself into pursuing insane expected-utility calculations?

Wait, are you saying that the burden of proof is on those who are skeptical of a Singularity? Are you saying that the null hypothesis is a rapid takeover? What evidence allowed you to form that hypothesis in the first place?
Making up unfounded conjectures and then telling others to disprove them will le
Morendil (+1, 9y): How do you propose that would happen?
rwallace (+3, 9y): We've had various kinds of Luddism before, but this one is particularly lethal in that it is a form that appeals to people who had been technophiles. If it spreads enough, the best-case scenario is that the pool of people willing to work on real technological progress shrinks; the worst case is regulation that snuffs out progress entirely, and we get to sit around bickering about primate politics until whatever window of time we had runs out.
Morendil (+3, 9y): That's awfully vague. "Whatever window of time we had": what does that mean? There's one kind of "technological progress" that SIAI opposes, as far as I can tell: working on AGI without an explicit focus on Friendliness. Now, if you happen to think that AGI is a must-have for ensuring the long-term survival of humanity, it seems to me that you're already pretty much on board with the essential parts of SIAI's worldview, indistinguishable from them as far as the vast majority is concerned. Otherwise, there's plenty of tech that is entirely orthogonal to the claims of SIAI: cheap energy, health, MNT, improving software engineering (so-called), and so on.
rwallace (+3, 9y): The current state of the world is unusually conducive to technological progress. We don't know how long this state of affairs will last. Maybe a long time, maybe a short time. To fail to make progress as rapidly as we can is to gamble the entire future of intelligent life on its lasting a long time, without evidence that it will do so. I don't think that's a good gamble. I have seen claims to the contrary from a number of people, from Eliezer himself a number of years ago up to another reply to your comment right now. If SIAI were to officially endorse the position you just suggested, my assessment of their expected utility would significantly increase.
Morendil (+6, 9y): Well, SIAI isn't necessarily a homogeneous bunch of people with respect to what they oppose or endorse, but did you look, for instance, at Michael Anissimov's entries on MNT? (Focusing on that because it's the topic of Risto's comment and you seem to see it as a confirmation of your thesis.) You don't get the impression that he thinks it's a bad idea; quite the contrary: http://www.acceleratingfuture.com/michael/blog/category/nanotechnology/

Here is Eliezer on the SL4 mailing list: ...

The Luddites of our times are (for instance) groups like the publishing and music industries; using that label to describe the opinions of people affiliated with SIAI just doesn't make sense, IMO.
MichaelAnissimov (+2, 9y): Human-implemented molecular nanotechnology is a bad idea; I just talk about it to draw in people who think it's important. MNT knowledge is a good filter/generator for SL3-and-beyond thinkers. MNT without friendly superintelligence would be nothing but a disaster. It's true that SIAI isn't homogeneous, though. For instance, Anna is much more optimistic about uploads than I am personally.
rwallace (+1, 9y): Thanks for the link; yes, that does seem to be a different opinion (and some very interesting posts). I agree with you about the publishing and music industries. I consider the current rampant abuse of intellectual property law to be a bigger threat than the Singularity meme, sufficiently so that if your comparative advantage is in politics, opposing that abuse probably has the highest expected utility of anything you could be doing.
Risto_Saarelma (-1, 9y): Molecular nanotechnology, and anything else that can be weaponized to let a very small group of people effectively kill a very large group of people, is probably something SIAI-type people would like to see countered with a global sysop scenario from the moment it gets developed.

The trouble is that even proper beliefs can be inadequately connected to other proper beliefs inside the human mind.

Proper beliefs can be too independent: if you have a belief network A -> B and the probabilities of 'B given A' and 'B given not-A' are similar, then A doesn't have much value when you care about B. Learning A barely changes your belief in B, because the two beliefs are barely connected.

But my guess is most human brains have "A -> B" and don't have "B given A" and "B given not A". So they don't check the difference, so they...
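The distinction between holding "A -> B" and holding the two conditional probabilities can be sketched numerically (toy numbers): how much observing A moves your belief in B depends entirely on the gap between P(B|A) and P(B|not-A).

```python
def prior_b(p_a, p_b_given_a, p_b_given_not_a):
    """P(B) before observing A, marginalizing over A in the network A -> B."""
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

def update_from_a(p_a, p_b_given_a, p_b_given_not_a):
    """How far observing A moves the belief in B away from its prior."""
    return abs(p_b_given_a - prior_b(p_a, p_b_given_a, p_b_given_not_a))

# Weakly connected: P(B|A) is close to P(B|not-A); observing A barely moves B.
print(update_from_a(0.5, 0.52, 0.48))  # ~0.02

# Strongly connected: the same arrow A -> B, but now A carries real information.
print(update_from_a(0.5, 0.95, 0.05))  # ~0.45
```

Both cases have the same network structure "A -> B"; only the conditional probabilities, which most brains never write down, determine whether the arrow actually does any work.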

I don't know if this is an answer or a rephrasing of the problem, but "making sure your beliefs are propagated to the rest of your knowledge" is what I classified as the Level 2 Understanding.