I, the author, no longer endorse this post.


 

Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.

 

Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff and are able to make connections between the things that they know: seeing which nodes of knowledge are relevant to their beliefs or decisions, or, if that fails, knowing which algorithm they should use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair number of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.

The common trait of Michael and Eliezer and all top tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to develop their specific webs of knowledge, instead of developing one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of belief nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do exactly what I and probably almost anybody else in the world would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).

Taking an idea seriously means:

  • Looking at how a new idea fits in with your model of reality and checking for contradictions or tensions that may indicate the need to update a belief, and then propagating that belief update through the entire web of beliefs in which it is embedded (a toy sketch of this propagation follows this list). When a belief or a set of beliefs changes, that can in turn have huge effects on your overarching web of interconnected beliefs. (The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.) Failing to propagate that change leads to trouble. Compartmentalization is dangerous.
  • Noticing when an idea seems to be describing a part of the territory where you have no map. Drawing a rough sketch of the newfound territory and then seeing in what ways that changes how you understand the parts of the territory you've already mapped.
  • Not just examining an idea's surface features and then accepting or dismissing it. Instead looking for deep causes. Not internally playing a game of reference class tennis.
  • Explicitly reasoning through why you think the idea might be correct or incorrect, what implications it might have both ways, and leaving a line of retreat in both directions. Having something to protect should fuel your curiosity and prevent motivated stopping.
  • Noticing confusion.
  • Recognizing when a true or false belief about an idea might lead to drastic changes in expected utility.
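
To make the 'web of beliefs' metaphor concrete, here is a minimal sketch in Python of what propagating a belief update might look like if you model beliefs as a dependency graph. The belief names, the dependency structure, and the function name are all made up for illustration; this is a toy picture of the metaphor, not a real Bayesian network and not anything the examples above depend on.

    # Toy model: each belief maps to the beliefs that depend on it.
    # The names and edges below are purely illustrative.
    from collections import deque

    depends_on_me = {
        "deism is true": ["prayer is useful", "morality is divinely grounded"],
        "prayer is useful": ["pray before exams"],
        "morality is divinely grounded": ["ethics requires scripture"],
    }

    def propagate_update(changed_belief, graph):
        """Return every downstream belief that needs re-examination
        after changed_belief is updated."""
        to_revisit = []
        queue = deque([changed_belief])
        seen = {changed_belief}
        while queue:
            node = queue.popleft()
            for child in graph.get(node, []):
                if child not in seen:
                    seen.add(child)
                    to_revisit.append(child)
                    queue.append(child)
        return to_revisit

    # Flipping the root belief forces a revisit of all four downstream beliefs.
    print(propagate_update("deism is true", depends_on_me))

The only point of the sketch is that a single update at a central node touches every node reachable from it; failing to walk that whole reachable set is exactly what compartmentalization looks like in this picture.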

There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:

  • Existential risks and the possibilities for methods of prevention thereof.
  • Molecular nanotechnology.
  • The technological singularity (especially timelines and planning).
  • Cryonics.
  • World economic collapse.

Some potentially important ideas that I readily admit to not yet having taken seriously enough:

  • Molecular nanotechnology timelines.
  • Ways to protect against bioterrorism.
  • The effects of drugs of various kinds and methodologies for researching them.
  • Intelligence amplification.

And some ideas that I did not immediately take seriously when I should have:

  • Tegmark's multiverses and related cosmology and the manyfold implications thereof (and the related simulation argument).
  • The subjective for-Will-Newsome-personally irrationality of cryonics.1
  • EMP attacks.
  • Updateless-like decision theory and the implications thereof.
  • That philosophical and especially metaphysical intuitions are not strong evidence.
  • The idea of taking ideas seriously.
  • And various things that I probably should have taken seriously, and would have if I had known how to, but that I now forget because I failed to grasp their gravity at the time.

I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. They won't call you out when you fail to do so. They won't hold you to a high standard. You must hold yourself to that standard, or you'll fail.

Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help that which I value to flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation works and that it's safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world -- maybe not an explicit desire for instrumental rationality, but at least epistemic rationality -- then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but that at the very least is going to change for the better the way they think about the world.

Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger, that is the ideal we must approximate.

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize this and move on to other ideas. Brains are fragile and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friends, and even those with exceptionally high SAN points can't read too much Eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2

What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?

 


1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider... erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof and I'm going to bug him about it every day until he writes up the article.

2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany: 

If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.

257 comments

Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.

A decade or thereabouts ago, I read a book called Darwin's Black Box, whose thesis was that while gradual evolution could work for macroscopic features of organisms, it could not explain biochemistry, because the intricate molecular machinery of life did not have viable intermediate stages. The author is a professional biochemist, and it shows; he's really done his homework, and he describes many specific cases in great detail and carefully sets out his reasons for claiming gradual evolution could not have worked.

Oh, and I was able to demolish every one of his arguments in five minutes of armchair thought.

How did that happen? How does a professional put so much into such carefully constructed arguments that end up being so flimsy a layman can trivially demolish them? Well I didn't know anything else about the guy until I ran a Google search just now, but it confirms what I found, and most Less Wrong readers will find, to be the obvious explanation.

If he had only done what most scientists in his position do, and said "I have faith in God," and kept that compartmentalized from his work, he would have avoided a gross professional error.

Of course that particular error could have been avoided by being an atheist, but that is not a general solution, because we are not infallible. We are going to end up taking on some mistaken ideas; that's part of life. You cite the Singularity as your primary example, and it is a good one, for it is a mistaken idea, and one that is immensely harmful if not compartmentalized. But really, it seems unlikely there is a single human being of significant intellect who does not hold at least one bad idea that would cause damage if taken seriously.

We should think long and hard before we throw away safety mechanisms, and compartmentalization is one of the most important ones.

Compartmentalized ships would be a bad idea if small holes in the hull were very common and no one bothered with fixing them as long as they affected only one compartment.

It seems like he had one-way decompartmentalisation, so that his belief in god was weighing on "science" but not the other way round.

That's the idea behind Reason as memetic immune disorder.

Sure, compartmentalization can protect you from your failures, but it also protects you from your successes.

If you can understand Reason as memetic immune disorder, you should also be able to get to the level of taking this into account. That is, think about how there is a long history of failure to compartmentalize causing failures - a history of people making mistakes - and ask yourself if you're still confident enough to act on it.

Human thought is by default compartmentalized for the same good reason warships are compartmentalized:

I'm going to ask you to recall your 2010 self now, and ask if you were actually trying to argue for a causal relationship that draws an arrow from the safety of compartmentalization to its existence. This seems wrong. It occurs to me that if you're evolution, and you're cobbling together a semblance of a mind, compartmentalization is just the default state, and it doesn't even occur to you (because you're evolution and literally mindless) to build bridges between parts of the mind.

Well, even if we agree that compartmentalized minds were the first good-enough solution, there's a meaningful difference between "there was positive selection pressure towards tightly integrated minds, though it was insufficient to bring that about in the available time" and "there was no selection pressure towards tightly integrated minds" and "there was selection pressure towards compartmentalized minds".

Rwallace seems to be suggesting the last of those.

Point, but I find the middle of your three options most plausible. Compartmentalization is mostly a problem in today's complex world; I doubt it was even noticeable most of the time in the ancestral environment. False beliefs, e.g. religion, look like merely social, instrumental, tribal-bonding mental gestures rather than aliefs.

Yeah, I dunno. From a systems engineering/information theory perspective, my default position is "Of course it's adaptive for the system to use all the data it has to reason with; the alternative is to discard data, and why would that be a good idea?"

But of course that depends on how reliable my system's ability to reason is; if it has failure modes that are more easily corrected by denying it certain information than by improving its ability to reason efficiently with that data (somewhat akin to programmers putting input-tests on subroutines rather than writing the subroutine so as to handle that kind of input), evolution may very well operate in that fashion, creating selection pressure towards compartmentalization.

Or, not.

What about facts from the environment - is it good to gloss over the applicability of something you observed in one context to another context? Compartmentalization may look like a good idea when you are spending over a decade putting an effective belief system into children. It doesn't look so great when you have to process data from the environment. We even see correlations where there aren't any.

Information compartmentalization may look great if the crew of the ship is only going to engage in pointless idle debates over the intercom. Not so much when they need to coordinate actions.

I'm not sure I'm understanding you here.

I agree that if "the crew" (that is, the various parts of my brain) are sufficiently competent, and the communications channels between them sufficiently efficient, then making all available information available to everyone is a valuable thing to do. OTOH, if parts of my brain aren't competent enough to handle all the available information in a useful way, having those parts discard information rather than process it becomes more reasonable. And if the channels between those parts are sufficiently inefficient, the costs of making information available to everyone (especially if sizable chunks of it are ultimately discarded on receipt) might outweigh the benefits.

In other words, glossing over the applicability of something I observed in one context to another context is bad if I could have done something useful by not glossing over it, and not otherwise. Whether that was reliably the case for our evolutionary predecessors in their environment, I don't know.

Well, one can conjecture counterproductive effects of intelligence in general and of any aspect of it in particular, and sure, there were a few, but it stands that we did evolve intelligence. Keep in mind that without a highly developed notion of verbal 'reasoning' you may not be able to have the ship flooded with abstract nonsense in the first place. The stuff you feel, it tracks the probabilities.

Can you clarify the relationship between my comment and counterproductive effects of intelligence in general? I'm either not quite following your reasoning, or wasn't quite clear about mine.

A general-purpose intelligence will, all things being equal, get better results with more data.

But we evolved our cognitive architecture not in the context of a general-purpose intelligence, but rather in the context of a set of cognitive modules that operated adaptively on particular sets of data to perform particular functions. Providing those modules with a superset of that data might well have gotten counterproductive results, not because intelligence is counterproductive, but because they didn't evolve to handle that superset.

In that kind of environment, sharing all data among all cognitive modules might well have counterproductive effects... again, not because intelligence is counterproductive, but because more data can be counterproductive to an insufficiently general intelligence.

The existence of evolved 'modules' within the frontal cortex is not settled science and is in fact controversial. It's indeed hard to tell how much data we share, though. Maybe without the habit of abstract thought, not so much. On the other hand, the data about human behaviours seems important.

The default state is that anything which is never linked to limb movement or other outputs might as well not exist in the first place.

I think the issue with compartmentalization is that integration of beliefs is a background process that ensures a coherent response, so that one part of the mind does not come up with one action and another part with a different one, which would make you e.g. drive a car into a tree because one part of the brain wants to turn left and the other wants to turn right.

The compartmentalization of information is anything but safe. When you compartmentalize, say, your political orientation from your logical thinking, I can make you do either A or B by presenting the exact same situation in either a political or a logical way, so that one of the parts activates and arrives at either action A or action B. That is not safe. That is "it gets you eaten one day" unsafe.

And if you compartmentalize the decision-making on a warship, it will fail to coordinate the firing of its guns and will be sunk, even if it can take more holes. Consider a warship that is being attacked by several enemies. If you don't coordinate the firing of torpedoes, you'll have overkill fire at some of the ships, wasting firepower. You'll be sunk. It is a known issue in RTS games: you can beat a human with a pretty dumb AI if it simply coordinates fire between units better.

The biochemist in the example above is a single cherry-picked case, set against the majority of scientists for whom the process has worked correctly and who stopped believing that God created animals, or who have failed to integrate their beliefs and are ticking time bombs with respect to producing bad hypotheses. He is an edge case between atheists and believers.

The compartmentalization of information is anything but safe.

I agree in most cases; however, there are some cases where ideas are very Big and Scary and Important where a full propagation through your explicit reasoning causes you to go nuts. This has happened to multiple people on Less Wrong, whom I will not name for obvious reasons.

I would like to emphasize that I agree in most cases. Compartmentalization is bad.

I think it happens due to ideas being wrong and/or being propagated incorrectly. Basically, you would need extremely high confidence in a very big and scary idea before it can overwrite anything. The MWI is very big and scary. Provisionally, before I develop a moral system based on MWI, it is perfectly consistent to assume that it has some probability q of being wrong, and that the relative morality of actions, unknown under MWI and known under SI, does not change; consequently no moral decision (involving a comparison of moral values) changes before there is a high-quality moral system based on MWI. A quick-hack moral system based on MWI is likely to be considerably incorrect and to lead to rash actions (e.g. quantum suicide, which actually turns out to be as bad as normal suicide after you figure stuff out).

The ship is compartmentalized against holes in the hull, not against something great happening to it. An incorrect idea held with high confidence can be a hole in the hull, the water being the resulting nonsense overriding the system.

The author was an idiot. I too found the fatal flaw in about five minutes - in a bookshop.

IMO, the mystery here is not the author's fail, but how long the "evolution" fans banged on about it for - explaining the mistake over and over and over again.

IMO, the mystery here is not the author's fail, but how long the "evolution" fans banged on about it for - explaining the mistake over and over and over again.

Because lots of people (either not as educated or not as intelligent) didn't realize how highly flawed the book was. And when someone is being taken seriously enough that they are an expert witness in a federal trial, there's a real need to respond. Also, there were people like me who looked into Behe's arguments in detail simply because it didn't seem likely that someone with his intelligence and education would say something that was so totally lacking in a point, so the worry was that one was missing something. Of course, there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces. Finally, there's the other irrational aspect that Behe managed to trigger lots of people to react by being condescending and obnoxious (see for example his exchange with Abbie Smith where he essentially said that no one should listen to her because he was a prof and she was just a lowly grad student).

Re: "there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces"

I think that was most of it - plus the creationists were on the other side, and they got publicly bashed for a long time.

I was left wondering why so many intelligent people wasted so much energy and time on such nonsense for so long.

Dawkins and Dennett have subsequently got into the god bashing. What a waste of talent that is. I call it their "gutter outreach" program.

Dawkins and Dennett have subsequently got into the god bashing. What a waste of talent that is. I call it their "gutter outreach" program

Standard beliefs in deities are often connected with a memetic structure that directly encourages irrationalism. Look at the emphasis on "faith" and on mysterious answers. If one is interested in improving rationality, removing the beliefs that directly encourage irrationality is an obvious tactic. Religious beliefs are also responsible for a lot of deaths and resources taken up by war and similar problems. Removing those beliefs directly increases utility. Religion is also in some locations (such as much of the US) functioning as a direct barrier to scientific research and education (creationism and opposition to stem cell research are good examples). Overall, part of why Dawkins has spent so much time dealing with religion seems to be that he sees religion as a major barrier for people actually learning about the interesting stuff.

Finally, note that Dawkins has not just spent time on dealing with religious beliefs. He's criticized homeopathy, dowsing, various New Age healing ideas, and many other beliefs.

I figure those folk should be leading from the front, not dredging the guttering.

Anyone can dispense with the ridiculous nonsense put forth by the religious folk - and they do so regularly.

If anything, Dennett and Dawkins add to the credibility of the idiots by bothering to engage with them.

If the religious nutcases' aim was to waste the time of these capable science writers - and effectively take them out of productive service - then it is probably "mission accomplished" for them.

those folk should be leading from the front, not dredging the guttering.

So what would constitute leading from the front in your view?

If the religious nutcases' aim was to waste the time of these capable science writers - and effectively take them out of productive service - then it is probably "mission accomplished" for them.

But there are a lot of science writers now. Carl Zimmer and Rebecca Skloot would be two examples. And the set of people who read about science is not large. If getting people to stop having religious hangups with science will make a larger set of people reading such material how is that not a good thing?

I was much happier with what they were doing before they got sucked into the whirlpool of furious madness and nonsense. Well, "Freedom Evolves" excepted, maybe.

If getting people to stop having religious hangups with science will make a larger set of people reading such material how is that not a good thing?

Your question apparently presumes falsehoods about my views :-(

Your question apparently presumes falsehoods about my views :-(

Clarify please? What presumptions am I making that are not accurate?

If I may attempt an interpretation, Tim is saying that the Great Minds should be busy thinking Great Thoughts, and that they should leave the swatting of religious flies to us lesser folk.

Uh, I never claimed that getting people to stop having religious hangups was not a good thing in the first place.

Ah, sorry, bad phrasing on my part. Withdraw the last question, and replace the end with the following argument: "And the set of people who read about science is not large. Getting people to stop having religious hangups with science will result in a larger set of people reading such material, which is a good thing, and people like Dawkins will do that aspect more effectively than if they were simply one of many science popularizers talking to largely the same audience."

As I understand it, there is precious little evidence of much marginal benefit - no matter who is making the argument. The religious folk realise it is the devil talking, put their fingers in their ears, and sing the la-la song - which works pretty well. Education will get there in the end. We have people working on that - but it takes a while. The internet should help too.

Dennett once explained:

"Yes, of course I'd much rather have been spending my time working on consciousness and the brain, or on the evolution of cooperation, for instance, or free will, but I felt a moral and political obligation to drop everything for a few years and put my shoulder to the wheel doing a dirty job that I thought somebody had to do."

Someone has to clean the toilets too - but IMO it doesn't have to be Daniel Dennett.

If you don't read creationists, it looks like there aren't any, and it looks like "evolution fans" are banging on about nothing. But, in reality, there are creationists, and they were also banging on in praise of the book. David Klinghoffer, for instance (prominent creationist with a blog.)

I, the author, no longer endorse this post.

Why? Did Will ever explain this?

Don't take ideas seriously unless you can take uncertainty seriously.

Taking uncertainty seriously is hard. Pick a belief. How confident are you? How confident are you that you're that confident?

The natural inclination is to guess way too high on both of those. Not taking ideas seriously acts as a countermeasure to this. It's an over-broad countermeasure, but better than nothing if you need it.

Warning: This comment consists mostly of unreliably-remembered anecdotal evidence.

When I read the line "The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.", my immediate emotional reaction was "No!!! You don't want this experience!!! It's terrifying!!! Really terrifying!!!" And I didn't notice any exhilaration when it happened to me. Ok, there were some things that were a really big relief, but nothing I would consider exhilarating. I guess I'll talk about it some more...

The big, main push of my deconversion happened during exam time, in... what was it? my second year of university? Anyway, I had read Eliezer's writings a few days (weeks? months?) ago, and had finally gotten around to realizing that yes, seriously, there is no god. At the time, I had long since gotten into the habit of treating my mind's internal dialogue as a conversation with god. And I had grown dependent on my habit of constantly asking god for help with everything I felt insecure about. And I felt insecure about pretty much everything. Especially those university exams. I still remember the terror of sitting in that room, with the exam paper on the desk, knowing that I wasn't as prepared as I should have been, and not having a god to ask for help to make things turn out ok anyway. Noticing myself silently mouthing the words of the prayers just out of habit, and then stopping myself when I realize that it's pointless and probably counterproductive... and then forgetting and unconsciously starting the prayers again, and catching myself again... and that went on for... what was it? days? weeks? months? years?

Anyway, back to the topic... "propagating that belief update through the entire web of beliefs in which it is embedded" isn't just something that you can do all at once and be done with it. If you're updating a core belief, then you're going to constantly find yourself noticing beliefs that need updating. And more often, you'll find yourself not noticing things that need updating, and not finding out about them until you notice some other problem, spend a lot of time tracing back to the cause of it, and then notice some particular belief or habit that's still having a serious effect on your actions, but that doesn't have any justification in your current belief system, now that the false belief is removed.

And then there's the false positives, of things that you think are still being caused by the incompletely updated belief, but that really aren't...

Anyway, what I'm trying to say is... don't envy people who previously took religion seriously, then realized they were wrong, and then had to go through the long, tedious, terrifying process of updating their entire belief system. Personally, I think that I would have been much better off if I had started with a healthy belief system, rather than having the experience of updating from an extremely unhealthy belief system. Or maybe not, I don't know.

What are ideas you think Less Wrong hasn't taken seriously?

I think LW as a whole (but not some individuals) ignored practical issues of cognitive enhancement.

From outside-in:

  • Efficient learning paths. The Sequences are great, but there is a lot of stuff to learn from books, and it would be great to have dependencies mapped out, with the best materials for things like physics, decision theory, logic, and CS.

  • Efficient learning techniques: there are many interesting ideas out there, such as Supermemo and speed reading, but I do not have time to experiment with them all.

  • Hardware tools. I feel like I am more closely integrated with information with the iPhone/iPad; if reasonable eyewear comes to market this will be much enhanced.

  • N-back and similar.

  • Direct input via brainwaves/subvocalisation.

  • Pharmacological enhancement.

  • Real BCIs, which are starting to come to market serving disabled people.

Even if these tools do not lead to a Singularity (my guess), they might give an edge to FAI researchers.

Dual n-back: for the past month, I've spent 2-5 minutes most days on it.

I can do dual 4-back with 95%+ accuracy and 5-back with 60%, and I've likely plateaued (naturally, my skill rapidly improved at first). I enjoy it as "practice focusing on something", but haven't noticed any evidence of any general improvement in memory or other mental abilities. I plan on continuing the habit indefinitely.
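
For readers who haven't tried it: in dual n-back you track two simultaneous streams (a grid position and a spoken letter) and, on each step, judge whether each stream matches the stimulus from n steps back; "accuracy" is the fraction of those match judgments you get right. Below is a minimal sketch of that scoring rule with a simulated player whose per-judgment accuracy is a free parameter. It is illustrative only, under simplified assumptions, and is not the scoring code of any actual n-back program such as Brain Workshop.

    # Toy scorer for a dual n-back session (illustrative only).
    # A simulated player answers each "does this match n steps back?"
    # question correctly with probability `accuracy`.
    import random

    def simulate_dual_nback(n=4, trials=40, accuracy=0.95, seed=0):
        rng = random.Random(seed)
        positions = [rng.randrange(9) for _ in range(trials)]   # 3x3 grid cells
        letters = [rng.choice("ABCDEFGH") for _ in range(trials)]
        correct = total = 0
        for i in range(n, trials):
            for stream in (positions, letters):
                is_match = stream[i] == stream[i - n]
                answer = is_match if rng.random() < accuracy else not is_match
                correct += (answer == is_match)
                total += 1
        return correct / total

    print(simulate_dual_nback(n=4, accuracy=0.95))  # roughly the "95%+ at dual 4-back" report
    print(simulate_dual_nback(n=5, accuracy=0.60))  # roughly the "60% at 5-back" report

The only point is to pin down what a figure like "95%+ at dual 4-back" is a percentage of; the difficulty of the game comes entirely from holding the last n pairs in working memory, which the simulation glosses over.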

After doing 100 trials of dual N back stretched over a week (mostly 4 back) I noticed that I felt slightly more conscious: my emotions were more salient, I enjoyed simple things more, and I just felt generally more alive. There were tons of free variables for me, though, so I doubt causation. Did you notice anything similar?

A collection of anecdotal evidence from players is available in Gwern's great n-back FAQ.

I've played for some two months earlier this year and my max level was 8. I haven't really noticed anything, but since I took no tests prior to or after the training I can't really say anything firm about it. The experience of getting better at n-back is exhilarating and bewildering enough that I plan to resume playing it soon. I mean, at the earlier levels I often felt intensely that a certain next level I had just reached was physically impossible to beat, and behold, after a few days it seemed manageable, and after a week or so, trivial. All of this without any conscious learning process taking place, or any strategy coalescing. It's an especially unadulterated example of how a brain that gets rewired feels from the inside.

Yes, I know the same feeling (and have remarked on it once or twice on the DNB ML) - it's very strange how over a day or two one can suddenly jump 10 or 20% on a level and have a feeling that eg. suddenly D4B is clear and comprehensible, while before only D3B was and D4B was a murky mystery one had difficulty keeping in one's head.

On the other hand - D8B? Dammit! I've been at n-backing for something like 2 years now, and have been stuck on D4B for months. You, Jonathan, and Will just go straight to D4B or D8B within a few months with ease. I must be doing something wrong.

(On a sidenote, as in the FAQ, I ask people for their negative or null reports as well as their positive ones. This thread is unusual in 2 null reports to 1 positive, but I'm sure there are more LWers who've tried!)

I did maybe 10-15 half-hour sessions of mostly D5B-D6B last year over the course of a few weeks and didn't notice any effects.

All the links to your FAQ in this thread are broken. Does the FAQ still exist?