Being levels above in [rationality] means doing rationalist practice 101 much better than others [just like] being a few levels above in fighting means executing a basic front-kick much better than others.

- lessdazed

I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.

- Bruce Lee

Recently, when Eliezer wanted to explain why he thought Anna Salamon was among the best rationalists he knew, he picked out one feature of Anna's behavior in particular:

I see you start to answer a question, and then you stop, and I see you get curious.

For me, the ability to reliably get curious is the basic front-kick of epistemic rationality. The best rationalists I know are not necessarily those who know the finer points of cognitive psychology, Bayesian statistics, and Solomonoff Induction. The best rationalists I know are those who can reliably get curious.

Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you haven't seen this question before and you're like most people, your brain screams "10 cents!" But elementary algebra shows that can't be right. The correct answer is 5 cents. To get the right answer, I explained, you need to interrupt your intuitive judgment and think "No! Algebra."
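The verification step that "No! Algebra" performs can be spelled out explicitly. Here is a minimal sketch in Python (the function name and the final check are mine, not part of the post):

```python
# Bat-and-ball check: let the ball cost b dollars, so the bat costs b + 1.00.
# Then b + (b + 1.00) = 1.10, so 2b = 0.10 and b = 0.05.
def solve_bat_and_ball(total=1.10, difference=1.00):
    """Solve b + (b + difference) = total for the ball's price b."""
    ball = (total - difference) / 2
    bat = ball + difference
    return ball, bat

ball, bat = solve_bat_and_ball()
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05

# The intuitive "10 cents" answer fails verification: the bat would then
# cost $1.10 and the total would be $1.20, not $1.10.
assert abs((0.10 + 1.10) - 1.20) < 1e-9
```

The point isn't the arithmetic; it's that the correct answer comes from writing down the constraint and checking it, rather than trusting the first number that pops up.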

A lot of rationalist practice is like that. Whether thinking about physics or sociology or relationships, you need to catch your intuitive judgment and think "No! Curiosity."

Most of us know how to do algebra. How does one "do" curiosity?

Below, I propose a process for how to "get curious." I think we are only just beginning to learn how to create curious people, so please don't take this method as Science or Gospel but instead as an attempt to Just Try It.

As with my algorithm for beating procrastination, you'll want to practice each step of the process in advance so that when you want to get curious, you're well-practiced on each step already. With enough practice, these steps may even become habits.

Step 1: Feel that you don't already know the answer.

If you have beliefs about the matter already, push the "reset" button and erase that part of your map. You must feel that you don't already know the answer.

Exercise 1.1: Import the feeling of uncertainty.

  1. Think of a question you clearly don't know the answer to. When will AI be created? Is my current diet limiting my cognitive abilities? Is it harder to become the Prime Minister of Britain or the President of France?
  2. Close your eyes and pay attention to how that blank spot on your map feels. (To me, it feels like I can see a silhouette of someone in the darkness ahead, but I wouldn't take bets on who it is, and I expect to be surprised by their identity when I get close enough to see them.)
  3. Hang on to that feeling or image of uncertainty and think about the thing you're trying to get curious about. If your old certainty creeps back, switch to thinking about who composed the Voynich manuscript again, then import that feeling of uncertainty into the thing you're trying to get curious about, again.

Exercise 1.2: Consider all the things you've been confident but wrong about.

  1. Think of things you once believed but were wrong about. The more similar those beliefs are to the beliefs you're now considering, the better.
  2. Meditate on the frequency of your errors, and on the depths of your biases (if you know enough cognitive psychology).

Step 2: Want to know the answer.

Now, you must want to fill in this blank part of your map.

You mustn't wish it to remain blank due to apathy or fear. Don't avoid getting the answer because you might learn you should eat less pizza and more half-sticks of butter. Curiosity seeks to annihilate itself.

You also mustn't let your desire that your inquiry have a certain answer block you from discovering how the world actually is. You must want your map to resemble the territory, whatever the territory looks like. This enables you to change things more effectively than if you falsely believed that the world was already the way you want it to be.

Exercise 2.1: Visualize the consequences of being wrong.

  1. Generate hypotheses about the ways the world may be. Maybe you should eat less gluten and more vegetables? Maybe a high-protein diet plus some nootropics would boost your IQ 5 points? Maybe your diet is fairly optimal for cognitive function already?
  2. Next, visualize the consequences of being wrong, including the consequences of remaining ignorant. Visualize the consequences of performing 10 IQ points below your potential because you were too lazy to investigate, or because you were strongly motivated to justify your preference for a particular theory of nutrition. Visualize the consequences of screwing up your neurology by taking nootropics you feel excited about but that often cause harm to people with cognitive architectures similar to your own.

Exercise 2.2: Make plans for different worlds.

  1. Generate hypotheses about the way the world could be — different worlds you might be living in. Maybe you live in a world where you'd improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.
  2. Make plans for what you'll do if you happen to live in World #1, what you'll do if you happen to live in World #2, etc. (For unpleasant possible worlds, this also gives you an opportunity to leave a line of retreat for yourself.)
  3. Notice that these plans are different. This should produce in you some curiosity about which world you actually live in, so that you can make plans appropriate for the world you do live in rather than for one of the worlds you don't live in.

Exercise 2.3: Recite the Litany of Tarski.

The Litany of Tarski can be adapted to any question. If you're considering whether the sky is blue, the Litany of Tarski is:

If the sky is blue
I desire to believe the sky is blue.
If the sky is not blue,
I desire not to believe the sky is blue.

Exercise 2.4: Recite the Litany of Gendlin.

The Litany of Gendlin reminds us:

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it
doesn't make it go away.
And because it's true,
it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.

Step 3: Sprint headlong into reality.

If you've made yourself uncertain and then curious, you're now in a position to use argument, empiricism, and scholarship to sprint headlong into reality. This part probably requires some domain-relevant knowledge and an understanding of probability theory and value of information calculations. What tests could answer your question quickly? How can you perform those tests? If the answer can be looked up in a book, which book?
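To make the "value of information" idea concrete, here is a toy sketch (the function and all probabilities and utilities are made-up illustrations, not a method from the post): how much is it worth to find out which world you live in before acting?

```python
# Toy value-of-perfect-information calculation (illustrative numbers only):
# is it worth finding out which world you live in before acting?
def value_of_perfect_information(p_helps, u_helps, u_harms, u_abstain=0.0):
    """Expected value of deciding with perfect knowledge of the world,
    minus the expected value of the best decision under uncertainty."""
    # Best action without information: act, or abstain.
    ev_act = p_helps * u_helps + (1 - p_helps) * u_harms
    ev_without_info = max(ev_act, u_abstain)
    # With perfect information, act only in the world where acting helps.
    ev_with_info = (p_helps * max(u_helps, u_abstain)
                    + (1 - p_helps) * max(u_harms, u_abstain))
    return ev_with_info - ev_without_info

# Suppose a nootropic helps with probability 0.6 (utility +5)
# and harms otherwise (utility -10). Knowing which world you're in
# is worth 3 units of utility -- an upper bound on what a test is worth.
print(value_of_perfect_information(0.6, 5.0, -10.0))  # 3.0
```

Notice that when you're already certain (probability 0 or 1), the value of the test drops to zero; the tests worth running are the ones about questions you're genuinely uncertain about.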

These are important questions, but I think the first two steps of getting curious are more important. If someone can master steps 1 and 2, they'll be so driven by curiosity that they'll eventually figure out how to do step 3 for many scenarios. In contrast, most people who are equipped to do step 3 pretty well still get the wrong answers because they can't reliably execute steps 1 and 2.

Conclusion: Curiosity in Action

A burning itch to know is higher than a solemn vow to pursue truth. If you think it is your duty to doubt your own beliefs and criticize your own arguments, then you may do this for a while and conclude that you have done your duty and you're a Good Rationalist. Then you can feel satisfied and virtuous and move along without being genuinely curious.

In contrast,

if you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

My recommendation? Practice the front-kick of epistemic rationality every day. For months. Train your ape-brain to get curious.

Rationality is not magic. For many people, it can be learned and trained.

Comments

Also, learn to differentiate between genuine curiosity and what I like to call pseudo-curiosity - basically, being satisfied by conclusions rather than concepts. Don't let the two overlap. This is especially hard when conclusions are most of the time readily available and often the first item in a Google search. In terms of genuine curiosity, Google has been the bane of my existence - I will start off moderately curious, but instead of moving to that higher stage of curiosity, I will be sated by facts and conclusions without actually learning anything (similar to a guessing-the-teacher's-password situation). After a couple hours of doing this, I feel very scholarly and proud of my ability to parse so much information, when in reality all I did was collect a bunch of meaningless symbols.

To combat this, I started keeping a "notebook of curiosities". The moment I get curious, I write whatever it is I'm curious about, and then write everything I know about it. At this point, I determine whether or not anything I know is a useful springboard; otherwise, I start from scratch. Then I circle my starting node and start the real work, with the following rules:

  • Every fact or conce…

"A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"

I had the following (in rapid succession): 10 cents; whoops, it adds up to 120 cents; aha, 5 cents; adds up to 110; done.

It doesn't really matter what stupid heuristic you try if you verify the result. I can of course do: let a + b = 1.1, a = b + 1, so b + 1 + b = 1.1, 2b = 0.1, b = 0.05. But that takes a lot longer to write, and to think; and note the absence of a verification step there.

The "No! Algebra" approach is a surefire way to do things more slowly. Verification and double-checking are the key, imo. Algebra is for unwieldy problems where you can't test guesses quickly, failed to guess, have to use pencil and paper, etc. When you rely on short-term memory you really could be best off trying to intuitively get the answer, then checking it, then rewarding yourself when correct (if verification is possible).

  • The "whoops" is more like some parallel pondering of just how stupid I must be.
Instinctively my thought process goes: The dollar is the extra, then the ten cents is split, $0.05, done (plus or minus a double check). I can sense the $0.10 answer trying to be suggested instantly in the background, but it has a fraction of a second before it gets cut off, presumably because this is a kick type I've done 10,000 times. Formal algebra is the very slow (in relative terms) but reliable answer.
Well, yeah, the processes at that timescale are not even exactly serial. When the 10 cents appears I just derail into pondering how stupid I must be to have 10 cents even pop up consciously, while 5 cents pops up.

When we were taught math at school we often had to do a verification step. Then I was doing contests a fair bit, and you care to check yourself there: you solve each problem and check the answer, then at the end, if you solved everything, you go over them again and double-check, triple-check. We had a few hard problems on tests instead of many easy ones. You often had to think: how do I check this?

It seems not everyone is taught this way; some people have self-esteem-boosting cultural stuff in mind, and self-doubt can be seen as the worst thing ever, culturally. In US movies there's always someone who's like, "I can't do it, I can't do it," then the hero talks them into jumping over the gap anyway, and they do it, which is just silly.

For another example, say I face something like the Monty Hall problem. I think: how can I solve it so that I can be sure of the answer? Well, the foolproof way is to consider all the possibilities, which I can do rather rapidly by visualizing it. I don't need to think in terms of probabilities.

There's another important thing here: reductionism. One needs to know which things are derived, and that derived things aren't 'better' or 'right'. The probabilities are a substitute for evaluating a potentially infinite number of possible worlds and counting them. If you ever have a conflict between some first-principles reasoning and some advanced high-level reasoning, the advanced reasoning is not the one that's working correctly; probably you're misapplying it.

I recall many arguments over physics on some forum with some guy who just didn't understand reductionism. His barrels would float due to Archimedes' law, not due to pressure difference; then it gets confusing when you have a barrel fall down into water (a dynamical situation), and he would try

Having worked on the Voynich Manuscript (which you namecheck above) for over a decade now, I'd say that uncertainty isn't just a feeling: rather, it's the default (and indeed natural) state of knowledge, whereas certainty is normally a sign that we've somehow failed to grasp and appreciate the limits and nature of our knowledge.

Until you can eradicate the itch that drives you to want to make knowledge final, you can never be properly curious. Real knowledge doesn't do "final" or "the last word on a subject": it's conditional, partial, constrained, and heuristic. I contend that you should train your ape-brain to stay permanently curious: almost all certain knowledge is either fake or tautologous.

Exercise 2.2: Make plans for different worlds... Maybe you live in a world where you'd improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.

On the bright side, this is pretty much the thought process I go through whenever I don't know the right answer to something. On the other hand ("on the dark side"?) I think my automatic instinct is "there's no scientific consensus on this that I've read about in my textbooks...therefore this is a Permanent Blank in my map and I just h…

Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:

This could use spoiler tags, or ideally some substitute: it's useful for people to have a chance to be administered the CRT unawares (lest they imagine by hindsight bias that they would not have been misled, or others lose the chance to test them).

In feeling that you do not know the answer, Luke suggests that you "Think of things you once believed but were wrong about." Why not take it a step further and say:

1.3: When thinking about a time when you were wrong, think about how right being wrong felt, up until the moment you realized you were wrong.

In reflecting on times when I have been wrong what I find most disturbing is not what I was wrong about, but the degree to which being wrong is cognitively similar to being right. In college, I went to an Elizabeth Loftus lecture where she shockingly announc…

Curiosity is one possible motivation that forces you to actually look at evidence. Fear is more reliable and can be used when curiosity is hard to manufacture.

Fear can be powerful, but it is far from reliable, and it is usually not best used for ongoing motivation of any kind.
It depends on the kind of fear. The fear of going off my beeminder roads is good enough to motivate me to stay on them. YMMV.
It quite possibly would (vary). I have developed something of a "@#%@# you!" attitude to threats that are ongoing and try to reserve fear as an exception-oriented motivation device.
I don't think I could really feel fear about something in far mode thinking.
I worry that fear may paralyze. Curiosity seems more likely to spring someone into action. These effects probably vary between persons.

If fear paralyzes, maybe it's best used in bursts at times when you don't immediately need anything done and can spend some time on reevaluating basic assumptions. I wonder if there should be a genre of fiction that's analogous to horror except aimed at promoting epistemic paranoia. I've heard the RPG Mage: the Ascension cited in that context. I guess there's also movies like the Matrix series, the Truman Show, Inception. One could have an epistemic counterpart to Halloween.

I just watched The Truman Show a few days ago. I interpreted it as a story about a schizophrenic who keeps getting crazier, eventually experiencing a full out break and dying of exposure. The scenes with the production crew and audience are actually from the perspective of the schizophrenic's imagination as he tries to rationalize why so many apparently weird things keep happening. The scenes with Truman in them are Truman's retrospective exaggerations and distortions of events that were in reality relatively innocuous. All this allows you to see how real some schizophrenics think their delusions are.

I had never heard anybody interpreting it that way before.
I've never heard that one before, but there is a psychiatric illness in which people believe themselves to be watched at all times and that the world around them was created specifically for them, et cetera. It's called Truman Syndrome. All I know about schizophrenia I know from the copious number of psychiatric volumes and memoirs I've read. I have an older cousin with paranoid schizophrenia, but I don't even remember the last time I spoke to him.
I'm now imagining children wearing signs with cognitive biases written on them running around door to door, and people answering the door, uttering brief arguments, and rewarding each kid with paperback science fiction if the kid can correctly identify the fallacy.

What I had in mind was replacing rituals involving the fear of being hurt with rituals involving the fear of being mistaken. So in a more direct analogy, kids would go around with signs saying "you have devoted your whole existence to a lie", and threaten (emptily) to go into details unless they were given candy.

Upvoted for making me laugh until it hurt. You could probably get sufficiently-twisted kids to do this on the usual Halloween. Dress them up as professors of philosophy or something; it'd be far scarier than zombie costumes. (This would actually be fantastic.) Alternately, dress up as a "philosopher" (Large fake beard and pipe, maybe?), set up something like a fake retiring room on your front porch, tell small children that their daily lives are based on subtly but critically broken premises, and give them candy. (Don't actually do this, unless your neighbors love or hate you unconditionally. Or you're moving away soon.)
Alternately, dress up as a zombie philosopher and shamble around moaning "quaaaalia" instead of "braaaains".

Last Halloween I dressed as a p-zombie. I explained to anybody who would listen that I had the same physical composition as a conscious human being, but was not in fact conscious. I'm not sure that any of them were convinced that I really was in costume.

For this to be really convincing and spoooky, you could stay in character:

Halloween party attendant: Hi radical_negative_one, what are you dressed as?
confederate: radical_negative_one is a p-zombie, who acts just like a real person but is not actually conscious!
radical_negative_one: That's not true, I am conscious! I have qualia and an inner life and everything!
radical_negative_one: (To confederate:) No, you're the p-zombie, not me! (To Halloween party attendant:) They're getting everywhere, you know. They look and act just like you and me, physically you can't tell, but they have no soul! They're just dead things!! They sound like us, but nothing they say means anything, it's just noises coming out of a machine!!! Your best friend could be a p-zombie!!!! All your friends could be p-zombies!!!!!
confederate: It's all true! And he's one of them! Say, how do I know you're not a zombie?
confederate: No, radical_negative_one. You are the demons.
And then radical_negative_one was a zombie.
And tweed jacket with leather patches on the elbows, don't forget.
Ah, yes. That would satisfy nicely.
Oh, great. Now I have half a mind to go out this Halloween for the first time since junior high school, dressed as a philosophy professor, to scare middle-aged housewives with rationalist arguments. And I would carry out my threat of giving details as to how they have devoted their whole existences to a lie. I do that a lot, actually, just not in a costume and generally not by coming up to strangers' houses for candy.
But that's the fear of learning that one is mistaken, not the fear of being mistaken...
You're right, of course. I don't think a fully direct analogy is possible here. You can't really threaten to make someone have been wrong.
"You always thought I wasn't the kind of person who would TP your house on Halloween, but if you don't give me candy I'll make you have been wrong all along!"
"Hah, got you - I actually thought all along that you were the kind of person who would TP my house if and only if denied candy on Errorwe'en!" "Okay, and given your beliefs, are you gonna give me candy?" "...Have a Snickers."
I can easily imagine a sci-fi horror story in which someone is powerful enough to do that. You'd have to demonstrate it first, of course, and the story would have to take some time to carefully explore what changes when someone is made to have been wrong, but it seems plausibly doable.
Emptily? Just how sure of that are you? (I like skittles.)
Yes! Give me a Three Musketeers bar or I shall prove that you have devoted your entire existence to a lie using only logic and rhetoric.
What we need is a rationalist hell-house.
Looking back it seems I use curiosity more for hours or days-long knowledge-gaining quests, e.g. immersing myself in a new academic field, whereas I use fear more when philosophizing on my own, especially about AI/FAI. Introspectively it seems that fear is more suited to examining my own thoughts or thoughts I identify with whereas curiosity is more suited to examining ideas that I don't already identify with or things in my environment. I suspect this is because people generally overestimate the worth of their own ideas while underestimating the worth of others' -- negative motivations reliably act as critical inductive biases to counterbalance systematic overconfidence in oneself, whereas positive motivations reliably act as charitable inductive biases to counterbalance systematic underconfidence in others. As you say, it's probable that others would have different cognitive quirks to balance and counterbalance.
Fear of bad consequences seems to be part of (how this post defines) curiosity. i.e. Exercise 2.1: Visualize the consequences of being wrong.

I consistently fail several times over at this. I always feel I DO know everything worth knowing, and while that's obviously wrong, I can't come up with any salient counterexamples. This is probably related to memory problems I have; I don't seem able to come up with examples or counterexamples of anything, ever.

And when I do consider multiple possibilities, they never seem to matter for what actions I should take, which drains any motivation to find out the answer if it takes more than 30 seconds of googling, or if I happen not to be at my computer when the question occurs.

All…

[This comment is no longer endorsed by its author]

Good. Let's see if we can make progress.

  1. New habit: Every time you're wrong, write down what you were wrong about.
  2. Play 'the calibration game': Use Wits & Wagers cards and give your confidence intervals. You'll probably find that 40% of the time, the correct answer was outside your 90% confidence interval. Write down all those failures.
  3. If the different hypotheses don't matter for which actions you take, you're either bad at realizing the decision-theoretic implications of various hypotheses, or you're bad at spending your time thinking about things that matter. Which do you think it is?
  4. Rarely is new information not evidence for or against old ideas. Maybe you need more practice in model-building? This is a separate post I'd like to write at some time; I'm not sure what useful thing I can say about it now.
  5. Re: your "heinous lack of virtue." Reward yourself for effort, not for results. You have more control over the former.
Awesome. I'm going to keep that in mind. I only have a quibble about "Reward yourself for effort, not for results": that could lead me to try, but nowhere near as hard as I can, and to make excuses when I fail.
To clarify: reward yourself for taking new and improved actions, or for taking more of the right kind of actions, even if these actions don't immediately cause the desired results. Once your new level becomes a habit, stop rewarding yourself and reward the next level up. Rinse and repeat until you're close enough to a goal that it makes sense to reward yourself directly for the results you actually want.
I continue to celebrate a job well done even if it's force of habit, if only to give myself better incentives to form more good habits.
There's signaling effort (especially to yourself), and then there's effort. You want to reward effort but not signaling effort. Often one will make a cursory attempt at something, but with the goal of signaling to themselves or others that they put in effort or tried rather than doing what was most likely to accomplish the goal. This leads to statements like "I tried to get there on time" or "I did everything I was supposed to do." That's excuse making. Don't reward that. Instead, reward yourself to the extent that you did that which you had reason to believe was most likely to work, including doing your best to figure that out, even if it didn't succeed. Do the opposite if you didn't make the best decisions and put forth your best efforts, even if you do succeed. The danger is that effort is much easier to self-deceive about than results - and the people who need this the most will often have the most trouble with that. Not enough attention is paid to this problem, and it may well deserve a top level post.
You need both the instances where you are right and the instances where you are wrong to do correct stats; otherwise I can have 90% confidence, be wrong one time out of 10, and 100% of the times that I am wrong, have the answer outside my 90% confidence interval.
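A quick simulation makes the point concrete (a sketch with illustrative numbers; the function name is mine). The statistic that matters is the fraction of ALL trials in which the truth lands inside the claimed interval, hits and misses counted together:

```python
import random

def observed_hit_rate(trials, true_hit_rate=0.9):
    """Simulate interval forecasts, each of which contains the truth
    with probability true_hit_rate, and return the fraction of ALL
    trials (hits and misses together) where the truth landed inside
    the interval. Looking only at misses tells you nothing: 100% of
    misses are outside the interval by definition."""
    hits = sum(random.random() < true_hit_rate for _ in range(trials))
    return hits / trials

random.seed(0)
# A calibrated 90% forecaster's observed rate hovers near 0.9.
print(f"observed hit rate: {observed_hit_rate(10_000):.1%}")
```

So being wrong one time out of ten at 90% confidence is exactly what calibration looks like; it's only an overall rate far from the claimed 90% that indicates a problem.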
  1. I so far have a 100% failure rate in establishing habits that involve writing things down or otherwise externalizing memory.
  2. I don't have any such cards. I also doubt playing a game once for 5 minutes will help much, and akrasia and stress will prevent any more than that.
  3. Of those, absolutely the latter, but neither seems plausible.
  4. I have zero control over both, because akrasia.

...my "not true rejection!" alarm is going off, but I can't seem to find anything to do with that information either.
Yeah, sounds like you have a general motivation problem that needs fixing before you can get better at a lot of other things.
Not quite, but it seem unlikely this conversation will get further without getting into mental problems I really don't want to discus with someone whose opinion I care about, like you.
I find your honesty in these posts inspiring. I wish more people had such courage.
Ah, yeah. Backing out of a conversation and retracting all my posts as soon as it gets uncomfortable sure is courageous!
It still took a good bit of nerve to make those posts.
This is true for me as well. Which is why I try to rely on programs that prompt me to reply at random intervals through computer popups or sms, rather than habit. I highly doubt you have zero control over effort. Akrasia limits your ability to act on willpower, it doesn't negate willpower entirely. Reward yourself for those 30 second googling bursts if nothing else. I'm serious, have a jar of mini chocolate chips by your desk and pop one in your mouth every time you google an interesting question on scholar or wikipedia.

have a jar of mini chocolate chips by your desk and pop one in your mouth every time you google an interesting question on scholar or wikipedia.

Is there any evidence this works? 1) Does the brain treat these discretionary pleasures as reinforcement? 2) If it does, do attribution effects undermine the efficacy? Research on attribution effects shows that extrinsic rewards sometimes undermine intrinsic interest, i.e., curiosity: "Negative effects are found on high-interest tasks when the rewards are tangible, expected (offered beforehand), and loosely tied to level of performance."

Disagree. The target of your advice has reported serious health problems (and his akrasia would probably be a lot easier to overcome if it weren't for the health problems, according to my models (which are based only on what he has posted to LW and on information not specific to him)), so I would advise him not to choose what to eat for its reward value. To help him decide what weight to give my advice, I will add that I have had serious health problems for the last 40 years. Moreover, I have serious doubts about the usefulness of setting up blatantly artificial (i.e., self-imposed for the purpose of conditioning oneself) cause-and-effect relationships between desired changes in behavior and rewards, even when the rewards have no expected negative effect on health.
You're right. This was very poorly considered advice. I'm ashamed to admit I kind of recognized that as I was writing it, but posted it anyways for reasonable-sounding justifications that now suspiciously elude memory.
I know the feeling (from times I have given advice).
Maaaan, I have to condition myself NOT to google interesting questions, or else I can't get any work done for my job. But I see what you mean: that may work for conditioning oneself to work.
(A caution: I've found that naive implementations of the "reward oneself with candy" method for overcoming akrasia don't work because it becomes too tempting to just eat the candy for no reason. It has been suggested to me that it might help to explicitly write down beforehand exactly what actions justify a reward, but I haven't gotten around to testing this yet. Individual results may vary; further research is needed.)
Post some hypotheses and/or predictions at Less Wrong. There's a least a reasonable chance that people will tell you if you're mistaken.

I approve strongly! Publicly-posted exercises may yield practice, practice yields habit, and habit yields changed behavior. Developing deeper, more-focused curiosity would be a grand step towards becoming more awesome. But!

(Summary: It is important to practice this skill at appropriate times, i.e., when it is useful and feasible to work on answering the given question, and not just at random or whenever it's convenient to schedule the practice. I plan to attach a reminder to my research to-do list.)

Alright, says I, this exercise seems plausible enough. So…

Another idea from Anna Salamon is just to brainstorm a ton of questions on the topic you want to get curious about for a predetermined period of N minutes. Very limited data suggests this method works significantly better for me.

Am I the only one who searched the phrase "I see you start to answer a question, and then you stop, and I see you get curious." to see who it referred to?

Closing my eyes gives me only the feeling of having defensively headed a long ball in soccer a few hours ago. Sometimes I try to think and nothing seems to happen :)

VoI shouldn't be abbreviated (even with hyperlink).

Thinking about how I've been mistaken in the past feels pretty bad for me - akin to true embarrassment. But I suppose it's almost the only reason I'm ever cautiously uncertain, and that seems sad.

I really value your suggestion to purposefully cultivate delight-based exploration, instead of merely looking to minimize regret (even fairly assigned regret at coming up short of boundedly-optimal-rational, without confusing outcome for expected outcome in hindsight).

Maybe I should have emphasized this more.

Setting step one as "Feel that you don't already know the answer" fits with Loewenstein's (1994) "gap theory of curiosity", summarized by Cooney (2010):

[Loewenstein's] theory is that curiosity happens when people feel a gap in their knowledge about something... Laying out a question and inviting others to ponder it will help keep the individual's attention, because it gets them mentally involved and because there's an element of unexpectedness. This is why cliffhangers are often used at the end of television soap operas, to get viewer

...

So, should I start consuming butter half-sticks?

The study had just 27 participants and wasn't double-blind. While it was an interesting experiment, I certainly wouldn't act on it, except perhaps to read another, similar experiment.
It doesn't seem like the cost of a self experiment here would be very high, and you are the only research subject that really matters to yourself...
At least eat them with something, ew. Melt it in a pan and fry something in it.

Curious about what though? It seems like a very important piece of the above lesson is missing if we have no guidance as to what we should be curious about. It does me no good, perhaps no small amount of harm, to be intensely curious about the details of a fictional world. I ought not be curious about the personal life of my neighbor. And while curiosity about insects may serve some, it's unlikely to do most people any good at all. I think we have no good reason to believe that we're generally curious about the right sorts of things.

And there seems to be a... ...will make you light and eager, and give purpose to your questioning and direction to your skills.

And this article rekindled that for me. I have a motivation to explore I have not felt in quite some time. Thanks for writing this, Luke!

If you have beliefs about the matter already, push the "reset" button and erase that part of your map. You must feel that you don't already know the answer.

It seems like a bad idea to intentionally blank part of your map. If you already know things, you shouldn't forget what you already know. On the other hand, if you have reason to doubt what you think you know, you should blank the suspect parts of your map when you have reason to doubt them, and not artificially as part of a procedure for generating curiosity.

I think what you may be trying t...

This is all good stuff, but it makes curiosity sound complicated. I thought that the point of using curiosity as a hook into epistemic rationality is that once you feel the emotion of curiosity, your brain often just knows what to do next.

Also curiosity feels good.

Swimmer963 (Miranda Dixon-Luinenburg):
Curiosity in itself isn't necessarily complicated, and yes it feels good, but a lot of times, for a lot of people, it doesn't happen by itself. And it sounds like the process of producing curiosity in oneself is more complicated than simply feeling it naturally.

Bug report: step 2, exercise 2.1. If the consequences of my current best guess being wrong are much less dire than the consequences of being wrong on recomputing, my social circle thinks that the plan based on this current best guess is very important, and I hate the people who disagree, then I'm terrified of trying to recompute.

People try very hard to ignore the consequences of being wrong. Fear in this case is dangerous, because it causes stagnation and breaks curiosity.

My father was in the Korean war, on the peninsula. He did not have access to butter or milk for something like 9 months. When he got R & R to Tokyo, he ate a pound of butter with a knife and fork. I should note that while I don't know how fast he could do math in his head, he could count/remember cards like nobody's business. Also, he died of a massive coronary at 64, weighing close to 290 pounds.
Are you implying that there is a causal link between his consumption of butter and his weight gain?
Bah. It looks like an earlier, much more detailed and funnier reply got eaten by something. But to answer: no, I don't think his butter eating specifically and narrowly led to his rather large size, but rather his eating of almost everything that would taste good, and in quantities that were sometimes moderately impressive. Given how much he ate and smoked, and how little he moved, it's a wonder he wasn't twice as big and that he lived as long as he did.