Some rationality tweets

by Peter_de_Blanc · 30th Dec 2010

Will Newsome has suggested that I repost my tweets to LessWrong. With some trepidation, and after going through my tweets and categorizing them, I picked the ones that seemed the most rationality-oriented. I held some in reserve to keep the post short; those could be posted later in a separate post or in the comments here. I'd be happy to expand on anything here that requires clarity.

Epistemology

  1. Test your hypothesis on simple cases.
  2. Forming your own opinion is no more necessary than building your own furniture.
  3. The map is not the territory.
  4. Thoughts about useless things are not necessarily useless thoughts.
  5. One of the successes of the Enlightenment is the distinction between beliefs and preferences.
  6. One of the failures of the Enlightenment is the failure to distinguish whether this distinction is a belief or a preference.
  7. Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you.

Group Epistemology

  1. The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality.
  2. It is not unvirtuous to say that a set is nonempty without having any members of the set in mind.
  3. If one person makes multiple claims, this introduces a positive correlation between the claims.
  4. We seek a model of reality that is accurate even at the expense of flattery.
  5. It is no kindness to call someone a rationalist when they are not.
  6. Aumann-inspired agreement practices may be cargo cult Bayesianism.
  7. Godwin's Law is not really one of the rules of inference.
  8. Science before the mid-20th century was too small to look like a target.
  9. If scholars fail to notice the common sources of their inductive biases, bias will accumulate when they talk to each other.
  10. Some fields, e.g. behaviorism, address this problem by identifying sources of inductive bias and forbidding their use.
  11. Some fields avoid the accumulation of bias by uncritically accepting the biases of the founder. Adherents reason from there.
  12. If thinking about interesting things is addictive, then there's a pressure to ignore the existence of interesting things.
  13. Growth in a scientific field brings with it insularity, because internal progress measures scale faster than external measures.

Learning

  1. It's really worthwhile to set up a good study environment. Table, chair, quiet, no computers.
  2. In emergencies, it may be necessary for others to forcibly accelerate your learning.
  3. There's a difference between learning a skill and learning a skill while remaining human. You need to decide which you want.
  4. It is better to hold the sword loosely than tightly. This principle also applies to the mind.
  5. Skills are packaged into disciplines because of correlated supply and correlated demand.
  6. Have a high discount rate for learning and a low discount rate for knowing.
  7. "What would so-and-so do?" means "try using some of so-and-so's heuristics that you don't endorse in general."
  8. Train hard and improve your skills, or stop training and forget your skills. Training just enough to maintain your level is the worst idea.
  9. Gaining knowledge is almost always good, but one must be wary of learning skills.

Instrumental Rationality

  1. As soon as you notice a pattern in your work, automate it. I sped up my book-writing with code I should've written weeks ago.
  2. Your past and future decisions are part of your environment.
  3. Optimization by proxy is worse than optimization for your true goal, but usually better than no optimization.
  4. Some tasks are costly to resume because of mental mode switching. Maximize the cost of exiting these tasks.
  5. Other tasks are easy to resume. Minimize external costs of resuming these tasks, e.g. by leaving software running.
  6. First eat the low-hanging fruit. Then eat all of the fruit. Then eat the tree.
  7. Who are the masters of forgetting? Can we learn to forget quickly and deliberately? Can we just forget our vices?
  8. What sorts of cultures will endorse causal decision theory?
  9. Big agents can be more coherent than small agents, because they have more resources to spend on coherence.


Comments

Train hard and improve your skills, or stop training and forget your skills. Training just enough to maintain your level is the worst idea.

Doesn't this depend somewhat on the relevance of the skill to the goal? My skills at cooking are reasonably adequate to the environment in which I live. I would classify them as better than average, but decidedly amateur. I don't particularly want to prioritise them over my skill at playing a musical instrument, for which I get paid, but I wouldn't like to lose too much of what cooking skills I do have as that would make my life more inconvenient, and certainly less enjoyable.

Within my work, musicianship can be broken down into a good many skills, all of which I need to maintain at their current levels to remain in employment, and some of which it may be worth my time and effort to improve. For my church job, if I were to try to improve my organ pedal technique at the expense of maintaining my ability to learn five hymns and two voluntaries per service to reasonable performance standard, I would not hold my position long. There is a limit to how much time I can spend practising each week, and so the pedal technique, while important, will pro...

There's a certain style distinct to many didactic quotes: they express claims in a wise-sounding but opaque way, so that they automatically appear deep without requiring the reader to actually think about them. This can cloak empty language and doubtful claims in a veneer of impressiveness -- not to mention being uncommunicative if the ideas really are good.

It looks to me like these match that style. The ideas here could be both true and interesting, but making them into aphorisms (to fit Twitter) removes the explanation and examples that would convince me they're true and interesting. As it is, they sound meaningless to me -- the medium totally obscures the message.

I'd be interested in a post exploring some of these ideas, but tweets seem to me to be a format utterly unsuited to the topic.

[Also, I really think that this should not be on the front page. If even commenters have to puzzle over many of these, it's not a good choice for the general audience.]

The real problem with pithy quotes is that if you disagree with the quote, it's hard to argue without appearing unpithy.

(How was that for a pithy quote?)

Mostly, I think of pithy quotes as the conversational equivalent of icons in GUIs. If you don't already have a pretty good clue what an icon does, the icon by itself isn't very helpful... but once you become familiar with it, it can be very helpful.

Similarly, the nice thing about pithy quotes is that once you've understood the associated thought, they provide an easier way to bring that thought to mind on demand.

They can also provide a hook. That is, tossing a pithy quote into a conversation and providing additional explanation if there seems to be interest in it can be more comfortable than trying to toss a large chunk of exposition into conversation. (Well, for me, anyway. Some people seem more comfortable with tossing large chunks of exposition into conversations.)

All of which is to say, I'm fond of them.

Tesseract: I agree with you. My dispute is that 'pithy' means not 'short' but 'concisely meaningful.' If a line is short but confusing, it's not pithy.

EditedToAdd: Michael Vassar has this quote on Twitter: "Every distinction wants to become the distinction between good and evil." Which I'm sure I would have understood differently had I not previously read the post from which it (I believe) originated.

mwengler: Your wariness of good pithy quotes makes sense; they are intended to be infectious memes. Just as our bodies need lotsa bacteria in order to function, infectious memes increase the carrying and staying power of ideas in our minds. Just as our bodies are screwed if we get the wrong bacteria, our thinking can be ill with the wrong memes. For myself, I take the fact that I like a lot of these quotes as motivation to learn more about the ones I don't understand. I am weak on the difference between beliefs and preferences, yet I'm sure it is on the edge of a very active part of my current thinking. I suspect the "high discount rate for learning, low for knowing" is backwards, but I need to work it through to decide. Pithy quotes like this increase the likelihood that I will follow through on learning more. Hopefully this is an effective way to protect against "bad" pithy quotes getting a neuron-hold on my brain.

Test your hypothesis on simple cases.

I would say rather, "Test your hypotheses first on simple cases." If they can be quickly disproven there, you can move on to more useful hypotheses.

As soon as you notice a pattern in your work, automate it. I sped up my book-writing with code I should've written weeks ago.

I want to know what this code was for.

Peter_de_Blanc: Converting Go positions from SGF to LaTeX.

Training just enough to maintain your level is the worst idea.

You don't need to "train" to maintain your skills, using them will maintain them. If you don't need particular skills for a longer time, they will gradually deteriorate. I rarely use all the math skills I have picked up, so I periodically do a fairly intense refresher on things I haven't been using, then let it slide until it is needed, or I think I have forgotten enough to do another refresher.

The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality.

I don't think this necessarily applies to the arts. Or are you just saying that fields that measure quality poorly will attract all sorts of people?

Also, Goodhart's Law applies-- any field with high rewards will attract people who will try to modify the reward system in their favor.

If thinking about interesting things is addictive, then there's a pressure to ignore the existence of interesting things.

Should that be "ignore the existence of uninteresting things"?

Peter_de_Blanc: No; people are often discouraged from trying addictive things.

Skatche: Ahh, I originally read this as evolutionary pressure, which I think goes in the opposite direction in this case. Dependence on some external stimulus can be adaptive if that stimulus is readily available and has otherwise beneficial effects. It'd be interesting to get a look at some of the game-theoretic dynamics of intelligence, novel thinking, and so on - while you would expect an evolutionary pressure to entertain (at least some limited number of) novel ideas, it seems there may also be evolutionary pressure to discourage others from doing so (to maintain social cohesion, i.e., make your neighbours more predictable).

mwengler: I question this best-people tweet also. Some great people become public school teachers. Some fields are simply harder to measure than others. I have previously known a physicist who DEFINED any question that he couldn't address using the scientific method as "uninteresting." Included in his list of uninteresting questions were things like what is human consciousness. It may be more valuable to search for excellence in fields where the metrics don't work in a trivially transparent way, since the fields where they do work will be adequately investigated by following the crowd.

[anonymous]: The arts are an excellent counter-example disproving the claim that able people avoid professions where the measure of product quality is poor. But there are examples within the sciences, too. The summaries I've seen (as well as folk knowledge) indicate that chemists are substantially less intelligent than psychologists. (Physicists, however, are smarter than psychologists.) I think the mistaken assumption stems from the egocentric fallacy: we exaggerate how similar others are to ourselves. Recognition is only one of various motives in choosing a field, and let's not overlook the obvious: interest in the subject matter.

NancyLebovitz: "Best people" is vague. "More intelligent" might not be equivalent to "best". Still, I'm intrigued by psychologists turning out to be more intelligent (higher IQ?) than chemists. Maybe chemists tend towards very strong spatial and mathematical intelligence but are not as good on verbal, while psychologists are very strong on verbal but not comparably weak on spatial/mathematical. This is just a guess, though.

First eat the low-hanging fruit. Then eat all of the fruit. Then eat the tree.

I like this one. It works equally well against people who tend to eat the tree first and look down on fruit-eaters later, and against people who eat the low-hanging fruit and sit down, contented.

Desrtopa: Why eat the tree at all? A live tree will continue to bear fruit. The quote seems to promote immediate use of the closest available resources over patience with higher payoff.

Metaphorical trees are possible to swallow whole, and will thrive in the environment of the metaphorical human stomach, so eating the tree means automatic benefits from all future fruit.

You have tortured this metaphor so hard that you have passed infinite negative utils and come back out on the positive infinity side.

DSimon: The human brain represents utilons using a fixed-length integer field?

magfrump: No, but this metaphor has utility occupying a discrete subset of the projective real line isomorphic to a cyclic group.

DSimon: You win. :-)

magfrump: I think the only real way to WIN is for us to torture this metaphor-utility metaphor until IT comes back out on the positive infinite side.

[anonymous]: ...0?

Peter_de_Blanc: Very good!

luminosity: I, on the other hand, dislike it. Low-hanging fruit is a useful and fairly accurate metaphor. This is taking the metaphor and torturing it. It sounds witty, but I don't think it really gets across anything important, nor easily.

shokwave: Don't look now, but the rest of the comments on the grandparent are also torturing the poor thing. Thankfully, metaphors don't have moral significance.

b1shop: What is this trying to say?

Peter_de_Blanc: Reap the easy rewards first, but don't stop there.

shokwave: The incorrect view that high difficulty entails high reward (and conversely low difficulty entails low reward), when in reality reward and difficulty are not strongly correlated. As per Peter's comment below.

Big agents can be more coherent than small agents, because they have more resources to spend on coherence.

Yes. Coherence, and persuasiveness.

The individual arguing against some political lobby is quick to point out that the lobby gets its way not because it is right, but because it has reason to scream louder than the dispersed masses who would oppose it. But indeed, the very arguments the lobby crafts are likely to be more compelling to the masses, because it has the resources to make them so.

The lobby screams louder and better than smaller agents, as far as convincing people goes.

Skills are packaged into disciplines because of correlated supply and correlated demand.

And because of correlated and prerequisite learning - skills, like other knowledge, build on previously learned skills.

Some of these are very good, others a little bit less so. Granted, they come from a Twitter feed and are therefore spur of the moment, but I'm going to point out a few I disagree with.

Test your hypothesis on simple cases.

I'm not sure this is always true. Ideally we should test in simple cases, but sometimes ruling out strange stuff requires using complicated cases. I'd prefer something like "Test your hypothesis on the simplest cases necessary to make a useful test."

Forming your own opinion is no more necessary than building your own furniture.

...
Peter_de_Blanc: Learning math sure isn't useless, and it seems to mostly consist of thinking about useless or nonexistent things. Possible; I didn't check the literature before posting that tweet. Anyway, I think both encodings are possible to some extent. "You can't derive ought from is" is a belief. "People should distinguish between beliefs and preferences" is a preference. This refers to a possible difficulty of introspection.

patrissimo: I learned a lot of math (undergraduate major), and while it entertained me, it has been almost completely useless in my life. And the forms of math I believe to be most useful and wish I'd learned instead (statistics) are useful because they are so directly applicable to the real world. What useful math have you learned that doesn't involve reference to useful or existent things?

nshepperd: I've hypothesised before that learning math might be useful because (a) you get lots of practice in understanding abstraction and how abstract objects can meaningfully be manipulated using rules, and (b) you hopefully learn that proofs are nobody's opinion. So basically a lot of practice in using basic logic. Neither of these requires study of useful or existing things. Though obviously it would be preferable if the actual content were about useful stuff as well, to get double the benefit, it's not inherently useless.

Peter_de_Blanc: Real analysis is the first thing that comes to mind. Linear algebra is the second. Lately I've been thinking about if and how learning math can improve one's thinking in seemingly unrelated areas. I should be able to report on my findings in a year or two.

patrissimo: This seems like a classic example of the standard fallacious defense of undirected research (that it might and sometimes does create serendipitous results). Yes, learning something useless/nonexistent might help you learn useful things about stuff that exists, but it seems awfully implausible that it helps you learn more useful things about existence than studying the useful and the existing. Doing the latter will also improve your thinking in seemingly unrelated areas, while having the benefit of not being useless. If instead of learning the clever tricks of combinatorics as an undergraduate, I had learned useful math like statistics or algorithms, I think I would have had just as much mental-exercise benefit and gotten a lot more value.

shokwave: I first learned calculus using infinitesimals.

shokwave: The thought "I will not purchase this useless thing" is a thought about a useless thing, and it is not a useless thought. His formulation ("not necessarily") means that technically, it doesn't depend on the definition (given that you accept the previous example, of course). I actually parsed that quote as "Eat all the low-hanging fruit (in the orchard). Then eat all the fruit (in the orchard). Then eat the tree(s)." Well, not specifically thinking orchard, but I imagined running along a row of trees plucking all the low-hanging fruit, then returning for all the fruit, then shrugging and uprooting the trees.

Excellent list. Please could you expand on/clarify these 2, I'm not sure what you mean and why:

  • Science before the mid-20th century was too small to look like a target.

  • There's a difference between learning a skill and learning a skill while remaining human. You need to decide which you want.

Peter_de_Blanc: Do you have preconceptions about what thinking should feel like? Do you want your instantiation of the skill to be more similar to other parts of your mind, or more similar to other instantiations of the skill? To get really good at something, it helps to remove constraints about how you achieve it. Basically what JoshuaZ said, although science can also be a victim of convenience rather than just ideology. Using science for status gain without adhering to its rules also counts as targeting it, e.g. plagiarism. I don't really mean that science never looked like a target before the mid-20th century, but it's definitely much more of a target now.

MichaelHoward: That makes sense, thanks.

JoshuaZ: In this case I interpreted it to mean that there are many people now who specifically target science as bad (i.e. complain about "scientism" etc.) because science is a large, successful enterprise. He is asserting that before the mid-20th century science was not prominent or successful enough to bother being a target of attack. I'm not sure he's correct here, but the basic notion is plausible.

MichaelHoward: Ah, I see. I was thinking "small target in a large search space" kind of target. Thanks.

Tenek: Learning how to transplant a kidney is much easier when you have a few dozen people to experiment on. (I think that was the idea, anyways...)

Normal_Anomaly: I thought it referred to training at one thing so hard that you become incapable of doing anything else. Like the programmer who forgets to shower, eat, sleep, etc.

MichaelHoward: But the tweet is directed at the reader. Surely Peter didn't expect many of his readers to be faced with that sort of decision while learning their skills?

We seek a model of reality that is accurate even at the expense of flattery.

My observations suggest that many (maybe even most) people will ignore even pretty substantial inaccuracies for even a little flattery.

MBlume: I think this tweet was overwhelmingly normative rather than descriptive.

PeterisP: Then it should be rephrased as "We should seek a model of reality that is accurate even at the expense of flattery." Ambiguous phrasings facilitate only confusion.

shokwave: I read it as declarative - we (at spaceandgames) seek a model, etc. Peter isn't the only person on that Twitter account.

Peter_de_Blanc: How did you figure that out?

shokwave: I misunderstood how retweeting works.

Your past and future decisions are part of your environment.

I like this one.

Train hard and improve your skills, or stop training and forget your skills.

Training just enough to maintain your level is the worst idea. Gaining knowledge is almost always good, but one must be wary of learning skills.

What do those two mean?

Peter_de_Blanc: You reinforce your biases, which makes it harder to resume improvement later. Also, you get comfortable with being at a fixed level. Knowledge tends to come in small chunks. If a chunk turns out to be wrong, you can discard it. Skills are harder to discard, and they're always at least somewhat wrong.

Jonathan_Graehl: If it's ever more efficient to maintain proficiency by occasional practice (as it is with memory of facts - cf. spaced repetition) than it is to forget and later quickly re-learn, then that seems worth the minor risk you're afraid of. Especially if the skill is one that's useful to have ready, or to employ regularly.

[anonymous]: Also, there's usually not a good reason to stop at a particular level of skill. If the skill is worth having at all, it's probably going to be worth having at the highest level you can achieve; and if it's not worth having, continuing to train to keep the same level at it is probably a manifestation of the sunk-cost fallacy.

"The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality."

I would edit it thus:

A- "Fields that measure quality poorly retain low quality, and repel The Best People." (The Best People will get fed up and leave)

and a related:

B- "People of varying quality enter fields that reflect a lot of different things, low on the list of which is how that field measures quality. High on the list would be how that field is compensated." (in all the variations of "comp...

Some of these are so Peter de Blanc-ish. (Epistemology 7, Learning 7, etc.)

I suspect a 'forgotten' vice would regrow, and need to be rediscovered.

Risto_Saarelma: Depends on how much newly learned things have changed the rest of one's behavior.

I like that you're being terse. Many of these are puzzles - I need to discover a way to interpret them that allows me to like them.

Of these, I'm unsure:

One of the failures of the Enlightenment is the failure to distinguish whether this distinction [between beliefs and preferences] is a belief or a preference.

Huh? It's clearly a belief (part of a map). That seems easily grasped once the question is posed. The early Enlightenment wasn't meta enough for your liking, for not posing it?

Growth in a scientific field brings with it insularity, because internal progress measures scale faster than external measures.

...
Peter_de_Blanc: By learning, I mean gaining knowledge. Humans can receive enjoyment both from having stuff and from gaining stuff, and knowledge is not an exception. It's true that a dynamically-consistent agent can't have different discount rates for different terminal values, but bounded rationalists might talk about instrumental values using the same sort of math they use for terminal values. In that context it makes sense to use different discount rates for different sorts of good.

Peter_de_Blanc: Writing the above comment got me thinking about agents having different discount rates for different sorts of goods. Could the appearance of hyperbolic discounting come from a mixture of different rates of exponential discounting? I remembered that the same sort of question comes up in the study of radioisotope decay. A quick Google search turned up this blog [http://mobjectivist.blogspot.com/2010/05/waste-half-life.html], which says that if you assume a maximum-entropy mixture of decay rates (constrained by a particular mean energy), you get hyperbolic decay of the mixture. This is exactly the answer I was looking for.
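The mixture-of-exponentials observation can be checked numerically. As a sketch (the exponential prior over discount rates, its parameter `lam`, and the sample size are arbitrary choices for the demo): mixing exponential discounting e^(-k·t) over rates k drawn from an exponential prior with mean 1/lam, which is the maximum-entropy prior for a fixed mean rate, gives exactly the hyperbola lam / (lam + t).

```python
import math
import random

random.seed(0)
lam = 2.0          # prior rate parameter, chosen arbitrarily for the demo
n = 200_000

# Draw discount rates k from an exponential prior with density
# lam * exp(-lam * k); analytically, the mixed discount factor is
# integral of exp(-k*t) * lam * exp(-lam*k) dk = lam / (lam + t).
rates = [random.expovariate(lam) for _ in range(n)]

for t in [0.5, 1.0, 2.0, 5.0]:
    mixture = sum(math.exp(-k * t) for k in rates) / n  # Monte Carlo average
    hyperbolic = lam / (lam + t)                        # closed-form hyperbola
    assert abs(mixture - hyperbolic) < 0.01
print("mixture of exponential discounts matches the hyperbola")
```

So an agent that is merely uncertain about its own exponential discount rate behaves, in aggregate, like a hyperbolic discounter.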

Gaining knowledge is almost always good, but one must be wary of learning skills.

I disagree strongly with the second part of that; partially because there is no sharp boundary between knowledge and skills. "Skills" are just how-to knowledge that you have practiced using enough that you can do it without having to look up the details as you go.

One of the successes of the Enlightenment is the distinction between beliefs and preferences.
One of the failures of the Enlightenment is the failure to distinguish whether this distinction is a belief or a preference.

Cute. But one of the failures of this ontology is the failure to distinguish between assumptions and assertions. And one of my assertions is that the distinction between beliefs and preferences is an assumption.

DSimon: Can you go into more detail about this? I'm not sure what you mean.

Perplexed: The idea is not completely worked out. But the basic idea is that I distinguish between statements used as assertions and statements used as assumptions. (The division of statements into beliefs and preferences is orthogonal.)

When a statement is used as an assertion, the meaning of the statement flows from the relationship between the statement and the actual world. The truth of an assertion depends upon evidence. An assertion "pays the rent in anticipated experience". When a statement is used as an assumption, the meaning of the statement flows from the way that the assumption changes the way the world is talked about. The presence or absence of an assumption changes the meaning of statements - it may even change a statement from meaningless to meaningful. An assumption "pays the rent" by providing new ways of thinking about the world. As I understand it, when we choose MWI over Copenhagen, we are making an assumption, not stating an assertion which may (depending on the evidence) be true or false about the world.

So, given this framework, I claim that the question of whether beliefs must be distinguished from preferences is not something to be decided by evidence or reasoning. Instead, we make that distinction as an assumption. Having made that assumption, we can go on to make assertions or form analogies that flow from it. The assumption enables speaking and reasoning. And we can make the "meta-assertion" that the distinction itself should be classified as an assumption.

DSimon: I like it. It wasn't what I was expecting when you used the word "assumption", though; I usually think of assumptions as being assertions which are for whatever reason not questioned. Maybe consider using another word, like "lens" or "framework" - something that emphasizes their nature as support/modification for assertions.

shokwave: Premise? That is what I got from...
[anonymous]: Wow! Many of these are worth a new LW post on their own.


[anonymous]: I vote for a LW wiki page for this list, where for each quote, people can append links to LW posts relevant to that tweet.

[Edit: forget this bit] Then when we see which quotes don't have any links, I vote for a vote on which of those quotes most deserves a new LW post.

TheOtherDave: Or, instead of voting, we could just write posts.
[anonymous]: "Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you."

Haha... anybody else remember Rational!Harry?