SlateStarCodex, EA, and LW helped me get out of the psychological, spiritual, political nonsense in which I was mired for a decade or more.
I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.
Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. Worst of all, it's given me ambition to do original research. That's a demanding task, one where you have to accept feeling stupid all the time.
But I still look down that old road and I'm glad I'm not walking down it anymore.
Viliam (5mo): Too smart for your own good. You were supposed to believe it was about
rationality. Now we have to ban you and erase your comment before other people
can see it. :D
Yeah, same here.
Acquiring tools of good judgment and efficient learning
Practice at charitable, informal intellectual argument
Distraction
A somewhat less mind-killed politics
Cons: I'm frustrated that I so often play Devil's advocate, or else make up justifications for arguments under the principle of charity. Conversations feel profit-oriented and conflict-avoidant. Overthinking to the point of boredom and exhaustion. My default state toward books and people is bored skepticism and political suspicion. I'm less playful than I used to be.
Pros: My own ability to navigate life has grown. My imagination feels almost telepathic, in that I have ideas nobody I know has ever considered, and discover that there is cutting edge engineering work going on in that field that I can be a part of, or real demand for the project I'm developing. I am more decisive and confident than I used to be. Others see me as a leader.
Viliam (8mo): Some people optimize for drama. It is better to put your life in order, which
often means getting the boring things done. And then, when you need some drama,
you can watch a good movie.
Well, it is not completely a dichotomy. There is also some fun to be found e.g.
in serious books. Not the same intensity as when you optimize for drama, but
still. It's like when you stop eating refined sugar, and suddenly you notice
that the fruit tastes sweet.
Math is training for the mind, but not like you think
Just a hypothesis:
People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:
“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”
But math doesn't obviously seem to be the only way to practice precision, decisiveness, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.
As I've studied calculus, coming from a humanities background, I'd argue it differently.
Mathematics shares with only a handful of related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it: miss any link in the chain and the whole thing falls apart.
It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity.
Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.
niplav (7mo): Math is interesting in this regard because it is both very precise and there's no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you can tell whether your proof is bullshit). Programming, OTOH, gives you clear feedback loops.
AllAmericanBreakfast (7mo): In programming, that's true at first. But as projects increase in scope, there's
a risk of using an architecture that works when you’re testing, or for your
initial feature set, but will become problematic in the long run.
For example, I just read an interesting article on how a project used a document
store database (MongoDB), which worked great until their client wanted the
software to start building relationships between data that had formerly been
“leaves on the tree.” They ultimately had to convert to a traditional relational
database.
Of course there are parallels in math, as when you try a technique for
integrating or parameterizing that seems reasonable but won’t actually work.
G Gordon Worley III (7mo): Yep. Having worked both as a mathematician and a programmer, the idea of objectivity and clear feedback loops starts to disappear as the complexity amps up and you move away from the learning environment. It's not unusual to discover incorrect proofs out on the fringes of mathematical research that have not yet become part of the canon, nor is it uncommon (in fact, it's very common) to find running production systems where the code works by accident due to some strange unexpected confluence of events.
Viliam (7mo): Feedback, yes. Clarity... well, sometimes it's "yes, it works" today, and
"actually, it doesn't if the parameter is zero and you called the procedure on
the last day of the month" when you put it in production.
MikkW (7mo): Proof verification is meant to minimize this gap between proving and programming.
Viliam (7mo): The thing I like about math is that it gives the feeling that the answers are in
the territory. (Kinda ironic, when you think about what the "territory" of math
is.) Like, either you are right or you are wrong, it doesn't matter how many
people disagree with you and what status they have. But it also doesn't reward
the wrong kind of contrarianism.
Math allows you to make abstractions without losing precision. "A sum of two
integers is always an integer." Always; literally. Now with abstractions like
this, you can build long chains out of them, and it still works. You don't
create bullshit accidentally, by constructing a theory from approximations that
are mostly harmless individually, but don't resemble anything in the real world
when chained together.
Whether these are good things, I suppose different people would have different
opinions, but it definitely appeals to my aspie aesthetics. More seriously, I
think that even when, in the real world, most abstractions are just approximations,
having an experience with precise abstractions might make you notice the
imperfection of the imprecise ones, so when you formulate a general rule, you
also make a note "except for cases such as this or this".
(On the other hand, for the people who only become familiar with math as a
literary genre
[https://www.lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire], it might
have an opposite effect: they may learn that pronouncing abstractions with
absolute certainty is considered high-status.)
elityre (7mo): Isn't programming even more like this?
I could get squidgy about whether a proof is "compelling", but when I write a
program, it either runs and does what I expect, or it doesn't, with 0 wiggle
room.
AllAmericanBreakfast (7mo): Sometimes programming is like that, but then I get all anxious that I just haven't checked everything thoroughly! My guess is this has more to do with whether you're doing something basic or advanced, in any discipline. It's just that you run into ambiguity a lot sooner in the humanities.
ChristianKl (7mo): It helps you to look at the world from multiple perspectives: it gets you into a position to make a claim like that solely based on anecdotal evidence and wishful thinking.
On an individual PredictIt market, sometimes you can find a set of "No" contracts whose combined price (one share of each) adds up to less than the guaranteed gross payout.
Toy example:
Will A get elected? No = $0.30
Will B get elected? No = $0.70
Will C get elected? No = $0.90
Minimum guaranteed pre-fee winnings = $2.00
Total price of 1 share of each No contract = $1.90
Minimum guaranteed pre-fee profits = $0.10
There's always a risk of black swans. PredictIt could get hacked. You might execute the trade improperly. Unexpected personal expenses might force you to sell your shares and exit the market prematurely.
But excluding black swans, I thought that as long as three conditions held, you could make free money on markets like these. The three conditions were:
You take PredictIt's profit fee (10%) into account
You can find enough such "free money" opportunities that your profits compensate for PredictIt's withdrawal fee (5% of the total withdrawal)
You take into account the opportunity cost of investing in the stock market (average of 10% per year)
In the toy example above, I calculated that you'd lose $0.10 x 10% = $0.01 to PredictIt's profit fee if you bought 1 of each "... (read more)
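The toy example can also be checked scenario by scenario. A hedge on the mechanics: PredictIt describes the 10% fee as applying to the profit on each winning contract, not to the net across the whole basket; the sketch below assumes that reading, and uses cents to keep the arithmetic exact:

```python
# Worst-case profit check for the toy "No"-contract arbitrage.
# Prices in cents; assumes exactly one candidate wins, so exactly one
# "No" contract loses, and a 10% fee is taken from each winning profit.
no_prices = [30, 70, 90]  # one "No" share each on A, B, C

def net_if_loser(i):
    """Net profit (cents) if candidate i wins, i.e. 'No' contract i loses."""
    gross = sum(100 - p for j, p in enumerate(no_prices) if j != i)
    fee = gross // 10                  # 10% profit fee on the winning contracts
    return gross - fee - no_prices[i]  # winnings minus fee minus the lost stake

nets = [net_if_loser(i) for i in range(len(no_prices))]
print(nets)       # [6, 2, 0] cents, depending on which candidate wins
print(min(nets))  # 0: the worst case
```

If the per-winning-contract reading of the fee is right, the worst case (C wins) nets zero rather than nine cents, so a scenario check like this is worth doing before treating a sub-$1.90 basket as free money.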
Imagine that the Bay Area rationalist community did all want to move. But no individual was sure enough that others wanted to move to invest energy in making plans for a move. Nobody acts like they want to move, and the move never happens.
Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based off attendance at a few planning meetings.
Maybe one way to solve this is to ask for escalating credible commitments.
A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 in a short-term escrow. This makes them part of the Rationalist Move Club.
If the Move Club grows to a certain number of members within a defined period of time (say 20 members by March 2020), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.
By the end of the pre-defined planning period, there could be one of three majority... (read more)
On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?
Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people's posts on this forum when we don't have a prior interest in the topic?
Rationality should be the art of general intelligence. It's what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint.
To determine whether or not we're actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.
This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren't they generally known and appreciated? Sh... (read more)
Viliam (7mo): For me, it's the relatively high epistemic standards combined with relative
variety of topics. I can imagine a narrowly specialized website with no
bullshit, but I haven't yet seen a website that is not narrowly specialized and
does not contain lots of bullshit. Even most smart people usually become quite
stupid outside the lab. Less Wrong is a place outside the lab that doesn't feel
painfully stupid. (For example, the average intelligence at Hacker News seems
quite high, but I still regularly find upvoted comments that make me cry.)
AllAmericanBreakfast (7mo): Yeah, Less Wrong seems to be a combination of project and aesthetic. Insofar as
it's a project, we're looking for techniques of general intelligence, partly by
stress-testing them on a variety of topics. As an aesthetic, it's a unique
combination of tone, length, and variety + familiarity of topics that scratches
a particular literary itch.
It's OK for criticism to be imperfect. But the worst sort of criticism has all five of these flaws:
Prickly: A tone that signals a lack of appreciation for the effort that's gone into presenting the original idea, or shaming the presenter for bringing it up.
Opaque: Making assertions or predictions without any attempt at specifying a contradictory gears-level model or evidence basis, even at the level of anecdote or fiction.
Nitpicky: Attacking the one part of the argument that seems flawed, without arguing for how the full original argument should be reinterpreted in light of the local disagreement.
Disengaged: Not signaling any commitment to continue the debate to mutual satisfaction, or even to listen to/read and respond to a reply.
Shallow: An obvious lack of engagement with the details of the argument or evidence originally offered.
I am absolutely guilty of having delivered Category 5 criticism, the worst sort of cheap shots.
There is an important tradeoff here. If standards are too high for critical commentary, it can chill debate and leave an impression that either nobody cares, everybody's on board, or the argument's simply correct. Sometimes, an idea ca... (read more)
Matt Goldenberg (3mo): This seems like a fairly valuable framework. It occurs to me that all 5 of these
flaws are present in the "Snark" genre present in places like Gawker and
Jezebel.
AllAmericanBreakfast (3mo): I am going to experiment with a karma/reply policy based on what I think would be a
better incentive structure if broadly implemented. Loosely, it looks like this:
1. Strong downvote plus a meaningful explanatory comment for infractions worse
than cheap criticism; summary deletions for the worst offenders.
2. Strong downvote for cheap criticism, no matter whether or not I agree with
it.
3. Weak downvote for lazy or distracting comments.
4. Weak upvote for non-cheap criticism or warm feedback of any kind.
5. Strong upvote for thoughtful responses, perhaps including an appreciative
note.
6. Strong upvote plus a thoughtful response of my own to comments that advance
the discussion.
7. Strong upvote, a response of my own, and an appreciative note in my original
post referring to the comment for comments that changed or broadened my
point of view.
Luke Allen (2mo): I'm trying a live experiment: I'm going to see if I can match your erisology
[https://dailynous.com/2019/04/08/dont-roll-eyes-guy-recently-invented-philosophy/]
one-to-one as antagonists to the Elements of Harmony from My Little Pony:
1. Prickly: Kindness
2. Opaque: Honesty
3. Nitpicky: Generosity
4. Disengaged: Loyalty
5. Shallow: Laughter
Interesting! They match up surprisingly well, and you've somehow also matched the order of 3 out of 5 of the corresponding "seeds of discord" from 1 Peter 2:1, CSB: "Therefore, rid yourselves of all malice, all deceit, hypocrisy, envy, and all slander." If my pronouncement of success seems self-serving and opaque, I'll elaborate soon:
1. Malice: Kindness
2. Deceit: Honesty
3. Hypocrisy: Loyalty
4. Envy: Generosity
5. Slander: Laughter
And now the reveal. I'm a generalist; I collect disparate lists of qualities (in
the sense of "quality vs quantity"), and try to integrate all my knowledge into
a comprehensive worldview. My world changed
[https://www.wired.com/2011/06/bronies-my-little-ponys/] the day I first saw My
Little Pony; it changed in a way I never expected, in a way many people claim to
have been affected by HPMOR. I believed I'd seen a deep truth, and I've been
subtly sharing it wherever I can.
The Elements of Harmony are the character qualities that, when present, result
in a spark of something that brings people together. My hypothesis is that they
point to a deep-seated human bond-testing instinct. The first time I noticed a
match-up was when I heard a sermon on The Five Love Languages, which are
presented in an entirely different order:
1. Words of affirmation: Honesty
2. Quality time: Laughter
3. Receiving gifts: Generosity
4. Acts of service: Loyalty
5. Physical touch: Kindness
Well! In just doing the basic research to write this reply, it turns out I'm
re-inventing the wheel! Someone else has already written a psychometric analysis
of the Five Love Languages [https://doi.org/10.1080/17464090
Does rationality serve to prevent political backsliding?
It seems as if politics moves far too fast for rational methods to keep up. If so, does that mean rationality is irrelevant to politics?
One function of rationality might be to prevent ethical/political backsliding. For example, let's say that during time A, institution X is considered moral. A political revolution ensues, and during time B, X is deemed a great evil and is banned.
A change of policy makes X permissible during time C, banned again during time D, and absolutely required for all upstanding folk during time E.
Rational deliberation about X seems to play little role in the political legitimacy of X.
However, rational deliberation about X continues in the background. Eventually, a truly convincing argument about the ethics of X emerges. Once it does, it is so compelling that it has a permanent anchoring effect on X.
Although at some times, society's policy on X contradicts the rational argument, the pull of X is such that it tends to make these periods of backsliding shorter and less frequent.
The natural process of developing the rational argument about X also leads to an accretion of arguments that are not only correct... (read more)
Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.
Cognitive biases, whatever these are
Not understanding statistics
Akrasia
Little skill in accessing and processing theory and data
Not speaking science-ese
Lack of interest or passion for rationality
Not seeing rationality as a virtue, or even seeing it as a vice.
A sense of futility, the idea that epistemic rationality is not very useful, while instrumental rationality is often repugnant
A focus on associative thinking
Resentment
Not putting thought into action
Lack of incentives for rational thought and action itself
Mortality
Shame
Lack of time, energy, ability
An accurate awareness that it's impossible to distinguish tribal affiliation and culture from a community
Everyone is already rational, given their context
Everyone thinks they're already rational, and that other people are dumb
It's a good heuristic to assume that other people are dumb
Rationality is disruptive, and even very "progressive" people have a conservative bias to stay the same and conform with their peers
Dagon (4mo): A few other (even less pleasant) options:
51) God is inscrutable and rationality is no better than any other religion.
52) Different biology and experience across humans leads to very different
models of action.
53) Everyone lies, all the time.
Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?
One loved one was quite receptive to Chesterton’s Fence the other day. Like, it stopped their rant in the middle of its tracks and got them on board with a different way of looking at things immediately.
On the other hand, I routinely feel this weird tension. Like, to explain why I think as I do, I'd need to go through some basic rational concepts. But I expect most people I know would hate it.
I wish we could figure out ways of getting this stuff across that was fun, made it seem agreeable and sensible and non-threatening.
Less negativity - we do sooo much critique. I was originally attracted to LW partly as a place where I didn’t feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn’t associate with the CW before LessWrong.
My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process - a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.
Traditionally, things like this are socially achieved by using some form of "good cop, bad cop" strategy. You have someone who explains the concepts clearly and bluntly, regardless of whom it may offend (e.g. Eliezer Yudkowsky), and you have someone who presents the concepts nicely and inoffensively, reaching a wider audience (e.g. Scott Alexander), but ultimately they both use the same framework.
The inoffensiveness of Scott is of course relative, but I would say that people who get offended by him are really not the target audience for rationalist thought. Because, ultimately, saying "2+2=4" means offending people who believe that 2+2=5 and are really sensitive about it; so the only way to be non-offensive is to never say anything specific.
If a movement only has the "bad cops" and no "good cops", it will be perceived as a group of assholes. Which is not necessarily bad if the members are powerful; people want to join the winning side. But without actual power, it will not gain wide acceptance. Most people don't want to go into unnecessary conflicts.
On the other hand, a movement with "good cops" without "bad cops" wil... (read more)
AllAmericanBreakfast (7mo): You're right.
I need to try a lot harder to remember that this is just a community full of
individuals airing their strongly held personal opinions on a variety of topics.
Viliam (7mo): Those opinions often have something in common -- respect for the scientific
method, effort to improve one's rationality, concern about artificial
intelligence -- and I like to believe it is not just a random idiosyncratic mix
(a bunch of random things Eliezer likes), but different manifestations of the
same underlying principle (use your intelligence to win, not to defeat
yourself). However, not everyone is interested in all of this.
And I would definitely like to see "somebody friendly, funny, empathic, a good
performer, neat and practiced" promoting these values in a YouTube channel or in
books. But that requires a talent I don't have, so I can only wait until someone
else with the necessary skills does it.
This reminded me of the YouTube channel of Julia Galef
[https://www.youtube.com/user/measureofdoubt/videos], but the latest videos
there are 3 years old.
Pontor (3mo): Her podcast is really good IMHO. She does a singularly good job of challenging
guests in a friendly manner, dutifully tracking nuance, steelmanning, etc. It
just picked back up after about a yearlong hiatus (presumably due to her book
writing).
Unfortunately, I see the lack of notoriety for her podcast to be some evidence
against the prospects of the "skilled & likeable performer" strategy. I assume
that potential subscribers are more interested in lower-quality podcasts and
YouTubers that indulge in bias rather than confronting it. Dunno what to do
about that, but I'm glad she's back to podcasting.
Viliam (3mo): That's wonderful news, thank you for telling me! For those who have clicked on the YouTube link in my previous comment: there is no new content there as of now; go to the Rationally Speaking podcast [http://rationallyspeakingpodcast.org/] instead.
TAG (7mo): You're both assuming that you have a set of correct ideas coupled with bad
PR...but how well are Bayes, Aumann and MWI (eg.) actually doing?
seed (3mo): Look, I'm neurotypical and I don't find anything Eliezer writes offensive, will
you please stop ostracizing us.
Ben Pace (3mo): Did either of them say "neurotypical"? I just heard them say "normies."
seed (3mo): Oh, sorry, I've only heard the word used in that context before, I thought
that's what it meant. Turns out it has a broader meaning.
Like to explain why I think as I do, I'd need to go through some basic rational concepts.
I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.
As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence. Rather than explaining the difference in general, I think you can say "I think that it's more likely that we hear this many people complaining about an axe murderer downtown if that's in fact what's going on, and that it's appropriate for us to avoid that area today. I agree it's not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it"
If I'm right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the 'weird tension')
AllAmericanBreakfast (7mo): I think you're right, when the issue at hand is agreed on by both parties to be
purely a "matter of fact."
As soon as social or political implications crop in, that's no longer a
guarantee.
But we often pretend like our social/political values are matters of fact. The
offense arises when we use rational concepts in a way that gives the lie to that
pretense. Finding an indirect and inoffensive way to present the materials and
let them deconstruct their pretenses is what I'm wishing for here. LW has a
strong culture surrounding how these general-purpose tools get applied, so I'd
like to see a presentation of the "pure theory" that's done in an engaging way
not obviously entangled with this blog.
The alternative is to use rationality to try and become savvier social
operators. This can be "instrumental rationality" or it can be "dark arts,"
depending on how we carry it out. I'm all for instrumental rationality, but I
suspect that spreading rational thought further will require that other cultural
groups appropriate the tools to refine their own viewpoints rather than us going
out and doing the convincing ourselves.
Matt Goldenberg (7mo): I used this line when having a conversation at a party with a bunch of people
who turned out to be communists, and the room went totally silent except for one
dude who was laughing.
AllAmericanBreakfast (7mo): It was the silence of sullen agreement.
I'm annoyed that I think so hard about small daily decisions.
Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?
Perhaps:
"Spend no more than 1 minute per $25 spent and 2% of the price to find a better product."
This heuristic cashes out to:
Over a year of weekly $35 restaurant meals, spend about $36 and roughly an hour and a quarter finding better restaurants or meals.
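The heuristic is easy to turn into a two-line budget function (the function name and the rounding are mine, not part of the rule):

```python
# Search-budget heuristic: at most 1 minute per $25 spent, and at most
# 2% of the price, to spend on finding a better product.
def search_budget(total_spend_dollars):
    minutes = total_spend_dollars / 25       # time budget in minutes
    dollars = 0.02 * total_spend_dollars     # money budget in dollars
    return minutes, dollars

# A year of weekly $35 restaurant meals: $35 * 52 = $1820 of spending.
minutes, dollars = search_budget(35 * 52)
print(round(minutes), round(dollars, 2))     # about 73 minutes and $36.40
```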
Dagon (7mo): For some (including younger-me), the opposite advice was helpful - I'd agonize
over "big" decisions, without realizing that the oft-repeated small decisions
actually had a much larger impact on my life.
To account for that, I might recommend you notice cache-ability and repetition,
and budget on longer timeframes. For monthly spending, there's some portion
that's really $120X decade spending (you can optimize once, then continue to buy
monthly for the next 10 years), a bunch that's probably $12Y of annual spending,
and some that's really $Z that you have to re-consider every month.
Also, avoid the mistake of inflexible permissions. Notice when you're spending
much more (or less!) time optimizing a decision than your average, but there are
lots of them that actually benefit from the extra time. And lots that additional
time/money doesn't change the marginal outcome by much, so you should spend less
time on.
AllAmericanBreakfast (7mo): I wonder if your problem as a youth was in agonizing over big decisions, rather
than learning a productive way to methodically think them through. I have lots
of evidence that I underthink big decisions and overthink small ones. I also
tend to be slow yet ultimately impulsive in making big changes, and fast yet
hyper-analytical in making small changes.
Daily choices have low switching and sunk costs. Everybody's always comparing,
so one brand at a given price point tends to be about as good as another.
But big decisions aren't just big spends. They're typically choices that you're
likely stuck with for a long time to come. They serve as "anchors" to your life.
There are often major switching and sunk costs involved. So it's really
worthwhile anchoring in the right place. Everything else will be influenced or
determined by where you're anchored.
The 1 minute/$25 + 2% of purchase price rule takes only a moment's thought. It's
a simple but useful rule, and that's why I like it.
There are a few items or services that are relatively inexpensive, but have high
switching costs and are used enough or consequential enough to need extra
thought. Examples include pets, tutors, toys for children, wedding rings,
mattresses, acoustic pianos, couches, safety gear, and textbooks. A heuristic
and acronym for these exceptions might be CHEAPS: "Is it a Curriculum? Is it
Heavy? Is it Ergonomic? Is it Alive? Is it Precious? Is it Safety-related?"
The structure of knowledge is an undirected cyclic graph between concepts. To make it easier to present to the novice, experts convert that graph into a tree structure by removing some edges. Then they convert that tree into natural language. This is called a textbook.
Scholarship is the act of converting the textbook language back into nodes and edges of a tree, and then filling in the missing edges to convert it into the original graph.
The mind cannot hold the entire graph in working memory at once. It's as important to practice navigating between concept... (read more)
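A toy sketch of that picture (the concept names and graph here are hypothetical): a textbook's linearization corresponds roughly to a spanning tree of the concept graph, and scholarship recovers the cross-edges the tree dropped.

```python
from collections import deque

# A toy undirected concept graph (hypothetical subject structure).
graph = {
    "limits": {"derivatives", "continuity"},
    "continuity": {"limits", "derivatives"},
    "derivatives": {"limits", "continuity", "integrals"},
    "integrals": {"derivatives"},
}

def spanning_tree(graph, root):
    """BFS spanning tree: the edges a textbook keeps when linearizing."""
    tree, seen, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in sorted(graph[node]):  # sorted for a deterministic "chapter order"
            if nbr not in seen:
                seen.add(nbr)
                tree.append((node, nbr))
                queue.append(nbr)
    return tree

kept = spanning_tree(graph, "limits")
all_edges = {frozenset((a, b)) for a, nbrs in graph.items() for b in nbrs}
dropped = all_edges - {frozenset(e) for e in kept}
print(kept)     # edges the "textbook" presents
print(dropped)  # cross-links the student must rediscover
```

The point of the sketch is that `dropped` is never empty for a cyclic graph: any tree-shaped presentation must omit some connections, and those are exactly what re-reading and cross-referencing restore.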
I want to put forth a concept of "topic literacy."
Topic literacy roughly means that you have both the concepts and the individual facts memorized for a certain subject at a certain skill level. That subject can be small or large. The threshold is that you don't have to refer to a reference text to accurately answer within-subject questions at the skill level specified.
This matters, because when studying a topic, you always have to decide whether you've learned it well enough to progress to new subject matter. This offers a clean "yes/no" answer to that ess... (read more)
I was having a bad day today. Unlikely to have time this weekend for something I'd wanted to do. Crappy teaching in a class I'm taking. Ever increasing and complicating responsibilities piling up.
So what did I do? I went out and bought half a cherry pie.
Will that cherry pie make me happy? No. I knew this in advance. Consciously and unconsciously: I had the thought, and no emotion compelled me to do it.
In fact, it seemed like the least-efficacious action: spending some of my limited money, to buy a pie I don't... (read more)
Viliam (5mo): So the "stupid solutions to problems of life" are not really about improving the
life, but about signaling to yourself that... you still have some things under
control? (My life may suck, but I can have a cherry pie whenever I want to!)
This would be even more important if the cherry pie would somehow actively make
your life worse. For example, if you are trying to lose weight, but at the same
time keep eating cherry pie every day in order to improve the story of your day.
Or if instead of cherry pie it would be cherry liqueur.
Just guessing, but it would probably help to choose the story in advance. "If I
am doing X, my life is great, and nothing else matters" -- and then make X
something useful that doesn't take much time. Even better, have multiple
alternatives X, Y, Z, such that doing any of them is a "proof" of life being
great.
AllAmericanBreakfast (5mo, 2 points): I do chalk a lot of dysfunction up to this story-centric approach to life. I
just suspect it’s something we need to learn to work with, rather than against
(or to deny/ignore it entirely).
My sense is that storytelling - to yourself or others - is an art. To get the
reaction you want - from self or others - takes some aesthetic sensitivity.
My guess is there’s some low hanging fruit here. People often talk about doing
things “for the story,” which they resort to when they're trying to justify
doing something dumb/wasteful/dangerous/futile. Perversely, it often seems that
when people talk in detail about their good decisions, it comes off as arrogant.
Pointless, tidy philosophical paradoxes seem to get people's puzzle-solving
brains going better than confronting the complexity of the real world.
But maybe we can simply start building habits of expressing gratitude. Finding
ways to present good ideas and decisions in ways that are delightful in
conversation. Spinning interesting stories out of the best parts of our lives.
A lot of my akrasia is solved by just "monkey see, monkey do." Physically put what I should be doing in front of my eyeballs, and pretty quickly I'll do it. Similarly, any visible distractions, or portals to distraction, will also suck me in.
But there also seems to be a component that's more like burnout. "Monkey see, monkey don't WANNA."
On one level, the cure is to just do something else and let some time pass. But that's not explicit enough for my taste. For one thing, something is happening that recovers my motivation. For another, "letting time pass" i... (read more)
I'm managing a project to install signage for a college campus's botanical collection.
Our contractor, who installed the sign posts in the ground, did a poor job. A lot of them pulled right out of the ground.
Nobody could agree on how many posts were installed: the groundskeeper, the contractor, and two core team members each had their own numbers from "rough counts" and "lists" and "estimates" and "what they'd heard."
The best decision I've made on this project was to do a precise inventory of exactly which sign posts are installed correctly, comple... (read more)
I'm in school at the undergraduate level, taking 3 difficult classes while working part-time.
For this path to be useful at all, I have to be able to tick the boxes: get good grades, get admitted to grad school, etc. For now, my strategy is to optimize to complete these tasks as efficiently as possible (what Zvi calls "playing on easy mode"), in order to preserve as much time and energy for what I really want: living and learning.
Are there dangers in getting really good at paying your dues?
1) Maybe it distracts you/diminishes the incen... (read more)
NaiveTortoise (4mo, 7 points): If you haven't seen Half-assing it with everything you've got
[http://mindingourway.com/half-assing-it-with-everything-youve-got/], I'd
definitely recommend it as an alternative perspective on this issue.
AllAmericanBreakfast (4mo, 3 points): I see my post as less about goal-setting ("succeed, with no wasted motion") and
more about strategy-implementing ("Check the unavoidable boxes first and
quickly, to save as much time as possible for meaningful achievement").
Dagon (4mo, 4 points): I suspect "dues" are less relevant in today's world than a few decades ago. It
used to be a (partial) defense against being judged harshly for your success, by
showing that you'd earned it without special advantage. Nowadays, you'll be
judged regardless, as the assumption is that "the system" is so rigged that
anyone who succeeds had a headstart.
To the extent that the dues do no actual good (unlike literal dues, which the
recipient can use to buy things, presumably for the good of the group), skipping
them seems very reasonable to me. The trick, of course, is that it's very hard
to distinguish unnecessary hurdles ("dues") from socially-valuable lessons in
conformity and behavior ("training").
Relevant advice when asked if you've paid your dues:
https://www.youtube.com/watch?v=PG0YKVafAe8
I've been thinking about honesty over the last 10 years. It can play into at least three dynamics.
One is authority and resistance. The revelation or extraction of information, and the norms, rules, laws, and incentives surrounding this, including moral concepts, are for the primary purpose of shaping the power dynamic.
The second is practical communication. Honesty is the idea that specific people have a "right to know" certain pieces of information from you, and that you meet this obligation. There is wide latitude for "white lies," exaggeration, storytell... (read more)
Dagon (6mo, 5 points): I like this line of reasoning, but I'm not sure it's actually true. "Better"
rationality should lead your thinking to be more effective - better able to take
actions that lead to outcomes you prefer. This could express as less thinking,
or it could express as MORE thinking, for cases where return-to-thinking is much
higher due to your increase in thinking power.
Whether you're thinking less for "still having good outcomes", or thinking the
same amount for "having better outcomes" is a topic for introspection and
rationality as well.
AllAmericanBreakfast (6mo, 3 points): That's true, of course. My post is really a counter to a few straw-Vulcan
tendencies: intelligence signalling, overthinking everything, and being super
argumentative all the time. Just wanted to practice what I'm preaching!
How should we weight and relate the training of our mind, body, emotions, and skills?
I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species.
We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?
I'm experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.
To do that, I want to arrive at an acronym that's memorable, defines an action plan and implies when you'd want to use it. Examples:
OSSEE Activity - "One Short Simple Easy-to-Exit Activity." A way to plan dates and hangouts that aren't exhausting or recipes for confusion.
DAHLIA - "Discuss, Assess, Help/Ask, Leave, Intervene, Accept." An action plan for how to de... (read more)
Is a rationalist someone famous for being rational? Someone who’s leveraged their reputation to gain privileged access to opportunity, other people’s money, credit, credence, prestige?
Are there any arenas of life where reputation-building is not a heavy determinant of success?
Ben Pace (3mo, 4 points): A physicist is someone who is interested in and studies physics.
A rationalist is someone who is interested in and studies rationality.
Viliam (3mo, 2 points): A rationalist is someone who can talk rationally about rationality, I guess. :P
One difference between rationality and fame is that you need some rationality in
order to recognize and appreciate rationality, while fame can be recognized and
admired also (especially?) by people who are not famous. Therefore, rationality
has a limited audience.
Suppose you have a rationalist who "wins at life". How would a non-rational
audience perceive them? Probably as someone "successful", which is a broad
category that also includes e.g. lottery winners.
Even people famous for being smart, such as Einstein, are probably perceived as
"being right" rather than being good at updating, research, or designing
experiments.
A rationalist can admire another rationalist's ability of changing their mind.
And also "winning at life" to the degree we can control for their circumstances
(privilege and luck), so that we can be confident it is not mere "success" we
admire, but rather "success disproportionate to resources and luck". This would
require either that the rationalist celebrity regularly publishes their thought
processes, or that you know them personally. Either way, you need lots of data
about how they actually succeeded.
You could become a millionaire by buying Bitcoin anonymously, so that would be
one example.
Depends on what precisely you mean by "success": is it something like
"doing/getting X" or rather "being recognized as X"? The latter is inherently
social, the former you can often achieve without anyone knowing about it.
Sometimes it is easier to achieve things if you don't want to take credit; for
example, if you need the cooperation of a powerful person, it can be useful to
convince them that X was actually their idea. Or you can have the power, but
live in the shadows, while other people are in the spotlight, and only they know
that they actually take commands from you.
To be more specific, I think you could make a lot of money by learning something
like programming, getting
AllAmericanBreakfast (3mo, 4 points): Certainly it is possible to find success in some areas anonymously. No argument
with you there!
I view LW-style rationality as a community of practice, a culture of people
aggregating, transmitting, and extending knowledge about how to think
rationally. As in "The Secret of Our Success," we don't accomplish this by
independently inventing the techniques we need to do our work. We accomplish
this primarily by sharing knowledge that already exists.
Another insight from TSOOS is that people use prestige as a guide for who they
should imitate. So rationalists tend to respect people with a reputation for
rationality.
But what if a reputation for rationality can be cultivated separately from
tangible accomplishments?
In fact, prestige is already one step removed from the tangible accomplishments.
But how do we know if somebody is prestigious?
Perhaps a reputation can be built not by gaining the respect of others through a
track record of tangible accomplishments, but by persuading others that:
a) You are widely respected by other people whom they haven't met, or by
anonymous people they cannot identify, making them feel behind the times, out of
the loop.
b) That the basis on which people allocate prestige conventionally is flawed,
and that they should do it differently in a way that is favorable to you, making
them feel conformist or conservative.
c) That other people's track records of tangible accomplishments are in fact
worthless, because they are not of the incredible value of the project that the
reputation-builder is "working on," or are suspect in terms of their actual
utility. This makes people insecure.
d) Giving people an ability to participate in the incredible value you are
generating by convincing them to evangelize your concept, and thereby to
evangelize you. Or of course, just donating money. This makes people feel a
sense of meaning and purpose.
I could think of other strategies for building hype. One is to participate in
cooperative games, whereb
Viliam (3mo, 4 points): Ah, so you mean within the rationalist (and adjacent) community: how can we make
sure that we instinctively copy our most rational members, as opposed to random
or even the least rational ones?
When I reflect on what I do by default... well, long ago I perceived "works at
MIRI/CFAR" as the source of prestige, but recently it became "writes articles I
find interesting". Both heuristics have their advantages and disadvantages. The
"MIRI/CFAR" heuristic allows me to outsource judgment to people who are smarter
than me and have more data about their colleagues; but it ignores people outside
Bay Area and those who already have another job. The "blogging" heuristic allows
me to judge the thinking of authors; but it ignores people who are too busy
doing something important or don't wish to write publicly.
Here is how to exploit my heuristics:
* Be charming, and convince people at MIRI/CFAR/GiveWell/etc. to give you some
role in their organization; it could be a completely unimportant one. Make
your association known.
* Have good verbal skills, and deep knowledge of some topic. Write a blog about
that topic and the rationalist community.
Looking at your list: Option a) if someone doesn't live in Bay Area, it could be
quite simple to add a few rationalist celebrities as friends on Facebook, and
then pretend that you have some deeper interaction with them. People usually
don't verify this information, so if no one at your local meetup is in regular
contact with them, the risk of exposure is low. Your prestige is then limited to
the local meetup.
Options b) and c) would probably lead to a big debate. Arguably,
"metarationality" is an example of "actually, all popular rationalists are doing
it wrong, this is the true rationality" claim.
Option d) was tried by Intentional Insights, Logic Nation, and I have heard
about people who try to extract free work from programmers at LW meetups. Your
prestige is limited to the few people you manage to recruit.
Rationalist com
Each person chooses a charity and an amount of money that you must donate to swipe right on them. This leads to higher-fidelity match information while also giving you a meaningful topic to kick the conversation off.
If a gears-level understanding becomes the metric of expertise, what will people do?
Go out and learn until they have a gears-level understanding?
Pretend they have a gears-level understanding by exaggerating their superficial knowledge?
Feel humiliated because they can't explain their intuition?
Attack the concept of gears-level understanding on a political or philosophical level?
Use the concept of gears-level understanding to debug your own knowledge. Learn for your own sake, and allow your learning to naturally attract the credibility
The reticence many LWers feel about the term "rationalist" stems from a paradox: it feels like a status-grab and low-status at the same time.
It's a status grab because LW can feel like an exclusive club. Plenty of people say they feel like they can hardly understand the writings here, and that they'd feel intimidated to comment, let alone post. Since I think most of us who participate in this community wish that everybody would be more into being rational and that it wasn't an exclusive club, this feels unfortunate.
ChristianKl (21d, 2 points): Everybody in our community knows what it means, but people outside of our
community frequently think that it's about what philosophers call rationality.
I use LessWrong as a place to post not just rambly thoughts and finished essays, but also something in between.
The in between parts are draft essays that I want feedback on, and want to get out while the ideas are still hot. Partly it's so that I can have a record of my thoughts that I can build off of and update in the future. Partly it's that the act of getting my words together in a way I can communicate to others is an important part of shaping my own views.
I wish there was a way to tag frontpage posts with something like "Draft - seeking feedback" vs. "Fin... (read more)
AllAmericanBreakfast (1mo, 2 points): Yeah, just a tag like that would be ideal as far as I'm concerned. You could
also allow people to filter those in or out of their feed.
Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later.
This is often not true, though, for example with regard to whether or not it's ethical to have kids. So how to make these sorts of decisions?
I don't have a good answer for this. I sort of think that there are certain superhuman forces or drives that "win out." The drive ... (read more)
Pattern (1mo, 2 points): If you will get more evidence, whether you want it or not, is there a way you
can do something with that?
Ba zbgvingrq fgbccvat vgfrys - jul fgngvp cebprffrf, engure guna qlanzvp barf?
The first time you read a textbook on a new subject, you're taking in new knowledge. Re-read the same passage a day later, a week later, or a year later, and it will qualitatively feel different.
You'll recognize the sentences. In some parts, you'll skim, because you know it already. Or because it looks familiar -- are you sure which?
And in that skimming mode, you might zoom into and through a patch that you didn't know so well.
When you're reading a textbook for the first time, in short, there are more inherent safeguards to keep you f... (read more)
I just started using GreaterWrong.com, in anti-kibitzer mode. Highly recommended. I notice how unfortunately I've glommed on to karma and status more than is comfortable. It's a big relief to open the front page and just see... ideas!
Raemon (2mo, 6 points): I just went to try this, and something I noticed immediately was that while the
anti-kibitzer applies itself to the post list and to the post page, it doesn't
seem to apply to the post-hover-preview.
There's a pretty simple reason why the stock market didn't tank long-term due to COVID. Even if we get 3 million total deaths due to the pandemic, that's "only" around a 5% increase in total deaths in the year when deaths peak. 80% of those deaths are among people of retirement age. Though their spending is around 34% of all spending, the money of those who die from COVID will flow to others who will also spend it.
My explanation for the original stock market crash back in Feb/March is that investors were nervous that we'd impose truly strict lockdown measures, or perhaps that the pandemic would more seriously harm working-age people than it does. That would have had a major effect on the economy.
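The back-of-envelope arithmetic above can be sketched out. (Assumption not stated in the post: the "5% increase" only works out if the baseline is worldwide mortality, roughly 60 million deaths per year; the 60M figure here is a round-number assumption for illustration.)

```python
# Back-of-envelope check of the mortality figures above.
# Assumed baseline: ~60 million deaths worldwide per year (round number).
baseline_annual_deaths = 60_000_000
excess_covid_deaths = 3_000_000

increase = excess_covid_deaths / baseline_annual_deaths
print(f"Excess mortality: {increase:.0%}")  # → Excess mortality: 5%

# ~80% of those deaths fall among retirement-age people, whose spending is
# ~34% of the total; since the money of the deceased passes to heirs who
# also spend it, the long-run hit to aggregate demand is far below 5%.
retiree_share_of_deaths = 0.80
retiree_share_of_spending = 0.34
```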
At any given time, many doors stand wide open before you. They are slowly closing, but you have plenty of time to walk through them. The paths are winding.
Striving is when you recognize that there are also many shortcuts. Their paths are straighter, but the doors leading to them are almost shut. You have to run to duck through.
And if you do that, you'll see that through the almost-shut doors, there are yet straighter roads even further ahead, but you can only make it through if you make a mad dash. There's no guarantee.
The direction I'd like to see LW moving in as a community
Criticism has a perverse characteristic:
Fresh ideas are easier to criticize than established ideas, because the language, supporting evidence, and theoretical mechanics have received less attention.
Criticism has more of a chilling effect on new thinkers with fresh ideas than on established thinkers with popular ideas.
Ideas that survive into adulthood will therefore tend to be championed by thinkers who are less receptive to criticism.
Maybe we need some sort of "baby criticism" for new ideas. A "devel... (read more)
Viliam (2mo, 4 points): This reminds me of the "babble and prune" concept. We should allow... maybe not
literally the "babble" stage, but something in between, when the idea is already
half-shaped but not completed.
I think the obvious concern is that all kinds of crackpottery may try to enter
this open door, so what would be the balance mechanism? Should authors specify
their level of certainty and be treated accordingly? (Maybe choose one of
predefined levels from "just thinking aloud" to "nitpicking welcome".) In a
perfect world, certainty could be deduced from the tone of the article, but this
does not work reliably. Something else...?
ChristianKl (2mo, 2 points): While this sounds nice on the abstract level, I'm not sure what concrete behavior
you are pointing to. Could you link to examples of comments that you think do
this well?
AllAmericanBreakfast (2mo, 4 points): I don't want to take the time to do what you've requested. Some hypothetical
concrete behaviors, however:
* Asking questions with a tone that conveys a tentative willingness to play
with the author's framework or argument, and an interest in hearing more of
the authors' thoughts.
* Compliments, "this made me think of," "my favorite part of your post was"
* Noting connections between a post and the authors' previous writings.
* Offers to collaborate or edit.
When I consider doing a difficult/time-consuming/expensive but potentially rewarding activity, it often provokes anxiety. Examples include running ten miles, doing an extensive blog post series on regenerative medicine, and going to grad school. Let's call this cost/benefit anxiety.
Other times, the immediate actions I'm considering are equally "costly," but one provokes more fear than the others even though it is not obviously stupid. One example is whether or not to start blogging under my real name. Call it ... (read more)
The US recommended daily amount (RDA) of vitamin D is about 600 IUs per day. This was established in 2011, and hasn't been updated since. The Food and Nutrition Board of the Institute of Medicine at the National Academy of Sciences sets US RDAs.
According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IUs/day, and the erroneously low level is due to a statistical mistake. I haven't been able to find out yet whether there is any transparency about when the RDA will be reconsidered.
Dagon (3mo, 2 points): So, you can't trust the government. Why do you trust that study? I talked to my
MD about it, and he didn't know any more than I did about the reasoning, but
did know that there is some toxicity at higher levels, and strongly recommended
I stay below 2500 IU/day. I haven't fully followed that, as I still have a large
bottle of 5000 IU pills, which I'm now taking every third day (with 2000 IUs on
the intervening days).
The EU Food Safety Administration in 2006 (2002 for vitamin D; see page 167 of
https://www.efsa.europa.eu/sites/default/files/efsa_rep/blobserver_assets/ndatolerableuil.pdf.
Page 180 for the recommendation) found that 50ug (2000IU) per day is the safe
upper limit.
I'm not convinced it's JUST bureaucratic inefficiency - there may very well be
difficulties in finding a balanced "one-size-fits-all" recommendation as well,
and the judgement of "supplement a bit lightly is safer than over-supplementing"
is well in-scope for these general guidelines.
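As a sanity check on the units in this thread, the standard vitamin D conversion is 1 microgram = 40 IU, which is how EFSA's 50 µg/day upper limit corresponds to 2000 IU/day:

```python
# Vitamin D unit conversion: 1 microgram (µg) = 40 IU.
IU_PER_MICROGRAM = 40

def micrograms_to_iu(micrograms: float) -> float:
    """Convert a vitamin D dose from micrograms to international units."""
    return micrograms * IU_PER_MICROGRAM

print(micrograms_to_iu(50))  # → 2000 (EFSA's 50 µg/day upper limit)
print(micrograms_to_iu(15))  # → 600  (the US RDA of 600 IU is 15 µg)
```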
AllAmericanBreakfast (3mo, 2 points): You raise two issues here. One is about vitamin D, and the other is about trust.
Regarding vitamin D, there is an optimal dose for general population health that
lies somewhere in between "toxically deficient" and "toxically high." The range
from the high hundreds to around 10,000 appears to be well within that safe
zone. The open question is not whether 10,000 IUs is potentially toxic - it
clearly is not - but whether, among doses in the safe range, a lower dose can be
taken to achieve the same health benefits.
One thing to understand is that in the outdoor lifestyle we evolved for, we'd be
getting 80% of our vitamin D from sunlight and 20% through food. In our modern
indoor lifestyles, we are starving ourselves for vitamin D.
"Supplement a bit lightly is safer than over-supplementing" is only a meaningful
statement if you can define the dose that constitutes "a bit lightly" and the
dose that is "over-supplementing." Beyond these points, we'd have "dangerously
low" and "dangerously high" levels.
To assume that 600 IU is "a bit lightly" rather than "dangerously low" is a
perfect example of begging the question.
[https://en.wikipedia.org/wiki/Begging_the_question]
On the issue of trust, you could just as easily say "so you don't trust these
papers, why do you trust your doctor or the government?"
The key issue at hand is that in the absence of expert consensus, non-experts
have to come up with their own way of deciding who to trust.
In my opinion, there are three key reasons to prefer a study of the evidence to
the RDA in this particular case:
1. The RDA hasn't been revisited in almost a decade, even simply to reaffirm
it. This is despite ongoing research in an important area of study that may
have links to our current global pandemic. That's strong evidence to me that
the current guidance is as it is for reasons other than active engagement by
policy-makers with the current state of vitamin D research.
2. The statistical error identified in
Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure:
Meaningful experiences involve great pleasure. They also may come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.
Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.
Viliam (5mo, 4 points): In a perfect situation, it would be possible to achieve meaningful experiences
without pain, but usually it is not possible. A person who optimizes for
short-term pain avoidance, will not reach the meaningful experience. Because
optimizing for short-term pain avoidance is natural, we have to remind ourselves
to overcome this instinct.
AllAmericanBreakfast (5mo, 2 points): This fits with the idea that meaning comes from pleasure, and that great
pleasure can be worth a fair amount of pain to achieve. The pain drains meaning
away, but the redeeming factor is that it can serve as a test of the magnitude
of pleasure, and generate pleasurable stories in the future.
An important counterargument to my hypothesis is how we may find a privileged
“high road” to success and pleasure to be less meaningful. This at first might
seem to suggest that we do inherently value pain.
In fact, though, what frustrates people about people born with a silver spoon in
their mouths is that society seems set up to ensure their pleasure at another’s
expense.
It’s not their success or pleasure we dislike. It’s the barriers and pain that
we think it’s contextualized in. If pleasure for one means pain for another,
then of course we find the pleasure to be less meaningful.
So this isn’t about short-term pain avoidance. It’s about long-term, overall,
wise and systemic pursuit of pleasure.
And that pleasure must be not only in the physical experiences we have, but in
the stories we tell about it - the way we interpret life. We should look at it,
and see that it is good.
If people are wireheading, and we look at that tendency and it causes us great
displeasure, that is indeed an argument against wireheading.
We need to understand that there’s no single bucket where pleasure can
accumulate. There is a psychological reward system where pleasure is evaluated
according to the sensory input and brain state.
Utilitarian hedonism isn’t just about nerve endings. It’s about how we interpret
them. If we have a major aesthetic objection to wireheading, that counts from
where we’re standing, no matter how much you ratchet up the presumed pleasure of
wireheading.
The same goes recursively for any “hack” that could justify wireheading. For
example, say you posited that wireheading would be seen as morally good, if only
we could find a catchy moral justification for it.
So w
Matt Goldenberg (5mo, 2 points): I looked through that post but didn't see any support for the claim that meaning
comes from pleasure.
My own theory is that meaning comes from values, and both pain and pleasure are
a way to connect to the things we value, so both are associated with meaning.
AllAmericanBreakfast (5mo, 5 points): I'm a classically trained pianist. Music practice involves at least four kinds
of pain:
* Loneliness
* Frustration
* Physical pain
* Monotony
I perceive none of these to add meaning to music practice. In fact, it was
loneliness, frustration, and monotony that caused my music practice to be slowly
drained of its meaning and led me ultimately to stop playing, even though I
highly valued my achievements as a classical pianist and music teacher. If
there'd been an issue with physical pain, that would have been even worse.
I think what pain can do is add flavor to a story. And we use stories as a way
to convey meaning. But in that context, the pain is usually illustrating the
pleasures of the experience or of the positive achievement. In the context of my
piano career, I was never able to use these forms of pain as a contrast to the
pleasures of practice and performance. My performance anxiety was too intense,
and so it also was not a source of pleasure.
By contrast, I herded sheep on the Navajo reservation for a month in the middle
of winter. That experience generated many stories. Most of them revolve around a
source of pain, or a mistake. But that pain or mistake serves to highlight an
achievement.
That achievement could be the simple fact of making it through that month while
providing a useful service to my host. Or moments of success within it: getting
the sheep to drink from the hole I cut in the icy lake, busting a tunnel through
the drifts with my body so they could get home, finding a mother sheep that had
gotten lost when she was giving birth, not getting cannibalized by a Skinwalker.
Those make for good stories, but there is pleasure in telling those stories. I
also have many stories from my life that are painful to tell. Telling them makes
me feel drained of meaning.
So I believe that storytelling has the ability to create pleasure out of painful
or difficult memories. That is why it feels meaningful: it is pleasurable to
tell stories. And being a
Matt Goldenberg (5mo, 2 points): It really depends on what you mean by "pleasure". If pleasure is just "things
you want", then almost tautologically meaning comes from pleasure, since you
want meaning.
If instead, pleasure is a particular phenomenological feeling similar to feeling
happy or content, I think that many of us actually WANT the meaning that comes
from living our values, and it also happens to give us pleasure. I think that
there are also people that just WANT the pleasure, and if they could get it
while ignoring their values, they would.
I call this the "Heaven/Enlightenment" dichotomy, and I think it's a frequent
misunderstanding.
I've seen some people say "all we care about is feeling good, and people who
think they care about the outside world are confused." I've also seen people say
"All we care about is meeting our values, and people who think it's about
feeling good are confused."
Personally, I think that people are more towards one side of the spectrum or the
other along different dimensions, and I'm inclined to believe both sides about
their own experience.
AllAmericanBreakfast (5mo, 2 points): I think we can consider pleasure, along with altruism, consistency, rationality,
fitting the categorical imperative, and so forth as moral goods.
People have different preferences for how they trade off one against the other
when they're in conflict. But they of course prefer them not to be in conflict.
What I'm interested is not what weights people assign to these values - I agree
with you that they are diverse - but on what causes people to adopt any set of
preferences at all.
My hypothesis is that it's pleasure. Or more specifically, whatever moral
argument most effectively hijacks an individual person's psychological reward
system.
So if you wanted to understand why another person considers some strange action
or belief to be moral, you'd need to understand why the belief system that they
hold gives them pleasure.
Some predictions from that hypothesis:
* People who find a complex moral argument unpleasant to think about won't
adopt it.
* People who find a moral community pleasant to be in will adopt its values.
* A moral argument might be very pleasant to understand, rehearse, and think
about, and unpleasant to abandon. It might also be unpleasant in the actions
it motivates its subscriber to undertake. It will continue to exist in their
mind if the balance of pleasure in belief to displeasure in action is
favorable.
* Deprogramming somebody from a belief system you find abhorrent is best done
by giving them alternative sources of "moral pleasure." Examples of this
include the ways people have deprogrammed people from cults and the KKK, by
including them in their social gatherings, including Jewish religious
dinners, and making them feel welcome. Eventually, the pleasure of adopting
the moral system of that shared community displaces whatever pleasure they
were deriving from their former belief system.
* Paying somebody in money and status to uphold a given belief system is a
great way to keep them doing it, no matt
2Matt Goldenberg5moThis just kicks the can down the road on defining pleasure; all of my points
still apply.
That is, I think it's possible to say that pleasure kicks in around values that
we really want, rather than vice versa.
6Viliam5moThere are things like "lying for a good cause", which is a textbook example of
what will go horribly wrong because you almost certainly underestimate the
second-order effects. Like the "do not wear face masks, they are useless" expert
advice for COVID-19, which was a "clever" dark-arts move aimed to prevent people
from buying up necessary medical supplies. A few months later, hundreds of
thousands have died (also) thanks to this advice.
(It would probably be useful to compile a list of lying for a good cause gone
wrong, just to drive home this point.)
Thinking about the historical record of people promoting the use of dark arts
within the rationalist community, consider Intentional Insights
[https://forum.effectivealtruism.org/posts/fn7bo8sYEHS3RPKQG/concerns-with-intentional-insights]
. It turned out that the organization was also using the dark arts against the
rationalist community itself. (There is a more general lesson here: whenever a
fan of dark arts tries to make you see the wisdom of their ways, you should
assume that at this very moment they are probably already using the same
techniques on you. Why wouldn't they, given their expressed belief that this is
the right thing to do?)
The general problem with lying is that people are bad at keeping multiple
independent models of the world in their brains. The easiest, instinctive way to
convince others about something is to start believing it yourself. Today you
decide that X is a strategic lie necessary for achieving goal Y, and tomorrow
you realize that actually X is more correct than you originally assumed (this is
how self-deception feels from inside). This is in conflict with our goal to
understand the world better. Also, how would you strategically lie as a group?
Post it openly online: "Hey, we are going to spread the lie X for instrumental
reasons, don't tell anyone!" :)
Then there are things like "using techniques-orthogonal-to-truth to promote true
things". Here I am quite guilty myself, because I have long ago a
2AllAmericanBreakfast5moWe already had words for lies, exaggerations, incoherence, and advertising.
Along with a rich discourse of nuanced critiques and defenses of each one.
The term “dark arts” seems to lump all these together, then uses cherry picked
examples of the worst ones to write them all off. It lacks the virtue of
precision. We explicitly discourage this way of thinking in other areas. Why do
we allow it here?
You can start with complexity, then simplify. But that's style.
What would it mean to think simple?
I don't know. But maybe...
Accept accepted wisdom.
Limit your words.
Rehearse your core truths, think new thoughts less.
Start with inner knowledge. Intuition. Genius. Vision. Only then, check yourself.
Argue if you need to, but don't ever debate. Other people can think through any problem you can. Don't let them stand in your way just because they haven't yet.
If you know, let others find their own proofs. Move on with the plan.
Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field... The pre-study probability of a relationship being true is R/(R + 1).
What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?
2AllAmericanBreakfast7moFrom Reddit:
You could think of it this way: If R is the ratio of (combinations that total N
on two dice) to (combinations that don't total N on two dice), then the chance
of (rolling N on two dice) is R/(R+1). For example, there are 2 ways to roll a 3
(1 and 2, and 2 and 1) and 34 ways to not roll a 3. The probability of rolling a
3 is thus (2/34)/(1+2/34)=2/36.
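To make the dice analogy concrete, here's a quick sketch (my own code, just checking the arithmetic above):

```python
from itertools import product

# Enumerate all 36 two-dice combinations.
rolls = list(product(range(1, 7), repeat=2))
hits = sum(1 for a, b in rolls if a + b == 3)  # 2 ways: (1,2) and (2,1)
misses = len(rolls) - hits                     # 34 ways

R = hits / misses          # ratio of "true" to "no" relationships
prob = R / (R + 1)         # pre-study probability

print(prob)  # 2/36, about 0.056
```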
There are lots of reasons to measure a person's ability level in some skill. One such reason is to test your understanding in the early stages of learning a new set of concepts.
You want a system that's:
Fast, intuitive
Suggests what you need to spend more time on
Relies on the textbook and your notes to guide your next study activity, rather than needing to "compute" it separately.
Flashcards/reciting concepts from notes is a nice example. It's fast and intuitive, tells you what concepts you're still struggling with. Knowing that, you can look over the materia... (read more)
Cultural transmission of knowledge is the secret of our success.
Children comprise a culture. They transmit knowledge of how to insult and play games, complain and get attention. They transmit knowledge on how to survive and thrive with a child's priorities, in a child's body, in a culture that tries to guarantee that the material needs of children are taken care of.
General national cultures teach people very broad, basic skills. Literacy, the ability to read and discuss the newspaper. How to purchase consumer goods. H... (read more)
What is the #1 change that LW has instilled in me?
Participating in LW has instilled the virtue of goal orientation. All other virtues, including epistemic rationality, flow from that.
Learning how to set goals, investigate them, take action to achieve them, pivot when necessary, and alter your original goals in light of new evidence is a dynamic practice, one that I expect to retain for a long time.
Many memes circulate around this broad theme. But only here have I been able to develop an explicit, robust, ever-expanding framework for making and thinking abo... (read more)
Putting a serious effort into learning Italian in the classroom can make it possible to immerse yourself in the language when you visit Italy. Studying hard for an engineering interview lets you get a job where you'll be able to practice a set of related skills all the time. Reading a scientist's research papers makes you seem like an attractive candidate to work in their lab, where you'll gain a much more profound knowledge of the field.
This isn't just signaling. It's much more about acquiring the minimal competency to participate i... (read more)
2Dagon1dThat's the crux of most of the education debates. In reality, almost nothing is
just signaling - it's a mix of value and signaling, because that value is
actually what's being signaled. The problem is that it's hard to identify the
ratio of real and signaled value without investing a whole lot, and that leads
to competitive advantage (in some aspects) to those who can signal without the
expense of producing the real value.
2AllAmericanBreakfast1dAbsolutely. There are plenty, plenty of parasites out there. And I hope we can
improve the incentives. Thing is, it also takes smart people with integrity just
showing up and insisting on doing the right thing, treating the system with
savvy, yes, but also acting as if the system works the way it’s supposed to
work.
I’m going into a scientific career. I immediately saw the kind of lies and
exploitations that are going hand in hand with science. At the same time, there
are a lot of wonderful people earnestly doing the best research they can.
One thing I’ve seen. Honest people aren’t cynical enough, and they’re often
naive. I have met people who’ve thrown years away on crap PIs, or decades on
opaque projects with no foundation.
I know that for me, if I’m going to wade into it, I have to keep a vision of how
things are supposed to be, as well as the defects and parasitism.
There are probably lots of wealthy celebrities who’d like to lose their fame and resume a normal life. Imagine a service akin to witness protection that helped them disappear and start a new life.
I imagine this would lead to journalists and extortionists trying to track them down, so maybe it’s not tractable in the end.
Just a notepad/stub as I review writings on filtered evidence:
One possible solution to the problem of the motivated arguer is to incentivize in favor of all arguments being motivated. Eliezer covered this in "What Evidence Filtered Evidence?" So a rationalist response to the problem of filtered evidence might be to set up a similar structure and protect it against tampering.
What would a rationalist do if they suspected a motivated arguer was calling a decision to their attention and trying to persuade them of option A? It might be to become a motivated arg... (read more)
Aspects of learning that are important but I haven't formally synthesized yet:
Visual/spatial approaches to memorization
Calibrating reading speeds/looking up definitions/thinking up examples: filtering and organizing to distinguish medium, "future details," and the "learning edge"
Mental practice/review and stabilizing an inner monologue/thoughts
Organization and disambiguation of review questions/procedures
Establishing procedures and stabilizing them so you can know if they're working
When to carefully tailor your approach to a particular learning challenge,
Cognitive vs. behaviorist approaches to the study of learning
I. Cognitivist approaches
To study how people study on an internal, mental level, you could do a careful examination of what they report doing with their minds as they scan a sentence of a text that they're trying to learn from.
For example, what does your mind do if you read the following sentence, with the intent to understand and remember the information it contains?
"The cerebral cortex is the site where the highest level of neural processing takes place, including language, memory and cognitive... (read more)
Spaced repetition helps, but how do spaced-repetition researchers have their subjects practice within a single practice session? I'd expect optimized practice to involve not only spacing and number of repetitions, but also an optimal way of practicing within sessions.
So far, I've seen a couple formats:
Subjects get an explanation, examples, and a short, timed set of practice problems.
Subjects practice with flash cards. Each "round" of flash cards involves looking at only the cards they haven't seen or got wro
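The second format can be sketched as a loop; the card representation and the pass/fail callback here are my own assumptions, not taken from any particular study:

```python
import random

def flashcard_rounds(cards, recalls_correctly):
    """Drill until every card has been answered correctly once.

    Each round re-presents only the cards that were missed (or not
    yet seen) in earlier rounds; returns the number of rounds taken.
    """
    remaining = list(cards)
    rounds = 0
    while remaining:
        rounds += 1
        random.shuffle(remaining)
        remaining = [c for c in remaining if not recalls_correctly(c)]
    return rounds
```

Spacing research then varies the gap between whole sessions of a loop like this, on top of the within-session repetition count.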
Are democracies doomed to endless intense, intractable partisanship?
Model for Yes: In a democracy, there will be a set of issues. Each has a certain level of popular or special-interest support, as well as constitutionality.
Issues with the highest levels of popular support and constitutionality will get enacted first, if they weren't already in place before the democracy was founded.
Over time, issues with more marginal support and constitutionality will get enacted, until all that's left are the most marginal issues. The issues that remain live issues will... (read more)
1Gerald Monroe1moI think the current era is a novel phenomenon.
Consider that 234 years ago, long-dead individuals wrote into a founding
document the words "or abridging the freedom of speech, or of the press".
Emotionally this sounds good, but consider: in our real universe, information
is not always a net gain. It can be hostile propaganda, or a virus designed to
spread rapidly, causing harm to its hosts.
Yet in a case of 'the bug is a feature', until recently most individuals didn't
really have freedom of speech. They could say whatever they wanted, but had no
practical way for extreme ideas to reach large audiences. There was a finite
network of newspapers and TV news networks - less than about 10 per city and in
many cases far less than that.
Newspapers and television could be held liable for making certain classes of
false statements, and routinely did have to pay fines
[https://en.wikipedia.org/wiki/Richard_Jewell#Libel_cases]. Many of the current
QAnon conspiracy theories are straight libel, and if the authors and publishers
of the statements were not anonymous they would be facing civil lawsuits.
The practical reason to allow freedom of speech today is that current
technology has no working method to objectively decide whether a piece of
information is true, partially true, false, or hostile information intended to
cause harm. (We rely on easily biased humans to make such judgements, and this
is error-prone and subject to the bias of whoever pays the humans - see Russia
Today.)
I don't know what to do about this problem. Just that it's part of the reason
for the current extremism.
I've noticed that when I write posts or questions, much of the text functions as "planning" for what's to come. Often, I'm organizing my thoughts as I write, so that's natural.
But does that "planning" text help organize the post and make it easier to read? Or is it flab that I should cut?
I've noticed that there are two important failure modes in studying for my classes.
Too Fast: This is when learning breaks down because I'm trying to read, write, compute, or connect concepts too quickly.
Too Slow: This is when learning fails, or just proceeds too inefficiently, because I'm being too cautious, obsessing over words, trying to remember too many details, etc.
One hypothesis is that there's some speed of activity that's ideal for any given person, depending on the subject matter and their current level of comfort wi... (read more)
Different approaches to learning seem to be called for in fields with varying levels of paradigm consensus. The best approach to learning undergraduate math/CS/physics/chemistry seems different from the best one to take for learning biology, which again differs from the best approach to studying the economics/humanities*.
High-consensus disciplines have a natural sequential order, and the empirical data is very closely tied to an a priori predictive structure. You develop understanding by doing calculations and making theory-based arguments, along with empi... (read more)
What rationalists are trying to do is something like this:
Describe the paragon of virtue: a society of perfectly rational human beings.
Explain both why people fall short of that ideal, and how they can come closer to it.
Explore the tensions in that account, put that plan into practice on an individual and communal level, and hold a meta-conversation about the best ways to do that.
Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity... (read more)
You can justify all sorts of spiritual ideas by a few arguments:
They're instrumentally useful in producing good feelings between people.
They help you escape the typical mind fallacy.
They're memetically refined, which means they'll fit better with your intuition than, say, trying to guess where the people you know fit on the OCEAN scale.
They're provocative and generative of conversation in a way that scientific studies aren't. Partly that's because the language they're wrapped in is more intriguing, and partly it's because everybody's on a level playing field
2AllAmericanBreakfast6moI would be interested in arguments about why we should eschew them that don't
resort to activist ideas of making the world a "better place" by purging the
world of irrationality and getting everybody on board with a more scientific
framework for understanding social reality or psychology.
I'm more interested in why individual people should anticipate that exploring
these spiritual frameworks will make their lives worse, either hedonistically or
by some reasonable moral framework. Is there a deontological or utilitarian
argument against them?
4Dagon7moThere's plenty of research going on, but AFAIK, no particular large-scale push
for implementation. I haven't studied the topic, but my impression is that this
is mostly an area where they can get by with current sources and conservation
for a few decades yet. Desalination is expensive, not just in terms of money,
but in terms of energy - scaling it up before it's absolutely needed is a net
environmental harm.
2ChristianKl7moThis article
[https://www.mercurynews.com/2014/05/29/nations-largest-ocean-desalination-plant-goes-up-near-san-diego-future-of-the-california-coast/]
seems to cover the case. The economics seem unclear. The politics seem bad
because it means taking on the environmentalists.
My modified Pomodoro has been working for me. I set a timer for 5 minutes and start working. Every 5 minutes, I just reset the timer and continue.
For some reason it gets my brain into "racking up points" mode. How many 5-minute sessions can I do without stopping or getting distracted? Aware as I am of my distractability, this has been an unquestionably powerful technique for me to expand my attention span.
All actions have an exogenous component and an endogenous component. The weights we perceive differ from action to action, context to context.
The endogenous component has causes and consequences that come down to the laws of physics.
The exogenous component has causes and consequences from its social implications. The consequences, interpretation, and even the boundaries of where the action begins and ends are up for grabs.
Good reading habit #1: Turn absolute numbers into proportions and proportions into absolute numbers.
For example, in reading "With almost 1,000 genes discovered to be differentially expressed between low and high passage cells [in mouse insulinoma cells]," look up the number of mouse genes (25,000) and turn it into a percentage so that you can see that 1,000 genes is 4% of the mouse genome.
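The habit is trivial arithmetic, but writing it out makes the point; the gene count is approximate:

```python
# Turn the absolute number into a proportion of the genome.
differentially_expressed = 1_000
mouse_genes = 25_000          # rough protein-coding gene count

fraction = differentially_expressed / mouse_genes
print(f"{fraction:.0%} of the mouse genome")  # 4% of the mouse genome
```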
What is the difference between playing devil's advocate and steelmanning an argument? I'm interested in any and all attempts to draw a useful distinction, even if they're only partial.
Attempts:
Devil's advocate comes across as being deliberately disagreeable, while steelmanning comes across as being inclusive.
Devil's advocate involves advancing a clearly-defined argument. Steelmanning is about clarifying an idea that gets a negative reaction due to factors like word choice or some other superficial factor.
Devil's advocate is a political act and is only rele
Empathy is inexpensive and brings surprising benefits. It takes a little bit of practice and intent. Mainly, it involves stating the obvious assumption about the other person's experience and desires. Offer things you think they'd want and that you'd be willing to give. Let them agree or correct you. This creates a good context in which high-value trades can occur, without needing a conscious, overriding, selfish goal to guide you from the start.
2Matt Goldenberg8moFWIW, I like to be careful about my terms here.
Empathy is feeling what the other person is feeling.
Understanding is understanding what the other person is feeling.
Active Listening is stating your understanding and letting the other person
correct you.
Empathic listening is expressing how you feel what the other person is feeling.
In this case, you stated Empathy, but you're really talking about Active
Listening. I agree it's inexpensive and brings surprising benefits.
2Raemon8moI think whether it's inexpensive isn't that obvious. I think it's a skill/habit,
and it depends a lot on whether you've cultivated the habit, and on your mental
architecture.
2Matt Goldenberg8moActive listening at a low level is fairly mechanical, and can still accrue quite
a few benefits. It's not as dependent on mental architecture as something like
empathic listening. It does require some mindfulness to create the habit, but
for most people I'd put it on only a slightly higher level of difficulty to
acquire than e.g. brushing your teeth.
2Raemon8moFair, but I think gaining a new habit like brushing your teeth is actually
pretty expensive.
1AllAmericanBreakfast8moEmpathy isn't like brushing your teeth. It's more like berry picking. Evolution
built you to do it, you get better with practice, and it gives immediate
positive feedback. Nevertheless, due to a variety of factors, it is a sorely
neglected practice, even when the bushes are growing in the alley behind your
house.
1AllAmericanBreakfast8moI don't think what I'm calling empathy, either in common parlance or in actual
practice, decomposes neatly. For me, these terms comprise a model of intuition
that obscures with too much artificial light.
2Matt Goldenberg8moIn that case, I don't agree that the thing you're claiming has low costs. As
Raemon says in another comment this type of intuition only comes easily to
certain people. If you're trying to lump together the many skills I just pointed
to, some are easy for others and some harder.
If however, the thing you're talking about is the skill of checking in to see if
you understand another person, then I would refer to that as active listening.
1AllAmericanBreakfast8moOf course, you're right. This is more a reminder to myself and others who
experience empathy as inexpensive.
Though empathy is cheap, there is a small barrier, a trivial inconvenience, a
non-zero cost to activating it. I too often neglect it out of sheer laziness or
forgetfulness. It's so cheap and makes things so much better that I'd prefer to
remember and use it in all conversations, if possible.
Chris Voss thinks empathy is key to successful negotiation.
Is there a line between negotiating and not, or only varying degrees of explicitness?
Should we be openly negotiating more often?
How do you define success, when at least one of his own examples of a “successful negotiation” is entirely giving over to the other side?
I think the point is that the relationship comes first, greed second. Negotiation for Voss is exchange of empathy, seeking information, being aware of your leverage. Those factors are operating all the time - that’s the relationship.
Hot take: "sushi-grade" and "sashimi-grade" are marketing terms that mean nothing in terms of food safety. Freezing inactivates pretty much any parasites that might have been in the fish.
I'm going to leave these claims unsourced, because I think you should look it up and judge the credibility of the research for yourself.
2Matt Goldenberg2moIt's partially about taste isn't it? Sushi grade and sashimi grade will
theoretically smell less fishy
2AllAmericanBreakfast2moFishy smell in saltwater fish is caused by breakdown of TMAO to TMA. You can
rinse off TMA on the surface to reduce the smell. Fresher fish should also have
less smell.
So if people are saying “sushi grade” when what they mean is “fresh,” then why
not just say “fresh?” It’s a marketing term.
2Matt Goldenberg2moI always thought sushi grade was just the term for "really really fresh :)"
Math is training for the mind, but not like you think
Just a hypothesis:
People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:
But math doesn't obviously seem to be the only way to practice precision, decision, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.
As I've studied calculus, coming from a humanities background, I'd argue it differently.
Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.
It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity.
Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.
A Nonexistent Free Lunch
On an individual PredictIt market, sometimes you can find a set of "no" contracts whose price (1 share of each) adds up to less than the guaranteed gross take.
Toy example:
There's always a risk of black swans. PredictIt could get hacked. You might execute the trade improperly. Unexpected personal expenses might force you to sell your shares and exit the market prematurely.
But excluding black swans, I thought that as long as three conditions held, you could make free money on markets like these. The three conditions were:
In the toy example above, I calculated that you'd lose $0.10 x 10% = $0.01 to PredictIt's profit fee if you bought 1 of each "... (read more)
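Since the toy example itself is elided, here is a sketch of the worst-case arithmetic with made-up prices, assuming a 10% fee on profits and that exactly one contract in the market resolves Yes; it ignores withdrawal fees and the black swans mentioned above:

```python
def worst_case_net(no_prices, profit_fee=0.10):
    """Worst-case net return from buying one "no" share of every
    contract in a single market (hypothetical prices in dollars).

    A "no" share bought at price p pays $1 if that contract resolves
    No, minus the fee on its (1 - p) profit; it pays $0 on a Yes.
    """
    cost = sum(no_prices)
    worst = float("inf")
    for i, _ in enumerate(no_prices):
        # Suppose contract i is the one that resolves Yes.
        payout = sum(1 - profit_fee * (1 - p)
                     for j, p in enumerate(no_prices) if j != i)
        worst = min(worst, payout - cost)
    return worst

print(worst_case_net([0.30, 0.35, 0.25]))  # positive: free money, pre-black-swan
print(worst_case_net([0.95, 0.95]))        # negative: no free lunch here
```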
The Rationalist Move Club
Imagine that the Bay Area rationalist community did all want to move. But no individual was sure enough that others wanted to move to invest energy in making plans for a move. Nobody acts like they want to move, and the move never happens.
Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based off attendance at a few planning meetings.
Maybe one way to solve this is to ask for escalating credible commitments.
A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 in a short-term escrow. This makes them part of the Rationalist Move Club.
If the Move Club grows to a certain number of members within a defined period of time (say 20 members by March 2020), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.
By the end of the pre-defined planning period, there could be one of three majority... (read more)
What gives LessWrong staying power?
On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?
Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people's posts on this forum when we don't have a prior interest in the topic?
Rationality should be the art of general intelligence. It's what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint.
To determine whether or not we're actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.
This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren't they generally known and appreciated? Sh... (read more)
Thoughts on cheap criticism
It's OK for criticism to be imperfect. But the worst sort of criticism has all five of these flaws:
I am absolutely guilty of having delivered Category 5 criticism, the worst sort of cheap shots.
There is an important tradeoff here. If standards are too high for critical commentary, it can chill debate and leave an impression that either nobody cares, everybody's on board, or the argument's simply correct. Sometimes, an idea ca... (read more)
Does rationality serve to prevent political backsliding?
It seems as if politics moves far too fast for rational methods to keep up. If so, does that mean rationality is irrelevant to politics?
One function of rationality might be to prevent ethical/political backsliding. For example, let's say that during time A, institution X is considered moral. A political revolution ensues, and during time B, X is deemed a great evil and is banned.
A change of policy makes X permissible during time C, banned again during time D, and absolutely required for all upstanding folk during time E.
Rational deliberation about X seems to play little role in the political legitimacy of X.
However, rational deliberation about X continues in the background. Eventually, a truly convincing argument about the ethics of X emerges. Once it does, it is so compelling that it has a permanent anchoring effect on X.
Although at some times, society's policy on X contradicts the rational argument, the pull of X is such that it tends to make these periods of backsliding shorter and less frequent.
The natural process of developing the rational argument about X also leads to an accretion of arguments that are not only correct... (read more)
Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.
- Cognitive biases, whatever these are
- Not understanding statistics
- Akrasia
- Little skill in accessing and processing theory and data
- Not speaking science-ese
- Lack of interest or passion for rationality
- Not seeing rationality as a virtue, or even seeing it as a vice.
- A sense of futility, the idea that epistemic rationality is not very useful, while instrumental rationality is often repugnant
- A focus on associative thinking
- Resentment
- Not putting thought into action
- Lack of incentives for rational thought and action itself
- Mortality
- Shame
- Lack of time, energy, ability
- An accurate awareness that it's impossible to distinguish tribal affiliation and culture from a community
- Everyone is already rational, given their context
- Everyone thinks they're already rational, and that other people are dumb
- It's a good heuristic to assume that other people are dumb
- Rationality is disruptive, and even very "progressive" people have a conservative bias to stay the same, conform with their peers
... (read more)
Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?
One loved one was quite receptive to Chesterton’s Fence the other day. Like, it stopped their rant in the middle of its tracks and got them on board with a different way of looking at things immediately.
On the other hand, I routinely feel this weird tension. Like to explain why I think as I do, I‘d need to go through some basic rational concepts. But I expect most people I know would hate it.
I wish we could figure out ways of getting this stuff across that was fun, made it seem agreeable and sensible and non-threatening.
Less negativity - we do sooo much critique. I was originally attracted to LW partly as a place where I didn’t feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn’t associate with the CW before LessWrong.
My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process - a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.
Traditionally, things like this are socially achieved by using some form of "good cop, bad cop" strategy. You have someone who explains the concepts clearly and bluntly, regardless of whom it may offend (e.g. Eliezer Yudkowsky), and you have someone who presents the concepts nicely and inoffensively, reaching a wider audience (e.g. Scott Alexander), but ultimately they both use the same framework.
The inoffensiveness of Scott is of course relative, but I would say that people who get offended by him are really not the target audience for rationalist thought. Because, ultimately, saying "2+2=4" means offending people who believe that 2+2=5 and are really sensitive about it; so the only way to be non-offensive is to never say anything specific.
If a movement only has the "bad cops" and no "good cops", it will be perceived as a group of assholes. Which is not necessarily bad if the members are powerful; people want to join the winning side. But without actual power, it will not gain wide acceptance. Most people don't want to go into unnecessary conflicts.
On the other hand, a movement with "good cops" but without "bad cops" will... (read more)
I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.
As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence. Rather than explaining the difference in general, I think you can say "I think that it's more likely that we hear this many people complaining about an axe murderer downtown if that's in fact what's going on, and that it's appropriate for us to avoid that area today. I agree it's not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it"
If I'm right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the 'weird tension')
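The axe-murderer example can be made concrete in odds form. This is a toy sketch, and every probability in it is made up purely for illustration:

```python
# Toy Bayesian update for the axe-murderer example.
# All probabilities here are invented for illustration.

prior_odds = 0.001 / 0.999            # odds of an axe murderer downtown on a random day

# How likely are we to hear this many independent complaints under each hypothesis?
p_complaints_if_murderer = 0.9
p_complaints_if_not = 0.05            # rumors circulate even without a murderer

likelihood_ratio = p_complaints_if_murderer / p_complaints_if_not   # 18:1
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Posterior probability of an axe murderer: {posterior_prob:.1%}")  # → 1.8%
```

The anecdotes shift the odds substantially without ever meeting scientific standards of evidence; whether a ~2% chance justifies avoiding the area then depends on the stakes, not on the evidence alone.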
Markets are the worst form of economy except for all those other forms that have been tried from time to time.
I'm annoyed that I think so hard about small daily decisions.
Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?
Perhaps:
"Spend no more than 1 minute per $25 spent and 2% of the price to find a better product."
This heuristic cashes out to:
- Over a year of weekly $35 restaurant meals, spend about $35 and an hour and a half finding better restaurants or meal
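The heuristic can be turned into a quick calculation. A minimal sketch, using the weekly-$35-meals numbers from the example above (the function name is mine):

```python
def optimization_budget(total_spend_dollars):
    """Time and money worth spending to find a better product, per the
    heuristic: 1 minute per $25 spent, and 2% of the price."""
    minutes = total_spend_dollars / 25
    dollars = 0.02 * total_spend_dollars
    return minutes, dollars

# A year of weekly $35 restaurant meals:
annual_spend = 52 * 35                      # $1,820
minutes, dollars = optimization_budget(annual_spend)
print(f"Spend up to {minutes:.0f} minutes and ${dollars:.2f} finding better restaurants")
# → spend up to 73 minutes and $36.40
```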
... (read more)

The structure of knowledge is an undirected cyclic graph between concepts. To make it easier to present to the novice, experts convert that graph into a tree structure by removing some edges. Then they convert that tree into natural language. This is called a textbook.
Scholarship is the act of converting the textbook language back into nodes and edges of a tree, and then filling in the missing edges to convert it into the original graph.
The mind cannot hold the entire graph in working memory at once. It's as important to practice navigating between concept... (read more)
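One way to make this model concrete: the textbook's tree is a spanning tree of the concept graph. A toy sketch (the example concepts and the BFS choice are mine, just for illustration):

```python
from collections import deque

# Toy concept graph (undirected, cyclic): edges connect related concepts.
graph = {
    "limits": {"derivatives", "continuity"},
    "continuity": {"limits", "derivatives"},
    "derivatives": {"limits", "continuity", "integrals"},
    "integrals": {"derivatives"},
}

def textbook_tree(graph, root):
    """Convert the concept graph into a BFS spanning tree: the linearized
    structure a textbook presents, with some cross-edges dropped."""
    tree, seen, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        tree[node] = []
        for neighbor in sorted(graph[node]):
            if neighbor not in seen:
                seen.add(neighbor)
                tree[node].append(neighbor)
                queue.append(neighbor)
    return tree

tree = textbook_tree(graph, "limits")
print(tree)
```

Here the cross-edge between "continuity" and "derivatives" gets dropped in the tree; scholarship, in this model, is rediscovering exactly those dropped edges.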
I want to put forth a concept of "topic literacy."
Topic literacy roughly means that you have both the concepts and the individual facts memorized for a certain subject at a certain skill level. That subject can be small or large. The threshold is that you don't have to refer to a reference text to accurately answer within-subject questions at the skill level specified.
This matters, because when studying a topic, you always have to decide whether you've learned it well enough to progress to new subject matter. This offers a clean "yes/no" answer to that ess... (read more)
We do things so that we can talk about it later.
I was having a bad day today. Unlikely to have time this weekend for something I'd wanted to do. Crappy teaching in a class I'm taking. Ever increasing and complicating responsibilities piling up.
So what did I do? I went out and bought half a cherry pie.
Will that cherry pie make me happy? No. I knew this in advance. Consciously and unconsciously: I had the thought, and no emotion compelled me to do it.
In fact, it seemed like the least-efficacious action: spending some of my limited money, to buy a pie I don't... (read more)
A lot of my akrasia is solved by just "monkey see, monkey do." Physically put what I should be doing in front of my eyeballs, and pretty quickly I'll do it. Similarly, any visible distractions, or portals to distraction, will also suck me in.
But there also seems to be a component that's more like burnout. "Monkey see, monkey don't WANNA."
On one level, the cure is to just do something else and let some time pass. But that's not explicit enough for my taste. For one thing, something is happening that recovers my motivation. For another, "letting time pass" i... (read more)
Hard numbers
I'm managing a project to install signage for a college campus's botanical collection.
Our contractor, who installed the sign posts in the ground, did a poor job. A lot of them pulled right out of the ground.
Nobody could agree on how many posts were installed: the groundskeeper, contractor, and two core team members, each had their own numbers from "rough counts" and "lists" and "estimates" and "what they'd heard."
The best decision I've made on this project was to do a precise inventory of exactly which sign posts are installed correctly, comple... (read more)
Paying your dues
I'm in school at the undergraduate level, taking 3 difficult classes while working part-time.
For this path to be useful at all, I have to be able to tick the boxes: get good grades, get admitted to grad school, etc. For now, my strategy is to optimize to complete these tasks as efficiently as possible (what Zvi calls "playing on easy mode"), in order to preserve as much time and energy for what I really want: living and learning.
Are there dangers in getting really good at paying your dues?
1) Maybe it distracts you/diminishes the incen... (read more)
I've been thinking about honesty over the last 10 years. It can play into at least three dynamics.
One is authority and resistance. The revelation or extraction of information, and the norms, rules, laws, and incentives surrounding this, including moral concepts, are for the primary purpose of shaping the power dynamic.
The second is practical communication. Honesty is the idea that specific people have a "right to know" certain pieces of information from you, and that you meet this obligation. There is wide latitude for "white lies," exaggeration, storytell... (read more)
Better rationality should lead you to think less, not more. It should make you better able to
while still having good outcomes. What's your rationality doing to you?
How should we weight and relate the training of our mind, body, emotions, and skills?
I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species.
We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?
I'm experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.
To do that, I want to arrive at an acronym that's memorable, defines an action plan and implies when you'd want to use it. Examples:
OSSEE Activity - "One Short Simple Easy-to-Exit Activity." A way to plan dates and hangouts that aren't exhausting or recipes for confusion.
DAHLIA - "Discuss, Assess, Help/Ask, Leave, Intervene, Accept." An action plan for how to de... (read more)
A celebrity is someone famous for being famous.
Is a rationalist someone famous for being rational? Someone who’s leveraged their reputation to gain privileged access to opportunity, other people’s money, credit, credence, prestige?
Are there any arenas of life where reputation-building is not a heavy determinant of success?
Idea for online dating platform:
Each person chooses a charity and an amount of money that others must donate to swipe right on them. This leads to higher-fidelity match information while also giving you a meaningful topic to kick off the conversation.
Goodhart's Epistemology
If a gears-level understanding becomes the metric of expertise, what will people do?
Use the concept of gears-level understanding to debug your own knowledge. Learn for your own sake, and allow your learning to naturally attract the credibility
... (read more)

Status and Being a "Rationalist"
The reticence many LWers feel about the term "rationalist" stems from a paradox: it feels like a status-grab and low-status at the same time.
It's a status grab because LW can feel like an exclusive club. Plenty of people say they feel like they can hardly understand the writings here, and that they'd feel intimidated to comment, let alone post. Since I think most of us who participate in this community wish that everybody would be more into being rational and that it wasn't an exclusive club, this feels unfortunate.
It's low ... (read more)
I use LessWrong as a place not just to post rambly thoughts and finished essays, but something in between.
The in between parts are draft essays that I want feedback on, and want to get out while the ideas are still hot. Partly it's so that I can have a record of my thoughts that I can build off of and update in the future. Partly it's that the act of getting my words together in a way I can communicate to others is an important part of shaping my own views.
I wish there was a way to tag frontpage posts with something like "Draft - seeking feedback" vs. "Fin... (read more)
Yeah, I've been thinking about this for a while. Like, maybe we just want to have a "Draft - seeking feedback" tag, or something. Not sure.
Eliezer's post on motivated stopping contains this line:
"Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later."
That last part is often not true, though: for example, the decision of whether or not to have kids can't be changed later. So how to make these sorts of decisions?
I don't have a good answer for this. I sort of think that there are certain superhuman forces or drives that "win out." The drive ... (read more)
Reading and re-reading
The first time you read a textbook on a new subject, you're taking in new knowledge. Re-read the same passage a day later, a week later, or a year later, and it will qualitatively feel different.
You'll recognize the sentences. In some parts, you'll skim, because you know it already. Or because it looks familiar -- are you sure which?
And in that skimming mode, you might zoom into and through a patch that you didn't know so well.
When you're reading a textbook for the first time, in short, there are more inherent safeguards to keep you f... (read more)
I just started using GreaterWrong.com in anti-kibitzer mode. Highly recommended. I notice that, unfortunately, I've glommed on to karma and status more than is comfortable. It's a big relief to open the front page and just see... ideas!
There's a pretty simple reason why the stock market didn't tank long-term due to COVID. Even if we get 3 million total deaths due to the pandemic, that's "only" around a 5% increase in total deaths over the year where deaths are at their peak. 80% of those deaths are among people of retirement age. Though their spending is around 34% of all spending, the money of those who die from COVID will flow to others who will also spend it.
My explanation for the original stock market crash back in Feb/March is that investors were nervous that we'd impose truly strict lockdown measures, or perhaps that the pandemic would more seriously harm working-age people than it does. That would have had a major effect on the economy.
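The 5% figure above can be checked with rough numbers. The ~60 million baseline is my assumption for worldwide annual deaths, not a figure from the original post:

```python
# Rough sanity check of the "5% increase in total deaths" claim.
# Assumption (mine): roughly 60 million deaths occur worldwide in a normal year.
baseline_annual_deaths = 60e6
pandemic_deaths = 3e6

increase = pandemic_deaths / baseline_annual_deaths
print(f"Pandemic deaths as a share of a normal year's deaths: {increase:.0%}")  # → 5%
```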
Striving
At any given time, many doors stand wide open before you. They are slowly closing, but you have plenty of time to walk through them. The paths are winding.
Striving is when you recognize that there are also many shortcuts. Their paths are straighter, but the doors leading to them are almost shut. You have to run to duck through.
And if you do that, you'll see that through the almost-shut doors, there are yet straighter roads even further ahead, but you can only make it through if you make a mad dash. There's no guarantee.
To run is exhilarating at fir... (read more)
The direction I'd like to see LW moving in as a community
Criticism has a perverse characteristic:
Ideas that survive into adulthood will therefore tend to be championed by thinkers who are less receptive to criticism.
Maybe we need some sort of "baby criticism" for new ideas. A "devel... (read more)
Cost/benefit anxiety is not fear of the unknown
When I consider doing a difficult/time-consuming/expensive but potentially rewarding activity, it often provokes anxiety. Examples include running ten miles, doing an extensive blog post series on regenerative medicine, and going to grad school. Let's call this cost/benefit anxiety.
Other times, the immediate actions I'm considering are equally "costly," but one provokes more fear than the others even though it is not obviously stupid. One example is whether or not to start blogging under my real name. Call it ... (read more)
A machine learning algorithm is advertising courses in machine learning to me. Maybe the AI is already out of the box.
An end run around slow government
The US recommended dietary allowance (RDA) of vitamin D is about 600 IU per day. This was established in 2011 and hasn't been updated since. The Food and Nutrition Board of the Institute of Medicine at the National Academy of Sciences sets US RDAs.
According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IUs/day, and the erroneously low level is due to a statistical mistake. I haven't been able to find out yet whether there is any transparency about when the RDA will be reconsidered.
But 3... (read more)
Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure:
Meaningful experiences involve great pleasure. They may also come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.
Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.
Sci-hub has moved to https://sci-hub.st/
Do you treat “the dark arts” as a set of generally forbidden behaviors, or as problematic only in specific contexts?
As a war of good and evil or as the result of trade-offs between epistemic rationality and other values?
Do you shun deception and manipulation, seek to identify contexts where they’re ok or wrong, or embrace them as a key to succeeding in life?
Do you find the dark arts dull, interesting, or key to understanding the world, regardless of whether or not you employ them?
Asymmetric weapons may be the only source of edge for the truth itself. But s... (read more)
How to reach simplicity?
You can start with complexity, then simplify. But that's style.
What would it mean to think simple?
I don't know. But maybe...
- Accept accepted wisdom.
- Limit your words.
- Rehearse your core truths, think new thoughts less.
- Start with inner knowledge. Intuition. Genius. Vision. Only then, check yourself.
- Argue if you need to, but don't ever debate. Other people can think through any problem you can. Don't let them stand in your way just because they haven't yet.
- If you know, let others find their own proofs. Move on with the plan.
- Be slow. Rest
... (read more)

Question re: "Why Most Published Research Findings Are False":
What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?
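For what it's worth, the two quantities are directly related: if R is the ratio of true relationships to no relationships among those tested, then the pre-study probability that a tested relationship is true is R/(R+1), since R true relationships occur for every R+1 tested. A quick sketch (the function name is mine):

```python
def pre_study_probability(R):
    """Convert Ioannidis's R (ratio of true relationships to no-relationships
    among those tested in a field) into the pre-study probability that any
    one tested relationship is true: R / (R + 1)."""
    return R / (R + 1)

print(pre_study_probability(1))     # R = 1:1 → 0.5
print(pre_study_probability(0.25))  # R = 1:4 → 0.2
```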
There are lots of reasons to measure a person's ability level in some skill. One such reason is to test your understanding in the early stages of learning a new set of concepts.
You want a system that's:
Flashcards/reciting concepts from notes is a nice example. It's fast and intuitive, tells you what concepts you're still struggling with. Knowing that, you can look over the materia... (read more)
How much of rationality is specialized?
Cultural transmission of knowledge is the secret of our success.
Children comprise a culture. They transmit knowledge of how to insult and play games, complain and get attention. They transmit knowledge on how to survive and thrive with a child's priorities, in a child's body, in a culture that tries to guarantee that the material needs of children are taken care of.
General national cultures teach people very broad, basic skills. Literacy, the ability to read and discuss the newspaper. How to purchase consumer goods. H... (read more)
What is the #1 change that LW has instilled in me?
Participating in LW has instilled the virtue of goal orientation. All other virtues, including epistemic rationality, flow from that.
Learning how to set goals, investigate them, take action to achieve them, pivot when necessary, and alter your original goals in light of new evidence is a dynamic practice, one that I expect to retain for a long time.
Many memes circulate around this broad theme. But only here have I been able to develop an explicit, robust, ever-expanding framework for making and thinking abo... (read more)
Learning feedback loops
Putting a serious effort into learning Italian in the classroom can make it possible to immerse yourself in the language when you visit Italy. Studying hard for an engineering interview lets you get a job where you'll be able to practice a set of related skills all the time. Reading a scientist's research papers makes you seem like an attractive candidate to work in their lab, where you'll gain a much more profound knowledge of the field.
This isn't just signaling. It's much more about acquiring the minimal competency to participate i... (read more)
Business idea: Celebrity witness protection.
There are probably lots of wealthy celebrities who’d like to lose their fame and resume a normal life. Imagine a service akin to witness protection that helped them disappear and start a new life.
I imagine this would lead to journalists and extortionists trying to track them down, so maybe it’s not tractable in the end.
Just a notepad/stub as I review writings on filtered evidence:
One possible solution to the problem of the motivated arguer is to incentivize in favor of all arguments being motivated. Eliezer covered this in "What Evidence Filtered Evidence?" So a rationalist response to the problem of filtered evidence might be to set up a similar structure and protect it against tampering.
What would a rationalist do if they suspected a motivated arguer was calling a decision to their attention and trying to persuade them of option A? It might be to become a motivated arg... (read more)
Aspects of learning that are important but I haven't formally synthesized yet:
- Visual/spatial approaches to memorization
- Calibrating reading speeds/looking up definitions/thinking up examples: filtering and organizing to distinguish medium, "future details," and the "learning edge"
- Mental practice/review and stabilizing an inner monologue/thoughts
- Organization and disambiguation of review questions/procedures
- Establishing procedures and stabilizing them so you can know if they're working
- When to carefully tailor your approach to a particular learning challenge,
... (read more)

Cognitive vs. behaviorist approaches to the study of learning
I. Cognitivist approaches
To study how people study on an internal, mental level, you could do a careful examination of what they report doing with their minds as they scan a sentence of a text that they're trying to learn from.
For example, what does your mind do if you read the following sentence, with the intent to understand and remember the information it contains?
"The cerebral cortex is the site where the highest level of neural processing takes place, including language, memory and cognitive... (read more)
Practice sessions in spaced-repetition literature
Spaced repetition helps, but how do spaced-repetition researchers have their subjects practice within a single practice session? I'd expect optimized practice to involve not only spacing and number of repetitions, but also an optimal way of practicing within sessions.
So far, I've seen a couple formats:
- Subjects get an explanation, examples, and a short, timed set of practice problems.
- Subjects practice with flash cards. Each "round" of flash cards involves looking at only the cards they haven't seen or got wrong
... (read more)

Are democracies doomed to endless, intense, intractable partisanship?
Model for Yes: In a democracy, there will be a set of issues. Each has a certain level of popular or special-interest support, as well as constitutionality.
Issues with the highest levels of popular support and constitutionality will get enacted first, if they weren't already in place before the democracy was founded.
Over time, issues with more marginal support and constitutionality will get enacted, until all that's left are the most marginal issues. The issues that remain live issues will... (read more)
I've noticed that when I write posts or questions, much of the text functions as "planning" for what's to come. Often, I'm organizing my thoughts as I write, so that's natural.
But does that "planning" text help organize the post and make it easier to read? Or is it flab that I should cut?
Thinking, Too Fast and Too Slow
I've noticed that there are two important failure modes in studying for my classes.
Too Fast: This is when learning breaks down because I'm trying to read, write, compute, or connect concepts too quickly.
Too Slow: This is when learning fails, or just proceeds too inefficiently, because I'm being too cautious, obsessing over words, trying to remember too many details, etc.
One hypothesis is that there's some speed of activity that's ideal for any given person, depending on the subject matter and their current level of comfort wi... (read more)
Different approaches to learning seem to be called for in fields with varying levels of paradigm consensus. The best approach to learning undergraduate math/CS/physics/chemistry seems different from the best one to take for learning biology, which again differs from the best approach to studying the economics/humanities*.
High-consensus disciplines have a natural sequential order, and the empirical data is very closely tied to an a priori predictive structure. You develop understanding by doing calculations and making theory-based arguments, along with empi... (read more)
What rationalists are trying to do is something like this:
This looks exactly like virtue ethics.
Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity... (read more)
You can justify all sorts of spiritual ideas by a few arguments:
- They're instrumentally useful in producing good feelings between people.
- They help you escape the typical mind fallacy.
- They're memetically refined, which means they'll fit better with your intuition than, say, trying to guess where the people you know fit on the OCEAN scale.
- They're provocative and generative of conversation in a way that scientific studies aren't. Partly that's because the language they're wrapped in is more intriguing, and partly it's because everybody's on a level playing field
... (read more)

A checklist for the strength of ideas:
Think "D-SHARP"
Worthwhile research should help the idea move either forward or backward through this sequence.
Why isn’t California investing heavily in desalination? Has anybody thought through the economics? Is this a live idea?
My modified Pomodoro has been working for me. I set a timer for 5 minutes and start working. Every 5 minutes, I just reset the timer and continue.
For some reason it gets my brain into "racking up points" mode. How many 5-minute sessions can I do without stopping or getting distracted? Aware as I am of my distractability, this has been an unquestionably powerful technique for me to expand my attention span.
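A minimal sketch of the repeating 5-minute timer. The interval and callback parameters are mine, added so the loop can be exercised quickly outside a real work session:

```python
import time

def run_sessions(n_sessions, minutes=5, on_tick=print):
    """Run n_sessions back-to-back focus intervals, announcing each one.
    Returns the number of completed sessions (the "points racked up")."""
    completed = 0
    for i in range(1, n_sessions + 1):
        on_tick(f"Session {i}: work for {minutes} minutes")
        time.sleep(minutes * 60)
        completed += 1
    return completed
```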
All actions have an exogenous component and an endogenous component. The weights we perceive differ from action to action, context to context.
The endogenous component has causes and consequences that come down to the laws of physics.
The exogenous component has causes and consequences from its social implications. The consequences, interpretation, and even the boundaries of where the action begins and ends are up for grabs.
Failure modes in important relationships
Practice this:
- Focusing to identify your own elusive feelings
- Empathy to id
... (read more)

Good reading habit #1: Turn absolute numbers into proportions and proportions into absolute numbers.
For example, in reading "With almost 1,000 genes discovered to be differentially expressed between low and high passage cells [in mouse insulinoma cells]," look up the number of mouse genes (25,000) and turn it into a percentage so that you can see that 1,000 genes is 4% of the mouse genome.
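The habit as a two-line sketch, using the gene counts from the example (the rough 25,000-gene figure is the one quoted above):

```python
def as_proportion(count, total):
    """Turn an absolute number into a percentage of its reference class."""
    return 100 * count / total

def as_count(percent, total):
    """Turn a percentage back into an absolute number."""
    return percent / 100 * total

mouse_genes = 25_000
diff_expressed = 1_000

print(f"{as_proportion(diff_expressed, mouse_genes):.0f}% of the mouse genome")  # → 4%
print(f"{as_count(4, mouse_genes):.0f} genes")                                   # → 1000
```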
What is the difference between playing devil's advocate and steelmanning an argument? I'm interested in any and all attempts to draw a useful distinction, even if they're only partial.
Attempts:
- Devil's advocate comes across as being deliberately disagreeable, while steelmanning comes across as being inclusive.
- Devil's advocate involves advancing a clearly-defined argument. Steelmanning is about clarifying an idea that gets a negative reaction due to factors like word choice or some other superficial factor.
- Devil's advocate is a political act and is only rele
... (read more)

Empathy is inexpensive and brings surprising benefits. It takes a little bit of practice and intent. Mainly, it involves stating the obvious assumption about the other person's experience and desires. Offer things you think they'd want and that you'd be willing to give, and let them agree or correct you. This creates a good context in which high-value trades can occur, without needing a conscious, overriding, selfish goal to guide you from the start.
Chris Voss thinks empathy is key to successful negotiation.
Is there a line between negotiating and not, or only varying degrees of explicitness?
Should we be openly negotiating more often?
How do you define success, when at least one of his own examples of a “successful negotiation” is entirely giving over to the other side?
I think the point is that the relationship comes first, greed second. Negotiation for Voss is exchange of empathy, seeking information, being aware of your leverage. Those factors are operating all the time - that’s the relationship.
The d
... (read more)

Hot take: "sushi-grade" and "sashimi-grade" are marketing terms that mean nothing in terms of food safety. Freezing inactivates pretty much any parasites that might have been in the fish.
I'm going to leave these claims unsourced, because I think you should look it up and judge the credibility of the research for yourself.