All of betulaster's Comments + Replies

Congratulations on making your first Less Wrong post about a relatively current political topic and not being downvoted! You may be the first person in history who achieved this!

Hah, thanks. At the risk of stroking my ego one too many times - can I ask you to speculate on why that might be the case?
What I mean is - I'm sure what I wrote has some merit (I would've kept it to myself otherwise), but I expected this post to do similarly to how other comparable posts do on LW (first-time post, political topic, not a lot of hard analysis and abstractio...

[This comment is no longer endorsed by its author]
What? You expected to be downvoted and you had the audacity to post anyway? From my perspective, it was the length of the text, the disclaimer at the beginning, the fact that (at least as it seemed to me) you didn't obviously attack anyone, and the few interesting ideas in the article. Plus you had the luck that no one else wrote a horrible comment "inspired by" your article, which could have also reflected badly on you. No idea how to do that on Twitter (the amount of data there is insane); it just reminded me of a "cancel watch" for higher education.

There is a Russian saying (it's frequently ascribed to Saltykov-Shchedrin, and sounds to me like something he'd write, but I'm not able to properly source this) to precisely that effect... but not in the way you imagine. Roughly translated, it goes like this: "In Russia, the severity of the laws is compensated by the non-necessity of obeying them."

(I don't have too much time for this, so apologies for the shoddy sourcing in the answer below. Please let me know if you'd like more proper sources; I'll be sure to come back to this later.)

My personal model of Russi...

One thing that seems important to note: nuclear warfare need not occur in a vacuum. If countries possessing nuclear weapons are trading all-out strikes, as in your model, they probably are in a state of (World?) war already, and either have fought with other weapons prior to the nuclear exchange, or plan to continue to do so after it. This may include use of non-nuclear weapons with high collateral damage, like chemical or biological agents, or saturation bombardment targeting high-population areas. I wonder if that skews the assessment of damage in any meaningful way.

9 points · Jeffrey Ladish · 2y
Yeah, the point that risks from nuclear war would be coupled with risks from great power conflict is a good one. I expect this to be more of a problem in the future, but there could be some risks at present from secret bioweapon systems or other kinds of WMDs. My mainline expectation is that in a nuclear war scenario, chemical, biological, and conventional weapon effects would be dwarfed by the effects of nuclear weapons. This is based on my understanding of the major powers' deterrence strategies, but it might be wrong if there are secret weapons I'm not aware of. The logic of deterrence makes this a little less likely, fortunately: the whole point of a deterrent is lost if you keep it a secret. Of course, it's possible that it's kept secret from the public but not from other countries, but this seems harder to pull off, especially since it relies on one's potential enemies keeping the secret.

This is not really erisology in any way, but I think specific topics of discussion/interactions with specific people may very well become Ugh Fielded if you have an initially bad experience.

Since you address "how likely meeting a certain politically charged event would be", I assume your question is focussed on what I've called "Polling 2", which concerns itself with predicting future events. 

Yes, you're right, and I should have been more clear - thanks for pointing that out.

The best way to put the matter into quantitative terms may be to ask the interviewee what odds he would give in a bet on the event occurring

I don't know if I'm convinced that would work. I think that most people fall into two camps regarding betting odds. Camp A is no...

I don't have good data to back this up, but I have a feeling that people are thinking in more binary terms than you expect. More specifically, I conjecture that if you were to ask someone how likely a certain politically charged event would be, they would parse your question as a binary one and answer either "almost certainly" or "very unlikely" - and, when pressed for a number, would give you either 90-100% or 0-10% respectively.

Thanks, betulaster. Since you address "how likely meeting a certain politically charged event would be", I assume your question is focussed on what I've called "Polling 2", which concerns itself with predicting future events. These tend to be less politically charged than "Polling 1", but I agree you are right in pointing out the need to relativise respondents' answers. People who identify strongly with a cause, especially if they are not used to dealing with probability, might confuse a question about an event's likelihood with the strength of their allegiance. Thus "How likely do you think the Dodgers are to win the World Series?" might be met with "I'd bet my life on it", which is not very helpful for computing statistics :-)

The best way to put the matter into quantitative terms may be to ask the interviewee what odds he would give in a bet on the event occurring. It may seem redundant, but I would also ask the odds they'd give on a non-occurrence. (People's grasp on probability is shaky, so overdetermining their perception helps to reduce error.)

You will notice that for Polling 1 type questions I avoided the natural step of asking people to say how much money it would take to get them to change their mind. For one thing, it would be tasteless to appear to be offering money to get someone to change a vote (for instance). Another reason is that people's perceptions of money vary widely, injecting a confounding variable. The rather convoluted question I came up with to assess an interviewee's resistance to a change of intent has the disadvantage of generating a discrete (non-continuous) answer, and I worry it might also confuse some interviewees, but at least it makes the quantification in terms of a comparable quantity.
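The odds-elicitation idea above can be sketched in a few lines: stated odds convert directly to an implied probability, and asking about both occurrence and non-occurrence lets you measure how incoherent the two answers are. A minimal sketch (the respondent's numbers here are made up for illustration):

```python
def odds_to_prob(stake, payoff):
    """Convert odds stated in favor of an event (stake:payoff,
    e.g. 4:1) into the implied probability of the event."""
    return stake / (stake + payoff)

def coherence_gap(p_occur, p_not_occur):
    """A coherent respondent's two answers should sum to 1;
    the gap measures how far off they are."""
    return abs(1.0 - (p_occur + p_not_occur))

# A hypothetical respondent offers 4:1 on the event occurring...
p_yes = odds_to_prob(4, 1)        # implies 0.8
# ...but 1:2 on it NOT occurring.
p_no = odds_to_prob(1, 2)         # implies ~0.33
gap = coherence_gap(p_yes, p_no)  # ~0.13: the two answers disagree
```

Averaging the two implied probabilities (here, p_yes and 1 - p_no) is one simple way to "overdetermine" the respondent's perception and reduce error, as described above.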

I don't have a full answer, but here's what seems important to consider - in my experience, the baseline for the level of confidence in speech that is associated with competence and authority is a lot lower in intellectual circles like LessWrong, compared to the general public. 

This is because exposure to rationality and science usually impresses on someone that making mistakes is "fine" and an unavoidable component of learning, and that while science has made very impressive progress, there is still a lot to learn and understand about the world. On ...

2 points · Adam Zerner · 2y
Agreed. That makes me think back to the following story. It was my first job as a programmer. The PM would ask me questions about whether X was doable for me, or how long it would take me to do Y. I would give my honest response, which often was that I couldn't do it without help, or that it would take me a while. Then one day the tech lead sat me down and said that it was important for me to project confidence. It frustrated me a lot, because in theory the best thing for the company would be for me to provide accurate information and then try to make the best decision based on that accurate information. Now I realize that the tech lead probably wasn't thinking about that and was probably just trying to look out for me, knowing that my lack of confidence would end up hurting me.

How do people read LessWrong? I subscribe to the RSS feed of the front page, but that tends to be suboptimal, as some posts aren't that well-aligned with my interests or are questions/discussion starters as opposed to being mid/longform reads that I'd mostly want to read LW for.

I was subscribed to the RSS feed of all posts, but that was a bit overwhelming; so I'm now subscribed to the feed of "posts with 30 or more karma" that they provide, but I'm finding that LW is much less interesting or useful to me - which I could have predicted, given that I'd already noticed that the karma of a post often turned out to be a bad indicator of how useful/interesting it would be to me. The posts I found to be the best were most often in the 5-15 karma range - but alas, there's no feed for that. Now I'm thinking about just unsubscribing from the RSS feed and setting LW as my homepage.
I bookmarked []
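The missing "5-15 karma" feed could in principle be approximated client-side, assuming you can obtain each post's karma somehow (the standard feed doesn't expose it, so the annotated entries below are purely hypothetical). A minimal sketch:

```python
def filter_by_karma(entries, lo=5, hi=15):
    """Keep only entries whose karma falls in the [lo, hi] range."""
    return [e for e in entries if lo <= e["karma"] <= hi]

# Made-up entries standing in for feed items annotated with karma.
posts = [
    {"title": "Big announcement", "karma": 42},
    {"title": "Quiet gem", "karma": 9},
    {"title": "Open question", "karma": 2},
]
mid_karma = filter_by_karma(posts)  # only "Quiet gem" survives
```

A small script like this, run over the all-posts feed with a karma lookup bolted on, would give the mid-karma digest that the site itself doesn't offer.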

Illustration from Michael Haddad for Wired. It was originally commissioned for an article about biohackers, but I find that it captures a spirit of agency and self-improvement that is well-aligned with some rationalist values.

2 points · Ben Pace · 2y
I really like this one, thanks.

I may attempt a more comprehensive analysis to suggest some tests later (although I'm not sure that would be very successful - my rationality skills feel nascent at best), but from a superficial read, it seems to me that points A13 and B10 are essentially the same - both deal with stupidity becoming more widespread as a matter of consumerist/capitalist politics and market forces. That could be opposed by noticing that B10 deals with actually smart individuals who pretend to be stupid to reap the benefits, and A13 deals with actually stupid i...

Of course, you have to modulate that by the possibility that allowing people to live off their UBI or blow it on frivolous spending will cancel out those good effects. That question is beyond my pay grade, and I suspect nobody really knows.

This makes me think of something. Can't we look at what people who experienced windfall gains spent their newfound money on? Lottery winners seem like an easy enough sample to obtain, although not an unproblematic one - it takes a certain kind of person to participate in a lottery in the first place. But if we can ...

Another one I heard recently is re-enlistment bonuses in the US military. Soldiers can get up to around $100,000 for signing up for another tour of duty. They're apparently notorious for blowing it on stupid shit almost immediately. But maybe that's just the vivid stories. I'd like to see empirical data before I made up my mind.

This is interesting. I wonder how this would apply to the literature on investing mistakes (esp. amateur investors like people who get burned on Robinhood). 

If your idea is correct, IMO people then must be thinking about investing money using priors, concepts and feelings from their own spending experience. Maybe some patterns of systemic bias/mistakes associated with trying to use the "feeling for what N dollars buys and feels like spending" can be teased out.

Eventually - sure. But for that eventuality to take place, the "electrical shock tyranny" would have to be more resilient than any political faction we've known of and persist for thousands of years. I doubt that this would be possible.

Sorry if I wasn't clear enough. My critique refers to your point that scenarios where humans evolve to like a dystopia are not applicable, because if they were, suffering should be a rare occurrence - if I understand you correctly, you're stating that if we could evolve to like dystopias, by this point in time we would have evolved to either avoid or like any source of suffering. My counterpoint is that there is a massive subset of sources of suffering that do not affect evolution in any way, because they are too transient to exert any serious selection pressure.

I'm still confused about your critique, so let me ask you directly: In the scenario outlined by the OP, do you expect humans to eventually evolve to stop feeling pain from electrical shocks?
You could perhaps engineer scenarios where humans will genuinely evolve to like a dystopia

I think that this kind of misrepresents the scale on which evolution happens - it's not one generation, or two, it's hundreds and thousands, and it's taken relatively good care of the sources of suffering that are fundamental enough to persist and keep the selection pressure on across that time frame - we're pretty good at not eating things that are toxic, breeding, avoiding predators and so on. The problem with evolution is that a significant n...

Yes, that's what pessimistic errors are about. I'm not sure what exactly you're critiquing though?

This post reminds me of an insight from one of my uni professors.

Early on at university, I was very frustrated that the skills being taught to us did not seem immediately applicable to the real world. That frustration was strong enough to snuff out most of the interest I had in studying genuinely (that is, in truly understanding and internalizing the concepts taught to us). Still, studying was expensive, dropping out was not an option, and I had to pass exams, which is why very early on I started, in what seemed to me to be a classic instance o...

I understand that this may be well outside the scope of your writing, but still - any chance you could actually post some epistemic defense decks for Anki? Or are there any good ones already available?

(Apologies if the question is stupid, I'm somewhat new to LW)

It's a good idea. I'm not familiar with any existing decks, but a search on AnkiWeb shows a few LW-influenced decks. One of them looks like what I would be inclined to design as an "Epistemic Defenses Deck", although I may (or may not) approach it differently. If I do anything along those lines, I'll let you know.

Disclaimer: this comment includes a lot of speculation on philosophy and art movements that I myself don't have an in-depth understanding of. Please take this with a grain of salt. If anyone reading this understands the matter better and sees me saying BS, please correct me.

I think that one thing that can be helpful to examine is postmodernism. As Jean-François Lyotard originally described it in the late 70s, it is "incredulity towards metanarratives". For Lyotard this meant rejecting the idea that the world is described or describable b...

When I first read the original post, I thought of how in culture there seems (to me) to be a drop-off of passion somewhere in the 1970s, and that postmodernism kind of goes with that. Passion about something that happens is more powerful if it really *is* - if things no longer seem to *be* as firmly to us, there is less passion. From a naive outsider point of view, it looks like Buddhism softens *is* as well, so the hypothesis that celebration follows being could be tested by seeing whether Buddhist countries have as intense celebrations about progress, or anything in general. But then I see in the comments people talking about how there were celebrations and excitement in America in the last 50 years - maybe not as big, but still significant. Maybe postmodernism doesn't affect sports fans as much, or minorities who see their first president, or space fans?

Thanks for the reply and sorry I couldn't get to this for some time! Hope you're still interested in the discussion.

I expect that politics in most places, and US Congressional politics especially, is usually much more heavily focused on special interests than the overall media narrative would suggest

This is really interesting and you probably have a good point. Do you think there's a more reliable way (for an outsider like myself, who's not able to, I dunno, go and ask people in a dive bar what they think) to get the lay of the politi...

Great points! This is wayyyy outside my zone of expertise, but I would look for specialist-oriented publications - e.g. newsletters specifically targeted at lobbyists/policymakers, or political information in the industry publications of special-interest industries. I'd say the key is to generate your own questions, then proactively look for the answers rather than waiting around for whatever information comes to you. There's plenty of good information out there, it just isn't super-viral, so you have to go looking for it.

Important point here: these people don't actually have different Schelling points. They presumably all agree that if Alice wins the election, then whatever Alice signs into law will be the new Schelling point. What these people disagree on is their expectations for what the future Schelling point will be.

So... it's possible that there is something about Middle Eastern politics that I don't understand, and it would be cool if you could clarify. If I understand you correctly, you write that farms in the South are owned by rich people. At the same time, you write that farms in the North are somehow connected to the ruling coalition, and because of this the government had to signal loyalty to them.

I was under the impression that in monarchic/autocratic countries it was near-impossible to be rich while not being connected to the ruling group (= not being the kind of agent the ruling group would need to signal loyalty to). The farmers in the South contradict that. How does this work?

Great question. The truth in your model: both the Saudi and Jordanian regimes give "import licenses" to families (Corleone-style extended families). They basically say "only this family can import (nitrogen, or automobiles, or washer-dryers)". The Jordanians have one on tomato paste, which is why their tomato paste tastes crappy. Particularly when an autocrat dies, some companies or families are dispossessed if the ruler suspects they will be disloyal. Mohammad Bin Salman did this recently.

What your model is missing: factions are rarely just the top leadership. The most stable faction type is basically a pyramid scheme. You have some leader at the top - a tribal elder, a warlord, a colonel, a utilities company owner, etc. - who usually has a direct relationship with the autocrat. Then beneath him are other elders, and beneath them are heads of nuclear families or clans, and beneath them are usually prosperous peasants. As you go down, the rewards decrease, but the reward-responsibility combination continues. IIRC, the Mafraq tribespeople saved the Jordanian monarchy as recently as Black September and continue to serve in the army in high numbers. If you just recruited from every tribe equally, there could be a revolution! Can't have that.

TLDR: Even though the tribespeople are not elites, this tribe supports the Jordanian army. Fun fact: Arab tribes elect their sheyoukh, so they are a more "egalitarian" faction type than is typical.

I'm probably missing something obvious, but I don't trivially see how this

Interestingly, this suggests that a leader can get high value from a group whose preferences are orthogonal to their own; pursue power in groups which care about different things than you!

follows from this

A leader’s power is high when group members all want to coordinate their choices, but care much less about which choice is made, so long as everyone “matches”. Then the leader can just choose anything they please, and everyone will go along with it.
...
On "group with preferences orthogonal to your own": the idea is you can give the members exactly what they want, and then independently get whatever you want as well. Since they're indifferent to the things you care about, you can choose those things however you please.

I expect that politics in most places, and US Congressional politics especially, is usually much more heavily focused on special interests than the overall media narrative would suggest. For instance, voters in Kansas care a lot about farm subsidies, but the news will mostly not talk about that, because most of us find the subject rather boring. The media wants to talk about the things everyone is interested in, which is exactly the opposite of special interests.

Also, I am extremely skeptical that racial issues played more than a minor role in the election, even assuming they played a larger role in 2016 than in other elections. Every media outlet in the country (including 538) wanted to run stories about how race was super-important to the election, because those stories got tons of clicks, but that's very different from race actually playing a role.

Nope, you are completely right on that front; poor information/straight-up lying were issues I basically ignored for purposes of this post. That said, most of the post still applies once we add in lying/bullshit; the main change is that, whenever they can get away with it, leaders will lie/bullshit in order to simultaneously satisfy two groups with conflicting goals. As long as at least some people in each constituency see through the lies/bullshit, there will still be pressure to actually do what those people want. On the other hand, people who can be fooled by lies/bullshit are essentially "neutral" for purposes of influencing the political equilibrium; there's no particular reason to worry about their preferences at all. So we just ignore the gullible people and apply the discussion from the post to everybody else.