Recent Discussion

We know that mainstream thinking gets a lot of things wrong. Many of us have experienced being mocked for our concern about AI extinction risk. There are plenty of other examples of beliefs that are now well-evidenced but were once seen as crazy. This post was prompted by my reading around meditation and mindfulness: twenty years ago, if you said that meditation had a number of mental and even physical health benefits and was worth practicing for non-religious reasons, you would have been laughed at as a New Age type who probably believed in crystal healing and astrology too. Now there are stacks of scientific evidence supporting that view.

I would like to keep an open mind and not dismiss fact-claims just because they pattern-match to weird...

It cannot be stated with >99% certainty that members of the Bush Administration did not have definite prior information of the events of 9/11 or play a role in them.

Answer by Zac Hatfield Dodds (1 point, 29m): Taking information hazards seriously. This can range from the benign (is it a good idea to post very weird beliefs here?) to the more worrying (plausible attacks on $insert_important_system_here), and upwards.
Zac Hatfield Dodds (1 point, 26m): At a more concrete level, I've spent the last ~14 months holding strong and unusual views on most pandemic-related matters, though I don't think any of them would raise eyebrows on LessWrong. A minority are probably now mainstream; the others, unfortunately, remain weird.
Answer by Dumbledore's Army (2 points, 1h): I think that we should be taking the possibility of UFOs more seriously. Over the last year, I've updated from thinking that UFOs are laughable to thinking there's a 10-20% chance of actual alien visitation, and about another 10-20% chance of something else important going on (i.e. someone - presumably China - has either made a huge leap in drone technology or is getting good at spoofing multiple US military systems simultaneously). Why? Because a number of senior and generally sane people seem to be taking this seriously. The US military in particular is seeing a number of cases of unidentified phenomena - not just aerial, also submarine - where they see things that look like craft with capabilities not currently possible with modern technology. Some of these, like the 2004 USS Nimitz incident [https://en.wikipedia.org/wiki/Pentagon_UFO_videos] [https://www.nbcnews.com/news/us-news/navy-confirms-videos-did-capture-ufo-sightings-it-calls-them-n1056201], have been captured on multiple systems - the ship's radar and aircraft cameras - and visually spotted by the pilots. The former Director of National Intelligence has said recently [https://www.theguardian.com/us-news/2021/mar/22/us-government-ufo-report-sightings] that there are a lot more sightings which haven't been made public. Yes, I know there are still other explanations, and the track record suggests sightings will turn out to be some kind of optical illusion or something, but I'm open to the possibility that not every incident is explicable in terrestrial terms. The link below is a good long-form read which argues that the US Department of Defense is taking the possibility seriously. https://www.thedebrief.org/fast-movers-and-transmedium-vehicles-the-pentagons-uap-task-force/

There’s a tendency to want to score high on every metric you come across. When I first read Kegan’s 5 stages of adult development, I wanted to be a stage 5 meta-rationalist! Reading the meditation book “The Mind Illuminated” (TMI), I wanted to be stage 10 (and enlightened and stage 8 jhana and…)!  I remember seeing this dancer moonwalk sideways and wanting to be that good too! 

This tendency is harmful.

But isn’t it good to want to be good at things? That depends on the "things" and your personal goals. What I’m pointing out is a tendency to become emotionally invested in metrics and standards without careful thought about what you actually value. If you don’t seriously investigate your own personal preferences and taste, you may spend years of...

johnswentworth (2 points, 14h): Yup, exactly.
cata (10 points, 14h): I am a pretty serious chess player and, among other things, chess gave me a clearer perception of the direct cost in time and effort involved in succeeding at any given pursuit. You can look at any chess player's tournament history and watch as they convert time spent into improved ability at an increasingly steep exchange rate. As a result, I can confidently think things like "I could possibly become a grandmaster, but I would have to dedicate my life to it for ten or twenty years as a full-time job, and that's not worth it to me. On the other hand, I could probably become a national master with several more years of moderate work in evenings and weekends, and that sounds appealing." and I now think of my skill in other fields in similar terms. As a result, I am less inclined to, e.g., randomly decide that I want to become a Linux kernel wizard, or learn a foreign language, or learn to draw really well, because I clearly perceive that those actions have a quite substantial cost.
elriggs (1 point, 9h): There is something here along the lines of "becoming skilled at a thing helps you better understand the appeal (and costs) of being skilled at other things". It's definitely not the only thing you need, because I've been highly skilled at improv piano but still desired these other things. What I want to point out in the post is the disconnect between becoming highly skilled and what you actually value. It's like eating food because it's popular, as opposed to actually tasting it and seeing if you like that taste (there was an old story here on LW about this, I think). Making the cost explicit does help ("it would take decades to become a grandmaster"), but there can be a lack of feedback on why becoming a national master sounds appealing to you. The idea of being [cool title] sounds appealing, but is the actual, visceral, moment-to-moment experience of it undeniably enjoyable to you? (In this case, you can only give an educated guess until you become it, but an educated guess can be good enough!)

Epistemic status: Anecdote.

tl;dr: I had multiple opportunities to notice my confusion, but failed every time until I actually thought about it. Ended up testing positive for COVID, despite expectations.

Background

I started preparing for the pandemic in January 2020. At the time, I was living alone. In January, my visa lapsed, and I moved in with someone else while waiting for a new visa to finish processing. Among all the people I know around the world, I think I was the single most cautious. Nobody had seen my face directly for almost a year: I was always masked (and, since October, double-masked), I washed fastidiously, and I never spent too long near others.

With the move, though, I was exposed to far more risk, via the other person's activities....

cistran (1 point, 14h): I understand the assumption of vulnerability. But how does one assume that one is an asymptomatic or pre-symptomatic carrier if the chance of that is less than 10% on any given day? By itself it doesn't seem rational, because if you assume you are pre-symptomatic you have to do something about it. Like testing. Testing very often, for no reason comprehensible to the outside world.
masasin (1 point, 10h): Until I got symptoms, the highest probability was that I hadn't gotten COVID yet. On the other hand, even if there was a 1% chance that I was infectious (able to spread COVID to others) on any given day, it wasn't high enough to warrant a test, which is uncomfortable, and expensive unless it was positive or I had a confirmed exposure. At the same time, it was high enough that e.g. my neighbours (80+ years old) or the person at the supermarket might get sick from it and be hospitalized, not to mention the secondary effects, so I made sure to breathe slowly around people, wear masks, and keep my distance. Most of my communication ended up being gestures (and even then it was mostly "thank you!") instead of words. In other words: 99% chance that I'm vulnerable, so take precautions to avoid getting infected if I'm not infected already; 1% risk of preventably murdering someone else, so take precautions to avoid that consequence just in case I am infected already.

You severely overestimate your chance of actually murdering someone. Let's go through the numbers. Let's be generous and assume a 10% chance that you are an asymptomatic carrier. If you are, you have no more than a 50% chance of infecting someone even if you don't wear a mask, so let's say that with a mask properly worn this is reduced to 30%. Now you are already down to a 3% chance of infecting any person you encounter. And for your 80+ year old neighbor, the chance of actually dying from an infection is around 5%. So multiply your 3% chance of infecting the neighbor by the 5% chance of death and you get a 0.15% chance of murdering a person of advanced age. You'd need to encounter 7 of them to get to a 1% chance of murder.
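For anyone who wants to check that chain, here is the same arithmetic as a short script. Every input is the comment's own assumed number, not a measured value:

```python
# Risk chain from the comment above; all inputs are assumptions, not data.
p_carrier = 0.10          # generous chance of being an asymptomatic carrier
p_transmit_masked = 0.30  # chance of infecting a contact, mask properly worn
p_death_80plus = 0.05     # chance an 80+ contact dies if infected

p_per_encounter = p_carrier * p_transmit_masked * p_death_80plus
print(f"per-encounter risk: {p_per_encounter:.2%}")              # 0.15%

n = 7  # encounters needed to reach ~1%
print(f"summed over {n} encounters: {n * p_per_encounter:.2%}")  # 1.05%
print(f"compounded: {1 - (1 - p_per_encounter) ** n:.2%}")       # ~1.05%
```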

Last month, Julia Galef interviewed Vitalik Buterin. Their responses to Glen Weyl’s critiques of the EA community struck me as missing perspectives he had tried to raise.

So I emailed Julia to share my thoughts. Shortened version:

Personally, I thought Vitalik’s and your commentary on Glen Weyl’s characterisation of the EA and rationality community missed something important. 

Glen spent a lot of time interacting with people from the black community and other cultural niches and asking for their perspectives. He said that he learned more from that than from the theoretical work he did before. 

To me, Glen’s criticism came across as unnuanced (e.g. EAs also donate to GiveDirectly, and it’s not like we force people to take what we give them). I also resonate with his critiques of rationality and

...

To disentangle the confusion, I took a look at a few different definitions of the concepts. The definitions were mostly the same kind of vague statement, of the type:

  • Aleatoric uncertainty is from inherent stochasticity and does not reduce with more data.
  • Epistemic uncertainty is from lack of knowledge and/or data and can be further reduced by improving the model with more knowledge and/or data.
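A toy example of the distinction (this biased-coin illustration is mine, not from any of the definitions I found): with more flips, the posterior over the coin's bias narrows (epistemic), while the spread of any single flip's outcome stays put (aleatoric).

```python
# Biased-coin sketch: epistemic uncertainty about the bias p shrinks with
# data; aleatoric uncertainty about a single flip's outcome does not.
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7  # unknown to the modeller

for n in [10, 100, 10_000]:
    heads = rng.binomial(n, true_p)
    a, b = 1 + heads, 1 + n - heads  # Beta(a, b) posterior under a uniform prior
    epistemic_sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # posterior sd of p
    aleatoric_sd = np.sqrt(true_p * (1 - true_p))                 # sd of one flip; fixed
    print(f"n={n:>6}: epistemic sd={epistemic_sd:.4f}, aleatoric sd={aleatoric_sd:.4f}")
```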

However, I found some useful tidbits:

Uncertainties are characterized as epistemic, if the modeler sees a possibility to reduce them by gathering more data or b

...
viktor.rehnberg (1 point, 1h): Good catch. Yes, I think you are right. Usually when modeling you can learn correlations that are useful for predictions, but if the correlations are spurious they might disappear when the distribution changes. As such, to know whether p(y|x) changes from only observing x, we would probably need all causal relationships to y to be captured in x?
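A minimal numerical sketch of that point (my construction, with made-up variable names and coefficients): a regression that leans on a confounder-induced correlation does well in-distribution and degrades once the confounding disappears.

```python
# Spurious-correlation sketch: a feature correlated with y only via a hidden
# confounder helps in-distribution, but the learned model degrades once the
# confounding is gone. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, confounded):
    z = rng.normal(size=n)  # hidden confounder
    x_causal = rng.normal(size=n)
    y = 2.0 * x_causal + z + 0.1 * rng.normal(size=n)
    # x_spurious tracks y through z only while the confounding holds
    x_spurious = z + 0.1 * rng.normal(size=n) if confounded else rng.normal(size=n)
    return np.column_stack([x_causal, x_spurious]), y

X_tr, y_tr = sample(10_000, confounded=True)
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # puts weight on both features

for name, confounded in [("in-distribution", True), ("shifted", False)]:
    X_te, y_te = sample(10_000, confounded=confounded)
    print(name, "MSE:", round(float(np.mean((X_te @ w - y_te) ** 2)), 3))
```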

Political polarization in the USA has been increasing for decades, and has become quite severe. This may have a variety of causes, but it seems highly probable that the internet has played a large role, by facilitating the toxoplasma of rage to an unprecedented degree.

Recently I've had the (wishful) feeling that the parties have moved so far apart that there is "room in the center". The left is for people who are fed up with the extremes of the right. The right is for people who are fed up with the extremes of the left. But where do people go if they've become fed up with both extremes?

The question is: how would the new center work? There's not room for a new political party; plurality voting makes...

Sorry, let me try again, and be a little more direct. If the New Center starts to actually swing votes, Republicans will join and pretend to be centrists, while trying to co-opt the group into supporting Republicans.

Meanwhile, Democrats will join and try to co-opt the group into supporting Democrats.

Unless you have a way to ensure that only actual centrists have any influence, you'll end up with a group that's mostly made up of extreme partisans from both sides. And that will make it impossible for the group to function as intended.

abramdemski (7 points, 10h): Some of the other comments have reminded me of your linkpost about digital democracy [https://www.lesswrong.com/posts/5jW3hzvX5Q5X4ZXyd/link-digital-democracy-is-within-reach], specifically the idea of seeking surprising agreement. In the OP, I posited that "the new center" should have a strong, simple set of issues, pre-selected to cater to people who are sick of both sides. But I think Stuart Anderson [https://www.lesswrong.com/posts/FHtA2uEBpecdRturb/a-new-center-politics-wishful-thinking?commentId=mmuKStnakzKePFCCH] is right: it shouldn't focus so much on the battle between the two sides; it should focus on the surprising commonality between people. As Steven Byrnes mentioned [https://www.lesswrong.com/posts/FHtA2uEBpecdRturb/a-new-center-politics-wishful-thinking?commentId=S6qiLpP4kYQXWDeaA], swing voters aren't exactly moderate; rather, they tend to have extreme views which don't fit within existing party lines. The article Byrnes linked to also points out that the consensus among the elites of both parties is very different from the consensus within the party base. I find myself forming the hypothesis that politicians have a tendency to over-focus on divisive issues, and to miss some issues on which there is broad agreement. (This would be an interesting question to investigate, if someone really did a feasibility study on the whole idea.) My new suggestion for the new-center platform would be, rather than distilling complaints about both sides, to seek surprising agreement in the way mentioned in that podcast you linked. The proposal would be something like this:
* You register with The New Center platform. This involves "signing" a non-binding agreement to vote according to the New Center recommendations.
* I'm imagining that you're never asked to promise to vote a specific way, but rather, you are asked to affirm that you agree with the argument that making such a commitment would increase your voting power...
crl826 (1 point, 11h): What values are exclusively centrist?
abramdemski (4 points, 10h): That's an empirical question! See my refined proposal. [https://www.lesswrong.com/posts/FHtA2uEBpecdRturb/a-new-center-politics-wishful-thinking?commentId=REm5sthnDuNeptHub]

Years after I first thought of it, I continue to think that this chain reaction is the core of what it means for something to be an agent, AND why agency is such a big deal, the sort of thing we should expect to arise and outcompete non-agents. Here's a diagram:

Roughly, plans are necessary for generalizing to new situations, for being competitive in contests for which there hasn't been time for natural selection to do lots of optimization of policies. But plans are only as good as the knowledge they are based on. And knowledge doesn't come a priori; it nee...


I owe tremendous acknowledgments to Kelsey Piper, Oliver Habryka, Greg Lewis, and Ben Shaya. This post is built on their arguments and feedback (though I may have misunderstood them).

I plead before the Master of Cost-Benefit Ratios. “All year and longer I have followed your dictates. Please, Master, can I burn my microCovid spreadsheets? Can I bury my masks? Pour out my hand sanitizer as a libation to you? Please, I beseech thee.”

“Well, how good is your vaccine?” responds the Master. 

“Quite good!” I beg. “We’ve all heard the numbers, 90-95%. Even MicroCOVID.org has made it official: a 10x reduction for Pfizer and Moderna!” 

The Master of Cost-Benefit Ratios shakes his head. “It helps, it definitely helps, but don’t throw out that spreadsheet just yet. One meal at a crowded

...
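For concreteness, here is the kind of spreadsheet arithmetic the Master is defending, as a sketch. Only the 10x vaccine factor comes from the post; the activity and budget numbers are made-up placeholders, not microCOVID.org's actual parameters:

```python
# Hypothetical microCOVID-style arithmetic; only the 10x vaccine factor
# is cited in the post, the other numbers are placeholders.
activity_microcovids = 10_000  # hypothetical risk of some crowded indoor meal
vaccine_factor = 1 / 10        # "a 10x reduction for Pfizer and Moderna"
weekly_budget = 200            # hypothetical weekly risk budget

vaccinated = activity_microcovids * vaccine_factor
print(f"{vaccinated:.0f} microCOVIDs, {vaccinated / weekly_budget:.1f}x the weekly budget")
```

Even with the 10x reduction, the hypothetical meal still blows through the budget several times over, which is the Master's point: the multiplier shrinks the numbers but doesn't retire the spreadsheet.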

I now have COVID after being vaccinated 3 months ago with the Russian Sputnik V vaccine. For now, it is mild: one day of 38 °C, 3 days of 37.5 °C, only an upper respiratory infection, no cough. I lost my sense of smell, but it is slowly returning. Oxygen is at my normal level.

thjread (1 point, 3h): I'm pretty happy that this shows the data is consistent with extreme effectiveness in young healthy people, and this post has definitely updated me in that direction. But I'm nervous that the only actual evidence for such high effectiveness is the Israeli observational study, so personally I wouldn't want to take any actions which depend on being very confident in extreme effectiveness. I really wish more studies would report on this - seems like information they either already have or could get quite easily.
Ruby (2 points, 6h): Thanks, fixed.
Jonathan_Graehl (4 points, 6h): What of the 6x-or-worse reduction in effectiveness against a few strains gaining currency, e.g. the Brazilian one? Seems still valuable under this model.

I'm not a biologist, but I'm trying to get something like a gears-level understanding of the situation we are in. There are phenomena that seem superficially clear to me, but if I'm honest, I do not really understand what's going on.

For example, take the new Covid variants from Great Britain, South Africa and India. It is often said that the danger from such mutations is higher if a population already has some immunity, because that increases the selection pressure. At first, I thought "that's obvious, it's just how evolution works". But how exactly does this work?

Here is my current understanding:
* Viruses do not have a metabolism, which is why I suspect that there is no strong competition in the human body among different virus variants. (Please correct me...

So could I summarize this as follows? The MPG asserts in the linked article that the rapid evolution might arise from pre-existing immunity in a population, because of some "increasing [...] selection pressure". On the other hand, you argue that since the new variants did not just change superficially to evade being recognized, but seem to have adapted to the human host, this is not what one would expect if the main driving force were immune evasion.

Thanks for your response -- if you have any thoughts on this proposed summary, I'd be very interested.
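As a side note on the "increasing selection pressure" half of that summary, a toy calculation (my own illustration, with hypothetical numbers) shows the claimed effect: as population immunity rises, the relative advantage flips toward an immune-evading variant even if it spreads slightly worse among the fully susceptible.

```python
# Toy model of immunity-driven selection pressure (hypothetical numbers).
wild_R0, evader_R0 = 3.0, 2.7  # basic reproduction numbers; evader slightly worse
escape = 0.8                   # fraction of immune hosts the evader can still infect

for immune in [0.0, 0.3, 0.6]:
    wild_R = wild_R0 * (1 - immune)                  # wild type blocked by immune hosts
    evader_R = evader_R0 * ((1 - immune) + escape * immune)
    winner = "evader" if evader_R > wild_R else "wild"
    print(f"immunity {immune:.0%}: wild R={wild_R:.2f}, evader R={evader_R:.2f} -> {winner}")
```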

Sometimes when I talk to people about how to be a strong rationalist, I get the impression they are making a specific error.

The error looks like this: they think that good thinking is good thinking irrespective of environment. If they just learn to avoid rationalization and writing the bottom line first, then they will have true beliefs about their environment, and if there's something that's true and well-evidenced, they will come to believe it in time.

Let me give an extreme example.

Consider what a thoughtful person today thinks of a place like the Soviet Union under Stalin. This was a nation with evil running through its streets. People were vanished in the night, whole communities starved to death, information sources were controlled by the powerful, and many other...

Related to rationalists in Stalinist Russia: Kolmogorov Complicity and The Parable of Lightning

FeepingCreature (4 points, 4h): This sounds like an instance of https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science
Ben Pace (2 points, 4h): Just after posting this on "Context-Free Integrity", I checked Marginal Revolution and saw Tyler's latest post [https://marginalrevolution.com/marginalrevolution/2021/04/free-floating-credibility-is-underrated.html] was on "Free-Floating Credibility". These two terms feel quite related... The kabbles are strong tonight.