# Open & Welcome Thread - June 2020

by habryka · 1 min read · 2nd Jun 2020 · 109 comments



If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.


How much rioting is actually going on in the US right now?

If you trust leftist (i.e. most US) media, the answer is "almost none, virtually all protesting has been peaceful, nothing to see here, in fact how dare you even ask the question, that sounds suspiciously like something a racist would ask".

If you take a look at the conservative side of the veil, the answer is "RIOTERS EVERYWHERE! MINNEAPOLIS IS IN FLAMES! MANHATTAN IS LOST! TAKE YOUR KIDS AND RUN!"

So...how much rioting has there actually been? How much damage (very roughly)? How many deaths? Are there estimates of the number of rioters vs peaceful protesters?

(I haven't put much effort into actually trying to answer these questions, so no-one should feel much obligation to make the effort for me, but if someone already knows some of these answers, that would be cool.)

Charlie Steiner (1y): I don't know much about what's happening nationally, but I know that locally there's been no detectable rioting, some amount of looting by opportunistic criminals / idiot teenagers (say, 1 per 700 protesters), and less cop/protester violence than expected, but still some, which could look like rioting if you squint.

Hi, I joined because I was trying to understand Pascal’s Wager, and someone suggested I look up “Pascal’s mugging”... next thing I know I’m a newly minted HPMOR superfan, and halfway through reading every post Yudkowsky has ever written. This place is an incredible wellspring of knowledge, and I look forward to joining in the discussion!

Welcome Yitz!

LessWrong warned me two months before it occurred here. The suggested preparedness was clear and concise, and I felt the power in my hands. I had valuable info no one in my tribe had. I alerted my mom and she listened to me, stayed home and safe, while everyone was out partying (carnival). I had long talks with friends, and explained to them what I believed was happening and why I believed it. I showed them the numbers, the math, the predictions for the next week; the next week came, and reality presented its metallic taste. Week after week, the light got brighter and brighter until it became really hard to refuse to see it, or to keep believing that everything was just fine.

One thing I learned is that it doesn't matter if you know something really valuable when you can't convince the people who matter to you. I tried to explain to my father, a physician with 50 years of experience, that he should listen to me. He blamed my low status. But even after weeks, with police in the streets forcing citizens to stay at home, he could not believe it. He was in denial, and my inability to change his mind sent him to the hospital; 16 days later, and he still isn't back. Don't worry, he is getting... (read more)

I'm glad you're trying, and am sorry to hear it is so hard; that sounds really hard. You might try the book "How to have Impossible Conversations." I don't endorse every bit of it, but there's some good stuff in there IMO, or at least I got mileage from it.

Please share ideas/articles/resources for immunizing one's kids against mind viruses.

I think I was lucky myself in that I was partially indoctrinated in Communist China, then moved to the US before middle school, which made it hard for me to strongly believe any particular religion or ideology. Plus the US schools I went to didn't seem to emphasize ideological indoctrination as much as schools currently do. Plus there was no social media pushing students to express the same beliefs as their classmates.

What can I do to help prepare my kids? (If you have specific ideas or advice, please mention what age or grade they are appropriate for.)

Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses? If so, I want to understand why you think so (maybe you're imagining some sort of AI-powered memetic warfare?).

Eliezer has a Facebook post where he talks about how being socialized by old science fiction was helpful for him.

For myself, I think the biggest factors that helped me become/stay sane were spending a lot of time on the internet (which led to me discovering LessWrong, effective altruism, Cognito Mentoring) and not talking to other kids (I didn't have any friends from US public school during grades 4 to 11).

Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses?

Homeschooling takes up too much of my time and I don't think I'm very good at being a teacher (having been forced to try it during the current school closure). Unschooling seems too risky. (Maybe it would produce great results, but my wife would kill me if it doesn't. :) "Consume rationalist and effective altruist content" makes sense but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they're not immediately interested. Have any parents done this and can share their experience?

and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11)

Yeah that might have been a contributing factor for myself as well, but my kids seem a lot more social than me.

“Consume rationalist and effective altruist content” makes sense but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they’re not immediately interested. Have any parents done this and can share their experience?

I don't have kids (yet) and I'm planning to delay any potential detailed research until I do have kids, so I don't have specific advice. You could talk to James Miller and his son. Bryan Caplan seems to also be doing well in terms of keeping his sons' views similar to his own; he does homeschool, but maybe you could learn something from looking at what he does anyway. There are a few other rationalist parents, but I haven't seen any detailed info on what they do in terms of introducing rationality/EA stuff. Duncan Sabien has also thought a lot about teaching children, including designing a rationality camp for kids.

I can also give my own data point: Before discovering LessWrong (age 13-15?), I consumed a bunch of traditional rationality content like Feynman, popular science, online philosophy lectures, and lower quality online discourse like the xkcd forums. I discovered LessWrong when I was 14-16 (

Ben Pace (1y): If you're looking for a datapoint, I found and read this ePub of all of Eliezer's writing [https://www.lesswrong.com/posts/ZYtwnKwXmEAWhm8dT/an-epub-of-eliezer-s-blog-posts] when I was around 13 or 14. Would read it late into the night every day (1am, 2am) on the tablet I had at the time, I think an iPhone. Before that... the first book I snuck out to buy+read was Sam Harris's "Letter to a Christian Nation" when I was 12-13, and I generally found his talks and books to be really exciting and mind-expanding.
romeostevensit (1y): Opening the Heart of Compassion [https://www.dli.org/Files/Other/Heart%20of%20Compassion.pdf] outlines the Buddhist model of 6 deleterious configurations that people tend to fall into. On top of this I would add that much of the negative consequences of this come from our tendency towards monism: to find one thing that works and then try to build an entire worldview out of it.
ESRogs (1y): Are you most concerned that: 1) they will believe false things (which is bad for its own sake), 2) they will do harm to others due to false beliefs, 3) harm will come to them because of their false beliefs, 4) they will become alienated from you because of your disagreements with each other, 5) something else? It seems like these different possibilities would suggest different mitigations. For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways). Similarly, depending on whether you care more about #1 or #4, you may try harder to correct their false ideas, or to establish a norm for your relationship that it's fine to disagree with each other. (Though I suspect that, generally speaking, efforts that tend to produce a healthy relationship will also tend to produce true beliefs, in the long run.)
Wei_Dai (1y): I should also address this part: Many Communist true believers in China met terrible ends as waves of "political movements" swept through the country after the CCP takeover, and pitted one group against another, all vying to be the most "revolutionary". (One of my great-grandparents could have escaped but stayed in China because he was friends with a number of high-level Communists and believed in their cause. He ended up committing suicide when his friends lost power to other factions and the government turned on him.) More generally, ideology can change so quickly that it's very difficult to follow it closely enough to stay safe, and even if you did follow the dominant ideology perfectly you're still vulnerable to the next "vanguard" who pushes the ideology in a new direction in order to take power. I think if "adopt the dominant ideology" is sensible as a defensive strategy for living in some society, you'd still really want to avoid getting indoctrinated into being a true believer, so you can apply rational analysis to the political struggles that will inevitably follow.
Wei_Dai (1y): I guess I'm worried about:

1. They will "waste their life", for both the real opportunity cost and the potential regret they might feel if they realize the error later in life.
2. My own regret in knowing that they've been indoctrinated into believing wrong things (or into having unreasonable certainty about potentially wrong things), when I probably could have done something to prevent that.
3. Their views making family life difficult. (E.g., if they were to secretly record family conversations and post them on social media as examples of wrongthink, like some kids have done.)

Can't really think of any mitigations for these aside from trying not to let them get indoctrinated in the first place...
NancyLebovitz (1y): I don't have children, and my upbringing wasn't especially good or bad on learning rationality. Still, what I'm noticing in your post and the comments so far is the idea that rationality is something to put into your children. I believe that rationality mostly needs to be modeled. Take your mind and your children's connection to the universe seriously. Show them that thinking and arguing are both fun and useful.
AllAmericanBreakfast (1y): Do you mean how to teach them critical thinking skills? Or how to get them to prize the truth over fitting in? I'm going to assume you're not a radical leftist. What if your 16 year old kid started sharing every leftist meme because they've really thought about it and think it's true? What if they said "it doesn't matter if there's pressure to hold these political opinions; they're as true as gravity!" Would you count that as a success, since they're bold enough to stand up to an authority figure (you) to honestly express their deeply-considered views? Or a failure? If the latter, why?
ChristianKl (1y): I don't think that most people who really think issues through agree with every leftist meme and think the meme is true. Part of modern leftish ideology is that you should say certain things even when they are not true, because you want to show solidarity. There's also a belief that certain values shouldn't be "thought through". They are sacred and not supposed to be questioned.
AllAmericanBreakfast (1y): It sounds like you're setting the bar for epistemic hygiene (i.e. not being infected by a mind virus) at being able to justify your worldview from the ground up. Is that an isolated demand for rigor [https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/], or would you view anyone unable to do that as an unreasonable conformist?
ChristianKl (1y): I think you ignore that plenty of people do believe in epistemics that value not engaging in critical analysis in the sense of critical thinking, but only in the sense of critical theory. In leftish activism people are expected to be able to approve at the same time of the memes "homophobia should always be challenged" and "Islam shouldn't be challenged". Explicit discussions about how those values should be traded off against each other are shunned because they violate the underlying sacredness. Frequently, there's an idea that beliefs should be based on experience, or on trusting people with experience, and not based on thinking things through. Valuing thinking things through is not universal.
AllAmericanBreakfast (1y): I'm just not convinced that the radical left has epistemic norms or value priorities that are unusually bad. Imagine you were about to introduce me to five of your friends to talk politics. One identifies as a radical leftist, one a progressive moderate, another a libertarian, the fourth a conservative, and the fifth apolitical. All five of them share a lot of memes on Facebook. They also each have a blog where they write about their political opinions. I would not be particularly surprised if I had a thoughtful, stimulating conversation with any of them. My prior is that intellectual profiling based on ideology isn't a good way to predict how thoughtful somebody is. So for me, if Wei Dai Jr. turned out to be a 16 year old radical leftist, I wouldn't think he's any more conformist than if he'd turned out to be a progressive, libertarian, conservative, or apolitical. That might just be a crux of disagreement for us based on differing experiences in interacting with each of these groups.
frontier64 (1y): A 16yo going into the modern school system and turning into a radical leftist is much more often a failure state than a success state. Young leftist conformists outnumber the thought-out and well-reasoned young leftists by at least 10 to 1, so that's where our prior should be at. Hypothetical Wei then has a few conversations with his hypothetical, radical leftist kid, and the kid reasons well for a 16yo. We would expect a well-reasoned leftist to reason well more often than a conformed leftist, so that updates our priors, but I don't think we'd go as far as saying that it overcomes our original 10 to 1 prior. Well-reasoned people only make arguments sound well-reasoned to others maybe 90% of the time max, and even conformists can make nice-sounding arguments (for a 16yo) fairly often. Even after the conversations, it's still more likely that the hypothetical radical leftist kid is a conformist rather than well-reasoned. If hypothetical Wei had some ability to determine to a high degree of certainty whether his kid was a conformist or well-reasoned, then that would be a very different case, and he likely wouldn't have the concerns that his children will be indoctrinated that he expressed in the original post.
AllAmericanBreakfast (1y): You're neglecting the base rate of 16 year old conformity. I think this is some pretty silly speculation, but let's run with it. Isn't the base rate for 16 year old conformity at least 10 to 1? If so, a 16 year old who's a leftist is no more likely to be a conformist than any other. In the end, what we're looking for is a reliable signal that, whatever the 16 year old thinks, it's due to their independent reasoning. Widely shared reasonable beliefs won't cut it, because they wouldn't have to think it out for themselves. Outrageous contrarian views won't work, because that's not reasonable. You'd have to look for them to hold views that are both reasonable and contrarian. So, a genius. Is that a realistic bar to diagnose your kid as uninfected by mind viruses?
frontier64 (1y): Ideological conformity in the school system is not uniform. A person turning left when everybody else is turning right is much less likely to be a conformist than someone else turning right. ETA: Without metaphor, our priors for conformist vs. well-reasoned are different for young rightists or non-leftists in the school system.
ryan_b (1y): My daughter is 2. Everything we do with her is either indoctrination or play; she doesn't have enough language yet for the learning-begets-learning we naturally assume with older kids and adults.

I was in the military, which is probably the most successful employer of indoctrination in the US. I believe the key to this success rests with the clarity of the indoctrination's purpose and effectiveness: the purpose is to keep everyone on the same page, because if we aren't our people will die (where our people means the unit). Indoctrination is the only tool available for this because there isn't time for sharing all the relevant information or doing analysis.

I plan to capture these benefits for my daughter by being specific about the fact that I'm using indoctrination and why indoctrination is a good tool for the situation instead of how we think or feel about it, when she inevitably has questions. The bearing I think this has on the question of mind viruses is that she will know what indoctrination looks like when she sees it. Further, she will have expectations of purpose and impact; political indoctrination fails these tests, which I hope will trigger rejection (or at least forestall overcommitment).
Rudi C (1y): How are you handling the problem that rationality often pays off negatively unless it reaches a critical mass (e.g., it often leads to poor signaling, or anti-signaling if one is lucky)?
1. People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

2. I feel like I should do something to prep (e.g., hedge risk to me and my family) in advance of AI risk being politicized, but I'm not sure what. Obvious idea is to stop writing under my real name, but cost/benefit doesn't seem worth it.

Re hedging, a common technique is having multiple fairly different citizenships and foreign-held assets, i.e. such that if your country becomes dangerously oppressive, you or your assets wouldn't be handed back to it. E.g. many Chinese elites pick up a Western citizenship for themselves or their children, and wealthy people fearing change in the US sometimes pick up New Zealand or Singapore homes and citizenship.

There are many countries with schemes to sell citizenship, although often you need to live in them for some years after you make your investment. Then emigrate if things are starting to look too scary before emigration is restricted.

My sense, however, is that the current risk of needing this is very low in the US, and the most likely reason for someone with the means to buy citizenship to leave would just be increases in wealth/investment taxes through the ordinary political process, with extremely low chance of a surprise cultural revolution (with large swathes of the population imprisoned, expropriated or killed for claimed ideological offenses) or ban on emigration. If you take enough precautions to deal with changes in tax law I think you'll be taking more than you need to deal with the much less likely cultural revolution story.

hg00 (2y): Permanent residency (as opposed to citizenship) is a budget option. For example, for Panama, I believe if you're a citizen of one of 50 nations on their "Friendly Nations" list, you can obtain permanent residency by depositing $10K in a Panamanian bank account. If I recall correctly, Paraguay's permanent residency has similar prerequisites ($5K deposit required) and is the easiest to maintain--you just need to visit the country every 3 years.
Wei_Dai (1y): I was initially pretty excited about the idea of getting another passport, but on second thought I'm not sure it's worth the substantial costs involved. Today people aren't losing their passports or having their movements restricted for (them or their family members) having expressed "wrong" ideas, but just(!) losing their jobs, being publicly humiliated, etc. This is more the kind of risk I want to hedge against (with regard to AI), especially for my family. If the political situation deteriorates even further to where the US government puts official sanctions on people like me, humanity is probably just totally screwed as a whole and having another passport isn't going to help me that much.
hg00 (1y): I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow/dysfunctional (and lack of a passport presents a barrier for achieving a legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than is strictly necessary, so you always have at least a 5 year buffer on it say, in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a second passport.) Side note: I once talked to someone who became stuck in a country that he was not a citizen of because he allowed his passport to expire and couldn't travel back home to get it renewed. (He was from a small country. My guess is that the US offers passport services without needing to travel back home. But I could be wrong.)

People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

Are you saying that you initially followed people for their good thoughts on COVID-19, but (a) now they switched to talking about other topics (George Floyd protests?), and their thoughts are much worse on these other topics, (b) their thoughts on COVID-19 became worse over time, (c) they made some COVID-19-related predictions/statements that now look obviously wrong, so that what they previously said sounds obviously wrong, or (d) something else?

You'll have to infer it from the fact that I didn't explain more and am not giving a straight answer now. Maybe I'm being overly cautious, but my parents and other relatives lived through (and suffered in) the Cultural Revolution and other "political movements", and wouldn't it be silly if I failed to "expect the Spanish Inquisition" despite that?

It's helpful (to me, in understanding the types of concerns you're having) to have mentioned the Cultural Revolution. For this, posting under a pseudonym probably doesn't help - the groups who focus on control rather than thriving have very good data collection and processing capability, and that's going to leak to anyone who gets sufficient power with them. True anonymity is gone forever, except by actually being unimportant to the new authorities/mobs.

I wasn't there, but I had neighbors growing up who'd narrowly escaped and who had friends/relatives killed. Also, a number of friends who relayed family stories from the Nazi Holocaust. The lesson I take is that it takes off quickly, but not quite overnight. There were multi-month windows in both cases where things were locked down, but still porous for those lucky enough to have planned for it, or with assets not yet confiscated, or willing to make sacrifices and take large risks to get out. I suspect those who want to control us have _ALSO_ learned this lesson, and the next time will have a smaller window - perhaps as little as a week. Or perhaps I'm underestimating the slope and it's... (read more)

Get your exit plan ready to execute on very short notice, and understand that it’ll be costly if you do it.

What would be a good exit plan? If you've thought about this, can you share your plan and/or discuss (privately) my specific situation?

Do what you can to keep your local environment sane, so you don’t have to run, and so the world gets back onto a positive trend.

How? I've tried to do this a bit, but it takes a huge amount of time, effort, and personal risk, and whatever gains I manage to eke out seem to be highly ephemeral at best. It doesn't seem like a very good use of my time when I can spend it on something like AI safety instead. Have you been doing this yourself, and if so what has been your experience?

Dagon (2y): I do not intend to claim that I'm particularly great at this, and I certainly don't think I have sufficient special knowledge for 1-1 planning. I'm happy to listen and make lightweight comments if you think it'd be helpful.

My plans are half-formed, and include maintaining some foundational capabilities that will help in a large class of disasters that require travel. I have bank accounts in two nations and currencies, and I keep some cash in a number of currencies. Some physical precious metals or hard-to-confiscate digital currency is a good idea too. I have friends and coworkers in a number of countries (including over a border I can cross by land), who I visit enough that it will seem perfectly normal for me to want to travel there. I'm seriously considering real-estate investments in one or two of those places, to make it even easier to justify travel if it becomes restricted or suspicious. I still think that the likelihood is low that I'll need to go, but there may come a point where the tactic of maintaining rolling refundable tickets becomes reasonable - buy a flight out at 2 weeks and 4 weeks, and every 2 weeks cancel the near one and buy a replacement further one.

This is harder to advise. I'm older than most people on LW, and have been building software and saving/investing for decades, so I have resources that can help support what seem to be important causes, and I have a job that has (indirect, but clear) impact on keeping the economy and society running. I also support and participate in protests and visibility campaigns to try to make it clear to the less-foresightful members of society that tightening control isn't going to work. This part is more personal, less clearly impactful toward my goals, and takes a huge amount of time, effort, and personal risk. It's quite possible that I'm doing it more for the social connections with friends and peers than for purely rational goal-seeking.

I wouldn't fault anyone for preferring to put their effort
Bjartur Tómas (2y): +1 for this. Would love to talk to other people seriously considering exit. Maybe we could start a Telegram or something.
Xodarap (1y): I also have started making plans and would love to hear what others are thinking.
[anonymous] (2y): I saw this "stopped clock" assumption catching a bunch of people with COVID-19, so I wrote a quick post [https://www.lesswrong.com/posts/e8Cgsh8zB5CYoYPEW/the-stopped-clock-problem] on why it seems unlikely to be a good strategy.
Pattern (2y): Are they still good sources on biology?
cousin_it (2y): I don't know the US situation firsthand, but it seems like it could get worse toward the election. Maybe move to Europe?
Liam Donovan (1y): What/who does #1 refer to? I've changed my mind a lot due to reading tweets from people I initially followed due to their credible COVID-19 takes, and you saying they sound insane would be a major update for me.

Scott's new post on Problems With Paywalls reminds me to mention the one weird trick I use to get around paywalls. Many places like NYT will make the paywall appear a few seconds after landing on the page, so I reliably hit cmd-a and cmd-c and then paste the whole post into a text editor, and read it there instead of on the site. This works for the majority of paywalled articles I encounter personally.

Or you can use Bypass Paywalls with Firefox or Chrome.

ryan_b (1y): Experimenting with this now!

If you use Firefox, there is an extension called Temporary Containers. This allows you to load a site in a temporary container tab, which is effectively like opening the site in a fresh install of a browser or on a new device. For sites with rate limited pay walls like the NYT, this effectively defeats the paywall as it never appears to them that you have gone over their rate limit.

The extension can be configured so that every instance of a particular url is automatically opened in its own temporary container, which defeats these paywalls at very little cost to convenience.

gilch (1y): You can often find articles in the Wayback Machine [https://web.archive.org/] even if they're paywalled.
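For readers who want to automate that last trick, here is a minimal sketch using the Internet Archive's public availability API. The endpoint (`https://archive.org/wayback/available`) is real, but the exact response fields (`archived_snapshots.closest`) should be treated as an assumption to verify against the current API docs:

```python
import json
import urllib.parse
import urllib.request
from typing import Optional

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url: str) -> str:
    """Build the availability-API query URL for a given article URL."""
    return WAYBACK_API + "?" + urllib.parse.urlencode({"url": url})

def latest_snapshot(url: str) -> Optional[str]:
    """Return the URL of the closest archived snapshot, or None if absent.

    Assumes the documented JSON shape:
    {"archived_snapshots": {"closest": {"available": true, "url": ...}}}
    """
    with urllib.request.urlopen(availability_query(url)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None
```

Calling `latest_snapshot("https://www.nytimes.com/some-article")` would then either return an `https://web.archive.org/web/...` URL you can open directly, or `None` if the page was never archived.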

Personal update: Over the last few months, I've become much less worried that I have a tendency to be too pessimistic (because I frequently seem to be the most pessimistic person in a discussion). Things I was worried about more than others (coronavirus pandemic, epistemic conditions getting significantly worse) have come true, and when I was wrong in a pessimistic direction, I updated quickly after coming across a good argument (so I think I was wrong just because I didn't think of that argument, rather than due to a tendency to be pessimistic).

Feedback welcome, in case I've updated too much about this.

Hi guys, I've been a long-time lurker here. Wanted to ask this: have you ever done rereads of the Sequences so that new people can engage with the content better and discuss? Just a thought.

gilch (1y): I recall a [SEQ RERUN] in the past, yes. You are also allowed to comment on old posts. LessWrong shows "recent discussion" on its front page, so these do get replies sometimes. There was also talk of a book group.

Hello; just joined; working through the Library. I appreciate the combination of high standards and welcoming tone. I'm a homeschooling (pre-Covid-19) parent in the US South, so among other things I'm looking forward to finding thoughts here on education for children.

I found Slate Star Codex before LessWrong and hope this doxxing/outing situation works out safely.

mingyuan (1y): There are certainly a lot of people here interested in the same topic! Jeff (https://www.lesswrong.com/users/jkaufman) is probably the most prolific poster on raising children, though his kids are still quite young. Good luck and have fun!
habryka (1y): Welcome limestone! And feel free to leave comments here or ping the admins on Intercom (the small chat bubble in the bottom right) if you run into any problems!

I noticed that all posts for the last day and a half are still personal blogposts, even though many are more "Frontpage" kind of stuff. Is there a bug in the site, is it a new policy for what makes it to frontpage, or is it just that the moderation team didn't have time to go through the post?

Ben Pace (2y): Thanks for commenting. So, the latest admin UI is that we have to decide which core tags to give a post before deciding whether to frontpage it, which is a trivial inconvenience, which leads to delays. At the minute I do care a fair bit about getting the core tags right, so I'm not sure what the best thing to do about this is.
Zack_M_Davis (2y): This seems kind of terrible? I expect authors and readers care more about new posts being published than about the tags being pristine.
Ben Pace (2y): Yeah, to be clear, I agree on both counts, see my reply to adam below about how long I think the frontpage decisions should take. I do think the tags are important so it's been good to experiment with this, but it isn't the right call to have delays of this length in general and I/the team should figure out a way to prevent the delays pretty soon. Added: Actually, I think that as readers use tags more to filter their frontpage posts, it'll be more important to many of them that a post is filtered in/out of their feed, than whether it was frontpaged efficiently. But I agree that for author experience, efficiency of frontpage is a big deal.
1adamShimi2yOkay, this makes sense. Personally, that's slightly annoying because this means a post I wrote yesterday will probably be lost in the burst of posts pushed to Frontpage (as I assume it would be going to Frontpage), but I also value the tag system, so I can take a hit or two for that. That being said, it doesn't seem sustainable for you: the backlog keeps growing, and I assume the delays will too, resulting in posts pushed to Frontpage a long time after they were posted.
3Ben Pace2yI just went through and tagged+frontpaged the 10 outstanding posts. In general I think it's necessary for at least 95% of posts to be frontpaged-or-not within 24 hours of being published, and I think we can get the median to be under 12 hours, and potentially much faster. I don't actually have a number for that, maybe we should just put the average time for the past 14 days on the admin-UI to help us keep track.
2adamShimi2yThanks! And I think the delay you mention fit with my intuition about this.
2Zack_M_Davis2yI was wondering about this, too. (If the implicit Frontpaging queue is "stuck", that gives me an incentive to delay publishing my new post, so that it doesn't have to compete with a big burst of backlogged posts being Frontpaged at the same time.)

Hi! I've been reading LessWrong and Slate Star Codex for years, but until today's events I commented pretty much exclusively on SSC. I hope everything resolves for the better, although personally I'm rather pessimistic.

In any case, I've been wondering for a while: are there any online places for casual discussion à la the SSC Open Threads, but more closely tied to LessWrong and the Bay Area rationalist community? Threads like this are one such place, obviously, but they seem rare and sparsely populated. I've tried to find Facebook groups, but with very limited success. Any recommendations?

6habryka1yI think various discussions on LessWrong are probably your best bet. A lot of LessWrong discussion is distributed over a large number of posts and platforms, so things end up less centralized than for SSC stuff, which has benefits and drawbacks. For an experience similar to an SSC Open Thread, I think the LW Open Threads are your best bet, though they are definitely a lot less active than the SSC ones.

I'm sure this phenomenon has a name by now, but I'm struggling to find it. What is it called when requirements are applied to an excess of applicants solely for the purpose of whittling them down to a manageable number, but doing so either filters no better than chance or actively eliminates the ideal candidate?

For example, a job may require a college degree, but its best workers would be those without one. Or an apartment complex is rude to applicants knowing there are an excess, scaring off good tenants in favor of those desperate. Or someone finds exceptional luck securing online dating "matches" and begins to fill their profile with requirements that put off worthwhile mates.
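As a toy illustration of the first failure mode (the model, numbers, and "credential" are all hypothetical): when the screening criterion carries no information about quality, filtering on it selects no better than throwing out applicants at random.

```python
import random

random.seed(0)

def mean_of_top(scores, k=10):
    """Average quality of the k best applicants who survive a filter."""
    return sum(sorted(scores, reverse=True)[:k]) / k

# Hypothetical model: each applicant has a true quality score and an
# independent, quality-irrelevant credential (say, "has a degree").
applicants = [(random.gauss(0, 1), random.random() < 0.5) for _ in range(10_000)]

# Filter A: require the credential, then take the best 10 who remain.
with_credential = [q for q, cred in applicants if cred]

# Filter B: discard half the pool at random, then take the best 10.
random_half = [q for q, _ in random.sample(applicants, len(applicants) // 2)]

print(round(mean_of_top(with_credential), 2))
print(round(mean_of_top(random_half), 2))
```

Both filters keep roughly half the pool, and because the credential is independent of quality, the two hired cohorts come out statistically indistinguishable. If the credential were negatively correlated with quality (the "its best workers would be those without one" case), the filter would do actively worse than chance.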

3alex_lw1yI think something like "market inefficiency" might be the term. Disclaimer: I'm not an economist and don't know the precise technical meaning of the term. But roughly speaking, the situations you describe seem to be those where the law of supply and demand is somehow prevented from acting directly on the monetary price, so the non-monetary "price" is increased/decreased instead. In the case of the apartments, they'd probably be happy to raise the rent until they've got exactly the right number of applicants, but are kept from doing so by rent control or reputation or something, so they impose non-monetary costs on the applicants instead. In the case of hiring, they're probably kept from lowering wages by some combination of: inability to lower the wages of existing employees in similar positions, wages not being exactly public anyway, and maybe a psychological expectation that nobody with the required credentials will agree to work for less than X, no matter how good the conditions are (or alternatively they're genuinely trying to pick the best and failing, in which case it's Goodhart's law). And in the case of the dating market there simply is no universal currency to begin with.
3Thomas Kwa1y"throwing the baby out with the bathwater"?
2NancyLebovitz1yConservation of thought, perhaps. The root problem is having more options than you can handle, probably amplified by bad premises. On the other hand, if you're swamped, when will you have time to improve your premises? "Conservation of thought" is from an early issue of The New York Review of Science Fiction.
1Filipe Marchesini1yI think you are referring to Goodhart's law [https://en.wikipedia.org/wiki/Goodhart%27s_law], because all the measures your examples used as proxies for some goal were gamed in a way that made the proxy stop working reliably.
3ESRogs1yHmm, this seems a little different from Goodhart's law (or at least it's a particular special case that deserves its own name). This concept, as I understand it, is not about picking the wrong metric to optimize. It's more like picking the wrong metric to satisfice, or putting the bar for satisficing in the wrong place.

Sorry for the outages today (we had two outages, one around 1:00PM PT, one around 3:30PM PT, with intermittent slow requests in the intervening time). As far as I can tell it was caused by a bot that was crawling particularly expensive pages (pages with tons of comments) at a relatively high rate. We've banned the relevant IP range and everything appears back to normal, though I am still watching the logs and server metrics attentively.

Again, sorry for any inconvenience this caused, and please let us know via Intercom if you run into any further problems.

2habryka1yWe might still have some problems with comment and PM submissions that I am looking into. Not sure what's causing that.
4habryka1yAll remaining problems with document submission should be resolved. If you had opted into beta features and had trouble submitting documents in the past few hours, you should be able to do that again, and please let me know via Intercom if you can't.

Comment and post text fields default to "LessWrong Docs [beta]" for me, I assume because I have "Opt into experimental features" checked in my user settings. I wonder if the "Activate Markdown Editor" setting should take precedence? No one who prefers Markdown over the Draft.js WYSIWYG editor is going to switch because our WYSIWYG editor is just that much better, right? (Why are you guys writing an editor, anyway? Like, it looks fun, but I don't understand why you'd do it other than, "It looks fun!")

8habryka1yJust to clarify, I wouldn't really say that "we are building our own editor". We are just customizing the CKEditor 5 framework. It is definitely a bunch of work, but we aren't touching any low-level abstractions (and we've spent overall more time than that trying to fix bugs and inconsistencies in the current editor framework we are using, so hopefully it will save us time in the long-run).
8Raemon1yAh, yeah that makes sense, just an oversight. I'll try to fix that next week. We're using CKEditor 5 as a base to build some new features. There are a number of reasons for this (in the immediate future, it means you can finally have tables), but the most important reason (later on down the line) is that it provides Google Docs-style collaborative editing. In addition to being a generally nice set of features for coauthors, I'm hoping that it dovetails significantly with the LW 2019 review in December, allowing people to suggest changes for nominated posts.

Since posting this, I've revised my paper, now called "Unbounded utility and axiomatic foundations", and eliminated all the placeholders marking work still to be done. I believe it's now ready to send off to a journal. If anyone wants to read it, and especially if anyone wants to study it and give feedback, just drop me a message. As a taster, here's the introduction.

Several axiomatisations have been given of preference among actions, which all lead to the conclusion that these preferences are equivalent to numerical comparison of

Can someone please explain what the following sentence from the terms of use means? "In submitting User-Generated Content to the Website, you agree to waive all moral rights in or to your User-Generated Content across the world, whether you have or have not asserted moral rights."

5habryka1y(We inherited the terms of use from the old LessWrong so while I tried my best to understand them, I don't have as deep of a sense of understanding as I wish I had, and it seemed more important to keep the terms of use consistent than to rewrite everything, to maintain consistency between the agreements that authors made when they contributed to the site at different points in time) The Wikipedia article on Moral Rights [https://en.wikipedia.org/wiki/Moral_rights]: What exactly these rights are seems to differ a bunch from country to country. In the U.S. the protection of these moral rights is pretty limited. From the same article: Concrete instances where I can imagine this waiving becoming relevant, and where I think this makes sense (though this is just me guessing, I have not discussed this in detail with a lawyer): * An author leaves a comment on a post that starts with a steelman of an opposing position. We display a truncated version of the comment by default, which now only shows them arguing for a position they find abhorrent. This could potentially violate their moral rights by altering their contribution in a way that violates their honor or reputation. * An author leaves a comment and another user quotes a subsection of that comment, bolding, or italicizing various sections that they disagree with, and inserting sentences using [...] notation.

Some good news for the claim that public awareness of x-risk in general should go up after coronavirus: The Economist's cover story: https://www.economist.com/node/21788546?frsc=dg|e, https://www.economist.com/node/21788589?frsc=dg|e

I've been searching for a LW post for half an hour. I think it was written within the last few months. It's about how to understand beliefs that stronger people have, without simply deferring to them. It was on the front page while I was reading the comments to this post of mine, which is how I found it. Anyone know which post I'm trying to find?

5TurnTrout1yIt's on EAForum [https://forum.effectivealtruism.org/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models] , perhaps?
3Thomas Kwa1yThat was it, thanks!

Is there a name for intuition/fallacy that an advanced AI or alien race must also be morally superior?

7Filipe Marchesini1yI think you can refer the person to orthogonality thesis [https://wiki.lesswrong.com/wiki/Orthogonality_thesis]
-3frontier641ySeems like an appeal to (false?) authority. It may not be a fallacy, because there's a demonstrable trend between technological superiority and moral superiority, at least on Earth. Assuming that trend extends to other civilizations off Earth? I'm sure there's something fallacious about that, maybe too geocentric.
-4MakoYass1yIt might generally be Moral Realism (anti-moral-relativism). The notion that morality is some universal objective truth that we gradually uncover more of as we grow wiser. That's how those people usually conceive it. I sometimes call it anti-orthogonalism.
7Rafael Harth1yI want to explain my downvoting this post. I think you are attacking a massive strawman by equating moral realism with [disagreeing with the orthogonality thesis]. Moral realism says that moral questions have objective answers. I'm almost certain this is true. The relevant form of the orthogonality thesis says that there exist minds such that intelligence is independent of goals. I'm almost certain this is true. It does not say that intelligence is orthogonal to goals for all agents. Relevant quote from EY: [https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/] And the wiki page Filipe Marchesini [https://www.lesswrong.com/users/filipe-marchesini] linked to also gets this right:
2MakoYass1yGood comment, but... Have you read Three Worlds Collide? If you were in a situation similar to what it describes, would you still be calling your position moral realism? I am not out to attack the position that humans fundamentally, generally align with humans. I don't yet agree with it, its claim, "every moral question has a single true answer" might turn out to be a confused paraphrasing of "every war has a victor", but I'm open to the possibility that it's meaningfully true as well.
1Rafael Harth1yYes and yes. I got very emotional when reading that. I thought rejecting the happiness surgery, or whatever it was that the advanced alien species prescribed, was blatantly insane.

Hi, just as a note: https://www.lesswrong.com/allPosts?filter=curated&view=new looks really weird (which you get from googling for curated posts) because the shortform posts are not filtered out.

This is a question about prioritization and to do lists. I find that my affairs can be sorted into:

• urgent and important (do this or else you will lose your job; the doctor says to do X or you will die a horrible death in Y days)
• This stuff really needs to get done soon but the world won't end (paying bills/dealing with bugs in one's life/fixing chronic health issues)
• Everything else

Due to some of the things in the 2nd category, I have very little time to spend on the latter 2 categories. Therefore, I find that when I have a moment to sit down and ... (read more)

"In 2017, a federal court, the U.S. Southern District Court of New York, sided with Elsevier and ruled Sci-Hub should stop operating and pay $15 million in damages. In a similar lawsuit, the American Chemical Society won a case against Elbakyan and the right to demand another $4.8 million in damages.

In addition, both courts effectively prohibited any U.S. company from facilitating Sci-Hub’s work. Elbakyan had to migrate the websit... (read more)

This clip (from The Office) reminds me of when people suggest various patches to AI after it violates some safety concern / fails an experiment.

Am I the only one for whom all comments in the Alignment Forum have 0 votes?

3habryka1yNope, looks like a bug, will look into it.

Just got a Roam account; is there any good resource on how to use it? I looked into the help page, but most links don't lead anywhere. Thanks.

3Max Dalton1yI'm planning to try this: https://learn.nateliason.com/. I think that the Roam founder also recommends it.
1So-Low Growth1yI recently applied for a Roam account. Can I ask when it was you applied and how long before they got back to you?
2adamShimi1yI would say I applied a couple of weeks ago (maybe 3 weeks), and received an email yesterday telling me that accounts were opening again.
1So-Low Growth1yThanks.

Mod note: Copied over from one of Zvi's Covid posts to keep the relevant thread on-topic:

[...]

Murdered

I would simply like to point out here 3 things.

1. The definition of homicide from Wikipedia: "A homicide requires only a volitional act by another person that results in death, and thus a homicide may result from accidental, reckless, or negligent acts even if there is no intent to cause harm." Such a finding in an autopsy report does not imply a crime, let alone murder.

2. The autopsy report ordered by his family showed quantities of numerous ... (read more)

[+][comment deleted]1y 1