I know of 2-3 friends who lost $5k+ when Mt. Gox went down, and who didn't hold crypto anywhere else.
Yeah, the current phrase feels confusing to me. If a human takes something else as a subject that... feels like it has some different connotations. In my mind the two opposing phrases are "being subject to" (passive) and "taking as object" (active).
There's some complexity here because English offers two words, "subject" and "object", that can be used somewhat interchangeably in some situations, but in most situations we have some notion that "subject" is on the left/upstream side of the causal arrow and "object" is on the right/downstream side. However, Ben's reuse of "subject", shifting it from the actor ("subject to") to the acted-upon ("as subject"), seems mostly poetic and a reasonable alternative to talking about the object.
Of course, because English is noun-focused, it's rather nice to have two diff...
Yep, that's fine. I am not a moral prescriptivist who tells you what you have to care about.
I do think that you are probably going to change your mind on this at some point in the next millennium if we ever get to live that long, and I do have a bunch of arguments that feel relevant, but I don't think it's completely implausible you really don't care.
I do think that not caring about people who are far away is pretty common, and building EA on that assumption seems fine. Not all clubs and institutions need to be justifiable to everyone.
Hmm, I do think I honestly believe that behavioral scientists might be worse than the average politician at predicting public response. Like, I am not totally confident, but I think I would take a 50% bet. So this strikes me as overall mildly bad (though not very bad, I don't expect either of these two groups to be very good at doing this).
I don't know, I think it's a pretty decent argument. I agree it sometimes gets overused, but I do think its assumptions ("you care about people far away as much as people close by" and "there are lots of people far away you can help much more than people close by") and the step "here is a situation where you would help someone close by, so you might also want to help the people far away in the same way" form a totally valid logical chain of inference that seems useful to have in discussions of ethics.
Like, you don't need to take it to an extreme, but it seems locally valid and totally fine to use, even if not all the assumptions that make it locally valid are always fully explicated.
Yeah, that makes sense. Will be more careful with moving old historical posts to the frontpage for this reason.
Can you paste the link to the RSS feed? We've recently moved a bunch of old Sequences posts to the frontpage that we missed when we did our initial pass in 2017, so that seems like the most likely cause if you are subscribed to a feed that filters only on frontpage posts.
Yeah, I really want to get around to this. I am sorry for splitting the feature-set awkwardly across two editors!
Woah, at least one of those summaries seems really quite inaccurate. Bad enough that like, I feel like I should step in as a moderator and be like "wait, this doesn't seem OK".
I am not very familiar with ESR's opinions, but your summary of "white people at BLM protests should be assumed to be communists and shot at will" really misrepresents the thing he actually said. What he actually said was "White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends;", with th...
Lukeprog wrote some related posts a while ago: https://www.lesswrong.com/posts/x8Fp9NMgDWbuMpizA/rationality-lessons-learned-from-irrational-adventures-in
In particular the stuff on fashion.
This is great, thank you!
Minor formatting note: The italics font on both the AI Alignment Forum and LessWrong isn't super well suited to large blocks of text, so I took the liberty of unitalicizing a bunch of the large blockquotes (which should be sufficiently distinguishable as blockquotes without the italics). Though I am totally happy to reverse it if you prefer the previous formatting.
This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality
I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a la
Mod note: I moved this to frontpage despite it being a bit similar to things we've historically left on people's personal blog. Usually there are three checks I run for deciding whether to put something on the frontpage:
For this ...
Hi, I'm new to this site so not sure if late comments are still answered...
Late comments are generally encouraged around here, and we generally aim to have discussion that stands the test of time, and don't ever shut down comment threads because they are too old.
That sucks. Sorry for your loss.
Edit note: I fixed your images for you. They seemed broken on Chrome since the server on which they were hosted didn't support https.
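For context on what such a fix involves: an https page can't load http images (Chrome blocks them as mixed content), so the repair is to rehost the files somewhere that speaks https and rewrite the src attributes. A minimal sketch, with hypothetical URLs and helper names (not the actual LessWrong code):

```python
import re

def find_insecure_images(html: str) -> list[str]:
    """Return the http:// image URLs that browsers will block as
    mixed content when the page itself is served over https."""
    return re.findall(r'src=["\'](http://[^"\']+)["\']', html)

def rewrite_images(html: str, replacements: dict[str, str]) -> str:
    """Swap each blocked URL for its mirrored https location,
    e.g. after re-uploading the files to an https-capable host."""
    for old, new in replacements.items():
        html = html.replace(old, new)
    return html
```

So for a post containing `<img src="http://old.example/a.png">`, you'd mirror `a.png` on an https host and rewrite the tag to point at the new URL; simply flipping `http` to `https` wouldn't help here, since the original server didn't support https at all.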
I think the Open Thread is probably a generally better place to bring up random new ideas related to Roko's basilisk stuff. This page is more for discussing the current content of the page, and how it might be improved.
Yep, agree, also want this. Just a bit complicated tech-wise and UI-wise, so it's a reasonably large investment.
Note that guzey's excellent writeup on this definitely qualifies, and I offered to send him the money, but if I remember correctly he didn't want it, and we will settle it informally when we hang out sometime in the future.
This is great, thank you for writing this up and I am looking forward to where this goes!
(I probably have more detailed thoughts, but not sure whether I will get around to writing them up, so it seemed better to leave encouragement instead of nothing)
AirPods are amazing at switching between devices (in particular Macs and iPhones). They're the only set of headphones that seems to have made this work reliably.
Yeah, I meant it as "I think this comment is OK and shouldn't be deleted or cause the author to get a warning, but it seemed like the kind of thing that could lead to followup comments that would be quite bad"
We have a setting that allows you to view them exactly that way. But I think too large a fraction of people reading LW posts are lurkers, which makes me hesitant to force them to press an additional button for every poll; still, it seems like a reasonable setting for some people to opt into.
In the left sidebar menu, click on Subscribe (RSS/Email).
Mod note: We generally try to keep generalizations about political parties, and central culture-war topics in general, out of most of the site discussion. I think this comment is fine, but I would prefer that the comments on this post not become a "the left thinks or the right thinks" type of discussion, which I think is rarely fruitful.
David Gerard not only has 1000 karma but for a long time had admin rights at our Wiki. I think it's strawmanning him to say that he just doesn't understand LessWrong when he spent years in our community and then decided that it's not the right place for him anymore.
No, just because you spend years here does not mean you understand the core ideas.
I think we have plenty of evidence that David Gerard frequently makes up random strawmen that have nothing to do with us, and maybe there is a small corner of his mind that does have an accur...
No, his critique of Bayesianism is also attacking something very different from the Sequences; it is again talking about something much narrower. Indeed, substantial fractions of the Sequences overlap with his critique of Bayesianism (in particular all the stuff about embeddedness, logical uncertainty, incomputability, and TDT-style concerns). I don't think he agrees with everything in the Sequences, but when he writes critiques, I am pretty sure he is responding to something other than the Sequences.
Also, having 220 karma on the site is really not much evidence you understand what rationality is about. David Gerard has over 1000 karma and very clearly doesn't understand what the site is about either.
I am pretty sure Chapman has also said he hasn't read the sequences, though generally I think he understands most content on the site fine. The problem is again not that he doesn't understand the site, but just that he is using the word rationality to mean something completely different. I like a bunch of his critique, and indeed Eliezer made 90% of the sa...
David Chapman has said himself that when he is referring to rationality, what he is talking about has nothing to do with LessWrong. He is referring to the much older philosophical movement of "Rationalism". The whole thing with Chapman is literally just an annoying semantic misunderstanding. He also has some specific critiques of things that Eliezer said, but 95% of the time, what he critiques as rationalism has absolutely nothing to do with what is written on this site.
Chapman has also specifically said that he does not understand LW:
I frequently emphasize that by “rationalism” I am specifically NOT referring to the LW usage, and that is not the target of my critique. I gave up on trying to understand LW rationalism ~5 years ago.
I also prefer spoiler blocks
Required some integration from both sides. But yeah, the new editor made it much easier.
My guess is we want to rename this tag to "Quantified Self" since that sure seems like it should get a tag?
Yep, I expect some people will want them turned off, which is why we tried to make that pretty easy! It might also make sense to batch them into a weekly batch instead of a daily one, which I've done at some points to reduce the degree to which I felt like I was goodharting on them.
Thanks for reporting! Will make sure we fix this soon.
Wow, it's great to see follow-up posts over the course of seven years. Thank you so much for the work you put into this! I am really looking forward to reading this thoroughly sometime in the next few days.
Yep, I feel similarly, though overall think the EA Forum is pursuing a cultural strategy that is somewhat different from ours that makes it a bit less costly, but not much. I have generally been open about various cultural concerns I've had about the EA Forum when talking to CEA.
Also... it seems really unreasonable to say "if you can't handle 10 hours of grueling negotiations about what COVID precautions to take, you're weak and I need to cut you out of my life and/or take away decisionmaking power from you during times of stress." I would guess that, uhh, most people are weak by that definition.
To be clear, I do indeed think we have the luxury to exclude most people from our lives. Indeed any rule that doesn't exclude 90%+ of people from your life to a very large degree seems far too lax to me.
Also, 10 hours really doesn't seem t...
Note: There is a broken image in the post:
I am pretty confused on this, and as I said above, don't put much weight on this study because I also have some sense that the author isn't super trustworthy (though I haven't found any critique of this specific paper).
Overall, my current sense is that the effect on women in particular is quite strong, and women who choose to have children will reduce their chance of major achievement by at least 40% or so. For men it's probably weaker, and I am a lot less sure what the data says.
There is also this paper, which aims to show that as soon as great scientists marry, they very quickly stop producing great achievements, but something about it irks me and I don't currently put a ton of weight on it:
This analysis was one of the most useful I have found: https://academic.oup.com/psychsocgerontology/article/64B/6/767/550078
Compared with married parents, childless married couples tend to have slightly more income and about 5% more wealth. Unmarried childless men enjoy no income advantage over unmarried fathers but have 24%–33% more wealth. Compared with older unmarried mothers, unmarried childless women have 12%–31% more income and about 33% more wealth. The strength of these relationships increases as one moves up the distribution of
Yep, in general about 85% of people have kids, and something like half of the people who don't are childless because of fertility problems or other things that tend to classify them as "involuntarily childless" in a bunch of studies. So the population to study here (people who voluntarily don't have children) has historically made up only something like 7% of the population. So just looking through lists of successful people and seeing that most of them have kids isn't really going to provide a ton of evidence.
Just to check, do you want us to reimport, or did you do it yourself?
Promoted to curated: As Adele says, this feels related to a bunch of the Jeffrey-Bolker rotation ideas, which I've referenced many times since then, but in a way that feels somewhat independent, which makes me more excited about there being some deeper underlying structure here.
I've also had something like this in my mind for a while, but haven't gotten around to formalizing it, and I think I've seen other people make similar arguments in the past, which makes this a valuable clarification and synthesis that I expect to get referenced a bunch.
Maybe, but really depends on whether you have a good track record or there is some other reason why it seems like a good idea to fund from an altruistic perspective.
If you ever want to do anything particularly weird in an article, you can send me plain HTML via Intercom and I'll insert it into the post directly (after doing some basic sanitization). This will usually make the post admin-only editable (if you used any HTML features that are admin-only), but it works well enough, and I've done this a few times for articles that really wanted to use color (Beth's AI Safety Debate writeup is one that comes to mind).
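To illustrate what "basic sanitization" of user-supplied HTML can mean, here is a hypothetical allowlist sketch in Python (not the sanitizer LessWrong actually runs): keep only approved tags and attributes, and drop the contents of script/style blocks entirely.

```python
import html
from html.parser import HTMLParser

# Hypothetical allowlists for illustration; "style" is what would let a
# post use color, though a real sanitizer would also filter CSS values.
ALLOWED_TAGS = {"p", "a", "em", "strong", "ul", "ol", "li", "blockquote", "span"}
ALLOWED_ATTRS = {"href", "style"}

class AllowlistSanitizer(HTMLParser):
    """Allowlist-based sanitizer: keeps only approved tags/attributes
    and strips <script>/<style> blocks, contents included."""

    def __init__(self) -> None:
        super().__init__()
        self.out: list[str] = []
        self._skip = 0  # nesting depth inside script/style

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
            return
        if self._skip or tag not in ALLOWED_TAGS:
            return
        kept = " ".join(
            f'{k}="{html.escape(v, quote=True)}"'
            for k, v in attrs
            if k in ALLOWED_ATTRS and v is not None
        )
        self.out.append(f"<{tag} {kept}>" if kept else f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = max(0, self._skip - 1)
        elif not self._skip and tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Re-escape text so decoded entities can't smuggle in markup.
        if not self._skip:
            self.out.append(html.escape(data, quote=False))

def sanitize(raw: str) -> str:
    s = AllowlistSanitizer()
    s.feed(raw)
    s.close()
    return "".join(s.out)
```

A production sanitizer would do more than this, e.g. reject `javascript:` hrefs and validate CSS inside `style`, but the allowlist shape is the core idea.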
(Mod note: I added the image you linked to inline, since this seemed like it would improve the post and was more like what you intended to happen. But let me know if that was an intentional choice, and I am happy to reverse it)
Does seem kinda important to get this right. My guess is it's an honest mistake, but still one I would like to see corrected, and think is worth investing some effort into avoiding.
(Mod Note: I sent Adam Smith a mod warning for making a bunch of seemingly low-effort/low-quality comments. This is your first warning, we might ban you if you continue posting content of this level of quality.)