GiveWell's top charities updated today. Compared to previous recommendations, they have put Against Malaria Foundation back on the top charities list (partial explanation here), and they have also added an "Other Standout Charities" section.
Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I thought it would only be due diligence to follow up with users on LessWrong who have received advice here that backfired. In other words, to keep the record unbiased, we should also notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data about how a plan or advice directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. If you would like to keep the details private, feel free to send me a private message.
If the ensuing thread doesn't get enough feedback, I'll try asking this question in a Discussion post in its own right. If for some reason you think this whole endeavor isn't necessary, critical feedback about that is also welcome.
My feeling was that SSC is getting close to LW in terms of popularity, but Alexa says otherwise: SSC hasn't yet cracked the top 100k sites (LW is ranked 63,755) and has ~600 links to it vs ~2000 for LW. Still very impressive for the part-time hobby of one overworked doctor. Sadly, 20% of searches leading to SSC are for heartiste.
My suspicion is that SSC would get a lot more traffic if its lousy WP comment system were better, but Scott is apparently not motivated by traffic, so there is no incentive for him to improve it.
SSC would get a lot more traffic
SSC getting a lot more traffic might change it and not necessarily for the better.
Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.
The true point of no return has to be much later than we currently believe it to be. (Besides, does it even exist at all? Maybe a super-advanced civilization could collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not.)
Our vitrification technology is not a secure erase procedure. Pharaohs also thought their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately it's the experiment that judges.
Timeless identity is correct, and it's you rather than your copy that wakes up.
We will figure out brain scanning.
We will figure out brain simulation.
Alternatively, we will figure out nanites, and a way to make them work through the ice.
We will figure all that out sooner than the expected time of the brain being destroyed by: slow crystal formation; power outages; earthquakes; terrorist attacks; meteor strikes; going bankrupt; economic collapse; n
New research suggests that life may be hard to come by on certain classes of planets even if they are in the habitable zone, since they will lose their water early on. See here. This is noteworthy in that in the last few years almost all other research has pointed toward astronomical considerations not being a major part of the Great Filter, and this is a suggestion that slightly more of the Filter may be in our past.
How do people who sign up for cryonics, or want to sign up for cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of this is morbid rationalisation on my part just because signing up for cryonics in the UK seems not quite as reliable or easy as in the US, but it still seems like a real issue to me.
Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in betwee...
Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?
Say we can run people on computers, we can start and stop them at any moment, but available power fluctuates. So we come up with a system where when power drops we pause some of the people, and restore them once there's power again. Once we've stopped someone, is there a moral reason to start them again?
My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.
(But I'm not signed up for cryonics because I don't think the information would be preserved.)
What exactly causes a person to stalk other people? Is there research that investigates the question when people start to stalk and when they don't?
To what extent is getting a stalker a risk worth thinking about before it's too late?
No research, just my personal opinion: borderline personality disorder.
alternating between high positive regard and great disappointment
First, the stalker is obsessed with the person because the target is the most awesome person in the universe. Imagine a person who could give you infinitely many utilons, if they wanted to. Learning all about them and trying to befriend them would be the most important thing in the world. But at some moment, there is an inevitable disappointment.
Scenario A: The target decides to avoid the stalker. At the beginning the stalker believes it is merely a misunderstanding that can be explained, that perhaps they can prove their loyalty by persistence or something. But later they give up hope, or receive a sufficiently harsh refusal.
Scenario B: The stalker succeeds in befriending the target. But they are still not getting the infinite utilons they believe they should be getting. So they try to increase the intensity of the relationship to impossible levels, as if trying to become literally one person. At some moment the target refuses to cooperate, or is simply unable to cooperate in the way the stalker wants, but to the stalker even this...
PubMed and Wikipedia give this:
Elon Musk often advocates looking at problems from first principles rather than by analogy. My question is: what does this kind of thinking imply for cryonics? Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?
Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around $4k. This doesn't seem to account for paying the staff or for transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.
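The scale argument can be sketched with a toy model. The only figures taken from the discussion above are the ~$80k current price and Merkle's ~$4k target; the $1M fixed cost and $3k marginal cost below are purely hypothetical numbers chosen to illustrate how a shared facility spreads its fixed cost across more people:

```python
# Toy model: per-person cost of a shared cryopreservation facility.
# fixed_cost and marginal_cost are hypothetical, for illustration only.

def cost_per_person(n_people, fixed_cost=1_000_000, marginal_cost=3_000):
    """Fixed facility cost shared across n_people, plus per-person marginal cost."""
    return fixed_cost / n_people + marginal_cost

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} people: ${cost_per_person(n):,.0f} each")
```

Under these made-up numbers, the per-person cost falls from six figures at small scale toward the marginal cost as the vat fills, which is the shape of the argument even if the real constants differ.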
I'm going to narrate a Mutants and Masterminds roleplaying campaign for my friends, and I'm planning that the final big villain behind all the plots will be... Clippy.
Any story suggestions?
Sabotage of a big company's IT systems, or of an IT company that maintains those systems, to force people to use paperclip-needing physical documents while the systems are down. The paperclips can be mentioned, but only as what seems to the players like mere fluff describing how this (rival company/terrorist/whatever) attack has disrupted things.
Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I've been thinking about a couple of things since I wrote that post.
What makes LessWrong a useful website for asking questions which matter to you personally is that there are lots of insightful people here with a wide knowledge base. However, for some questions, LessWrong might be too much of, or the wrong kind of, a monoculture to provide the best answers. Thus, for weird, unusual, or highly specific questions, there might be better d
Animal Charity Evaluators have updated their top charity recommendations, adding Animal Equality to The Humane League and Mercy for Animals. Also, their donation-doubling drive is nearly over.
We don't all agree on what a utilon is. I think a year of human suffering is very bad, while a year of animal suffering is nearly irrelevant by comparison, so I think charities aimed at helping humans are where we get the most utility for our money. Other people's sense of the relative weight of humans and animals is different, however, and some value animals about the same as humans or only somewhat below.
To take a toy example, imagine there are two charities: one that averts a year of human suffering for $200 and one that averts a year of chicken suffering for $2. If I think human suffering is 1000x as bad as chicken suffering and you think human suffering is only 10x as bad, then even though we both agree on the facts of what will happen in response to our donations, we'll give to different charities because of our disagreement over values.
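The toy example's conclusion can be checked with a quick calculation. The $200 and $2 costs and the 1000x and 10x weights come from the example above; measuring everything in chicken-suffering-years is just one convenient normalization:

```python
# Compare charities by weighted suffering averted per dollar,
# using a year of chicken suffering as the unit of account.

def best_charity(human_weight, human_cost=200, chicken_cost=2):
    """human_weight: how many chicken-years one human-year of suffering is worth."""
    human_value = human_weight / human_cost  # weighted units averted per dollar
    chicken_value = 1 / chicken_cost
    return "human" if human_value > chicken_value else "chicken"

print(best_charity(1000))  # 1000x donor: 5.0 vs 0.5 units per dollar
print(best_charity(10))    # 10x donor: 0.05 vs 0.5 units per dollar
```

Both donors agree on every empirical fact in the model; only the weight differs, and that alone flips the answer (the break-even weight here is 100x, the ratio of the two costs).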
In reality, however, it's more complicated. The facts of what will happen in response to a donation are uncertain even in the best of times, but because a lot of people care about humans the various ways of helping them are much better researched. GiveWell's recommendations are all human-helping charities because of a combination of "...
I may write a full discussion thread on this at some point, but I've been thinking a lot about undergraduate core curriculum lately. What should it include? I have no idea why history has persisted in virtually every curriculum I know of for so long. Do many college professors still believe history has transfer-of-learning value in terms of critical thinking skills? Why? The transfer of learning thread touches on this issue somewhat, but I feel like most people there are overvaluing their own field, hence computational science is overrepresented and social science, humanities, and business are underrepresented. Any thoughts?
The first question is what goals should undergraduate education have.
There is a wide spectrum of possible answers ranging from "make someone employable" to "create a smart, well-rounded, decent human being".
There is also the "provide four years of cruise-ship fun experience" version, too...
First, undergrad freshmen are probably not the right source for wisdom about what a college should be.
Second, I notice a disturbing lack of such goals as "go to awesome parties" and "get laid a lot" which, empirically speaking, are quite important to a lot of 18-year-olds.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.