I'm only 22, and I don't have much life experience. So I don't know how pleasing the rewards of such hardships would be, nor do I have a model of how much pain would go into them. However, reading through the scenarios seemed awful, so I rated my willingness to go through with them very low relative to the median response.
I'd be more interested in the same poll restricted to people over the age of forty, asking whether the rewards of hardship were so great that they'd be willing to go through the pain again.
Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on Artificial Intelligence keeping the research associates out of town. My source is a friend currently interning at MIRI. So they might have been even busier than you thought. I hope this clears things up.
The whole subculture that is the new 'rationality movement' has some nodes, i.e., memes and subcultures, which are not included in this map of the Bay Area memespace. I'm sitting here at home with my friend Kytael, and we're brainstorming the following:
Nodes Unique to the Rationalist Community
Nodes From the Bay Area Separate From the Rationalist Community
Controversial Nodes Within the Rationalist Community
Emerging Subcultures and Memes in the Rationalist Community
New Memes and Groups The Rationalist Community May Want to Explore More
This list isn't exhaustive, and it could be controversial, so please question or criticize it below. I will update the list by editing this comment in response to replies. This was more of a brainstorming exercise than anything, but one I thought other Less Wrong users might find interesting. If a great discussion results, I, or someone else, could turn this into a fuller post in its own right.
Is there an update on this issue? Representatives from nearly all the relevant organizations have stepped in, but what's been reported has done little to resolve my confusion, and I remain as divided on the issue as Mr. Hallquist originally was. Dr. MacAskill, Mr. O'Haigeartaigh, and Ms. Salamon have all explained why they believe the organizations they're attached to are the most deserving of funding. The problem is that this has done little to assuage my uncertainty about which organization is in the most need of funds, and which would have the greatest impact from a donation made now, relative to each of the others.
Thinking about it as I write this comment, it strikes me as an unfortunate state of affairs when organizations that genuinely want to cooperate toward the same ends are put in the awkward position of making competing(?) appeals to the same base of philanthropists. This might have been mentioned elsewhere in the comments, but donations to which organization do any of you believe would yield the biggest return on investment in terms of attracting more donors, and talent, toward existential risk reduction as a whole? Which organization would most increase the base of effective altruists, and like-minded individuals, who would support this cause?
If anything, I could use more information from the CEA, the FHI, and the GPP. Within effective altruism, there's a standard of expecting some transparency from the purportedly effective organizations being supported. In terms of financial support, this would mean the open publishing of budgets. Based upon Mr. O'Haigeartaigh's report above, though, the FHI itself might be too strapped for time, amid all its other core activities, to provide this sort of insight.
I recently started my career as an effective altruist earning to give, making my first big splash with a $1,000 USD unrestricted donation to GiveWell last month.
Uh, I've trawled through Wikipedia for the causes and symptoms of mental illnesses, and, according to my doctors (general practitioner and psychiatrist), I've been good at identifying what I'm experiencing before I go to see them about it. The default case is that patients just go to the doctor, report their symptoms, answer questions about their recent lifestyle, and the doctors take care of diagnosis and/or treatment. I choose to believe that I have such clarity about my own mental processes because my doctors tell me how impressed they are when I come to them seeming to already know what I'm experiencing. I don't know why this is, but my lazy hypothesis chalks it up to my being smart (people I know tell me this more often than I would expect), and to my having become more self-reflective after attending a CFAR workshop.
Of course, both my doctors and I could be prone to confirmation bias, which would be a scary result. Anyway, I've had a similar experience of observing my own behavior, realizing it's abnormal, and being proactive about seeking medical attention. Still, for anyone, diagnosing yourself by trawling Wikipedia or WebMD seems a classic exercise prone to confirmation bias (e.g., experiencing something like medical student's disease). This post is a signal that I've qualified my concerns through past experience, and that I encourage you both to seek out a psychiatrist, as I don't expect that to result in a false negative diagnosis, and also to still be careful as you think about this stuff.
Scientists, as a community of humans, should expect their research to return false positives sometimes, because that is what is going to happen, and they should publish those results. Scientists should also expect experiments to demonstrate that some of their hypotheses are just plain wrong. It seems to me replication is not very useful only if the replications of an experiment are likely prone to all the same problems that currently make original experiments from social psychology not all that reliable. I don't have experience or practical knowledge of the field, though, so I wouldn't know.
Insofar as it's appropriate to post about a well-defined problem without having its complete solution, I consider this post to be of sufficient quality to deserve being posted in Main.