All of PeerInfinity's Comments + Replies

  1. Lock the door. Then check if the door is locked. Then wait two seconds, then check again if the door is locked. Then walk two steps away, then return and check if the door is locked. Then walk several steps away, then return and check if the door is locked. Repeat with further distances until you're so embarrassed by this process that you'll vividly remember the embarrassment, and also remember that your door is locked. This is especially effective if someone else sees you doing this. Or you could just write yourself a note saying that you locked t

... (read more)

"HE IS HERE. THE ONE WHO WILL TEAR APART THE VERY STARS IN HEAVEN. HE IS HERE. HE IS THE END OF THE WORLD."

This reminded me of a dream I had the night before Sunday, Dec 2, 2012, which I posted to my livejournal blog the next day. I'm not sure what I expect to accomplish by posting this here, but I thought you might find it interesting. Here is what I wrote about that dream:

" A scene where I dreamt I was reading the next chapter of HPMOR. It was extremely vivid. As if I was there. Very clear image and sound. Even some dramatic music. Omino... (read more)

"You can't put a price on a human life."

"I agree, but unfortunately reality has already put a price on human life, and that price is much less than 5 million dollars. By refusing to accept this, you are only refusing to make an informed decision about which lives to purchase."

0plu10y
Actually the estimate I heard was about 6 million dollars. And I'd argue the other way: That human life is the only thing you can put a price on, the basis for all trade. Whenever you cross a road, you're trading a slight chance of being run over for the value of being on the other side. When you eat something unhealthy, you're trading a portion of your life expectancy for the taste. So people do it every day, except they only trade in fractions of human life.
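The "trading in fractions of human life" framing above can be made concrete with a quick expected-value sketch (the dollar figure uses the ~$6M estimate mentioned in the comment; the risk numbers are illustrative assumptions, not measured values):

```python
# Sketch: pricing a tiny mortality risk against an assumed statistical
# value of a human life. All numbers are illustrative.
VALUE_OF_LIFE = 6_000_000  # dollars; the ~$6M estimate mentioned above

def expected_cost(prob_of_death: float) -> float:
    """Dollar-equivalent cost of accepting a small chance of death."""
    return prob_of_death * VALUE_OF_LIFE

# A one-in-a-million chance of being run over prices out at about $6,
# so crossing the road is worth it whenever being on the other side
# is worth more than that to you.
road_crossing = expected_cost(1e-6)  # ≈ $6
```

On this view, refusing to name a price doesn't avoid the trade; it just leaves the trade unexamined.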

I like the idea. This is something that could be useful to anyone, not just as part of the Rationality Curriculum.

Here is a related idea I posted about before:

Another random idea I had was to make a text adventure game, where you participate in conversations, and sometimes need to interrupt a conversation to point out a logical fallacy, to prevent the conversation from going off-track and preventing you from getting the information you needed from the conversation.

See also The Less Wrong Video Game

One obvious idea for an exercise is MBlume's Positive Bias Test, which is available online.

But of course everyone taking the course would probably already be familiar with the standard example implemented in the app. I would suggest updating the app to have several different patterns, of varying degrees of complexity, and a way for the user to choose the difficulty level before starting the app. I would expect that to be not too hard to implement, and useful enough to be worth implementing.
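A multi-pattern version of the test could be sketched like this (the hidden rules and difficulty tiers below are invented examples for illustration, not the ones in MBlume's app):

```python
# Sketch of a 2-4-6-style rule tester with selectable difficulty.
# The hidden rules here are made-up examples of varying complexity.
RULES = {
    "easy":   lambda a, b, c: a < b < c,            # strictly increasing
    "medium": lambda a, b, c: a + b + c > 0,        # positive sum
    "hard":   lambda a, b, c: (a * b * c) % 2 == 0, # product is even
}

def check_triple(difficulty: str, triple: tuple) -> bool:
    """Return whether a proposed triple fits the hidden rule."""
    return RULES[difficulty](*triple)
```

The user would pick a difficulty, then probe triples and guess the rule; the point of the exercise is to reward testing triples you expect to *fail*, not just ones you expect to pass.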

7Ratheka11y
What do you think of the idea of an RPG-type game where the quests are designed to trigger biases in people, and that requires clear thinking to win? I'd be a big fan of a game that required you to read quests and think about them, and moved away from the 'track arrow, kill everything en route' model that many have today. Of course, it still needs to be fun to entice people to play it. Functional edutainment seems to be a rough balance to strike.

downvote this comment if you want to balance out the karma from an upvote to the other comment.

Please upvote this comment if you would have at least some use for a "saving the world wiki"

"persuade other people to do it for me"? Don't you mean "persuade other people to do it with me"?

other than that, this is an awesome post! I totally want to be your ally! :)

Congratulations on your altruism! If you really are as altruistic as you claim to be.

I'm the person who mentioned there should be a "saving the world wiki", by the way. The main thing that's stopping me from going ahead and starting it myself is that no one else expressed any interest in actually using this wiki if I created it.

Also, I've already made so... (read more)

0PeerInfinity12y
downvote this comment if you want to balance out the karma from an upvote to the other comment.
2PeerInfinity12y
Please upvote this comment if you would have at least some use for a "saving the world wiki"

I'm going to try to apply some Bayesian math to the question of whether it makes sense to believe "if you aren't sad about my bad situation then that means you don't care about me"

In this example, Person X is in a bad situation, and wants to know if Person Y cares about them.

To use Bayes' theorem, we are interested in the following probabilities:

P(A) is 'Person Y cares about Person X'

P(B) is 'Person Y feels sad about Person X's situation'

P(C) is 'Person Y expresses sadness about Person X's situation'

Let's use P(B) as an abbreviation for either P(... (read more)

To me, it still feels Wrong to not feel bad when bad things are happening. Especially when bad things are happening to the people you know and interact with.

I suspect that the reason why it feels Wrong is because I would assume that if someone you know was in a really bad situation, and they saw you not feeling bad about it, they would assume that you don't care about them. I was assuming that "feeling bad when bad things happen to someone" is part of the definition of what it means to care about someone. And I'm naturally reluctant to choose ... (read more)

4fr00t12y
When you deeply grok that you are not the world, I don't think it's likely that relishing emotional turbulence will encourage you to deliberately cause harmful things to happen. What it may (hopefully) do is encourage you to be more curious and less risk-averse. Personally, I have found that I tend to slip into a sort of autopilot, where I stagnate, become emotionally numb, and lose effectiveness as a person. Unfortunately this also causes me to lose the impetus for introspection. In periods of clarity, I can easily see that emotion is a tool I should be using, but I've gotten so good at ignoring it, I feel trapped. So this article was particularly relevant and helpful to me. I'm also interested in more specific strategies/affirmations/examples for reconciling emotion as a feedback mechanism rather than a source of anxiety to be swept under the rug.
9Kaj_Sotala12y
You still have preferences in addition to emotions. Say you have a strong preference for bad things not to happen to someone. Then you do whatever you can to prevent bad things from happening to them, and if something bad does happen to them, you help them out to the best of your ability. In my book, that counts as caring about someone. Not caring would mean that you didn't do anything to stop them from experiencing bad stuff, nor did you help them out if something bad did happen to them. Now people have various definitions of caring, and some probably do think that "feeling bad if something bad happens to someone" is required for genuine caring. But I would disagree. From an evolutionary point of view, emotions exist to motivate behavior. If you behave like a caring person would but don't feel bad, then in reality you care more than someone who feels bad but doesn't actually do anything. And if you end up feeling bad, then that may distract you and cause you to make worse decisions or temporarily paralyze you, which reduces your ability to actually help. (Also, evolutionarily, feeling bad about someone suffering probably also acts as a costly signal: if they're hurt, you suffer, so they have unfakeable evidence of you actually caring and not just pretending to care. But you have no need to prove to yourself that you care in such a perverse way, and you can also prove your caring to them with your actions.) From a consequentialist point of view also, what matters is your actual behavior. The less time you spend feeling bad, the more time you can spend things that actually make people better off. Also, not feeling bad doesn't mean that you can't express sympathy. You can still honestly say things like "I wish things got better for you". For most people, it's the notion that you don't care what happens to them that is bothersome. You can show with both your words and actions that you do care, that is, have a preference that things go well for them and are prepared
2PeerInfinity12y
I'm going to try to apply some Bayesian math to the question of whether it makes sense to believe "if you aren't sad about my bad situation then that means you don't care about me".

In this example, Person X is in a bad situation, and wants to know if Person Y cares about them.

To use Bayes' theorem, we are interested in the following probabilities:

P(A) is 'Person Y cares about Person X'

P(B) is 'Person Y feels sad about Person X's situation'

P(C) is 'Person Y expresses sadness about Person X's situation'

Let's use P(B) as an abbreviation for either P(B given C) or P(B given not C), because we're doing these calculations after Person X already knows whether or not Person Y expressed sadness. In other words, I'm assuming that P(B) has already been updated on C.

Bayes' theorem says that P(A given B) is P(B given A) times P(A) over P(B). P(B given A) and P(A) make it go up, P(B) makes it go down.

Bayes' theorem says that P(not A given not B) is P(not B given not A) times P(not A) over P(not B). P(not B given not A) and P(not A) make it go up, P(not B) makes it go down.

So this tells us: The more uncertain Person X is about whether Person Y cares about them, the more they'll worry about whether Person Y feels sad about any specific misfortune Person X is experiencing.

Different people probably have different beliefs about what P(A given B) is. Someone who thinks that this value is high will be more reassured by someone feeling sad about their situation, and someone who thinks this value is low will be less reassured. So this value will be different for a different person X, and also for a different person Y.

Different people probably have different beliefs about what P(not A given not B) is. Someone who thinks that this value is high will be more worried by someone not feeling sad about their situation, and someone who thinks this value is low will be less worried. So this value will be different for a different person X, and also for a different person Y.
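The update described above can be run with concrete numbers (the 0.8, 0.9, and 0.2 below are made-up values for illustration, not claims about real people). The function is Bayes' theorem exactly as stated, with P(B) expanded by the law of total probability:

```python
def posterior(p_b_given_a: float, p_a: float, p_b_given_not_a: float) -> float:
    """P(A given B) = P(B given A) * P(A) / P(B),
    where P(B) = P(B|A)P(A) + P(B|not A)P(not A)."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Illustrative numbers: Person X starts fairly sure Y cares (P(A) = 0.8),
# thinks caring people usually feel sad (P(B|A) = 0.9), and thinks
# non-caring people rarely do (P(B|not A) = 0.2).
p_cares_given_sad = posterior(0.9, 0.8, 0.2)  # ≈ 0.95
```

With these numbers, observing sadness nudges P(A) from 0.8 up to about 0.95; the strength of the update depends entirely on the gap between P(B|A) and P(B|not A), which matches the point that different people will be reassured or worried to different degrees.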

I'll admit that after I first read that comment, I was about to make this mistake:

"When faced with a choice between doing a task and attempting to prove that it's unnecessary, most people immediately begin on the latter."

(I'm probably misremembering that quote. I tried googling but didn't find the original quote, and I don't remember where it's from.)

So a more appropriate course of action would be for me to at least check how much effort would be required to set up a github account. And so I did. I discovered that it was more complex than I ex... (read more)

3Rain12y
Thank you for putting in the effort! I like "Weighted Horoscopes".

I still haven't bothered to set up a github account. But if someone else wants to put it on github, they're welcome to.

Today is a good day for sharing. Take a moment to overcome trivial obstacles which may be preventing you from sharing useful endeavors, insights, or wisdom with those around you. Alternatively, offer something useful to a friend, or give something away.

one idea is to have 12 separate tumblr accounts, one for each zodiac sign, then the users subscribe to the tumblr account for their own zodiac sign.

The code for this project can be downloaded here

The code is written in PHP, and uses a MySQL database. A cron job is set up to post each day's horoscope to the Tumblr account.

This is completely free software, so you're welcome to do whatever you like with it.

Contributions and feedback are appreciated.

Update:

A git repository for this project is online now at https://github.com/PeerInfinity/Weighted-Horoscopes
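Without the PHP source in front of us, the core "weighted" selection presumably looks something like this Python sketch (the weighting scheme, and the example horoscope texts, are assumptions for illustration; the actual project may work differently):

```python
import random

# Sketch of weighted horoscope selection: each message carries a weight,
# and the daily pick is drawn in proportion to those weights.
horoscopes = [
    ("Today is a good day for sharing.", 3),
    ("Take a moment to overcome trivial obstacles.", 2),
    ("Offer something useful to a friend.", 1),
]

def pick_horoscope(entries, rng=random):
    """Draw one horoscope, with probability proportional to its weight."""
    texts, weights = zip(*entries)
    return rng.choices(texts, weights=weights, k=1)[0]
```

A cron job would then call something like this once per day per zodiac sign and post the result to the corresponding Tumblr account.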

5saturn12y
Why not put it on github (or similar service)?

sorry, I should have stated explicitly that I'm NOT assuming that "donating to a church = donating to a good cause".

What I am assuming is that the christians think that "donating to a church = donating to a good cause"

I recently found this article, which attempts to survey the arguments against cryonics. It only finds two arguments that don't contain any obvious flaws:

  1. Memory and identity are encoded in such a fragile and delicate manner that cerebral ischemia, ice formation or cryoprotectant toxicity irreversibly destroy it.

  2. The cell repair technologies that are required for cryonics are not technically feasible.

2Gunnar_Zarncke8y
The article link is broken. Via the Wayback Machine I found it was redirected some time ago to http://www.evidencebasedcryonics.org/2011/03/11/the-case-against-cryonics/

Thanks for explaining that! But, um... I still have more questions... What is the procedure for washing the surfaces, the utensils, and my hands? How do I know when the meat is cooked enough to not qualify as raw? And for stir-frying raw meat, do I need to pause the stir-frying process to wash the stir-frying utensils, so that I don't contaminate the cooked food with any raw juices that happen to still be on the utensils?

2beriukay12y
I'm not much of a stir-fryer, but my general method for meat cooking is to have separate utensils for "before cooking" and "during-to-after". So if I put the meat in the pan with a fork, that fork goes to the sink. But the wooden spoon that is cooked with the meat doesn't get washed until I'm done eating, and is usually used as my serving spoon, too. If you are really concerned for safety, you could always use one cooking spoon until the surface of the meat is obviously brown, then switch to a fresh spoon. If dealing with a low-fat meat (like moose), burger is much easier to cook than other meat, and is still healthy. It is hard to overcook, and easy to tell what's safe, because all the little chunks of meat go from red to dark brown. High fat burger (like cow) is still tasty and easy to cook, but not terribly healthy. One trick that I will immediately adopt is using an infrared thermometer to check for the 165F that saturn mentioned. Thanks for the info!
3luminosity12y
For cooking larger pieces of meat than saturn addresses, the way I learnt what was and wasn't needed was simply cooking meat, waiting until the outside looked cooked, then taking a piece out and cutting it in half. You'll be able to see if it's still bloody inside, or if it's chicken you'll be able to see if it's turned white yet. Personally I prefer meat entirely cooked, but depending on your taste pinkish in the middle should be fine. Doing this over time has given me a good feel for how long to cook meat for my preferences, though even now I still often slice pieces open to be sure.
7saturn12y
Salmonella bacteria are killed instantly at 165°F. Cooking small chopped or sliced pieces of meat is hard to do wrong because the surface area to volume ratio is high enough that they will be sterilized even before they start to appear cooked. Make your slices less than 1/2 inch thick and cook them until they start to turn golden brown. As long as the business ends of your utensils are in contact with the food as it cooks they will be sterilized along with it. Assuming that you already know how to wash things in general, you don't need to do it any differently. Normal washing is good enough because bacteria can't grow without a source of nutrients and moisture, and you need to ingest a fairly substantial amount of bacteria in order to get sick.

I think I have lots of gaps to report, but I'm having lots of trouble trying to write a coherent comment about them... so I'm going to just report this trouble as a gap, for now.

Oh, and I also have lots of trouble even noticing these gaps. I have a habit of avoiding doing things that I haven't already established as "safe". Unfortunately, this often results in gaps continuing to be not detected or corrected.

Anyway, the first gap that comes to mind is... I don't dare to cook anything that involves handling raw meat, because I'm afraid that I lack the knowledge necessary to avoid giving myself food poisoning. Maybe if I tried, I would be able to do it with little or no problem, but I don't dare to try.

2Conuly12y
One bit of food safety is to use a designated cutting board ONLY for chopping raw meat. One board for fruits and vegetables (and if they're wooden I find it's helpful to use a separate one for onions) and one for raw meat. You'll want to buy two that look dissimilar so you can't confuse the two. When you're cooking, be sure to wash the knife between chopping up your raw meat and chopping up anything that might not be cooked to the same temperature. (Practically, this means to wash the knife or switch knives after the meat, no matter what.)

I don't dare to cook anything that involves handling raw meat, because I'm afraid that I lack the knowledge necessary to avoid giving myself food poisoning.

Short tip: If the raw meat smells or tastes bad, don't eat it.

Longer tip: the reason there are so many raw meat warnings is not that you will get sick from eating or handling raw meat. If you don't have a clogged nose, there is almost no way for you to get sick from raw meat, because you will smell or taste any problems before you swallow it.

What's NOT safe is mixing raw and cooked foods. The sa... (read more)

0MartinB12y
This thread just confirms the benefits of being a vegetarian.
3wisnij12y
This is one of the things I struggled with a bit when first learning to cook for myself as well. It may help to keep in mind that some meats are safer than others. My heuristic goes roughly: chicken < pork < beef/lamb < fish, in increasing order of safety. If I'm handling raw chicken, I'll wash my hands and utensils thoroughly in warm soapy water before doing anything else. If I'm handling fish, I'll usually just give my hands a quick rinse. The same ordering also applies roughly to doneness; it's a much bigger problem to have undercooked chicken than beef, for example. A good starting place for meats is braised dishes like stews and pot roasts, because the typically long cooking time makes it hard to accidentally undercook something while still producing tasty results (as opposed to e.g. a steak grilled until it turns into shoe leather).
1NancyLebovitz12y
If you're roasting meat, you can get a thermometer that goes into the meat so you can find out whether the interior has gone up to a safe temperature. Chart of temperatures [http://www.food-worldwide.com/article/54/When-is-meat-cooked.html] Stewing meat (simmering it for an extended period until it falls apart) is another way to be sure it's safe.
1[anonymous]12y
Pork and chicken should be cooked all the way through. If you're not sure whether it's done, you can cut it open and have a look. With beef and lamb, you only need to ensure that the outer surface is cooked - whether you want it cooked all the way through is just a matter of personal taste. However, if it's minced, you should cook it all the way (it has formerly-outer-surfaces in the middle).

Generally, it is mainly chicken that one needs to be careful about, because it is sometimes contaminated with unhealthy bacteria, even when bought "fresh". A general procedure with all meat, and especially chicken, is to wash any surface that raw chicken comes in contact with when you are done preparing it and have started to cook it, then wash any utensils you used that touched the chicken, and wash your hands. To be extra cautious, you can do that for any raw meat. Raw meat should be refrigerated soon after purchase and not allowed to stand uncooked at room temperature for more than the time it takes to prepare it.

-6FAWS12y

An obvious implication of this post is that if someone tells you that you "should have known better", then rather than getting upset and instantly trying to defend yourself, it might be a better idea to calmly ask the person "How should I have known better?".

Possible answers include:

1) "using this simple and/or obvious method that I recommend as a general strategy"

2) "using this not simple and/or not obvious method that I didn't think of until just now"

3) "I don't know"

4) "how dare you ask that!"... (read more)

"I don't even see how one would start to research the problem of getting a hypothetical AGI to recognize humans as distinguished beings."

I'm still not convinced that human beings should be treated as a special case, as opposed to getting the AGI to recognize sentient beings in general. It's easy to imagine ways in which either strategy could go horribly wrong.

Here are some other links that are relevant to this post:

Andrew Hay's dependency graphs of Eliezer's LW posts

A Java applet for browsing through these dependency graphs Warning: This will take a long time to load, and may crash your browser

A Java applet for browsing through the concepts in the LW wiki Warning: This will take a long time to load, and may crash your browser

All of these graphs are out of date now.

0drc500free12y
Thank you. They're still relevant for the topics they cover... good background to see how much of the site is covered by sequences.

TrailMeme is awesome, thanks for posting this!

If TrailMeme had a tool to import/export to/from a file, then I might have volunteered to create a script to generate trails for the LW sequences.

Creating these trails manually would be tedious, but probably worthwhile.

But I'm not volunteering to do this myself, at least not any time soon, sorry.

3drc500free12y
You can import/export from a bookmark file. I'm not sure whether that's less tedious.
2jsalvatier12y
Maybe volunteer just for one sequence? I suppose it might be too early to do this anyway, since it sounds like it's not stable yet.

sorry if this squicks anyone here, but...

Not all of these people are sex slaves. Many of them are "service slaves".

I, personally, want to be a service slave, aka "minion", to someone whose life is dedicated to reducing x-risks.

The main purpose of this arrangement would be to maximize the combined effectiveness of me and my new master, at reducing x-risks. I seem to be no good at running my own life, but I am reasonably well-read on topics related to x-risks, and enjoy doing boring-but-useful things.

And I might as well admit that I wou... (read more)

Relevant details:

Peer is not currently ready to be a minion in any kind of stressful environment. Every authority figure he's had so far has been of the 'I won't respect you unless you stand up for yourself' type and not very sane, and as a result of that Peer has some very dysfunctional habits when it comes to interacting with such people. I'm already working on fixing that, but I expect that it will take at least a year before he's able to deal with normal expressions of disapproval in a sane way, voluntarily communicate important information that the re... (read more)

I was initially creeped out by this comment. Then I read on and got more creeped out. But at some point it got so weird that it turned back on itself and became awesome. It might be my favorite thing I've read here.

Us humans are so damned interesting. Not to mention diverse-- in ways some people who drone on about diversity can't even comprehend! And there is something special about a space which not only can make sense of the above comment but can tolerate it. And kudos Peerinfinity, for not being afraid to be seen as different. I would be.

And as a matter... (read more)

Existential Risks

More specifically, topics other than Friendly AI. Groups other than SIAI and FHI that are working on projects to reduce specific x-risks that might happen before anyone has a chance to create a FAI. Cost/benefit analysis of donating to these projects instead of or in addition to SIAI and FHI.

I thought the recent post on How to Save the World was awesome, and I would like to see more like it. I would like to see each of the points from that post expanded into a post of its own.

Is LW big enough for us to be able to form sub-groups of peop... (read more)

random trivia: I recently noticed that "The concept of cached thoughts is the most useful thing I learned from Less Wrong" is now a cached thought, in my mind.

I realize that this is kinda stretching the limits of plausibility, but maybe...

obgu lbhe jvsr naq gur ryringbe ynql jrer arire erny va gur svefg cynpr. V zrna, vg znxrf frafr gung gur ryringbe ynql vfa'g erny, fvapr fur unf 4gu-jnyy-oernxvat xabjyrqtr, ohg vs gur thl'f jvsr vf vzntvanel, gura gung zrnaf ur'f ernyyl penml, naq qrfcrengryl arrqrq gurfr qernzf gb fanc uvz bhg bs gur penmvarff. Gur ynpx bs pnef ba gur svany qnl pbhyq or rkcynvarq ol gur bgure pnef nyfb orvat vzntvanel, be ol gur thl orvat fb yngr gung qnl gung ur pbzcyrgryl zvffrq ehfu ubhe. Npghnyyl, guvf pbhyq rkcynva gur nofrapr bs uvf jvsr gbb, znlor ur jnf fb yngr gung qnl gung uvf jvsr naq gur ryringbe ynql jrer nyernql fbzrcynpr ryfr.

Zl vagrecergngvba bs "Rirel qnl gur fnzr qernz" jnf gung rirel qnl rkprcg sbe gur ynfg qnl jnf n qernz, naq gur ynfg qnl jnf ernyvgl. Naq gung gur thl jub lbh frr whzc ng gur raq vfa'g lbh, vg'f gur ynfg bs gur bgure rzcyblrrf, naq rirelbar ryfr va gur pbzcnal unq nyernql whzcrq, nf n erfhyg bs rvgure gur pbzcnal snvyvat (lbh fnj gur tencu?), be gurve bja fgerff, be obgu. Naq gung gur ernfba jul lbh'er abg whzcvat nybat jvgu gurz vf orpnhfr lbh unq guvf frevrf bs qernzf va juvpu lbh rkcyberq nyy bs gur cbffvoyr bcgvbaf, vapyhqvat fhvpvqr, naq o... (read more)

0Vaniver12y
I would like your interpretation of EDTSD, except it doesn't explain gur nofrapr bs lbhe jvsr be gur ryringbe ynql nsgre lbh whzc. V'z cerggl fher vg'f abg n cbfvgvir raqvat.

I'm surprised that no one has asked Roko where he got these numbers from.

Wikipedia says that there are about 80 billion galaxies in the "observable universe", so that part is pretty straightforward. Though there's still the question of why all of them are being counted, when most of them probably aren't reachable with slower-than-light travel.

But I still haven't found any explanation for the "25 galaxies per second". Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is... (read more)

2Caspian12y
I also took a while to understand what was meant, so here is my understanding of the meaning: Assumptions: There will be a singularity in 100 years. If the proposed research is started now it will be a successful singularity, e.g. friendly AI. If the proposed research isn't started by the time of the singularity, it will be a unsuccessful (negative) singularity, but still a singularity. The probability of the successful singularity linearly decreases with the time when the research starts, from 100 percent now, to 0 percent in 100 years time. A 1 in 80 billion chance of saving 80 billion galaxies is equivalent to definitely saving 1 galaxy, and the linearly decreasing chance of a successful singularity affecting all of them is equivalent to a linearly decreasing number being affected. 25 galaxies per second is the rate of that decrease.
0FAWS12y
Hmm, by the second wikipedia link there is no basis for the 80 billion galaxies since only a relatively small fraction of the observable universe (4.2%?) is reachable if limited by the speed of light, and if not the whole universe is probably at least 10^23 times larger (by volume or by radius?).
2Roko12y
I meant if you divide the number of galaxies by the number of seconds to an event 100 years from now. Yes, not all reachable. Probably need to discount by an order of magnitude for reachability at lightspeed.
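As a sanity check, Roko's arithmetic works out: 80 billion galaxies divided by the number of seconds in 100 years is roughly 25 per second:

```python
# Check the "25 galaxies per second" figure: total galaxies divided by
# the number of seconds until an event 100 years from now.
GALAXIES = 80e9                        # observable-universe estimate cited above
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds
rate = GALAXIES / (100 * SECONDS_PER_YEAR)  # ≈ 25.35 galaxies per second
```

So the figure is just the linear-decrease assumption from Caspian's reading, expressed as a rate; discounting for light-speed reachability would shrink it by an order of magnitude or so, as Roko notes.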

That's the basic idea, yes.

Most people in the network are looking for jobs as programmers. The second most popular job category is finance.

So far it's been only about people helping each other find paid employment, but helping each other find grants is also a good idea, thanks.

0Mass_Driver12y
Wait, so, then what on earth does "for anyone interested in donating substantial amounts (relative to income) to non-profit organizations" mean? Do you help each other get jobs on Wall Street so you can donate the money?

good points, thanks. I made some more edits.

I added a note mentioning that the mean is 2.9%, and the comment "Being charitable ≠ doing good."

I replaced "their mission of converting the whole world to christianity" with "their vaguely defined mission"

So, I think you just said that the average Christian does X, but doesn't do X, and therefore I should do X. I can't quite figure out if there's a typo in there somewhere, or whether I'm just misunderstanding radically.

You're right, thanks, the previous wording was confusing. I removed the paragraph that said "I suspect that the average christian actually gives significantly less than 10% of their income to the church, and doesn't go to church every sunday, but I haven't actually looked up the statistics yet." The point of that paragraph was that I'm admitting that I'm probably overestimating the contributions of the average christian.

you're right. thanks. I updated the comment to include your change.

A random thought:

If you donate less than 10% of your income to a cause you believe in, or you spend less than one hour per week learning how to be more effective at helping a cause you believe in, or you spend less than half an hour per week socializing with other people who support the cause... then you are less instrumentally rational than the average christian.

edit: shokwave points out that the above claim is missing a critical inferential step: "if one of your goals is to be charitable"

edit: Nick_Tarleton points out that the average christia... (read more)

1zntneo12y
It seems you are assuming that donating to a church = donating to a good cause, which I am not sure is always, or even most of the time, right.
5Nick_Tarleton12y
First off, strongly agreed that community matters and is worth investing in. You may be less something, but rational targeting of effort (both doing something besides converting people, and being strategic at whatever you're doing) utterly swamps quantity of effort here. Being charitable ≠ doing good. Source, or are you just assuming people do what they're supposed to? This [http://fatknowledge.blogspot.com/2008/05/church-donation-rates-vs-income-tax.html] (first search result) says the mean is 2.9%. (I would also bet that most Christians don't know what they nominally should give.) (ETA: I read your comment after you deleted the paragraph acknowledging this.) I feel obligated to point out (outgroup homogeneity bias, etc.) that far from all Christians see this as their goal.
3shokwave12y
People are going to balk at your use of "intrumentally rational". I would suggest explicating the chain of inference: If you donate less than 10% ... then you are less charitable than the average christian; and if one of your goals is to be charitable, then you are less instrumentally rational than them too.
1TheOtherDave12y
(blink) So, I think you just said that the average Christian does X, but doesn't do X, and therefore I should do X. I can't quite figure out if there's a typo in there somewhere, or whether I'm just misunderstanding radically. In any case, I agree with you that contributing resources to causes I support and training myself to understand them better and support them more effectively, and socializing with other supporters are all good things to spend some resources on. Incidentally, most of the Christians I know who do this in their capacities as Christians are not actually devoting those efforts to converting the world to Christianity, but rather to things like aiding the needy. Then again, the Christians I know well enough to know how they practice their religion are a pretty self-selecting bunch, and generalizing from them probably isn't safe.

The LW wiki has made it approximately one order of magnitude easier to find the best content from LW.

You could try to quantify that by:

  • the time it takes to find a specific thing you're looking for
  • the probability of giving up before finding it
  • the probability that you wouldn't even have bothered looking if the information weren't organized in a wiki
  • maybe more
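As a rough back-of-envelope illustration of how those three factors could be combined, here is a sketch with entirely made-up numbers (nothing here was measured; the function and its parameters are hypothetical):

```python
# Hypothetical model for the cost of finding a piece of content, with and
# without a wiki. Every number below is an illustrative assumption.

def expected_search_cost(minutes_to_find, p_give_up, p_never_look, miss_penalty):
    """Expected cost (in minutes-equivalent) per person who wants the content.

    Searchers pay the search time; those who give up, and those who never
    bother looking, also pay a penalty: the value of the information
    they never found.
    """
    p_search = 1 - p_never_look  # fraction of people who bother looking at all
    return (p_search * (minutes_to_find + p_give_up * miss_penalty)
            + p_never_look * miss_penalty)

# Illustrative guesses: without a wiki, search is slow and often abandoned.
without_wiki = expected_search_cost(minutes_to_find=20, p_give_up=0.5,
                                    p_never_look=0.4, miss_penalty=60)
with_wiki = expected_search_cost(minutes_to_find=2, p_give_up=0.1,
                                 p_never_look=0.1, miss_penalty=60)
print(without_wiki, with_wiki, without_wiki / with_wiki)
```

With these particular guesses the wiki comes out roughly four times better, but the point of the sketch is only that all three bullet points feed into a single expected-cost number, not that any of the inputs are right.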
2Roko12y
Yeah, but in terms of actually having achieved more downstream subgoals, like getting more people familiar with rationality?

Another obvious suggestion:

  • If there isn't already a wiki for the cause that you are interested in helping, then consider starting one.

Most people reading this are probably well aware of how powerful wikis can be. LW's own wiki is a good example, and LW would be a whole lot less useful without it.

What we need is a wiki that lists all the people and groups who are working towards saving the world, what projects they are working on, and what resources they need in order to complete these projects. And each user of the wiki could create a page for thems... (read more)

1Roko12y
Can we quantify that? What has it achieved?
3XiXiDu12y
If this is the answer, then the SIAI should simply conclude this in a paper. Or EY should write a new sequence arguing that supporting the SIAI is the rational choice if you want to save the world. I believe a wiki would just add to the confusion. A wiki is good as a work of reference or as a collaborative focal point for people working on a certain project. But when it comes to answering a specific question, a wiki might lead people astray. I'm still puzzled by the fact that saving the world is not much dealt with on Less Wrong. What would be a better way to exemplify rational choice than working out what to do if you want to save the world? On Less Wrong, rationality is an abstract concept that is seldom used to tackle real-life decisions.
5gwern12y
Some people, when faced with a problem, say, I know - I'll start a wiki! Now they have 2 problems. I said something similar [http://www.imminst.org/forum/topic/40615-a-new-forum-for-spaced-repetition/page__view__findpost__p__442876] yesterday, and I have a short essay, Wikipedia And Other Wikis [http://www.gwern.net/Wikipedia%20And%20Other%20Wikis] about why forking off WP is a bad idea (which is a related bad idea). tl;dr: network effects are a bitch

good point, thanks, but I think it would still be a very bad idea to avoid having any friends who are world-savers, just to avoid seeming cult-like.

And I should mention that I think that it would also be a bad idea to avoid being friends with anyone who currently isn't a world-saver, because of a mistaken belief that only world-savers are worthy of friendship.

Also, even the cults know that making friends with non-cult-members can be an effective recruitment strategy.

I rephrased the second point as "Spend less time with your current friends, if it's ob... (read more)

0[anonymous]12y
Thoughts on the 1st and 2nd points: To the extent that you are or can be someone others look up to and are inspired by, stay friends with as many non-world-savers as possible. If you assess yourself as unable to exert a possible influence in this way, have less non-world-saver friendships. Or at least keep your two worlds from colliding, so the positive one isn't hampered by the recreational one. Having friends with shared interests is critical for many people -- I can't tell you how little I care about IT (my job) when I don't have other enthusiastic people to discuss the tech with. Or, wait, I guess I just did. Jordan - When Ben Franklin started the Junto, and later the American Philosophical Society, was he being cultish?
1wedrifid12y
That depends whether you are optimising for world saving advice or social signalling. In the current form it doesn't seem cultish so much as it seems blatantly obvious. To be honest the part about synergy and sharing actually struck me as more cultish.

I made this same mistake, and ended up being significantly less optimized at world-saving as a result.

-1[anonymous]12y
Yes, Will's intuition is right. The literature is clear on how important social connectedness is to human health and well-being.

This is an awesome post! Thanks, Louie :)

some obvious suggestions:

  • Make friends with other world-savers.
  • Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers.
  • Find other world-savers who can significantly benefit from skills or other resources that you have, and offer to help them for free.
  • Find other people who
... (read more)
2Louie12y
Great ideas! I incorporated a not-so-subtle mention of the x-risks career network into #6 based on your suggestion. My goal here was to keep things general in tone and only deeply permeate the subtext + links with my own value judgments. It's a kind of overt neutrality with a strong undercurrent of things you can look into if you're interested. But if you never click on a link, you could just as easily be a member of any current activist set and still get a lot of value out of my writing. Actually I think I'll write up a new section like "Become more generally capable" which seems like something I didn't specifically cover but almost certainly should. Anyone have suggestions for "must have" items to go in that summary section? What other Less Wrong posts are good for that? EDIT: Added as the new point #5 now -- it's general if you just read it but rich in specific examples if you follow up on the resources linked from it.
4Jordan12y
A great list, although the first two points seem distinctly cult-like. I think it's important for world-savers as a group to maintain very broad connections to the greater social network.

An obvious suggestion: Getting Things Done

The GTD system is designed to help individuals decide what to do next; it isn't really designed for organizations.

So GTD would help with:

  • what should be done
  • when it should be done
  • who should do it

but might not help much with:

  • how much of a budget in money, office space, website space, etc. a project should receive
  • when and how to evaluate the success of a project...

Though it might only take some relatively minor adjustments to make the GTD system able to handle these points as well.
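To make the "what should be done next" part concrete, here is a toy sketch of GTD's "next action" lists organized by context. This is only an illustration of the core idea (the function names and contexts are made up), not the full GTD workflow of capture, clarify, organize, review, and engage:

```python
# Toy sketch of GTD-style "next action" lists, grouped by context.
# The names and contexts here are invented for illustration.
from collections import defaultdict

next_actions = defaultdict(list)

def capture(action, context, project=None):
    """Record a concrete next action under the context where it can be done."""
    next_actions[context].append({"action": action, "project": project})

def review(context):
    """What can I do right now, given where I am and what tools I have?"""
    return [a["action"] for a in next_actions[context]]

capture("Draft project budget", context="@computer", project="wiki launch")
capture("Call hosting provider", context="@phone", project="wiki launch")
print(review("@computer"))  # → ['Draft project budget']
```

Note that nothing in this sketch knows about budgets or project evaluation, which is exactly the gap described above: GTD tracks concrete next actions, not resource allocation.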

I should also mention... (read more)

There have been a few posts lately to the Existential Risk Reduction Career Network about startups that want to hire programmers. You might want to check those out.


downvote this comment if you find this poll annoying

upvote this comment if you somehow don't belong in any of the other categories listed here.

upvote this comment if you're a heterosexual male

-1PeerInfinity12y
downvote this comment if you find this poll annoying

upvote this comment if you're a gay or bisexual male

1PeerInfinity12y
I'm a bisexual male, but can't upvote my own comment. So the results so far are: 4 women, 4 gay/bisexual men, and 5 heterosexual men. This means that there are approximately as many women on LW as there are gay/bisexual men. And almost half of the men on LW are gay/bisexual. And yes, I know that there are probably several reasons why this poll's results are biased or otherwise unreliable, but at least now we have some data. One obvious problem with this poll: it contradicts a previous survey [http://lesswrong.com/lw/fk/survey_results/], which said: "(96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender." so it looks like there were lots of heterosexual males who didn't bother voting.
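For what it's worth, the proportions quoted in that comment can be checked in a couple of lines (the tallies come from the poll comments above, and as the comment itself notes, this is a tiny, self-selected sample):

```python
# Tallies from the poll comments above (tiny, self-selected sample).
women, gay_bi_men, hetero_men = 4, 4, 5

men = gay_bi_men + hetero_men    # 9 male respondents
total = women + men              # 13 respondents overall

print(women / total)             # share of women among respondents, ~0.31
print(gay_bi_men / men)          # share of gay/bisexual men among men, ~0.44
```

So "approximately as many women as gay/bisexual men" is exact (4 vs. 4), and "almost half of the men" is 4 out of 9.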