All of wilkox's Comments + Replies

I’ve been trying to figure out why I feel like I disagree with this post, despite broadly agreeing with your cruxes. I think it’s because in the act of writing and posting this there is an implicit claim along the lines of:

Subtle nutritional issues that are specific to veganism can cause significant health harms, to a degree that it is worth spending time and energy thinking about this as ‘a problem’.

For context, I am both vegan and a doctor. Nutrient deficiencies are common and can cause anything ranging from no symptoms to vague symptoms to life-threa...

Have you considered melatonin? Quoting gwern:

Melatonin allows us a different way of raising the cost, a physiological & self-enforcing way. Half an hour before we plan to go to sleep, we take a pill. The procrastinating effect will not work - half an hour is so far away that our decision-making process & willpower are undistorted and can make the right decision (viz. following the schedule). When the half-hour is up, the melatonin has begun to make us sleepy. Staying awake ceases to be free, to be the default option; now it is costly to fight the melatonin and remain awake.

I use it for exactly this reason and it works brilliantly.

No, I read that at some point but forgot completely about it. That sounds like a really good idea, and I'm going to arrange to try it.

This is like saying "if evolution wants a frog to appear poisonous, the most efficient way to accomplish that is to actually make it poisonous". Evolution has a long history of faking signals when it can get away with it. If evolution "wants" you to signal that you care about the truth, it will do so by causing you to actually care about the truth if and only if causing you to actually care about the truth has a lower fitness cost than the array of other potential dishonest signals on offer.

Poisonousness doesn't change appearance though. Being poisonous and looking poisonous are separate evolutionary developments. Truth seeking values, on the other hand, affect behavior as much as an impulse to fake truth seeking values, and fake truth seeking values are probably at least as difficult to implement, most likely more so, requiring the agent to model real truth seeking agents.

I've noticed many people who practise meditation have a strong belief in meditation and the more 'rational' core of Buddhist practices, but only belief in belief about the new age-y aspects. My meditation teacher, for example, consistently prefaces the new age stuff with "in Buddhist teachings" or "Buddhists believe" ("Buddhists believe we will be reincarnated") while making other claims as simple statements of fact ("mindfulness meditation is a useful relaxation technique").

I appreciate this. I genuinely didn't (still don't) understand what lessdazed was trying to say, and it would be a really bad thing if downvoting ignorance became common practice.

I will try again. Counterfactually, if the US budget hadn't included (pick expenditure), (unrelated expenditure that wasn't made) probably would not have been made, and (unrelated cut that was made) probably would have been made anyway. As the US engages in deficit spending, whatever program you think most important that wasn't funded, if Congress collectively agreed with you, it would have been funded regardless of other spending. An argument for a program is only weakly an argument against all other programs. If a program is actually bad, it is suspicious that the worst that can be said about it is that it isn't as worthy an expenditure as the most valuable unfunded thing - that's like arguing that someone is sickly because they can't lift more weight than the strongest weightlifter in their city. If someone is truly sickly, there really should be a better argument showing that than that particular measure of their strength.

It's important to avoid the if-not-for-the-worst-waste-of-money-in-the-budget-the-most-worthy-unfunded-program-would-have-been-funded argument.

Can you explain why? This seems like a perfectly normal and reasonable sort of argument about dividing a limited pool of resources wisely.

The way I understood it was that "the-worst-waste-of-money" (and possibly "the-most-worthy-unfunded-program" as well) is a label applied in retrospect. To fund the most worthy unfunded program, you'd need to unfund one of 100 programs. It's likely that of the 100 programs, one will turn out to be an abject failure, but it's hard to predict which one it will be ahead of time. Conversely, just because the unfunded program seems most worthy now, doesn't mean that earlier one could have predicted the need for it.
In politics, the argument is perfectly normal and unreasonable. It means (borrowing the idea from novalis' link): "This program was the worst waste of money in the budget, because it was against my political views. It was only funded because of people abusing the process to advance political views different from mine. If it was not for this program, so many other programs could have been funded which agree with my political views and were only blocked for political reasons."
I'm not sure if this is really the way upvotes are supposed to be used, but I voted you up from -1 because I don't think "Can you explain why?" is a question that should be censured.
You assume that the government would divide the pool wisely. (Not that it necessarily wouldn't, but not that it necessarily would either.)

Perhaps sparklines would work for this. They compress the recent history of a measurement in a space-efficient way which can fit inline with text.
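A sparkline of this kind takes only a few lines of code. Here is a minimal sketch in Python, mapping a history of values onto Unicode block characters; the karma figures are made up for illustration, and a real site would render this next to each comment:

```python
# Minimal sparkline sketch: map a series of values onto eight Unicode
# block characters so recent history fits inline with text.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on a flat history
    # Scale each value into an index 0..7 and pick the matching bar.
    return "".join(BARS[(v - lo) * (len(BARS) - 1) // span] for v in values)

# Hypothetical karma history for one comment.
print(sparkline([0, 1, 3, 2, 5, 8, 6]))  # → ▁▁▃▂▅█▆
```

The output is a plain string, so it can be dropped into any text that supports Unicode, with no images or plotting library required.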

They would fit neatly next to the upvote / downvote buttons. However nicely they would fit, they should not be used, though. I am in the mind of the diagram [] about the effect Google's proxy is having on web content - to the extent that karma is not a perfect proxy for good content, sparklines will make it easier to identify that proxy and where it can be gamed.

This sounds a lot like the Scouting merit system, in a good way. I learned more life skills from Scouts than I ever did from public education.

I did learn things from scouting (I'm an Eagle Scout) but an awful lot of those things were "stuff I did to check off a box and then promptly forget about" rather than "stuff I did because I wanted to learn it and integrate it into my life." I am embarrassed at how little first aid I remember.

This doesn't seem to be an answer to Wei Dai's question.

I parse the given answer as "because the social status was worth more to me than a monetary discount". (Though 'because I didn't think of it' might be a more strictly accurate answer.)

I recently introduced a friend to HPMOR and she went on to discover Less Wrong entirely of her own accord. She has explicitly cited it as sparking her interest in things like Bayesian inference, which she would never have considered learning about before.

The link "summary" and the link "Here is a little more expanded text" seem to point to the same place, in my browser at least.

From the linked McDonald's coffee case article:

In addition, they awarded her $2.7 million in punitive damages. The jurors apparently arrived at this figure from [the burn victim's lawyer's] suggestion to penalize McDonald's for one or two days' worth of coffee revenues, which were about $1.35 million per day.

Talk about a brilliant use of anchoring...

I may also explain to them that if defending oneself receives the exact same penalty that attacking someone gets, it will usually be best to initiate the combat yourself.

This is excellent advice, with the caveat that the school's disciplinary penalty is probably not the only cost. Being known as "the kid who walks expressionlessly up to other kids and punches them in the testicles without warning" may be a significant penalty too. (This doesn't mean striking first is always a bad strategy, just that it needs to be done carefully).

I was more referring to that time period when the bully was working himself with bluster right in the non-victim's face. :)

In any case, it is pretty clear that it is possible to hold rationality and religion in your head at the same time. This is basically how most people operate.

More generally, "In any case, it is pretty clear that it is possible to hold rationality and irrationality in your head at the same time. This is basically how most people operate." I'm no more surprised to hear about a religious rationalist than I am when I notice yet another of my own irrational beliefs or practices.

Mendeley is good for this, and specifically designed for managing a library of academic papers. It supports tagging and full text searches, as well as some half-baked "social" features which can be safely ignored. The most useful feature for me is that it can watch a directory for new papers, and add them to its library as well as my directory tree (author/year/paper). It can also maintain a bibtex file for the entire library which is handy for citations.

Alas, Mendeley always crashes when I tell it to watch the directory on my NAS for new papers.

Good point. Reading my comment again, it seems obvious that I committed the typical mind fallacy in assuming that it really is a choice for most people.

I'd take this differently. I would at least hope that you are claiming that there is, in fact, a choice, whether the subjective experience of the moment provides indication of the choice or not. Maybe stated differently you could be claiming that there is the possibility of choice for all people whether a person is aware or capable of taking advantage of that fact. That a person can alter his or her self in order to provide his or her self with the opportunity to choose in such situations. Loqi's feedback seems to me to be suggesting that individuals who do not have a belief that they have such a "possibility of choice" could have a more positive phenomenological experience of your assertion and as a result be more likely to integrate the belief into their own belief set and [presumably] gain advantage by encountering it. That is me asserting that Loqi does not appear to be rejecting your assertion but only suggesting a manner by which it can be improved.

Missionary work, including LDS, has a phenomenally low success rate. I don't recall the exact figures, but from memory a missionary might convert 1-2 people per year based on cold calls.

A one year doubling or tripling time doesn't strike me as "phenomenally low".

Conversion means conversion to an official church member, not another missionary, and conversion can be (and depending on who you ask, frequently is) reversed, for missionaries as well as new converts.

This was what confirmed Eliezer's skill as a writer in my mind. He resisted the (typical nerdish) impulse to vomit out pages of obsessively detailed explanations, instead leading the reader on with tantalising hints spaced far apart. It probably accounts for a lot of the book's notorious addictiveness.

"things that people say that really actionable beliefs even though they may not be clear on the difference"

This sounds interesting, but I can't parse it.

That's because you are using an English parser while my words were not valid English.

In any case, there really isn't any reason to be offended, and especially there is no reason to allow the other person to provoke you to anger or to act without thought.

It seems really, really difficult to convey to people who don't understand it already that becoming offended is a choice, and it's possible to not allow someone to control you in that way. Maybe "offendibility" is linked to a fundamental personality trait.

What constitutes a "choice" in this context is pretty subjective. It may be less confusing to tell someone they could have a choice instead of asserting that they do have a choice. The latter connotes a conscious decision gone awry, and in doing so contradicts the subject's experience that no decision-making was involved.
It could be. It seems not just difficult but actually against most culture on the planet. Consider that crimes of passion, like killing someone when you find them sleeping around on you, often get a lower sentence than a murder 'in cold blood'. If someone says 'he made me angry' we know exactly what that person means. Responding to a word with a bullet is a very common tactic, even in a joking situation; I've had things thrown at me for puns! It does seem like a learn-able skill even so. I did not have this skill when I was child, but I do have it now. The point I learned it in my life seems to roughly correspond to when I was first trained and working as technical support. I don't know if there's a correlation there. In any case, merely being aware that this is a skill may help a few people on this forum to learn it, and I can see only benefit in trying. It is possible to not control anger but instead never even feel it in the first place, without effort or willpower. Edit - please disregard this post

Agreed, with the addendum that in this context there seems as much disagreement over the definition of "possible" as the definition of "omnipotent".

This bothered me too. If 'omnipotent' is defined as 'able to do things which can be done', we're all gods.

I think it's more aptly described as "able to do that which is logically possible." Thus, the square circle paradox is generally deemed to be ruled out since it really is nonsense. I agree that the stone question is actually different. HERE's [] some discussion about that very thing...
Not really. Something "can be done" if some possible being, which may not be actual, can perform it. If there's a 500 pound barbell in front of me, and I can't lift it, this doesn't mean that the barbell can't be lifted, only that I can't lift it. If you're omnipotent, then you can lift it. I guess I've always understood omnipotence as being so powerful that no possible being can be more powerful than you are.
Defining 'omnipotent' as 'able to do things which can be done' is an interesting move-- it makes me realize that my ideas about what can be done (especially by hypothetical extremely powerful beings) are very foggy. Religious people bump up against that when they try to see why some prayers apparently get answered and others don't.

The difference between activation energy and inertia is that you can want to do something, but be having a hard time getting started - that's activation energy. Whereas inertia suggests you'll keep doing what you've been doing, and largely turn your mind off. Breaking out of inertia takes serious energy and tends to make people uncomfortable.

I don't mean to nitpick, but this distinction isn't obvious to me. It seems like inertia is just a component of activation energy.

Great post regardless.

Activation energy: It takes me about 15 minutes to get ready for exercise: I need to close down what I'm doing, change in to exercise clothes, and find a good spot for it. Inertia: Once I'm jogging, it's really easy to keep jogging. Especially when jogging is the fastest way to get back home =) End result: I find it much easier to do a single 30 minute jog, compared to three 10 minute jogs. If there were just activation costs, I'd probably want to do a single 10 minute jog. If there was just inertia, I'd probably want to do three 30 minute jogs.
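The trade-off described above can be sketched as a toy cost model. The 15-minute activation figure comes from the comment; the per-minute effort cost is an invented placeholder, not a real measurement:

```python
# Toy cost model for the jogging example: each session pays a fixed
# activation cost (getting ready) plus an effort cost per minute jogged.
ACTIVATION = 15  # minutes of overhead to start a session (from the comment)

def session_cost(minutes, per_minute_effort=1):
    """Total cost of one session: fixed start-up cost + ongoing effort."""
    return ACTIVATION + minutes * per_minute_effort

one_long_jog = session_cost(30)        # 15 + 30 = 45
three_short_jogs = 3 * session_cost(10)  # 3 * (15 + 10) = 75

print(one_long_jog, three_short_jogs)  # → 45 75
```

Because the activation cost is paid once per session, a single 30-minute jog comes out cheaper than three 10-minute jogs, matching the commenter's intuition.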

This problem is compounded when the students feel obliged to stay in the class even if they're not getting anything out of it. The result is a room full of tired, frustrated students terrified of being "found out" or giving the wrong answer. I encourage my undergrad students to leave and work on a problem later if their brains just aren't up to the job, but they never do. It's not clear if this is because of years of authoritarian schooling, or if they just don't trust themselves to do the work outside of a classroom.

Thank you very much for doing this. You've clearly put a lot of effort into making it both thorough and readable.

Formulate methods of validating the SIAI’s execution of goals.

Seconded. Being able to measure the effectiveness of the institute is important both for maintaining the confidence of their donors, and for making progress towards their long-term goals.

I'm also not sure why the position of her eyes is supposed to be relevant to any of this.

Maybe something to do with the facial asymmetry JanetK mentions here?

Always wait for someone else to laugh at your joke before you join in.

This is generally good advice, but can backfire if you show no signs that you are conscious of making a joke. Making people laugh while remaining deadpan yourself is a high-level humour skill. Listeners who are not sure whether or not to laugh will look for cues from other listeners and from you, and if you're not laughing they may just go along with that.

Often it's better to make it obvious that you've amused yourself with your own joke, with a smile or small chuckle, but not react to whether others laugh or not. That displays confidence, and gives others the social room to laugh if they want.

All this is extremely context-dependent. On some of the most fun occasions when I told jokes or funny stories, I was barely able to tell them comprehensibly because I was unable to suppress spasms of laughter. Of course, if the audience and the atmosphere is not right, this can make you look like an idiot or extremely annoying.
Oh, yes. Smile, but do not actually laugh. I'm sorry, I missed that earlier.

I have an intuition that most people would find it less weird to hear a pro-cryonics advertisement from an actual cryonics company than a "Public Service Announcement" from a third party. The former would be processed more like a normal advertisement, to be judged on its merits, while the latter could invite suspicion of the creators' motives. I might be wrong - anyone from marketing or advertising have something to say here?

I'm confused by the idea that the kinds of meditation you are talking about have until now been practised by "small and somewhat private groups" in secret. Why would this kind of meditation be taboo? What did these groups have to fear that drove them to secrecy, and why has that changed?

At least in the context of Buddhism-inspired practices, the reasons are threefold...

1) Monks in many (all?) Buddhist traditions are prohibited from discussing their own attainments with non-monks by the rules of their organization.

2) Most (nearly all?) contemporary dharma centers / etc., for various sociocultural reasons, have strong taboos concerning discussion of attainments.

3) If you tell a person in normal society that you are interested in reaching enlightenment, hope to do so soon, or perhaps already have, you are most likely to be written off as men...

Why is continuing to donate as you did previously mutually exclusive with your evangelism plan?

They're not mutually exclusive. I just feel at the moment that I gain less utility from spending the money immediately than I expect to gain from receiving donation advice from the community (including "save it - a better cause might come along"). I don't expect others to emulate me here though. I guess I should add a "donate or save?" post to my list.

Possibly, although I didn't think of that analogy until your comment. It seems more likely that the program will break even when I consider the potential for increased donation compared to my previous estimate, which was based only on AnnaSalamon's described expected outcomes for the program ("more rational, effective people"). I'm not sure that the program actually will break even in terms of existential risk reduction, which is why I'm very interested in seeing SIAI measure any increase in donations.

Ah, I see. That makes sense.

I don't know they will - see my above comment suggesting the SIAI actually measure donations from program participants. It does seem more likely now, however, that the program will at least break even on reducing existential risk, hence my increased comfort with the idea.

Does it seem that the program will break even because you've anchored yourself to 9x ROI?

By way of analogy, suppose a cancer charity has $10,000 to spend. It could invest the money directly into research, for a marginal expected return in decreased cancer suffering, or it could spend it on a glitzy event where potential donors get to "try their hand" at working in a research lab for a day. The second option could sound like a waste of money, as the donors probably won't do anything worthwhile in a day of messing around in a lab. However, if they go on to contribute $100,000 more to the charity than they otherwise would have, that mon...
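The arithmetic behind the analogy is simple enough to write out. These are the comment's hypothetical figures, not real charity data:

```python
# Toy comparison of the two spending options from the analogy.
budget = 10_000

# Option 1: invest the budget directly in research.
direct_to_research = budget

# Option 2: spend the budget on a donor event that, by assumption,
# brings in $100,000 of donations that would not otherwise have arrived.
extra_donations = 100_000
event_to_research = extra_donations

# Net return on the event spend: (100,000 - 10,000) / 10,000 = 9.
net_roi = (extra_donations - budget) / budget
print(event_to_research / direct_to_research, net_roi)  # → 10.0 9.0
```

The 9x net return is where the "9x ROI" figure in the surrounding discussion comes from; the whole comparison stands or falls on the assumed $100,000 of counterfactual donations.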

Your analogy makes sense, but why do you think the numbers will go that way?

The idea of holding a program to increase donations actually made me more comfortable, as it seems more like a long term investment in reducing existential risk than money squandered on something fun but not obviously essential.

You'll have to run that calculation by me. I don't see how expected utility of the former is higher than the latter.

That's a good point. An increase in donations from a specific group of people should be easy to measure too, so the SIAI could use it to directly assess the effectiveness of these programs.

Holding a program for the purpose of increasing donation revenue makes me feel uncomfortable. Maybe we should stick to the party line about raising the sanity waterline.

Why is the Singularity Institute paying for this?

We're trying to reduce existential risk -- to increase the odds that an eventual Singularity is good, from the perspective of humane values. To do this, we need more rational, effective people -- people who can train to do the needed research, who can fund that or other work, and who can otherwise exert influence toward good outcomes.

I'd be interested in hearing more about how you foresee graduates of these camps working to reduce existential risk, especially as a donor to the SIAI. Is there a long term plan in place or are you just trying some things out?

At the very least, people who personally benefit from this program are incredibly more likely to donate for the rest of their lives, even if (especially if) they make no direct research or advocacy contribution.