All of Nick_Roy's Comments + Replies


If you do decide to try it, start with a very low dose.

Where this chain of reasoning breaks down for me is in the "without resistance" phase of "take right action without resistance". If the resistance, both conscious and unconscious, is too strong, there will be no right action taken, whether I will it or no. So what I do instead is undermine the resistance itself. This is my precondition for taking right action. Do you see what I mean? Wu-wei would prevent hedonism only if wu-wei were essential to hedonism and yet no wu-wei were possible.

The similarity between our approaches is as you say: the realization that akrasia defeats frontal assaults with heavy casualties. The difference is that you are doing something like the "take right action without resistance" approach that I've encountered before in Buddhism, which matches up nicely with anhedonia (personally I am a hedonist, so this does not work for me); while I am attempting to root out the basic causes of my akrasia, down to the very sources, to change the way I feel in the first place. Both approaches have their merits, and I...

That is interesting - you correctly predicted I was exposed to Buddhism (indeed I practiced it for years), although this non-forced-action, wu-wei, is from my earlier exposure to Taoism. But it has nothing to do with anhedonia! First of all, anhedonia is not enjoying stuff, not not wanting to enjoy stuff. It is not a choice or attitude; it is the illness. If you have or used to have depression you had it too - it is practically part of the definition itself. Second, if anything, the attitude I gleaned from Buddhism was very optimistic about fun and joy; my teacher is extremely hedonistic. This has more to do with my parents being blue-collar and my cultural background being Mitteleuropa - I tried to hint at that with "shut up and soldier on", a direct translation of "Maul halten und weiter dienen" (BTW my first language is not German, but this saying describes the region rather well). Basically this is what you get from blue-collar parents: don't like your job? Shut up, you have a family to support. Soldier on. And so on. Finally, do you think wu-wei prevents hedonism? I think if enjoyment means resting effortlessly in the here and now instead of hoping for or worrying about something in the future, wu-wei is more like a precondition for it.

Two Disclaimers: First, I am not a doctor. Second, beware of other-optimizing. This advice is working well for me, but it may not work well for others.

The depression became obvious and major enough that I was forced to take action to stop it. The rationalizations had run dry, so I fully realized in both System 1 and System 2 that I was not "unmotivated", I was mentally ill. Years of life hacks and half-assed lifestyle interventions had accomplished some, but not enough, so it was time for medications, which I had previously feared due to bad expe...

Wow. You've been thorough. Note to self: modafinil is probably something I want to avoid if it can exacerbate anxiety that badly.

Agreed. Personal anecdote: since I redefined my "motivation problem" as a "depression and anxiety problem" a number of months ago, and began treating the depression and anxiety instead of wearily trying out yet another willpower hack, I have made more progress in being motivated in months than I had in the previous years.

I think I am inadvertently doing something like this. Here is what I started recently:

* stop drinking; the evening reward is only non-alcoholic beer
* fairly heavy exercise: boxing 3 times a week and 100 pushups on other days
* instead of feeling like I'm fighting my addiction or laziness, doing the opposite: stopping fighting my better judgement (to work out and to not drink) even when I don't feel like it. I don't know how better to explain it. I am basically reinventing the bicameral mind: everything decided rationally is cast into an "upper self" that gives orders, and my normal self can only sigh and follow those orders even when they make me uncomfortable. Still, it is submission to the decisions of the upper self rather than fighting them, instead of fighting the urges and instincts of the lower self
* when having little to do at work and spending a lot of time on LW or Reddit, scheduling the day so that productive work is in the last 1-2 hours, so I can go home with some pride and not feel the day was worthless
* counteracting the complete lack of socialization during work by listening to vocal music with interesting lyrics in the evening

However, I have no idea if I am depressed or not, and I strongly suspect that if your upbringing or culture is not exactly optimistic, it is not such a clear-cut case. I have clear anhedonia, but it does not make me passive or dysfunctional: I am able to do my duty in a "shut up and soldier on, feeling good is not required" way. I think if people don't really expect happiness, it is hard to tell if they are depressed, if they find anhedonia normal and can function in it.
This is exactly what I was doing: constantly looking for the system that would let me be successful while ignoring the root problems. I only accepted the anxiety when it got too bad to ignore. Can I ask what you've been doing that's been so effective?

Your point on a description of Harry's thinking is well-taken. I just had my brother submit this as a review, to err on the side of caution:

"With NickRoy's permission, I am submitting his solution, which I agree with, with additional evidence appended, just in case that is necessary; so consider this as superseding NickRoy's submission:

[the relevant text is here in the submission, but I don't need to repeat it in this comment]


Harry does not know the full prophecy for certain, but he can guess it, based on: Harry's thought on star lifting in r...

Voldemort would be skeptical, yes, but he would also be interested, because "6. It is impossible to tell lies in Parseltongue" and because all this speech has to do is raise the risk enough that it makes more sense to stop and gather more information before killing Harry, thus it "allow[s] Harry to evade immediate death". What do you think would improve the believability?

Sure. Along with the centaur evidence, there's: Harry's thought on star lifting in response to this prophecy in Ch. 21, Harry noticing Quirrelmort's interest in the same prophecy in Ch. 86, Quirrelmort's talk of the stars' vulnerability to "sufficiently intelligent idiocy" in Ch. 95, Voldemort's "while the stars yet live" remark in Ch. 111, Voldemort's more explicit talk on the prophecy and his great fear of it in the next chapter, and how the Unbreakable Vow is framed in the most recent chapter. If Harry connects these dots, he'll have a good idea of what the full prophecy says.

Harry hisses "You have missinterpreted prophecy, to your great peril, becausse of power I have, but you know not. Yess, you are sstudying sscience, but, honesstly, you are yearss behind me. It may be that thiss power you know not iss ssomething I have at thiss sspecific time, that you will not know for too many yearss hence.

Before I explain, remember my Vow, and know my honesst intention not to desstroy the world, Vow or no. Now, do you know why I would tear apart the very sstarss? Do you know how? Not to desstroy the world, but to ssave it from what...

Although it sounds persuasive to us, to Voldemort this would sound like exactly the sort of 'intelligent idiocy' that would only solidify his belief that Harry has to be killed right away.
As a move that Harry can devise, this requires a description of the thinking that makes it possible. He has not been told the full prophecy and doesn't know which prophecy Voldemort is talking about. I didn't realize he could piece it together sufficiently, but in Ch. 21 he hears the beginning of the first prophecy (THE ONE WHO WILL TEAR APART THE VERY S...); in Ch. 86 Quirrell discusses it with him, pointing out that Harry or Quirrell are the likely ones with the power to enact or prevent the event the prophecy is concerned with; and in Ch. 101 the centaur implies that there is a prediction that "soon the skies will be empty", with Harry responsible yet somehow "innocent" in an unclear sense.
Harry doesn't know the actual prophecy, so I'd start it with:

"Is prophecy essentially..."

"Powers, not excuses."

"Vow compels to raise this point. More important than powers."

Voldemort paused. "Proceed."

"Is prophecy essentially same as Centaur prophecy? Stars go dark?"

"Essentially."

Please add this as a review so Eliezer definitely sees it!

Paths of Glory (1957), film. Kirk Douglas vs. Moloch. An anti-war film, for reasons both usual and unusual.

In one of his Wire interviews, Simon recommends Paths of Glory; anyone who's seen both will certainly appreciate the comparison.
You could probably find yourself some cowpox virus to inoculate yourself with.
That was one addendum I was going to suggest. Perhaps we can add a stipulation about getting a vaccine that hypothetically protects you from the worst of biological fates. After all, what good is massive knowledge with no tools if we die before we can even find/make some basic anti-biologicals?

I upvoted, but I'll clarify why, as this is a list: the only name I like on this list is Level Up, but I strongly like it.

A worthy clarification! I considered making one comment per idea, but I'm not sure they are all up to that level of scrutiny.

Personally, I figure I'm not intelligent enough to research hard problems and I lack the social skills to be an activist, so by process of elimination the best path open to me for doing some serious good is making some serious money. Admittedly, some serious student loan debt also pushes me in this direction!

"Proving useful in your life" (but not necessarily "proving beneficial") is the core of instrumental rationality, but what's useful is not necessarily what's true, so it's important to refrain from using that metric in epistemic rationality.

Example: cognitive behavioral therapy is often useful "to solve problems concerning dysfunctional emotions", but not useful for pursuing truth. There's also mindfulness-based cognitive therapy for an example more relevant to Buddhism.

I suppose that is a tension between epistemic and instrumental rationality. Put in terms of a microeconomic trade-off: The marginal value of having correct beliefs diminishes beyond a certain threshold. Eventually, the marginal value of increasing one's epistemic accuracy dips below the marginal value that comes from retaining one's mistaken belief. At that point, an instrumentally rational agent may stop increasing accuracy. On the other hand, it may be a problem of local-versus-global optima: The marginal value of accuracy may creep up again. Or maybe those who see it as a problem can fix it with the right augmentation.
It is useful for pursuing truth to the extent that it can correct actually false beliefs when they happen to tend in one direction.

Ah, I should have taken that possibility into account. Thank you.

Good instrumental rationality quote; not so good for epistemic rationality.

Why do you say that?

So, with a 60% chance of girlfriend breakup and a 90% chance of new partner acquisition, does this mean a 36% chance of a polyamorous, open, "cheating" or otherwise non-monogamous relationship situation for you at some point over the next year?

Edited to add: actually somewhat higher than 36%, since multiple new partners are possible along with a girlfriend breakup.

I'm already polyamorous, so there is in fact a certainty of a polyamorous relationship situation at some point in 2012. :)

Fascinating, thank you. I also realize that I should have Googled that before asking.

Interesting! What do you think a "bi" listing can signal? Openness to experience?

Edited for clarity. Also: I'm not complaining, but I am genuinely curious as to why this comment has been downvoted. Is this a sensitive topic?

According to OkTrends, 80% of people who claim to be bi on OkCupid only send messages to one sex.

Since it's difficult to predict the date of the invention of AGI, has SI thought about/made plans for how to work on the FAI problem for many decades, or perhaps even centuries, if necessary?

As a subset of this question, do you think that establishing a school with the express purpose of training future rationalists/AGI programmers from an early age is a good idea? Don't you think that people who've been raised with strong epistemic hygiene should be building AGI, rather than people who didn't acquire such hygiene until later in life? The only reasons I can see for it not working would be:

1. predictions that AGIs will come before the next generation of rationalists comes along (which is also a question of how early to start such an education program).

2. belief that our current researchers are up to the challenge (even then, having lots of people who've had a structured education designed to produce the best FAI researchers would undeniably reduce existential risk, no?).

EDIT (for clarification): Eliezer has said: "I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement". Just as they would be building an intelligence greater than themselves, so too must we build human intelligences greater than ourselves.

Agreed on the excellence of "Why Spock is Not Rational". This chapter is introductory enough that I deployed it on Facebook.

Okay, but only 3.5%. I wonder how many are newbies who haven't read many of the sequences yet, and I wonder how many are simulists.

What surprised you about the survey's results regarding religion?

That there are theists around?

I currently route around this by being an ethical egoist, though I admit that I still have a lot to learn when it comes to metaethics. (And I'm not just leaving it at "I still have a lot to learn", either - I'm taking active steps to learn more, and I'm not just signalling that, and I'm not just signalling that I'm not signalling that, etc.)

I ran across OB while being horribly akrasic on Reddit a few years ago.

My thoughts on further social business opportunities: how about rationality consulting? If SI/LessWrong can establish enough credibility as rationalists this is worth money to both non-profit organizations and for-profit businesses, as well as potentially to consumers (as with Eliezer's rationality books). Rationality consulting would probably have to be done for free at first, of course. As a secondary benefit, this would also help with the ongoing effort to measure the impact rationality training has on an individual or an organization.

On a meta level, o...

That's an excellent point. I wonder whether it's too late at this point for a renaming.

Sorry about that! It's difficult (for me at least) to express tone over the Internet. I'll have to practice that.

If you find that you keep wanting to remind people of this, you may want to add it to the main post more prominently. Such reminding is currently coming off as badgering to me, and likely to other people as well since both of your reminder comments are getting into the negatives.

Non-profit organizations like SI need robust, sustainable resource strategies. Donations and grants are not reliable. According to my university Social Entrepreneurship course, social businesses are the best resource strategy available. The Singularity Summit is a profitable and expanding example of a social business.

My question: is SI planning on creating more social businesses (either related or unrelated to the organization's mission) to address long-term funding needs?

By the way, I appreciate SI working on its transparency. According to my studies, tr...

What strikes me most about this post: the enthusiasm! I find it refreshing for this site and appropriate for this subject matter. Congratulations on successfully feeling rational, D_Malik.

Why not use several different methodologies on GiveWell, instead of just one, since there is some disagreement over methodologies? I can understand giving your favorite methodology top billing, of course (both because you believe it is best and it is your site and also to avoid confusion among donors), but there seems to be room for more than one.

True. It might be interesting to see if any hidden commonalities among Less Wrongians exist, however, if the "Other" option comes along with a "fill-in-the-blank" field. It might also be a good idea to include this "Other" option in addition to the other options to avoid everyone checking "Other".

Taken. Thanks Yvain, I appreciate this effort!

Nitpick: why no "Other" categories for Participation and Expertise?

Edited to add: I posted my own ideas concerning SI and social business in the comments. What are yours? Also, addressing some valid points made in the comments, what are some other innovative ways to fund SI?
What's the point? Surely everyone is a member of some community and has expertise in something! Everyone would check "Other."

I'm considering the possibility of an experimental treatment becoming available during those two months that could save the terminally ill patient from dying of that illness. Being alive would then allow the possibility of new life extension treatments, which could lead to a very long life indeed.

This would be a conjunction of possibilities, so I realize that the overall probability of a terminally ill person transitioning to a very long-lived person is slim, but even a slim chance of living for a very long time is worth almost any degree of suffering. If no...

It seems extremely unlikely that an experimental treatment will appear as a surprise within two months. If it's actually new, then there will be trials of it first, and I think research could turn up that information.
Huh? Are you applying any discount rate to the value of living a very long time? The tradeoffs you are describing sound like they are calculated with the current utility of a very long lifespan being almost unbounded. For someone with a discount rate of 1% annually, an infinite lifespan has a net present utility of 100 years of lifespan. If, for instance, there was a 0.1% chance of the conjunction of a cure and an indefinite lifespan, it would be worth only about 0.1 lifespan-years of utility, and a miserable two months could easily match that.

It isn't desirable, of course. Nonetheless, looking, for instance, at the rather modest progress since the "war on cancer" was announced 40 years ago, it seems like a plausible extrapolation.

Of course, a uFAI is perhaps plausible, and would technically satisfy your claim (a paperclipped population doesn't get cancer), but I don't think that is what you intended... What do you intend, and what is your evidence?
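The discounting arithmetic in the reply above can be sketched as a toy calculation (a minimal sketch of one reading of that comment; the function name and the exact geometric-series form are my own assumptions, not the commenter's):

```python
# Toy model of the claim above: with a 1% annual discount rate, year t of
# future life is worth (1 - 0.01)**t present-years, so an unbounded
# lifespan sums to a geometric series worth about 100 present-years.

def npv_lifespan(years, annual_discount=0.01):
    """Net present utility, in present-years, of `years` of future life."""
    d = 1 - annual_discount
    # Geometric series: sum_{t=0}^{years-1} d**t = (1 - d**years) / (1 - d)
    return (1 - d ** years) / (1 - d)

print(round(npv_lifespan(100)))    # a full century is worth ~63 present-years
print(round(npv_lifespan(10**6)))  # an effectively infinite lifespan: ~100

# A 0.1% chance of the indefinite-lifespan payoff is then worth roughly
# 0.001 * 100 = 0.1 present-years, which two miserable months can rival.
print(round(0.001 * npv_lifespan(10**6), 3))
```

Under even mild discounting, the present value of a tiny chance at an indefinite lifespan is small enough that a short stretch of intense suffering can plausibly outweigh it, which is the comment's point.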

True. I doubt it would be as cheap, safe and effective as it is without modernity, though.

Edited to add: I posted my suggestions in the comments. What are your own suggestions for social business ideas relevant to SI?

A long-term goal of Less Wrong is to achieve the benefits of religion without becoming a religion. LW Meetups are partially an attempt at achieving a rationalist sense of community, for instance. Stanislav Petrov Day and Vasili Arkhipov Day are steps in the direction of rationalist rituals in general and rationalist holidays in specific. In addition to creating more holidays, I suggest that we figure out ways to celebrate them, rather than simply marking them.

I'll take a crack at it. A holiday celebrating existential risk reduction is a glorious opportuni...

1) I'm grateful that there's a place on the internet where a good person will remind me to be grateful for things.

2) The world is still around, despite the risks.

3) This world exists, and us in it, for whatever difficult-to-imagine reasons or lack thereof.

4) I have an incredibly caring/patient/interesting girlfriend.

5) I am equipped with sensory apparatus that can discriminate for high nutrient content in foods, and butter chicken exists.
* The last 10-15 years of medical research into treating brain aneurysms, and the increasing popularity of rapid-response stroke treatment protocols, without both of which I'd be either dead or permanently brain-damaged.
* A cultural setting where a smart and personable working-class kid can more or less drift into a lucrative and not-too-challenging professional context.
* A cultural setting where I can establish a household with my husband and have it legally and socially acknowledged as a family.
* A cultural/technological setting where communication among like-minded folks across large geographic distances is trivial.
I was in Worcester this summer... I wish I had gone to the Cambridge meetup while I was up there... There isn't one anywhere near Knoxville, TN. And I doubt Tennessee is really full of aspiring rationalists. sigh And the sunrises and sunsets up there are pretty wonderful. OK. I never saw a sunrise. But the sunsets were nice. :-D
I don't know about Vasili Arkhipov Day, but it's my understanding that Stanislav Petrov is now living in meager retirement. Celebrating Stanislav Petrov Day, and honoring other people who prevented existential risks from coming to fruition, could involve rewarding those people with money as well as recognition. Although the Less Wrong community is probably too small to do it on their own, I for one would be happy to contribute to visible rewards for people who save the world.

Modern medicine, for keeping me sane. Without aspirin, my TMJ pain would be serious trouble.

You can thank ancient medicine (not modern) for the use of "aspirin" to treat joint pain. Using medicines derived from willow trees and other salicylate-rich plants for pain relief has been around since at least Hippocrates and probably even the Sumerians.

Yet, at least. Hypothetical example: I wonder if something like the Voluntary Human Extinction Movement will eventually switch out the "Voluntary" in favor of "Mandatory". But that's speculative, and you are right empirically.

The same is true for counter-terrorists.

I asked this question for the Q&A. I also recently asked this of Luke for his feedback post before the Q&A was up, and he mentioned in his response that SI is continuing to grow the Summit brand in a multifarious manner. Luke also asked me for additional social business ideas, citing a lack of staff working on the issue. Less Wrong's collective intelligence trumps my own, so I'm fielding it to you. I do have a few ideas, but I'll hold off on proposing solutions at first. I find that this is a fascinating and difficult thought experiment in addition to its usefulness both for SI and as practice in recognizing opportunities.

Something I just noticed from Ch. 55:

Amelia Bones: "Someone would burn for this."

Did Amelia Bones burn Narcissa Malfoy?

Actually, I just had a chilling realization in regards to that. From chapter 62:

'"No," said the old wizard's voice. "I do not think so. The Death Eaters learned, toward the end of the war, not to attack the Order's families. And if Voldemort is now acting without his former companions, he still knows that it is I who make the decisions for now, and he knows that I would give him nothing for any threat to your family. I have taught him that I do not give in to blackmail, and so he will not try."

Harry turned back then, and saw a coldness...


Not necessarily. As Wikipedia says, "According to the Great Filter hypothesis at least one of these steps - if the list were complete - must be improbable." That is, if "Great Filter" means anything, it's that one or more of the steps to achieving a technological civilization that can expand throughout the galaxy is very difficult ("improbable").

What I'm talking about goes like this: suppose that none of the steps are very difficult. Of course, that doesn't mean they're instantaneous - each step takes time. You need elements o...

I resolved my typical adolescent existential crisis (for the time being) in a somewhat atypical fashion, concluding after much deliberation that I ought to pause the crisis until I know what's True and what's not, which might mean pausing it forever.

How can I resolve an existential crisis without knowing what meaning, purpose, value, etc. Truly are? Rationality makes the most persuasive claim to the distillation of Truth, so I am an aspiring rationalist.

Thanks for organizing this! Let me know if there's anything I can do to help.

Why value currently alive people dramatically more than not-alive-yet people?

Actually, that statement was probably incorrect. This is an area my moral framework isn't well prepared to handle, and my attempts to fix it have all resulted in hypothetical outcomes I'm not happy with. (I'd elaborate, but it's not really possible to do so without going through the entire function, which I should probably attempt to do soon but won't right now.)