If you do decide to try it, start with a very low dose.
Where this chain of reasoning breaks down for me is in the "without resistance" phase of "take right action without resistance". If the resistance, both conscious and unconscious, is too strong, there will be no right action taken, whether I will it or no. So what I do instead is undermine the resistance itself. This is my precondition for taking right action. Do you see what I mean? Wu-wei prevents hedonism if wu-wei is essential to hedonism but there can be no wu-wei.
The similarity between our approaches is as you say: the realization that akrasia defeats frontal assaults with heavy casualties. The difference is that you are doing something like the "take right action without resistance" approach that I've encountered before in Buddhism, which matches up nicely with anhedonia (personally I am a hedonist, so this does not work for me); while I am attempting to root out the basic causes of my akrasia, down to the very sources, to change the way I feel in the first place. Both approaches have their merits, and I...
Two Disclaimers: First, I am not a doctor. Second, beware of other-optimizing. This advice is working well for me, but it may not work well for others.
The depression became obvious and major enough that I was forced to take action to stop it. The rationalizations had run dry, so I fully realized in both System 1 and System 2 that I was not "unmotivated", I was mentally ill. Years of life hacks and half-assed lifestyle interventions had accomplished something, but not enough, so it was time for medications, which I had previously feared due to bad expe...
Agreed. Personal anecdote: once I redefined my "motivation problem" as a "depression and anxiety problem" a number of months ago, and began treating this depression and anxiety instead of wearily trying out yet another willpower hack, I have made more progress in being motivated in months than I had in the previous years.
Your point on a description of Harry's thinking is well-taken. I just had my brother submit this as a review, to err on the side of caution:
"With NickRoy's permission, I am submitting his solution, which I agree with, with additional evidence appended, just in case that is necessary; so consider this as superseding NickRoy's submission:
[the relevant text is here in the submission, but I don't need to repeat it in this comment]
Harry does not know the full prophecy for certain, but he can guess it, based on: Harry's thought on star lifting in r...
Voldemort would be skeptical, yes, but he would also be interested, because "6. It is impossible to tell lies in Parseltongue" and because all this speech has to do is raise the risk enough that it makes more sense to stop and gather more information before killing Harry, thus it "allow[s] Harry to evade immediate death". What do you think would improve the believability?
Sure. Along with the centaur evidence, there's: Harry's thought on star lifting in response to this prophecy in Ch. 21, Harry noticing Quirrelmort's interest in the same prophecy in Ch. 86, Quirrelmort's talk of the stars' vulnerability to "sufficiently intelligent idiocy" in Ch. 95, Voldemort's "while the stars yet live" remark in Ch. 111, Voldemort's more explicit talk on the prophecy and his great fear of it in the next chapter, and how the Unbreakable Vow is framed in the most recent chapter. If Harry connects these dots, he'll have a good idea of what the full prophecy says.
Harry hisses "You have missinterpreted prophecy, to your great peril, becausse of power I have, but you know not. Yess, you are sstudying sscience, but, honesstly, you are yearss behind me. It may be that thiss power you know not iss ssomething I have at thiss sspecific time, that you will not know for too many yearss hence.
Before I explain, remember my Vow, and know my honesst intention not to desstroy the world, Vow or no. Now, do you know why I would tear apart the very sstarss? Do you know how? Not to desstroy the world, but to ssave it from what...
Please add this as a review so Eliezer definitely sees it!
Paths of Glory (1957), film. Kirk Douglas vs. Moloch. An anti-war film, for reasons both usual and unusual.
I would die of smallpox.
I upvoted, but I'll clarify why, as this is a list: the only name I like on this list is Level Up, but I strongly like it.
Personally, I figure I'm not intelligent enough to research hard problems and I lack the social skills to be an activist, so by process of elimination the best path open to me for doing some serious good is making some serious money. Admittedly, some serious student loan debt also pushes me in this direction!
"Proving useful in your life" (but not necessarily "proving beneficial") is the core of instrumental rationality, but what's useful is not necessarily what's true, so it's important to refrain from using that metric in epistemic rationality.
Example: cognitive behavioral therapy is often useful "to solve problems concerning dysfunctional emotions", but not useful for pursuing truth. There's also mindfulness-based cognitive therapy for an example more relevant to Buddhism.
Ah, I should have taken that possibility into account. Thank you.
Good instrumental rationality quote; not so good for epistemic rationality.
So, with a 60% chance of girlfriend breakup and a 90% chance of new partner acquisition, does this mean a 36% chance of a polyamorous, open, "cheating" or otherwise non-monogamous relationship situation for you at some point over the next year?
Edited to add: actually somewhat higher than 36%, since multiple new partners are possible along with a girlfriend breakup.
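To make the arithmetic behind the 36% explicit: a non-monogamous overlap requires acquiring a new partner while the current relationship is still ongoing, so (treating the two events as independent, which is an assumption) the probability is P(new partner) × P(no breakup). A minimal sketch:

```python
p_breakup = 0.60      # stated: chance of girlfriend breakup this year
p_new_partner = 0.90  # stated: chance of acquiring a new partner this year

# Overlap (a non-monogamous situation) = new partner acquired while the
# current relationship has not ended; assuming independence:
p_overlap = p_new_partner * (1 - p_breakup)
print(round(p_overlap, 2))  # 0.36
```

As the edit notes, this is a lower bound: with multiple possible new partners, overlap can also occur between new partners even after a breakup.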
Fascinating, thank you. I also realize that I should have Googled that before asking.
Interesting! What do you think a "bi" listing can signal? Openness to experience?
Edited for clarity. Also: I'm not complaining, but I am genuinely curious as to why this comment has been downvoted. Is this a sensitive topic?
Since it's difficult to predict the date of the invention of AGI, has SI thought about/made plans for how to work on the FAI problem for many decades, or perhaps even centuries, if necessary?
Agreed on the excellence of "Why Spock is Not Rational". This chapter is introductory enough that I deployed it on Facebook.
Okay, but only 3.5%. I wonder how many are newbies who haven't read many of the sequences yet, and I wonder how many are simulists.
What surprised you about the survey's results regarding religion?
I currently route around this by being an ethical egoist, though I admit that I still have a lot to learn when it comes to metaethics. (And I'm not just leaving it at "I still have a lot to learn", either - I'm taking active steps to learn more, and I'm not just signalling that, and I'm not just signalling that I'm not signalling that, etc.)!
I ran across OB while being horribly akrasic on Reddit a few years ago.
My thoughts on further social business opportunities: how about rationality consulting? If SI/LessWrong can establish enough credibility as rationalists this is worth money to both non-profit organizations and for-profit businesses, as well as potentially to consumers (as with Eliezer's rationality books). Rationality consulting would probably have to be done for free at first, of course. As a secondary benefit, this would also help with the ongoing effort to measure the impact rationality training has on an individual or an organization.
On a meta level, o...
That's an excellent point. I wonder whether it's too late at this point for a renaming.
Sorry about that! It's difficult (for me at least) to express tone over the Internet. I'll have to practice that.
Non-profit organizations like SI need robust, sustainable resource strategies. Donations and grants are not reliable. According to my university Social Entrepreneurship course, social businesses are the best resource strategy available. The Singularity Summit is a profitable and expanding example of a social business.
My question: is SI planning on creating more social businesses (either related or unrelated to the organization's mission) to address long-term funding needs?
By the way, I appreciate SI working on its transparency. According to my studies, tr...
What strikes me most about this post: the enthusiasm! I find it refreshing for this site and appropriate for this subject matter. Congratulations on successfully feeling rational, D_Malik.
Why not use several different methodologies on GiveWell, instead of just one, since there is some disagreement over methodologies? I can understand giving your favorite methodology top billing, of course (both because you believe it is best and it is your site and also to avoid confusion among donors), but there seems to be room for more than one.
True. It might be interesting to see if any hidden commonalities among Less Wrongians exist, however, if the "Other" option comes along with a "fill-in-the-blank" field. It might also be a good idea to include this "Other" option in addition to the other options to avoid everyone checking "Other".
Taken. Thanks Yvain, I appreciate this effort!
Nitpick: why no "Other" categories for Participation and Expertise?
I'm considering the possibility of an experimental treatment becoming available during those two months that could save the terminally ill patient from dying of that illness. Being alive would then allow the possibility of new life extension treatments, which could lead to a very long life indeed.
This would be a conjunction of possibilities, so I realize that the overall possibility of a terminally ill person transitioning to a very long-lived person is slim, but even a slim chance of living for a very long time is worth almost any degree of suffering. If no...
True. I doubt it would be as cheap, safe and effective as it is without modernity, though.
A long-term goal of Less Wrong is to achieve the benefits of religion without becoming a religion. LW Meetups are partially an attempt at achieving a rationalist sense of community, for instance. Stanislav Petrov Day and Vasili Arkhipov Day are steps in the direction of rationalist rituals in general and rationalist holidays in particular. In addition to creating more holidays, I suggest that we figure out ways to celebrate them, rather than simply marking them.
I'll take a crack at it. A holiday celebrating existential risk reduction is a glorious opportuni...
Modern medicine, for keeping me sane. Without aspirin, my TMJ pain would be serious trouble.
You can thank ancient medicine (not modern) for the use of "aspirin" to treat joint pain. Using medicines derived from willow trees and other salicylate-rich plants for pain relief has been around since at least Hippocrates and probably even the Sumerians.
Yet, at least. Hypothetical example: I wonder if something like the Voluntary Human Extinction Movement will eventually switch out the "Voluntary" in favor of "Mandatory". But that's speculative, and you are right empirically.
The same is true for counter-terrorists.
Something I just noticed from Ch. 55:
Amelia Bones: "Someone would burn for this."
Did Amelia Bones burn Narcissa Malfoy?
Actually, I just had a chilling realization in regards to that. From chapter 62:
'"No," said the old wizard's voice. "I do not think so. The Death Eaters learned, toward the end of the war, not to attack the Order's families. And if Voldemort is now acting without his former companions, he still knows that it is I who make the decisions for now, and he knows that I would give him nothing for any threat to your family. I have taught him that I do not give in to blackmail, and so he will not try."
Harry turned back then, and saw a coldness...
I see. Good answer.
Wouldn't this mean that the Great Filter is behind us?
Not necessarily. As Wikipedia says, "According to the Great Filter hypothesis at least one of these steps - if the list were complete - must be improbable." That is, if "Great Filter" means anything, it's that one or more of the steps to achieving a technological civilization that can expand throughout the galaxy is very difficult ("improbable").
What I'm talking about goes like this: suppose that none of the steps are very difficult. Of course, that doesn't mean they're instantaneous - each step takes time. You need elements o...
I resolved my typical adolescent existential crisis (for the time being) in a somewhat atypical fashion, concluding after much deliberation that I ought to pause the crisis until I know what's True and what's not, which might mean pausing it forever.
How can I resolve an existential crisis without knowing what meaning, purpose, value, etc. Truly are? Rationality makes the most persuasive claim to the distillation of Truth, so I am an aspiring rationalist.
Thanks for organizing this! Let me know if there's anything I can do to help.
Why value currently alive people dramatically more than not-alive-yet people?