
I believe this lesson is designed for crisis situations where the wiser person taking the time to explain could be detrimental. For example, a soldier believes his commander is smarter than him and possesses more information than he does. The commander orders him to do something in an emergency situation that appears stupid from his perspective, but he does it anyway, because he chooses to trust his commander's judgement over his own.

Under normal circumstances, there is of course no reason why a subordinate shouldn't be encouraged to ask why they're doing something.

It's a comment on one of Eliezer Yudkowsky's Facebook posts. I got permission to post it here, as I thought it was worth posting.


The Courage Wolf looked long and slow at the Weasley twins. At length he spoke, "I see that you possess half of courage. That is good. Few achieve that."

"Half?" Fred asked, too awed to be truly offended.

"Yes," said the Wolf, "You know how to heroically defy, but you do not know how to heroically submit. How to say to another, 'You are wiser than I; tell me what to do and I will do it. I do not need to understand; I will not cost you the time to explain.' And there are those in your lives wiser than you, to whom you could say that."

"But what if they're wrong?" George said.

"If they are wrong, you die," the Wolf said plainly, "Horribly. And for nothing. That is why it is an act of courage."

  • HPMOR omake by Daniel Speyer.

Welcome to Less Wrong!

This is an old topic. Note the title: Welcome to Less Wrong! (2012). I'm not sure where the new topic is, or even if it exists, but you should be able to search for it.

I recommend starting with the Sequences: http://wiki.lesswrong.com/wiki/Sequences

The sequence you are looking for regarding "right" and "should" is likely the Metaethics Sequence, but said sequence assumes you've read a lot of other stuff first. I suggest starting with Mysterious Answers to Mysterious Questions, and if you enjoy that, move on to How to Actually Change Your Mind.

In that case, I pre-commit that if I win, I'll spend it on something leisure-related or some treat that I otherwise wouldn't be able to justify the money to purchase.

I co-operated; I'd already committed myself to co-operating on any Prisoner's Dilemma involving people I believed to be rational. I'd like to say it was easy, but I did have to think about it. However, I stuck to my guns and obeyed the original logic that got me to pre-commit in the first place.

If I assume other people are about as rational as me, then a substantial majority of people should think similarly to me. That means that if I decide that everyone else will co-operate and thus I can defect, there's a good chance other people will come to the same conclusion as well. The best way to go about it is to pre-commit to co-operation, and hope that other rational people will do the same.
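That reasoning can be made concrete with a toy calculation. Below is a minimal Python sketch, with made-up payoff numbers and an assumed "correlation" parameter (neither comes from the original comment), illustrating that once your decision is sufficiently predictive of what other similar reasoners will decide, pre-committing to co-operate has a higher expected payoff than defecting.

```python
# Toy sketch of the correlated-decision argument above (illustrative only;
# the payoff numbers and correlation levels are invented for this example).

def expected_payoff(my_move, correlation, mutual_coop_value=100, defect_bonus=30):
    """Expected payoff if my choice is evidence about what similar reasoners choose.

    `correlation` is the assumed probability that another roughly-as-rational
    player lands on the same decision I do.
    """
    # How likely a typical other player is to cooperate, given my own move.
    p_other_coop = correlation if my_move == "cooperate" else 1 - correlation
    shared = mutual_coop_value * p_other_coop  # value that only exists if others cooperate
    bonus = defect_bonus if my_move == "defect" else 0
    return shared + bonus

for corr in (0.5, 0.7, 0.9):
    coop = expected_payoff("cooperate", corr)
    defect = expected_payoff("defect", corr)
    print(f"correlation={corr:.1f}: cooperate={coop:.0f}, defect={defect:.0f}")
```

At low correlation defection still wins in this toy model, which is why the argument leans on the assumption that most participants reason similarly enough for one's own decision to be strong evidence about theirs.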

Thanks for the chance to test my beliefs with actual stakes on the line :)

I wanted to thank you for this. I read this post a few weeks ago, and while it was probably a matter of like two minutes for you to type it up, it was extremely valuable to me.

Specifically a paraphrase of point B, "The point where you feel like you should give up is way before the point at which you should ACTUALLY give up" has become my new mantra in learning maths, and since I do math tutoring when the work's there, I'm passing this message on to my students as well.

So, thank you very much for this advice.

The main technique I used was bypassing the "trying to try" fallacy, as well as some HPMOR-style thinking: obstacles mean you get creative, rather than give up. The most important thing was just not giving up upon finding the first reasonable-sounding solution, even if its chances of success weren't particularly high.

As to how I applied it, that was the best part, and what the second paragraph alluded to; it was my default response, to the point where I was briefly stunned when my friend was throwing up easily circumventible roadblocks to my ideas as if they were impossible obstacles. (And I did talk to him, in case he had other motives for not wanting to do the plan and was thus actively trying to come up with reasons not to do it.)

It was only then that I reviewed my own thinking and realised how far I'd come since I first found HPMOR and LessWrong. I'd ceased to think of this particular method as unusual; I thought it was how any intelligent person attempted to solve their problems, but my friend matches me intellectually.

If you meant "how" as in specifics: my friend needed to earn extra money, and his reasonable-sounding solution was to find employment, despite the poor prospects for it in his area, and despite the fact that he'd looked before and hadn't found anything. To him, the solution stopped there, because it could work, whereas that didn't meet my goal of solving my friend's problem on its own, due to its unreliability. So I helped him leverage some of his other talents, in addition to looking for work. (Which is a good plan, just not sufficiently reliable on its own.) None of my ideas were particularly brilliant, but I wouldn't have found them if I'd stopped at the reasonable-sounding solution and decided that was sufficient effort for victory.

Honestly, it's still weird to me right now. I was actually embarrassed writing this comment, because writing it out made it seem so trivial and not worth being proud about, and I had to remind myself that if it really was that obvious, my friend would have done it himself. Not to mention that a couple of years ago I'd have done the exact same thing in his position.

I got to use rationality techniques to not only solve a friend's problem that had been ongoing for months, but also managed to completely change the way he thought about problem-solving in general. Not sure if that second part will actually stick.

On a related note, that was when I found out that I've internalised the basics of how to REALLY approach a problem with the intent of solving it, to such a degree that I'd forgotten that my thought process was unusual.

How'd it go?

EDIT: My bad, I thought this was posted on 22 January 2013, not 22 January 2012. I'll leave this up just in case though.

What I've found the spoilt version of Nethack tests, more than anything else, is patience. Nethack spoilt isn't about scholarship, really. You don't study. You have a situation, and you look up things that are relevant to that situation. There is a small bit of study at the beginning, generally when you look up stuff like how to begin, what a newbie-friendly class/race is, and how to not die on the second floor.

But really, it's patience. I once did an experiment where players who were relatively new to Nethack were encouraged to spoil themselves as early and often as possible, and to request advice frequently from better players. Really, anything short of having someone else play the game for you was not only allowed, but actively encouraged. Since I usually put a limiter on how willing I am to spoil myself on roguelikes, I thought this might be fun. (Namely, I'm unwilling to ask for advice in tactical situations, only strategic ones: "Which area should I go to next?" rather than "How do I kill this ogre?")

Conventional wisdom for Nethack states that upon reaching the halfway point of the game, you should win from there if you play correctly. I got about three-quarters of the way there, on my third run, having never gotten past the second floor on my runs prior to those three. I died to a misclick, not to lack of knowledge or poor tactics. So, patience is the true virtue of Nethack: It's surprisingly easy to win as long as you spoil yourself, get advice, and don't screw up.

Sadly, the experiment only had the one participant actually try it, namely me, so the evidence shall remain anecdotal.
