Bo102010

On the importance of Less Wrong, or another single conversational locus

Others have made these points, but here are my main points:

  • The site was best when there was a new, high-quality post from a respected community member every day or two.
  • The ban on politics means that a lot of interesting discussion migrates elsewhere, e.g. to Scott's blog.
  • The site's current structure (posts vs. comments) seems dated. I'd like to try something like discourse.org.
Stupid Questions January 2015

It requires some status and a consistent record of not being a jerk to do this (or to convince yourself to do this), but: "[Big Talker] has been talking for 2 hours, and [Small Talker] hasn't really had much opportunity to talk about [thing Small Talker does]. Mind if we hear from [Small Talker] for a bit?"

Stupid Questions January 2015

Reading SSC brings back the feeling I got when I first discovered Less Wrong (right after the split with Overcoming Bias, when there were still sequences being posted). Here's this extremely intelligent and articulate guy, posting very insightful things on topics I didn't even know I was interested in -- and he's doing it pretty regularly!

I like what Less Wrong has evolved into in the post-Sequences era, but reading Less Wrong today produces a very different feeling from when it did early on.

TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers)

I enjoyed the episode also. The show is consistently solid, which is quite impressive - I don't think there's been an episode that's really low quality. The peaks aren't very high, but there are no valleys to speak of...

There was a laughable P vs. NP-themed episode in a previous season in which mathematicians use their proof to hack computers, but other than that the episode was watchable.

Simulate and Defer To More Rational Selves

Great post!

Others have mentioned the HPMOR-style "take a poll of different aspects of your personality," which I have found to be entertaining and useful.

I'd also like to endorse the method for troubleshooting. I got the idea from Ridiculous Fish's blog post from 3 years ago.

When I have a technical problem I'm stuck on, I try to ask myself "What would someone who's smarter than me do?" This is really just "imagine a parody version of person x and see if that causes you to think about the problem in a different way."

I like to consult Imaginary Dr. House ("The problem is something very rare and obscured because your data is lying to you"), my former boss ("The problem is the most obvious thing it could be, trust yourself and go solve it!"), my college roommate ("Maybe there's a YouTube video from a dedicated hobbyist that explains this"), and some others.

I wrote up one experience with this technique (not as good as Ridiculous Fish's) a few months ago, when I had a baffling issue to solve at work (FTP on April 26th at 2 AM).

Caring about what happens after you die

I am, reluctantly, someone who pretty much doesn't care about what happens after I die. This is a position that I don't necessarily endorse, and if I could easily self-modify into the sort of person who did care, I would.

I don't think this makes me a monster. I basically behave the same way as people who claim they do care about what happens after they die. That is, I have plans for what happens to my assets if I die. I have life insurance ("free" through work) that pays out to my wife if I die. I wouldn't take a billion dollars on the condition that a third world country would blow up the day after I died.

As you say, though, it's "me-of-the-present" that cares about these things. With the self-modification bit above, really what I mean is "I'd like to self-modify into the sort of person who could say that I cared about what happens after I die and not feel compelled to clarify that I really mean that I think good things are good and that acting as if I cared about good things continuing to happen after I die is probably a better strategy to keep good things happening while I'm alive."

2012 Survey Results

10 people said "Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year" over "Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year" on CFAR question #4...

I said "Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year" personally. I think there's a case for B, maybe, but who picks C?
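The dominance argument can be made explicit with a quick calculation using the numbers from the survey question (the cost-per-headache figures below are my own arithmetic, not from the survey):

```python
# Survey drug options, relative to a baseline of 100 headaches per year:
# (headaches remaining per year, annual cost in dollars)
drugs = {
    "A": (30, 350),
    "B": (50, 100),
    "C": (60, 100),
}

for name, (remaining, cost) in drugs.items():
    avoided = 100 - remaining
    print(f"Drug {name}: {avoided} headaches avoided, "
          f"${cost / avoided:.2f} per headache avoided")

# Drug B strictly dominates Drug C: identical cost, more headaches avoided,
# so there is no preference ordering under which C beats B.
assert drugs["B"][1] == drugs["C"][1]      # same cost
assert drugs["B"][0] < drugs["C"][0]       # fewer headaches remaining
```

This prints $5.00 per headache avoided for A, $2.00 for B, and $2.50 for C, which is why picking C over B is the puzzling answer: whatever one's budget or pain tolerance, B is at least as good on every dimension.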

The Useful Idea of Truth

Not to mention that any candidate up to the task likely has more lucrative alternatives...

How to deal with someone in a LessWrong meeting being creepy

I'm genuinely curious why hg00's amended comment is now even more heavily downvoted, and why my advice is as well. Generally I take downvotes to mean "Would not like to read more of such comments at Less Wrong," but I'm a little puzzled by these.

How to deal with someone in a LessWrong meeting being creepy

I didn't think it was quite fair that your comment was downvoted to -2, but then I read the sentence "When women feel desperate, they cry about it."

While I think your comment was overall constructive to the discussion, that kind of thing is a turnoff. I assume you meant it in the best possible way, but I would encourage you to avoid that particular construction in the future.
