For miscellaneous discussions and remarks not suitable for top-level posts even in the Discussion section, let alone in Main.

(Naturally, if a discussion gets too unwieldy, celebrate by turning it into a top-level post, just like in the good old days.)

Trivial Inconvenience Alert: I just realized that I seldom browse the comment feeds anymore, since it now requires an extra click.

I feel that way about everything. I wish there were a feed that showed all threads and didn't distinguish between Main and Discussion.

http://lesswrong.com/r/all/ does that for posts, but it looks like http://lesswrong.com/r/all/comments just shows comments from the Main section. Perhaps the latter behavior should be reported as a bug? In the meantime, if you want an RSS feed, combining two RSS feeds into one is pretty trivial with Yahoo Pipes.
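
And if you'd rather not depend on Pipes, here's a rough Python sketch of the same merge (the feed URLs are my guesses at what LW exposes, not verified):

    import feedparser  # pip install feedparser

    # My guesses at the Main and Discussion comment feeds; substitute the real URLs.
    FEEDS = [
        "http://lesswrong.com/comments/.rss",
        "http://lesswrong.com/r/discussion/comments/.rss",
    ]

    # Collect entries from both feeds and interleave them newest-first.
    entries = []
    for url in FEEDS:
        entries.extend(feedparser.parse(url).entries)
    entries.sort(key=lambda e: e.published_parsed, reverse=True)

    for e in entries[:50]:
        print(e.published, "|", e.title, "|", e.link)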

Thanks. That's exactly what I wanted. Well, I'd prefer they just have the title, but I can't be too picky.

Note that you can also get to the comment feed by clicking on "Recent Comments"; unfortunately this still requires scrolling down.

Bookmark them?

Would need to be done by every user on all repeatedly-used computers (impossible on ones used once or aggressively cleaned); generalising leads to many unmanageable bookmarks.

Possibly interesting article on winning: How to seem good at everything: Stop doing stupid shit

Summary, as I interpreted it: In practicing a skill, focus on increasing the minimum of the quality of the individual actions comprising performing the skill.

(Is this worth a /r/discussion link-post?)

Yes.

Silly hypothesis: all reductionists are actually p-zombies. Lacking qualia, and generalizing from one example, we assume incorrectly that "qualia" must refer to something that our own brains do, and wind up constructing theories that try to explain qualia in terms of material processes.

That's strictly speaking nonsense given the definition of p-zombies: they are defined to be indistinguishable from conscious people, so this would be a detectable difference. But the thing you actually mean is interesting. We know it's false due to all the specific things we know about the brain, but 50 years ago I might have taken it seriously.

I remember that being proposed seriously by a commenter during the Sequences.

I'm reading Mere Christianity, and boy howdy is it a hair-puller. It made me so mad that on about page 90 he was talking about donating to charity until it hurts, which reminded me of the current SIAI fundraiser. I know that donating money out of spite isn't exactly the healthiest of actions, but I've got $1000 that says fuck him.

"SingInst 2011 Summer Challenge: Give until it hurts the ghost of CS Lewis!"

What exactly about it makes you angry?

I'm having trouble explaining myself, so maybe an example of Lewis' text with an approximation of my response at the time will suffice. This excerpt was chosen because it was the last straw that prompted me to write an email to several friends to vent about my issue with Lewis.

At first it is natural for a baby to take its mother’s milk without knowing its mother. It is equally natural for us to see the man who helps us without seeing Christ behind him. But we must not remain babies. We must go on to recognise the real Giver. It is madness not to.

I tend to get annoyed when an author throws out a couple of vague metaphors and then tells me that I ought to do something. I get even more annoyed when they tell me that I am insane if I don't follow their advice. At this point in the reading I actually shouted out loud "WHY!?"

Because, if we do not, we shall be relying on human beings. And that is going to let us down. The best of them will make mistakes; all of them will die.

Holy crap! Is Lewis psychic? Did he hear me back in time, screaming at him that his reasoning is not coherent to me? You might think so, but then you would have to explain why the follow-up was even less reasonable than the metaphors, and something of a non sequitur. Granted, if you think really hard you can come up with a satisfactory response that threads all these thoughts together into a coherent chain. I've even done that myself while writing this comment. But at its core, Lewis is arguing something here: that nothing good that comes from people actually comes from people, and that we must thus treat all beneficial things as acts of God. He goes on to hedge this claim with some nice words.

We must be thankful to all the people who have helped us, we must honour them and love them. But never, never pin your whole faith on any human being: not if he is the best and wisest in the whole world. There are lots of nice things you can do with sand: but do not try building a house on it.

He's like someone in an asylum briefly realizing that he's not actually Napoleon, and then imagining himself on a horse because he likes the idea of it more than being in a padded cell. But yes, there you have it. You can't rely on anything in this world because things break down and shit happens, and therefore you must rely on fiction. Because we all know fiction will get you to where you need to be.

I could go on, if you want, but I think this is getting a bit long for a single comment.

Oh, interesting. What were your reasons for reading it? I've read it this past week as part of a "deal" with a friend (more on this in a Discussion thread).

A couple years ago, a friend suggested that I read it. He thought I would be interested in the perspective of an ex-atheist who, according to the friend, came to Christianity through reason instead of by the usual means. I kinda sat on the suggestion, along with a few other recommendations that accumulated (Expelled, What The Bleep Do We Know, Theology And Sanity, Catholic For A Reason...). This summer, I've read a lot more books than has been my average, so it wasn't hard to throw it into the list. Especially after I found a free copy of the book somewhere online.

I can't wait for the Discussion thread, to see what the "deal" was. Plus it would be nice to discuss aspects of the book with someone who is fresh on the subject.

A post in French about "You always want to be right!" presents an interesting hypothesis: People who always want to actually be right like corrections a lot (because they make them righter). So they emit a lot of them; whenever someone makes a mistake, they offer a patch. But most people dislike corrections; when presented with one, they distort it instead of updating. So they end up with two mistakes instead of one. This leads the corrector to emit another correction, making things worse. Therefore, the interlocutor sees someone who constantly tells them they're wrong, but is never right (because their words get distorted before reaching consciousness) - someone who refuses to lose debates ("who always want to be right").

This is interesting. In particular, it explains why I often get called this by people who seem, both to me and to others, to "always want to be right" (make obvious mistakes, refuse to admit them). If it were just Dunning-Kruger (people who think "Oh, I'm so good at changing my mind in response to evidence!" being worse at it, and getting called out), we shouldn't expect such a pattern.

Alternately, maybe they're accusing us of being clever arguers.

This situation is common - Alice cares about being right, verifiably changes her mind unusually often, including saying "You're right, I was wrong" during debates, likes to look at the evidence; Bob (according to several outsiders) often defends propositions like "The sky is green" in the face of contrary evidence, and gets angry when corrected; yet Bob accuses Alice of always wanting to be right.

It can't just be about status. Bob would just call Alice a jerk or something. The hypothesis I linked is the best I've seen so far. What's going on?

I think part of what is going on is that many forms of tribal allegiance are either defined by or illustrated by shared beliefs (e.g. our religion is right, our sports team is the best, our political stance is correct, etc.). So repeatedly correcting someone carries not just a simple status hit but also an implicit attack on their loyalty and an undermining of tribal allegiance. Note that this is to some extent simply a variation on the status hypothesis. Both the simple status hypothesis and this one predict that people will respond better to corrections given in a less public setting, which seems to be true.

I can't present much in the way of evidence, but I think it is about status, and 'you always want to be right' is a more-specific way of calling someone a jerk.

It may be about status in a way that's not immediately obvious, though - my model suggests that it's less about who's got higher status and more about something like equanimity, and that the question is whether or not Alice is trying to make a power grab; if not, the common wisdom is that she won't consider it worthwhile to fight about something just for the sake of being right.

Actually, on further reflection, this reminds me of a model I read about a while ago that suggests that uncertainty in relative status is important for group cohesion - that only the group alpha and the group omega can have approximately-known status, and that between those two extremes someone making their relative status clear will be a destabilizing influence, for reasons that either weren't presented well or that I've forgotten. I'll see if I can find it; it was a rather complicated model, of which this is just a small part, but it seemed potentially useful.

ETA: Found it. That's actually the last post in the series, and it uses some specialized definitions for deliberately semi-offensive words, so it might be better to start at the beginning.

Agree it's about status, disagree it's only about status, or there'd be no reason for the way to be that specific.

Agree about egalitarian pressure.

It was "Distracting wolves and real estate agents". Do you mean that the correct response isn't admitting the other person is right (what my mother advises), which loses the fight, but rather dropping the topic, making it unclear who won?

Actually I was thinking of something else (I ETA'd my last post with links), but that's an interestingly similar example.

As to what's 'correct', it depends on one's goals and preferences.

What do you mean, then?

It's rather safe to assume that anyone interested in the questions has the following preferences:

  • Not being thought of as a jerk who always wants to be right;
  • Being as right as possible;
  • Helping others be as right as possible;
  • Enjoying socialization (of which the first item is a subgoal).

The first, second, and fourth of those are well served by noting the difference between being right and being known to be right, and not worrying about the latter in situations where the other person doesn't value objective rightness. That basically describes my personal policy, anyway - I have a strong habit of going "oh, ok" and dropping the subject at the first sign of annoyance on the other person's part in such cases, unless there's something at stake beyond just their knowledge, and that seems to work well enough.

I really wish that someone would develop an algorithm that stitched together news, discussion and academic papers so that a debate could be tracked. I'd especially like it if, at the end, the system would spit out "RESOLVED: XYZ is true/sorta true/a total load. [Here's why.]"

I figure that you could currently train a machine to recognize a spin-down in academic debates, and hire someone who could then review the literature and write up the resolution.

I just feel the need to say this. I'm so tired of losing track of things and then never knowing whether I was right or wrong about some issue as a result.

...train a machine to recognize a spin-down in academic debates...

Off the top of my head, having a program track a Google Alerts feed on the topic (with filtering via Yahoo Pipes if Alerts gives too many false positives) and let you know when it's gone a certain amount of time without getting any input seems like it would be a reasonable first approximation of that.
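
Something along these lines, as a very rough sketch (the feed URL is a placeholder, and the thresholds are arbitrary):

    import time
    import feedparser  # pip install feedparser

    FEED_URL = "http://example.com/alerts-feed.rss"  # placeholder for your Alerts feed
    QUIET_DAYS = 30             # declare the debate "spun down" after this much silence
    POLL_SECONDS = 6 * 60 * 60  # check four times a day

    last_seen = time.time()
    while True:
        feed = feedparser.parse(FEED_URL)
        if feed.entries:
            # Track the timestamp of the newest item we've ever seen.
            newest = max(time.mktime(e.published_parsed) for e in feed.entries)
            last_seen = max(last_seen, newest)
        if time.time() - last_seen > QUIET_DAYS * 86400:
            print("No new items for %d days; the debate may have wound down." % QUIET_DAYS)
            break
        time.sleep(POLL_SECONDS)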

So I recently came across this paper, http://arxiv.org/abs/1107.5849, which seemed relevant to us but which I really don't have time to read right now, not least because I don't actually know anything of quantum information theory and so would need a bit more background to actually understand it.

The reason I thought it relevant was -- well, since I began to understand that QM runs on amplitudes, not probabilities, it's bothered me that we fundamentally still use probabilities rather than amplitudes to represent uncertainty. Of course there are good reasons for doing this (Savage...), it's good enough most of the time, and it's not at all clear how amplitudes could sensibly be assigned in most cases, but it still bugs me. I was wondering if this paper did anything to elucidate how such a setup might work? Because it seems to treat how you would go about conditioning on an event, and the inability to do so seems a more fundamental obstacle than the ones I listed above.

If not, perhaps it's still relevant to us for other reasons anyway. :)
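
For anyone who hasn't seen the amplitude/probability distinction spelled out, the standard textbook relationship (nothing specific to this paper, and no claim about how to assign amplitudes to everyday uncertainty) is:

    % A state is a vector of complex amplitudes, not probabilities:
    |\psi\rangle = \sum_i \alpha_i |i\rangle, \qquad \alpha_i \in \mathbb{C}, \qquad \sum_i |\alpha_i|^2 = 1
    % Observed probabilities come from the Born rule:
    p_i = |\alpha_i|^2
    % Unlike probabilities, amplitudes for two routes to the same outcome can cancel:
    |\alpha_1 + \alpha_2|^2 \neq |\alpha_1|^2 + |\alpha_2|^2 \text{ in general.}

The point is that amplitudes can interfere destructively where probabilities can only accumulate, which is why swapping one for the other would not be a cosmetic change.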

Does anyone know anything about cryonics in Europe? I've done some looking but I figured that there's bound to be someone that's already done the research.

I'm planning on recording myself reading at least one and possibly more sequences aloud, to be uploaded to dropbox, and posted on the comments, maybe to be collated in a post elsewhere later.

But I need a braindead simple voice recorder to do it, because I'm crap with computers and easily frustrated. MS Sound Recorder does not have an immediately obvious way for me to (a) make the files it records end when I want them to, as opposed to after 60 seconds, or (b) always save as MP3. Audacity requires me to do magic shit with tarballs and unzipping if I want to export its .aup files as .mp3, as opposed to doing what would seem to me the sensible thing and making it work out of the box. I realise it's made by techies, for techies, but I want to register my frustration that they made something that is Windows-compatible and didn't just package LAME painlessly with it. Beware Trivial Inconveniences and all that. I have VLC as well, so if there's a way to do it there with just the right button clicks in the menus, that'd be good too.

Help would be much appreciated.
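
One workaround that avoids the LAME-in-Audacity dance: export as WAV from Audacity (which works out of the box), then convert with the standalone lame command-line encoder. A small Python wrapper as a sketch, assuming lame is installed and on your PATH:

    import subprocess
    import sys
    from pathlib import Path

    # Convert every .wav in a folder to .mp3 by shelling out to the lame encoder.
    # Assumes `lame` is installed and on the PATH; -V2 selects decent VBR quality.
    folder = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for wav in folder.glob("*.wav"):
        mp3 = wav.with_suffix(".mp3")
        subprocess.run(["lame", "-V2", str(wav), str(mp3)], check=True)
        print("wrote", mp3)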

Is there a way to subscribe by RSS to my message inbox, or to get emailed when someone replies to my posts?

There is an RSS feed for the message inbox, though I can only subscribe to it using Firefox's built-in mechanism, not the external RSS reader I use.

ETA: I presume this relates to the authentication cookies LW uses to identify whose inbox to show, which Firefox has access to (since I'm logged in) but the external reader doesn't.
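
If you want the external reader to work anyway, one hack is to fetch the feed yourself with the browser's session cookie attached. A sketch (the feed URL and cookie name are guesses; copy the real ones from your logged-in browser):

    import feedparser  # pip install feedparser
    import requests    # pip install requests

    INBOX_FEED = "http://lesswrong.com/message/inbox/.rss"  # guessed URL
    # The cookie name is a guess (LW runs on a Reddit fork); copy the real
    # name/value pair from your logged-in browser session.
    COOKIES = {"reddit_session": "paste-cookie-value-here"}

    # Fetch the feed with the session cookie attached, then parse the XML.
    resp = requests.get(INBOX_FEED, cookies=COOKIES)
    resp.raise_for_status()
    for entry in feedparser.parse(resp.text).entries:
        print(entry.title, "|", entry.link)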

Did the site layout just change, or is my browser doing funky things?