MugaSofer's Comments

Remembering the passing of Kathy Forth.

Any suicide in general, and this one in particular, definitely has multiple causes. I'm really sorry if I gave the opposite impression.

But I think it's reasonable and potentially important to respond to a suicide by looking into those causes and trying to reduce them.

To be more object-level:

  • Kathy was obviously mentally ill, and her particular brand of mental illness seems to have been well-known. I don't know what efforts were made to help her with that (I do get the impression some were made), but I've seen people claim her case was an example of the ways our community habitually fails to help people with mental illness, and it certainly seems worth looking into that.
  • Kathy publicly attributed her suicide to the fact that she had been sexually assaulted. Whatever else was in play, it's certainly true that sexual assault is a risk factor for suicide and she really does seem to have been assaulted. It behooves us to check for flaws in our protections against this sort of thing when they fail this dramatically.
  • In particular, it seems she felt she didn't know how to avoid inevitably getting assaulted again. I get the impression this was part of a paranoid/depressive spiral on her part. But it's true that this is a real phenomenon and I've talked to other rationalists who have been concerned with this as well.

To return to the meta level, I'm also very concerned by the fact that this has been taken up by the anti-rationalist crowd and this may be making some people defensive. I don't recall anyone saying that we should be so concerned about suicide contagion as to ignore the object-level issues raised completely when Aaron Swartz committed suicide, for example. Maybe we should have been! But the fact that we as a community potentially failed or simply could have done better here means that we should be more careful about dismissing this, not less.

Remembering the passing of Kathy Forth.

It's pretty standard to respond to the suicides of Y victims by rallying to reduce Y.

Making a commitment not to notice when something drives a person to suicide seems like it would probably be a monumental mistake.

37 Ways That Words Can Be Wrong

I don't think so - I think Eliezer's just being sloppy here. "God did a miracle" is supposed to be an example of something that sounds simple in plain English but is actually complex:

One observes that the length of an English sentence is not a good way to measure "complexity". [...] An enormous bolt of electricity comes out of the sky and hits something, and the Norse tribesfolk say, "Maybe a really powerful agent was angry and threw a lightning bolt." The human brain is the most complex artifact in the known universe. [...] The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent.

To a human, Maxwell's Equations take much longer to explain than Thor.

What's up with Arbital?

Will this "Arbital 2.0" be an entirely unrelated microblogging platform, or are you simply re-branding Arbital 1.0 to focus on the microblogging features?

On the importance of Less Wrong, or another single conversational locus

Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.

The Psychological Unity of Humankind

It's almost like having a third sex. In fact the winged males look far more like females than they look like wingless males.

That sounds like exactly the kind of situation Eliezer claims as the exception - the adaptation is present in the entire population, but only expressed in a subset based on the environmental conditions during development, because there's a specific advantage to polymorphism.

There's the whole phenomenon of frequency-dependent selection. Most people are familiar with this from blood types and sickle-cell anaemia.

Those are single genes, not complex adaptations consisting of multiple mutually-dependent genes. Exactly the "froth" he describes.

Ethics Notes

Psy-Kosh: Hrm. I'd think "avoid destroying the world" itself to be an ethical injunction too.

The problem is that this is phrased as an injunction over positive consequences. Deontology does better when it's closer to the action level and negative rather than positive.

Imagine trying to give this injunction to an AI. Then it would have to do anything that it thought would prevent the destruction of the world, without other considerations. Doesn't sound like a good idea.

No more so, I think, than "don't murder", "don't steal", "don't lie", "don't let children drown" etc.

Of course, having this ethical injunction - one which compels you to positive action to defend the world - would, if publicly known, rather interfere with the Confessor's job.

Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119

Well, that and the differences in the setting/magic (there's no Free Transfiguration in canon, for instance, and the Mirror is different - there are fewer Mysterious Ancient Artefacts generally - and Horcruxes run on different mechanics ... stuff like that.)

And Voldemort is just inherently smarter than everyone else, too, for no in-story reason I can discern; he just is, it's part of the conceit. (Although maybe that was Albus' fault too, somehow?)

2013 Less Wrong Census/Survey

I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.

Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.

But I see we agree on this.

That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.

But is it possible to impersonate intelligence? Isn't anything that can "fake" problem-solving, goal-seeking behaviour sufficiently well intelligent (that is, sapient, though potentially not sentient, which could be a problem)?

I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.

When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).

I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.

What makes you think that "individual rights" are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they're the correct moral theory, what evidence would you point to? You might change my mind.

One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.

Oh, everyone is misguided. (Hence the name of the site.) But they generally aren't actual evil strawmen.
