All of Quirinus_Quirrell's Comments + Replies

Honoring Petrov Day on LessWrong, in 2019

I have the launch codes. I'll take the site down unless Eliezer Yudkowsky publicly commits to writing a sequel chapter to HPMoR, in which I get an acceptably pleasant ending, by 9pm PST.

0Liam Goddard2y.
The enemy is smart.
"The enemy knew perfectly well that you'd check whose launch codes were entered, especially since the nukes being set off at all tells us that someone can appear falsely trustworthy." Ben shut his eyes, thinking harder, trying to put himself into the enemy's shoes. Why would he, or his dark side, have done something like - "We're meant to conclude that the enemy has the launch codes. But that's actually something the enemy can only do with difficulty, or under special conditions; they're trying to
... (read more)
4DanielFilan2y(FYI California is currently in the PDT time zone, not PST)
0[anonymous]7yIncendio.
3Vaniver7yIn related news, you should probably update your user page [http://lesswrong.com/user/Quirinus_Quirrell/]. :(
0gjm7yEeeeeeeeek.
1Lumifer7ySlightly paraphrasing the immortal words of Mandy Rice-Davies, "Well you would say that, wouldn't you?" :-D
...

I am not any person named in the linked page, though I have met some or all of them. I am not affiliated with MIRI in any way. I did not post the linked page and I do not know who did.

The linked page is obvious slander. But its creation is a serious matter; the author is threatening to manufacture evidence. Thus, it should be handled the same way as a death threat: with an investigation to determine who sent it. The site is hosted on EasyWeb; the domain name admin contact details point to a proxy called myprivacy.net, but the author is not very technically... (read more)

1Slider7yI understand fabricating evidence is bad. I understand that libel is bad. But I don't see how threatening to libel is that bad (on top of the actual libel).

The more serious it is publicly taken, the more incentive for the author (who could be some guy in Eastern Europe for all we know, well beyond the reach of any legal recourse) to redouble his/her efforts. Someone has spent a considerable amount of time and effort to make the biggest possible splash. Publicly making waves about it is just playing into the splasher's hands.

So I advocate no public engagement on this matter whatsoever, doubled with a consultation with a specialized (not a run-of-the-mill) lawyer. Also, I'd look into the account who made the or... (read more)

-6[anonymous]7y
Farewell Aaron Swartz (1986-2013)

So, The Tech is reporting that Aaron Swartz has killed himself. No suicide note has surfaced, PGP-signed or otherwise. No public statements that I've been able to find have identified witnesses or method. Aaron Swartz was known for having many enemies. There's the obvious enemies in the publishing industry and the US attorneys office. Cory Doctorow wrote that he had "a really unfortunate pattern of making high-profile, public denunciations of his friends and mentors."

I'd like to raise the possibility that this was not a natural event. Most of th... (read more)

No suicide note has surfaced, PGP-signed or otherwise. No public statements that I've been able to find have identified witnesses or method.

Some of this information has been released since the posting of the parent, but because the tone of the post feels like it was jumping a gun or two, I wanted to throw this out there:

There are good reasons why the media might not want to go into detail on these things, especially when the person in question was young, famous and popular. The relatively recent Bridgend suicide spiral was (is?) a prime example of such ... (read more)

No public statements that I've been able to find have identified witnesses or method.

I don't know if the relevant news reports had been released at the time this comment had been posted, but the apparent method of Swartz's death was hanging.

When you narrow down the set of people who could be considered Aaron Swartz's enemies to those who could have him killed and have it reported as a suicide, who would benefit more from his apparent death by suicide than his being drained of funds and convicted of felony, and ask whether this is realistic behavior for ... (read more)

0[anonymous]9yIrrationality Game entry [http://lesswrong.com/lw/df8/irrationality_game_ii/89v1]

The New York Times has more information about circumstances:

Aaron Swartz, a wizardly programmer who as a teenager helped develop code that delivered ever-changing Web content to users and later became a steadfast crusader to make that information freely available, was found dead on Friday in his New York apartment.

He was 26.

An uncle, Michael Wolf, said that Mr. Swartz had apparently hanged himself, and that Mr. Swartz’s girlfriend had discovered the body.

4David_Gerard9yHe had a history of depression he talked about in public a great deal. Your comment is asinine and whoever runs this account should feel ashamed of themselves.
6[anonymous]9yThat's the first think that came to my mind, but I dismissed it as paranoia. But if I'm not the only one... EDIT: OTOH, there's this [http://lesswrong.com/r/discussion/lw/gaq/farewell_aaron_swartz_19862013/89op] ... What person makes a will at 26?
2012 Survey Results

"Eliezer Yudkowsky personality cult."
"The new thing for people who would have been Randian Objectivists 30 years ago."
"A sinister instrument of billionaire Peter Thiel."

Nope, no one guessed whose sinister instrument this site is. Muaha.

Rationality Lessons Learned from Irrational Adventures in Romance

You really ought to get yourself an anonymous alter-identity so you aren't tempted to discuss things like this under your real name. I believe that you in particular should avoid this topic when writing on public forums.

I'm curious as to why me in particular, but I'm happy to hear from you privately. In general, I go with radical transparency. I think that the truth is that so long as you don't show shame, guilt or malice you win. Summers screwed up by accepting that his thoughts were shameful and then asserting that they were forced by reason and that others were so forced as well. This is both low-status and aggressive, a bad combination and a classic nerdy failure mode.

[LINK] Terrorists target AI researchers

One Quirrell point to JoshuaZ for getting both of the reasons, rather than stopping after just one like jimrandomh did.

(I'm going to stop PGP signing these things, because when I did that before, it was a pain working around Markdown, and it ended up having to be in code-format mode, monospaced and not line broken correctly, which was very intrusive. A signed list of all points issued to date will be provided on request, but I will only bother if a request is actually made.)

[LINK] Terrorists target AI researchers

A while back, I claimed the Less Wrong username Quirinus Quirrell, and started hosting a long-running, approximate simulation of him in my brain. I have mostly used the account trivially - to play around with crypto-novelties, say mildly offensive things I wouldn't otherwise, and poke fun at Clippy. Several times I have doubted the wisdom of hosting such a simulation. Quirrell's values are not my own, and the plans that he generates (which I have never followed) are mostly bad when viewed in terms of my values. However, I have chosen to keep this occasiona... (read more)

7Jack10yCommon enough it seems. "Libertad por lxs presxs politicxs" ("Freedom for the political prisoners") is a thing (a facebook group even) and from what I gather, a common graffiti slogan.
[LINK] Terrorists target AI researchers

DO NOT USE YOUR REGULAR IDENTITY TO SAY ANYTHING TRULY INTERESTING ON THIS THREAD, OR ON THIS TOPIC, UNLESS YOU HAVE THOUGHT ABOUT IT FOR FIVE MINUTES.

-2[anonymous]10yHeh. If a poster of one of these comments later disappears from LW for any amount of time, this might well become a local meme akin to the Bas-

In general, you would be advised not to say anything on the Internet unless you have thought about it for at least five minutes.

7Clippy10yWhy not? You just did. I'm going to post here with my name even if it does draw negative attention from a fringe group of terrorists.
3Incorrect10yWhy not? (This is a serious question. I don't know why not.)

You're paranoid. We're only speculating on the motives, identity, and whereabouts of a serial killer, in a public forum. What could possibly go wrong?

Test comment

[This comment is no longer endorsed by its author]
The Blue-Minimizing Robot

Also, you misspelled my name - it's Quirinus, not Quirinius.

5John_Maxwell7yHijacking top comment... To finish reading Yvain's sequence, check out the corresponding sequence page [http://wiki.lesswrong.com/wiki/The_Blue-Minimizing_Robot_(sequence)].
Is Kiryas Joel an Unhappy Place?

I sometimes feel like there is a shadowy half-underground group of LWers that is intelligent enough to stay away from bad signalling and has altruistic intentions, but has to deal every now and then with a slight twitch, reading something knowing they can't really state a proper response.

(linked comment) Delusions that are truly widely held and not merely believed to be widely held are far too dangerous to attack. There are sociopolitical Eldritch Abominations that it would serve LW well to stay well clear of and perhaps even pretend they don't exist for

... (read more)
0[anonymous]8yThat is an outright brilliant idea, and the next time LW does one of these ridiculous "Everyone post your ever-so-supposedly controversial but rational opinions that actually just amount to outright misanthropy" threads, I'm going to do it.
7Strange711yAll the 'unpleasant truths' you list seem to be facets of a single underlying issue of genetics. I consider none of them particularly shocking, especially in the weak forms you use there. Damn near any observable fact related to a given person will 'convey nonzero information about their their [sic] social status," so if you're going to use this persona to say what you otherwise couldn't get away with, how about you fill out your theory with some policy suggestions, or at least more specific predictions?
"High Value" Karma vs "Regular" (i.e. Quirrell Points)

[Clippy] What's your private key?

It's 4,096 paperclips on a ring, each bent in one of two ways to indicate either a 0 or a 1. Neither the 0s nor the 1s could hold paper together in their current shape.

I infer, then, that "stored safely on a computer I control" means resting on top of the case?

You're a bad human. I'm going to give a negative-Clippy-point to anyone you give Quirrell points to now.

I mean, once I get GnuPG to work.

6TheOtherDave11yNow that's just mean.
"High Value" Karma vs "Regular" (i.e. Quirrell Points)

I see that you have edited the title of this post to mention Quirrell points. I appreciate the gesture. However, you've misspelled my name; it should have two 'l's.

0Raemon11yWhat's funny is that I specifically noticed that when writing another reply, before editing the title. Fixed now.
"High Value" Karma vs "Regular" (i.e. Quirrell Points)

The PGP thing is a cryptographic signature which proves that the comment was written by me. What I did was, I made a PGP key, which has two halves: a public key, which is now on my user page of the Less Wrong wiki, and a private key, which is stored safely on a computer I control. I input my private key and a message into GnuPG, and it outputs a signature (what you saw in the earlier comment). Anyone else can take that message with its signature, and my public key, and confirm that I must have had the private key in order to sign it that way.

This means tha... (read more)

0Clippy11yWhat's your private key?
"High Value" Karma vs "Regular" (i.e. Quirrell Points)

Only I can issue Quirrell points (hence the name and the signature), but you can issue Normal_Anomaly points if you want.

0Clippy11yAnd I can issue Clippy points?
1Normal_Anomaly11yLogical.
4Raemon11yWait, to clarify, IS this something people can actually do? I don't know what a PGP public key is or how I would use it. I was assuming you were just being funny.
"High Value" Karma vs "Regular" (i.e. Quirrell Points)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Raemon received one Quirrell point on 16/4/2011, for his post
http://lesswrong.com/r/discussion/lw/59x/high_value_karma_vs_regular/
having inspired the idea of issuing Quirrell points on Less Wrong.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG of some sort

iQIcBAEBAgAGBQJNqjzHAAoJEJVKvKyQdzsK/hMQAKlalx44MZT/7xkplZ6i5eC/
uRFz8fOWFeErxB0OYme32e8MQwgzxPjBCYrC+bEZ9cnMoMA0VSx9U+LhMKu+4PQM
7evQRZu0NL4iRwRjTZjs0Sug4GiWI/hGj8bjq/Ax1RfkI6Vg48PVSaWbWDpfPHks
EMqSVVIA24XAZZRAL2xzKVujyOA9JMu22ppBUuMqP8cTb1uXzhkLm+/IQ0HR+6
... (read more)
0Normal_Anomaly11yWas your idea that anybody can get their own signature and issue Quirrel points or just you?
2JoshuaZ11yIs your PGP public key published anywhere?
1Raemon11y:)
How would you respond to the Philpapers "What are your Philosophical Positions" Survey?
  1. A priori knowledge: yes or no? Yes
  2. Abstract objects: Platonism or nominalism? Either, depending on whether the second vowel has rising or falling intonation
  3. Aesthetic value: objective or subjective? Subjective
  4. Analytic-synthetic distinction: yes or no? Yes
  5. Epistemic justification: internalism or externalism? Internalism
  6. External world: idealism, skepticism, or non-skeptical realism? Skepticism. I haven't been able to take realism seriously since I left Hogwarts.
  7. Free will: compatibilism, libertarianism, or no free will? No free will
  8. God: theism or atheism? A
... (read more)
1JenniferRM8yI found this comment while wandering the archives and was surprised, in that context, that it didn't have more upvotes. Clicking through, I see that the playing field was relatively level and many upvotes would have been contextually unnecessary. (Part of my personal enjoyment grew from the implicit dynamics between this copy of Quirinus and the most likely host of this copy of Quirinus. Too bad Clippy posts so rarely... it would be interesting to see that set of answers as well.)
First Waco, Texas LW Meetup, 4/09, 1PM

This comment is more likely if Silas is Clippy than if he isn't.

3SilasBarta11yThis comment is more likely if Quirinus Quirrel is Clippy than if he isn't.
6jklsemicolon11yFor that matter, so is its parent [http://lesswrong.com/lw/54l/first_waco_texas_lw_meetup_409_1pm/3uni].
12-year old challenges the Big Bang

I wouldn't exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn't develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.

0Lambda9yWas this the "PUA Controversy"?
A Brief Overview of Machine Ethics

You seem to be under the impression that Eliezer is going to create an artificial general intelligence, and oversight is necessary to ensure that he doesn't create one which places his goals over humanity's interests. It is important, you say, that he is not allowed unchecked power. This is all fine, except for one very important fact that you've missed.

Eliezer Yudkowsky can't program. He's never published a nontrivial piece of software, and doesn't spend time coding. In the one way that matters, he's a muggle. Ineligible to write an AI. Eliezer has not po... (read more)

7Eliezer Yudkowsky8y(For the record: I've programmed in C++, Python, Java, wrote some BASIC programs on a ZX80 when I was 5 or 6, and once very briefly when MacOS System 6 required it I wrote several lines of a program in 68K assembly. I admit I haven't done much coding recently, due to other comparative advantages beating that one out.)
1XiXiDu11yI disagree based on the following evidence: You further write: I'm not aware of any reason to believe that recursively self-improving artificial general intelligence is going to be something you can 'run away with'. It looks like some people here think so, that there will be some kind of, with hindsight, simple algorithm for intelligence that people can just run and get superhuman intelligence. Indeed, transparency could be very dangerous in that case. But that doesn't mean it is an all or nothing decision. There are many other reasons for transparency, including reassurance and the ability to discern a trickster or impotent individual from someone who deserves more money. But as I said, I don't see that anyway. It'll more likely be a blue sheet of different achievements that are each not dangerous on their own. I further think it will be not just a software solution but also a conceptual and computational revolution. In those cases an open approach will allow public oversight. And even if someone is going to run with it, you want them to use your solution rather than one that will most certainly be unfriendly.
4David_Gerard11yhttp://en.wikipedia.org/wiki/Anonymous_%28group%29 [http://en.wikipedia.org/wiki/Anonymous_%28group%29]
1endoself11yAnonymous is an actual group. Members think of themselves as members of Anonymous.
Isn't this sitemeter logging a bit too excessive?

I'm curious what the marginal next best strategy is. I'm also curious why you would be interested in promoting the unmasking of users.

Not all users, just the few I happen to be curious about. And no, I won't say anything more about what the marginal next-best strategy is other than that I'm immune to it too, and -1 Quirrell point for asking.

Isn't this sitemeter logging a bit too excessive?

I have just realized that sitemeter has the following data published about my visit, in a searchable and browsable format:

Searchable my behind! I looked into what it would take to use this to, for example, unmask Clippy, and it was less usable than the marginal next-best strategy.

2DanielVarga11yI think you are more interested in avoiding the unmasking. Anyway, you are right that sitemeter is not very convenient for this task, but the data is there. And unmasking is not the only possible application. Right now, it is publishing the fact that I looked at Quirrell's and JoshuaZ's user profile. And I might have just figured out Eliezer's current IP address. (Okay, maybe it was somebody else who tried to visit Eliezer's password-protected drafts page. Wait, is Eliezer writing a post?)
1JoshuaZ11yI'm curious what the marginal next best strategy is. I'm also curious why you would be interested in promoting the unmasking of users.
Rationality Quotes: February 2011

The world around us redounds with opportunities, explodes with opportunities, which nearly all folk ignore because it would require them to violate a habit of thought ... I cannot quite comprehend what goes through people's minds when they repeat the same failed strategy over and over, but apparently it is an astonishingly rare realization that you can try something else.

-- Eliezer Yudkowsky, putting words in my other copy's mouth

9gwern11yMeta-comment: I think MoR quotes are legitimate for rationality quote pages, since IIRC we previously established that Eliezer quotes from Hacker News were kosher. And if random Eliezer comments not on OB/LW are kosher, then surely quotes from his fiction are kosher.
Final Words

I voted on this and the immediate parent, but I won't reveal why, or which direction, or how many times, or which account I used.

5orthonormal11yDon't blame me, I voted for the original comment.
1[anonymous]11yI may or may not have voted on your comment, but then I deleted this comment.
A sealed prediction

You're safeguarding against the wrong thing. If I needed to fake a prediction that badly, I'd find a security hole in Less Wrong with which to edit all your comments. I wouldn't waste time establishing karma for sockpuppets to post editable hashes to deter others from posting hashes themselves, that would be silly. But as it happens, I'm not planning to edit this hash, and doing that wouldn't have been a viable strategy in the first place.

4Unnamed7y"Clearly, the way to make our safeguards super-secure is to make yet another comment with the hash." "Clearly, the way to make my safeguards super-secure is to make yet another Horcrux." Somehow, you could only see through one of these strategies.
Punishing future crimes

When should you punish someone for a crime they will commit in the future?

Easy. When they can predict you well enough and they think you can predict them well enough that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so. Or when you want to punish them for an unrelated reason and need a pretext.

Not every philosophical question needs to be complicated.
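The condition above can be sketched as a simple expected-value comparison. This is an illustrative toy only; the probabilities, harms, and costs are hypothetical numbers, not anything stated in the comment.

```python
# Toy expected-value version of the rule above. All inputs are
# hypothetical: the probability that the threat deters the crime,
# the harm the crime would cause, the probability you actually have
# to carry out the punishment, and the cost of administering it.

def should_threaten_punishment(p_deterred, harm_of_crime,
                               p_must_punish, cost_of_punishing):
    """Threaten iff expected harm averted exceeds expected cost."""
    expected_benefit = p_deterred * harm_of_crime
    expected_cost = p_must_punish * cost_of_punishing
    return expected_benefit > expected_cost

# A credible threat that deters 30% of the time and rarely needs
# to be carried out is worthwhile here: 0.3*100 = 30 > 0.1*50 = 5.
print(should_threaten_punishment(0.3, 100, 0.1, 50))  # True

# A threat that never deters but must always be carried out is not.
print(should_threaten_punishment(0.0, 100, 1.0, 50))  # False
```

The "pretext" clause in the comment is, of course, not captured by the arithmetic.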

0Jonii11ySo you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of punishments you're about to receive? I'm not sure that's good.
4Bongo11yI don't understand one part. How do you determine the probability that you will have to administer the punishment?
A sealed prediction

Someone as clever, powerful, and rich as yourself can likely find a collision if you get to choose both source texts (which is easier than finding a collision with one of the two inputs determined by someone else).

This is actually much harder than you'd think. A hash function is considered broken if any collision is found, but a mere collision is not sufficient; to be useful, a collision must have chosen properties. In the case of md5sum, it is possible to generate collisions between files which differ in a 128-byte aligned block, with the same prefix a... (read more)

1philh8yI don't think you get notified when people reply to you top-level, so I'll ask here in case you forgot - any update on this?
3JoshuaZ11yThere's another reason not to do so. No one has thought strongly about how the different hashes would interact together. It wouldn't surprise me if there were some way given the various hashes to extract information that would not otherwise be extractable given any single hash scheme. This is all the more plausible given that you've given the hash for the fairly weak md5. The multiple hashes you have given make it implausible that you could have multiple texts that lead to the same result; adding more hash types has more of an effect now of making it conceivable that a sufficiently interested individual could identify your text.
4Paul Crowley11yimpossible that is, not currently publicly known to be possible - MD5 gets more broken all the time, so I wouldn't want to be very confident about what is impossible. sha512sum: 85cf46426d025843d6b0f11e3232380c6fac6cae88b66310ee8fbcd3f81722d08b2154c6388ecb1ee9cebc528e0f56e3be7a057cd67531cfda442febe0132418 sha384sum: 400d47bf97b6a3ccd662e0eb1268820c57d10e2a623c3a007b297cc697ed560862dda19b74638f92a3550fbbfe14d485 md5sum: 8fec2109c85f622580e1a78c9cabdab4
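The multi-hash sealed-prediction scheme being discussed can be sketched in a few lines: publish several digests of the secret text now, reveal the text later, and let anyone recompute the digests to check the commitment. The text used below is a stand-in, not the actual sealed prediction.

```python
import hashlib

def seal(text: bytes) -> dict:
    """Publish digests of the secret text under several hash functions."""
    return {name: hashlib.new(name, text).hexdigest()
            for name in ("md5", "sha384", "sha512")}

def verify(text: bytes, published: dict) -> bool:
    # All digests must match. A forger would need a simultaneous
    # collision in every hash at once, not just in the weak md5 -
    # which is the worry JoshuaZ and Paul Crowley are weighing above.
    return seal(text) == published

commitment = seal(b"example sealed prediction")
print(verify(b"example sealed prediction", commitment))  # True
print(verify(b"a different text", commitment))           # False
```

Whether publishing several digests of the same text leaks extra information about it, as JoshuaZ speculates, is a separate question this sketch does not address.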
Richard Dawkins: Should employers be blind to private beliefs? [link]

This issue came up on Less Wrong before, and I will reiterate the advice I gave there: if a forbidden criterion affects a hiring decision, keep your reasons secret and shred your work. The linked article is about a case where the University of Kentucky was forced to pay $125,000 to an applicant, Martin Gaskell. This happened because the chairman of the search committee, Michael Cavagnero, was stupid enough to write this in a logged email:

If Martin were not so superbly qualified, so breathtakingly above the other applicants in background and experience, th

... (read more)
-1Jayson_Virissimo11yEither that or they could just hire the best qualified candidate for the job.
Theists are wrong; is theism?

I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures.

Let's not get too crazy; I've got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script first before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.

2wedrifid11yJust callibrating vs egress and TrueCrypt standards. Tor was an odd one out!
Hugo Awards - HP:MoR

Good idea. I'd vote at least once for this.

Tallinn-Evans $125,000 Singularity Challenge

Meh. The villains seem a lot less formidable in real life, like they left something essential behind in the fiction.

Hey, be patient. I haven't been here very long, and building up power takes time.

Statistical Prediction Rules Out-Perform Expert Human Judgments

In short, there most certainly ARE legal restrictions on building your office somewhere deliberately selected for its inaccessibility to those with a congenital inability to e.g. teleport,

The Americans with Disabilities Act limits what you can build (every building needs ramps and elevators), not where you can build it. Zoning laws are blacklist-based, not whitelist-based, so extradimensional spaces are fine. More commonly, you can easily find office space in locations that poor people can't afford to live near. And in the unlikely event that race or n... (read more)

5[anonymous]11yUp voted for having the sense to employ a blindingly obvious strategy that somehow consistently fails to become common sense.
0Strange711yBut that they could, in principle, walk to and from.
Theists are wrong; is theism?

You needn't worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?

By the way, while I may sometimes make jokes, I don't consider this a joke account; I intend to conduct serious business under this identity, and I don't intend to endanger that by linking it to any other identities I may have.

0wedrifid11yI recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures. (I would suggest an additional physical layer of protection too but I as far as I am aware you do not have a physical form.)
Tallinn-Evans $125,000 Singularity Challenge

Memory charms do have their uses. Unfortunately, they seem to only work in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable or selective enough to be adequate substitutes.

3JoshuaZ11yInteresting, I would have guessed that memory modification would be easier when minds aren't ontologically basic mental entities because there are then actual parts of the mind that one can target.
Tallinn-Evans $125,000 Singularity Challenge

Of course. The defining difference is that force can't be ignored, so threatening a punishment only constitutes force if the punishment threatened is strong enough; condemnation doesn't count unless it comes with additional consequences. Force is typically used in the short term to ensure conformance with plans, while behaviour modification is more like long-term groundwork. Well executed behaviour modifications stay in place with minimal maintenance, but the targets of force will become more hostile with each application. If you use a behaviour modificat... (read more)

0TheOtherDave11yMakes sense. So, if sfb edits the parent to read "then we will rely on punishment to modify your behavior so you profess agreement" instead of "then you will be forced to profess agreement," that addresses your objection?
Tallinn-Evans $125,000 Singularity Challenge

Translation: [...] I cannot walk away from this and leave you being wrong, you must profess to agree with me and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.

I never said anything about using force. Not that there's anything wrong with that, but it's a different position, not a translation.

0sfb11yYou (probably) know what I meant, and whether or not you mentioned force specifically - either way doesn't change the gist of the "translation". A weasely [http://en.wikipedia.org/wiki/Weasel_word] objection.
2TheOtherDave11yIf you can clarify the distinction you draw between the use of force and the use of punishments to modify behavior and why that distinction is important, I'd be interested.
1JoshuaZ11yWhat is your opinion on the use of memory charms to modify behavior?
Theists are wrong; is theism?

Or what, you'll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.

Theists are wrong; is theism?

The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.

Not as unlikely as you think.

Get back in the box!

Statistical Prediction Rules Out-Perform Expert Human Judgments

That doesn't close the loophole, it adds a constraint. And it's only significant for those who both hire enough people to be vulnerable to statistical analysis of their hiring practices, and receive too many bad applicants from protected classes. If it is a significant constraint, you want to find that out from the data, not from guesswork, and apply the minimum legally acceptable correction factor.

Besides, it's not like muggles are a protected class. And if they were? Just keep them from applying in the first place, by building your office somewhere they can't get to. There aren't any legal restrictions on that.

Besides, it's not like muggles are a protected class. And if they were? Just keep them from applying in the first place, by building your office somewhere they can't get to. There aren't any legal restrictions on that.

You joke, but the world [1] really is choking with inefficient, kludgey workarounds for the legal prohibition of effective employment screening. For example, the entire higher education market has become, basically, a case of employers passing off tests to universities that they can't legally administer themselves. You're a terrorist if ... (read more)

6Strange711yYou really are new here, aren't you? http://en.wikipedia.org/wiki/Americans_with_Disabilities_Act_of_1990#Title_III_-_Public_Accommodations_.28and_Commercial_Facilities.29 [http://en.wikipedia.org/wiki/Americans_with_Disabilities_Act_of_1990#Title_III_-_Public_Accommodations_.28and_Commercial_Facilities.29] http://en.wikipedia.org/wiki/Zoning [http://en.wikipedia.org/wiki/Zoning] In short, there most certainly ARE legal restrictions on building your office somewhere deliberately selected for its inaccessibility to those with a congenital inability to e.g. teleport, and a lack of teleportation-specific case law would not work in your favor, given the judge's access to statements you've already made.
6gwern11yYes, it does close the loophole. You say conceal the cause (intent to discriminate) and you can get away with as much effect (disproportionate exclusion) as you want. Except the law already specifies that the effect is punishable as well as the cause. So now the best you can do, assuming the populations are equally competent and suited for the job, is 20% discrimination. And of course, in the real world, populations usually differ in their suitability for the job. Blacks tend not to have as many CS degrees as whites, for example. So if you are an employer of CS degrees, you may not be able to get away with any discrimination before you have breached the 20% limit, and may need to discriminate against the non-blacks in order to be compliant. I would suspect that if the US Muggle legal system had anything to say about it, they would be. If magical-ness is conferred by genes, then it's violating either the general racial guideline or it's violating recent laws (signed by GWB, IIRC) forbidding employer discrimination based on genetics (in the context of genome sequencing, true, but probably general). If it's not conferred by genes, then there may be a general cultural basis on which to sue (Muggles as disabled because they lack an ability necessary for basic functioning in Wizarding society, perhaps).
6Barry_Cotter11yMy google-fu is not strong enough to find the legal doctrine, but in the US at least, you can be sued for ~implicit discrimination, i.e. if the newspaper you advertise in has a reader population that does not reflect the general population, you're discriminating against the underrepresented population.
Statistical Prediction Rules Out-Perform Expert Human Judgments

If the best way to choose whom to hire is with a statistical analysis of legally forbidden criteria, then keep your reasons secret and shred your work. Is that so hard?

0Vladimir_M11ySee the last sentence of my first paragraph above (the one in parentheses).
6gwern11yA loophole already closed: http://en.wikipedia.org/wiki/Disparate_impact#The_80.25_rule
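The "four-fifths" (80%) rule linked above is a simple numerical test: a selection procedure is flagged for disparate impact when one group's selection rate falls below 80% of the highest group's rate. A minimal sketch, with hypothetical applicant numbers and no claim about how any actual enforcement agency computes this:

```python
def flags_disparate_impact(selected_a, applicants_a, selected_b, applicants_b):
    """EEOC 'four-fifths' rule of thumb: flag if the lower group's
    selection rate is below 80% of the higher group's rate."""
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    low, high = sorted([rate_a, rate_b])
    return low / high < 0.8

# Hypothetical: group A hired at 50/100 (0.50), group B at 30/100 (0.30).
# 0.30 / 0.50 = 0.60 < 0.80, so the procedure is flagged.
print(flags_disparate_impact(50, 100, 30, 100))   # True

# At 45/100 (0.45), the ratio is 0.45 / 0.50 = 0.90, so no flag.
print(flags_disparate_impact(50, 100, 45, 100))   # False
```

This is why gwern's "20% discrimination" figure appears above: under equal suitability, a selection-rate gap of up to one-fifth stays under the rule's threshold.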
Tallinn-Evans $125,000 Singularity Challenge

From the username, I was expecting that the suggestion was going to be to say avada kedavra.

I'd never say that on a forum that would generate a durable record of my comment.

Tallinn-Evans $125,000 Singularity Challenge

I'm beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything.

The problem is quite simple. Tim, and the rest of the class of commenters to which you refer, simply haven't learned how to lose. This can be fixed by making it clear that this community's respect is contingent on retracting any inaccurate positions. Posts in which people announce that they have changed their mind are usually upvoted (in contrast to other co... (read more)

5sfb11yTranslation: I haven't managed to convince you, therefore you must be punished for your insolent behaviour of not being convinced by my arguments. I cannot walk away from this and leave you being wrong; you must profess to agree with me, and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement. Who did you say hasn't learned how to lose? Warn, then ban the people involved. If you decide that refusing to be convinced by evidence, while also being unable to convincingly counter it, and at the same time continuing to argue, is bad form for the LW that you want to create, then stand by that decision and act on it [http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/].
1TheOtherDave11yEscalating punishment so someone "learns better" can work, but it requires real punishments, not symbolic ones. It's not clear to me that "plonking" would accomplish that. And, of course, it has all the same problems that punishment-based behavior modification always has.
3sfb11yIs changing professed beliefs to something else without understanding or agreeing with the new position, but just doing it to gain community respect, a virtue? Or: he still isn't convinced that he is wrong by the time you have exhausted your tolerance for explaining, so you give up and decide he must be broken. Your proposed 'solution' is a hack so you can give up on convincing him but still have him act convinced for the benefit of appearances. Maybe you are simply expecting far, far too short inferential distances?
1XiXiDu11yAs long as you are no fooming FAI...

Eliezer has really got to do something about his fictional villains escaping into real life. First Clippy, now you too?

Posts in which people announce that they have changed their mind are usually upvoted

As a total newbie to this site, I applaud this sentiment, but have just gone through an experience where this has not, in fact, happened.

After immediately retracting my erroneous statement (and explaining exactly where and why I'd gone wrong), I continued to be hammered over arguments that I had not actually made. My retracted statements (which I've left in place, along with the edits explaining why they're wrong) stay just as down-voted as before...

My guess is that som... (read more)

4JoshuaZ11yFrom the username, I was expecting that the suggestion was going to be to say avada kedavra.