All of RobinZ's Comments + Replies

Heads-up: Meeting starts as normal in the courtyard, but there is an event tomorrow and the preparations might lead to disruptions around 5 p.m. Just for general reference: the backup location is the Luce Center on the third floor - same side of the building as the big spiral staircase, toward the right if you're standing at the top of the staircase facing the outside wall.

Belatedly: some more vivid examples of "hope":

... (read more)

I continue to endorse being selective in whom one spends time arguing with.

Is the long form also unclear? If so, could you elaborate on why it doesn't make sense?

I didn't propose that you should engage in detailed arguments with anyone - not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave.

Another example of a sufficiently-elaborate downvote explanation: "I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should." One sentence, long enough, no further argument required.

I retract my previous statement [] based on new evidence acquired.

I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."

Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?

I think I see what you're getting at. If I understand you rightly, what "heroic responsibility" is intended to affect is the behavior of people such as [trigger warning: child abuse, rape] Mike McQueary during the Penn State child sex abuse scandal, who stumbled upon Sandusky in the act, reported it to his superiors (and, possibly, the police), and failed to take further action when nothing significant came of it. [/trigger warning] McQueary followed the 'proper' procedure, but he should not have relied upon it being sufficient to do the job. He ... (read more)

All right, cool. I think that dissolves most of our disagreement.

I confess, it would make sense to me if Harry were unfamiliar with metaethics and his speech about "heroic responsibility" were an example of him reinventing the idea. If that is the case, it would explain why his presentation is as sloppy as it is.

No, I haven't answered my own question. In what way was Harry's monologue about consequentialist ethics superior to telling Hermione why McGonagall couldn't be counted upon?

...huh. I'm glad to have been of service, but that's not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally - "You keep using that word. I do not think it means what you think it means" is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying:

This is a waste of time. You keep claiming that "heroic responsibility" says this or "heroic responsibility" demands that, but you're fundamentally mistaken ab

... (read more)

If I believed you to be a virtue ethicist, I might say that you must be mindful of your audience when dispensing advice. If I believed you to be a deontologist, I might say that you should tailor your advice to the needs of the listener. Believing you to be a consequentialist, I will say that advice is only good if it produces better outcomes than the alternatives.

Of course, you know this. So why do you argue that Harry's speech about heroic responsibility is good advice?

It seems like you've already answered your own question!

You are analyzing "heroic responsibility" as a philosophical construct. I am analyzing it as [an ideological mantra]. Considering the story, there's no reason for Harry to have meant it as the former, given that it is entirely redundant with the pre-existing philosophical construct of consequentialism, and every reason for him to have meant it as the latter, given that it explains why he must act differently than Hermione proposes.

[Note: the phrase "an ideological mantra" appears here because I'm not sure what phrase should appear here. Let me know if what I mean requires elaboration.]

I think you might be over-analyzing the story, which is fine actually, as I'm enjoying doing the same. I have no evidence that Eliezer considered it so, but I think Harry was simply explaining consequentialism to Hermione without introducing it as a term. I'm unsure if it's connected in any obvious way, but to me the quoted conversation between Harry and Hermione is reminiscent of their other conversations about heroism generally. In that context, it's obviously a poor 'ideological mantra', as it was targeted towards Hermione. Given what I remember of the story, it worked pretty well for her.

s/work harder, not smarter/get more work done, not how to get more work done/

This advice doesn't tell people how to fix things, true, but that's not the point--it tells people how to get into the right mindset to fix things.

Why do you believe this to be true?

That's an interesting question. I'll try to answer it here.

This seems to imply that no matter what happens, you should hold yourself responsible in the end. If you take a randomly selected person, which of the following two cases do you think would be more likely to cause that person to think really hard about how to solve a problem?

1. They are told to solve the problem.
2. They are told that they must solve the problem, and if they fail for any reason, it's their fault.

Personally, I would find the second case far more pressing and far more likely to cause me to actually think, rather than just take the minimum number of steps required of me in order to fulfill the "role" of a problem-solver, and I suspect that this would be true of many other people here as well. Certainly I would imagine it's true of many effective altruists, for instance. It's possible I'm committing a typical mind fallacy here, but I don't think so.

On the other hand, you yourself have said that your attitude toward this whole thing is heavily driven by the fact that you have an anxiety disorder, and if that's the case, then I agree that blaming yourself is entirely the wrong way to go about doing things. That being said, the whole point of having something called "heroic responsibility" is to get people to actually put in some effort, as opposed to just playing the role of someone who's perceived as putting in effort. If you are able to do that without resorting to holding yourself responsible for the outcomes of situations, then by all means continue to do so. However, I would be hesitant to label advice intended to motivate and galvanize as "useless", especially when using evidence taken from a subset of all people (those with anxiety disorders) to support a general claim (that the notion of "heroic responsibility" is useless).

Neither Hermione nor Harry dispute that they have a responsibility to protect the victims of bullying. There may be people who would have denied that, but none of them are involved in the conversation. What they are arguing over is what their responsibility requires of them, not the existence of a responsibility. In other words, they are arguing over what to do.

Human beings are not perfect Bayesian calculators. When you present a human being with criteria for success, they do not proceed to optimize perfectly over the universe of all possible strategies. T... (read more)

That's the part I'm not getting. All Harry is saying is that you should consider yourself responsible for the actions you take, and that delegating that responsibility to someone else isn't a good idea. Delegating responsibility, however, is not the same as delegating tasks. Delegating a particular task to someone else might well be the correct action in some contexts, but you're not supposed to use that as an excuse to say, "Because I delegated the task of handling this situation to someone else, I am no longer responsible for the outcome of this situation."

This advice doesn't tell people how to fix things, true, but that's not the point--it tells people how to get into the right mindset to fix things. In other words, it's not object-level advice; it's meta-level advice, and obviously if you treat it as the former instead of the latter you're going to come to the conclusion that it sucks. Sometimes, to solve a problem, you have to work harder. Other times, you have to work smarter. Sometimes, you have to do both. "Heroic responsibility" isn't saying anything that contradicts that.

In the context of the conversation in HPMoR, I do not agree with either Hermione or Harry; both of them are overlooking a lot of things. But those are object-level considerations. Once you look at the bigger picture--the level on which Harry's advice about heroic responsibility actually applies--I don't think you'll find him saying anything that runs counter to what you're saying. If anything, I'd say he's actually agreeing with you!

Humans are not perfectly rational agents--far from it. System 1 often takes precedence over System 2. Sometimes, to get people going, you need to re-frame the situation in a way that makes both systems "get it". The virtue of "heroic responsibility", i.e. "no matter what happens, you should consider yourself responsible", seems like a good way to get that across.
Again, you're right about the advice being poor – in the way you mention – but I also think it's great advice if you consider its target: the idea that the consequences are irrelevant if you've done the 'right' thing. If you've done the 'right' thing but the consequences are still bad, then you should probably reconsider what you're doing. When aiming at this target, 'heroic responsibility' is just the additional responsibility of considering whether the 'right' thing to do is really right (i.e. will really work).

... And now that I'm thinking about this heroic responsibility idea again, I feel a little more strongly how it's a trap – it is. Nothing can save you from potential devastation at the loss of something or someone important to you. Simply shouldering responsibility for everything you care about won't actually help. It's definitely a practical necessity that groups of people carefully divide and delegate important responsibilities. But even that's not enough! Nothing's enough. So we can't and shouldn't be content with the responsibilities we're expected to meet.

I subscribe to the idea that virtue ethics is how humans should generally implement good (ha) consequentialist ethics. But we can't escape the fact that no amount of Virtue is a complete and perfect means of achieving all our desired ends! We're responsible for which virtues we hold as much as we are for learning and practicing them.

I downvoted RobinZ's comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread.

I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the -1 had "I disagreed with Eliezer Yudkowsky and he has rabid fans" orders of magnitude more likely than "I made a category error reading the fanfic and now we're talking past each other", and a few words from you could have reversed that ratio.

Thank you for your feedback. I usually ration my explicit disagreement with people on the internet [] but your replies prompt me to add "RobinZ" to the list of people worth actively engaging with.

I'm realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective, but telling me that I am responsible for x doesn't tell me that I am allowed to delegate x to someone else, and - especially in contexts like Harry's decision (and Swimmer's decision in the OP) - doesn't tell me whether "those nominally responsible can't do x" or "those nominally responsible don't know that they should do x". Harry's idea of heroic responsibility led him to conflate these states of affairs re: McGonagall, ... (read more)

Surprisingly, so is mine, yet we've arrived at entirely different philosophical conclusions. Perfectionistic, intelligent idealists with visceral aversions to injustice walk a fine line when it comes to managing anxiety and the potential for either burnout or helpless existential despair. To remain sane and effectively harness my passion and energy I had to learn a few critical lessons:

* Over-responsibility is not 'responsible'. It is right there next to 'completely negligent' inside the class 'irresponsible'.
* Trusting that if you do what the proximate social institution suggests you 'should' do then it will take care of problems is absurd. Those cursed with either weaker than normal hypocrisy [] skills or otherwise lacking the privilege to maintain a sheltered existence will quickly become distressed from constant disappointment.
* For all that the local social institutions fall drastically short of ideals - and even fall short of what we are supposed to pretend to believe of them - they are still what happens to be present in the universe that is, and so are a relevant source of power. Finding ways to get what you want (for yourself or others) by using the system is a highly useful skill.
* You do not (necessarily) need to fix the system in order to fix a problem that is important to you. You also don't (necessarily) need to subvert it.

'Hermione'-style 'responsibility' would be a recipe for insanity if I chose to keep it. I had to abandon it at about the same age she is in the story. It is based on premises that just don't hold in this universe. 'Responsibility' of the kind you can tell others they have is almost always fundamentally different in kind from the 'responsibility' word as used in 'heroic responsibility'. It's a difference that results in frequent accidental equivocation and accidental miscommunication across inferential distances [
I agree with all of this except the part where you say that heroic responsibility does not include this. As wedrifid noted in the grandparent of this comment, heroic responsibility means using the resources available in order to achieve the desired result. In the context of HPMoR, Harry is responding to this remark by Hermione: Again, as wedrifid noted above, this is step one and only step one. Taking that step alone, however, is not heroic responsibility. I agree that Harry's method of dealing with the situation was far from optimal; however, his general point I agree with completely. Here is his response: Notice that nowhere in this definition is the notion of running to an authority figure precluded! Harry himself didn't consider it because he's used to occupying the mindset that "adults are useless". But if we ignore what Harry actually did and just look at what he said, I'm not seeing anything here that disagrees with anything you said. Perhaps I'm missing something. If so, could you elaborate?

Full disclosure: I stopped reading HPMoR in the middle of Chapter 53. When I was researching my comment, I looked at the immediate context of the initial definition of "heroic responsibility" and reviewed Harry's rationality test of McGonagall in Chapter 6.

I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved. Based on McGonagall's characterization in the part of the story I read, barring some drastic idiot-balling since I quit, she's willing to take Harry seriously enough to act based on the i... (read more)

Your mention of anxiety (disorders) reminds me of Yvain's general point that lots of advice is really terrible for at least some people []. As I read HPMoR (and I've read all of it), a lot of the reason why Harry specifically distrusts the relevant authority figures is that they are routinely surprised by the various horrible events that happen and seem unwilling to accept responsibility for anything they don't already expect. McGonagall definitely improves on this point in the story, though. In the story, the advice Harry gives Hermione seems appropriate. Your example would be much better for anyone inclined to anxiety about satisfying arbitrary constraints (i.e. being responsible for arbitrary outcomes) – and probably for anyone, period, if for no other reason than that it's easier to edit an existing idea than to generate an entirely new one. @wedrifid is correct that your plan is better than Harry's in the story, but I think Harry's point – and it's one I agree with – is that even having a plan, and following it, doesn't absolve one – in one's own eyes, if no one else's – of coming up with a better plan, or improvising, or delegating some or all of the plan, if that's what's needed to stop kids from being bullied or an evil villain from destroying the world (or whatever). Another way to consider the conversation in the story is that Hermione initially represents virtue ethics; Harry counters with a rendition of consequentialist ethics.
Your three-step plan seems much more effective than Harry's shenanigans and also serves as an excellent example of heroic responsibility. Normal 'responsibility' in that situation is to do nothing, or at most take step one. Heroic responsibility doesn't mean doing it yourself through personal power and awesomeness. It means using whatever resources are available to cause the desired thing to occur (unless the cost of doing so is deemed too high relative to the benefit). Institutions, norms, and powerful people are valuable resources.

My referent for 'heroic responsibility' was HPMoR, in which Harry doesn't trust anyone to do a competent job - not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don't know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to be concerned about the continued good behavior of the bullies in question, and then tell her if... (read more)

Did we read the same story? Harry has lots of evidence that McGonagall isn't in fact trustworthy, and in large part it's because she doesn't fully accept heroic responsibility and is too willing to uncritically delegate responsibility to others. I also vaguely remember your point being addressed in HPMoR. I certainly wouldn't guess that Harry fails to understand that "there are no rational limits to heroic responsibility". It certainly matters for doing the most good as a creature that can't psychologically handle unlimited responsibility.
HPJEV isn't supposed to be a perfect executor of his own advice and statements. I would say that it's not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HPJEV would feel appropriately bad about his choices if he came to that realisation.

Depending on what you mean by "blame", I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. Under heroic responsibility, you don't have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.

Where do you get the idea of "requirements" from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? IMO no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the third to die when it's wounded so that sheep 4-10 don't get eaten because they would have been traveling more slowly.

It is a basic fact of utilitarianism that you can't score a perfect win. Even discounting the universe, which is legitimately out of your control, you will screw up sometimes as a point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have. Heroic responsibility is the emotional equivalent of this fact.

That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? T

Well, let's imagine a system which actually is -- and that might be a stretch -- intelligently designed.

Us? I'm a mechanical engineer. I haven't even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease - and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that arti... (read more)

Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient's hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean ... a doctor won't be perfectly reliable either, but like a professional scout who can say, "His college batting average is .400 because there aren't many good curveball pitchers in the league this year", a doctor can detect low-prior confounding factors a lot faster than a computer can.

Well, let's imagine a system which actually is -- and that might be a stretch -- intelligently designed. This means it doesn't say "I diagnose this patient with X". It says "Here is a list of conditions along with their probabilities". It also doesn't say "No diagnosis found" -- it says "Here's a list of conditions along with their probabilities; it's just that the top 20 conditions all have probabilities between 2% and 6%". It also says things like "The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C".

A doctor might ask it "What about disease Y?" and the expert system will answer "Its probability is such-and-such; it's not zero because of symptoms Q and P, but it's not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C." And there would probably be a button which says "Explain", and pressing it would show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around it and say things like "What happens if we change these coughs to hiccups?"

An intelligently designed expert system often does not replace the specialist -- it supports her, allows her to interact with it, ask questions, refine queries, etc. If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just tells her the answer.
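To make the shape of such a system concrete, here is a minimal sketch in Python. Every disease name, prior, and symptom likelihood below is invented purely for illustration (a real system would estimate them from clinical data), and the naive-Bayes independence assumption is a deliberate simplification:

```python
# Toy probability-reporting diagnostic aid. All numbers are made up.

PRIORS = {"flu": 0.05, "cold": 0.20, "diphtheria": 0.0001}

# P(symptom present | disease) -- illustrative values only.
LIKELIHOODS = {
    "flu":        {"fever": 0.9, "cough": 0.8, "hiccups": 0.05},
    "cold":       {"fever": 0.2, "cough": 0.7, "hiccups": 0.05},
    "diphtheria": {"fever": 0.8, "cough": 0.9, "hiccups": 0.05},
}

def diagnose(findings):
    """Return all diseases ranked by posterior probability.

    `findings` maps symptom -> True/False (observed present/absent);
    symptoms not mentioned are simply ignored. Naive Bayes: symptoms
    are treated as conditionally independent given the disease.
    """
    scores = {}
    for disease, prior in PRIORS.items():
        p = prior
        for symptom, present in findings.items():
            likelihood = LIKELIHOODS[disease].get(symptom, 0.5)
            p *= likelihood if present else (1.0 - likelihood)
        scores[disease] = p
    total = sum(scores.values()) or 1.0
    posterior = {d: s / total for d, s in scores.items()}
    # Never "no diagnosis found": always the full ranked list.
    return sorted(posterior.items(), key=lambda kv: -kv[1])

def explain(findings, disease):
    """The 'what about disease Y?' query: per-symptom likelihood
    ratios showing which findings raise or lower its probability
    (>1 supports the disease, <1 counts against it)."""
    report = {}
    for symptom, present in findings.items():
        num = LIKELIHOODS[disease].get(symptom, 0.5)
        others = [LIKELIHOODS[d].get(symptom, 0.5)
                  for d in PRIORS if d != disease]
        denom = sum(others) / len(others)
        lr = (num / denom) if present else ((1 - num) / (1 - denom))
        report[symptom] = lr
    return report
```

The "what happens if we change these coughs to hiccups?" interaction is then just re-running `diagnose` on an edited findings dictionary and comparing the two rankings.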

Even assuming that the machine would not be modified to give treatment recommendations, that wouldn't change the effect I'm concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they'll stop remembering how to diagnose disease and instead remember how to use the machine. It's called "transactive memory".

I'm not arguing against a machine with a button on it that says, "Search for conditions matching recorded symptoms". I'm not arguing against a machine that has automated alerts about ... (read more)

You are using the wrong yardstick. Ain't no thing is perfectly reliable. What matters is whether an automated system will be more reliable than the alternative -- human doctors. Commercial aviation has a pretty good safety record while relying on autopilots. Are you quite sure that without the autopilot the safety record would be better? And why do you think a doctor will do better in this case?

Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.

Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many 'pilots' don't know how to fly a plane. A system which automates almost all diagnoses would do that.

I am not saying this narrow AI should be given direct control of IV drips :-/ I am saying that a doctor, when looking at a patient's chart, should be able to see what the expert system considers to be the most likely diagnoses and then the doctor can accept one, or ignore them all, or order more tests, or do whatever she wants. No, I don't think so because even if you rely on an automated diagnosis you still have to treat the patient.

True story: when I first heard the phrase 'heroic responsibility', it took me about five seconds and the question, "On TV Tropes, what definition fits this title?" to generate every detail of EY's definition save one. That detail was that this was supposed to be a good idea. As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs. And, as you point out, that's a recipe for everyone gettin... (read more)

This would only be true if the hero had infinite resources and were actually able to redo everyone's work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn't insist on farming her own wheat for her bread (like she would if she didn't trust the supply chain), not because she doesn't have a (heroic) responsibility to make sure she stays alive to help patients, but because that very responsibility means she shouldn't waste her time and effort on unfounded paranoia to the detriment of everyone. The main thing about heroic responsibility is that you don't say "you should have gotten it right". Instead you can only say "I was wrong to trust you this much": it's your failure, and whether it's a failure of the person you trusted really doesn't matter for the ethics of the thing.
Medical expert systems are getting pretty good, I don't see why you wouldn't just jump straight to an auto-updated list of most likely diagnoses (generated by a narrow AI) given the current list of symptoms and test results.

Completed the survey, less the annoying question that required using an annoying scanner that makes annoying noises (I am feeling annoyed). Almost skipped it, but realized that the attitudes of ex-website-regulars might be of interest.

Also, I don't know if "Typical mind and gender identity" is the blog post that you stumbled across, but I am very glad to have read it, and especially to have read many of the comments. I think I had run into related ideas before (thank you, Internet subcultures!), but that made the idea that gender identity has a strength as well as a direction much clearer.

A combination of that post and What universal human experiences are you missing without realizing it? [] actually. I would say that I am strongly typed as male, strong enough that occasionally I've been known to get annoyed at my body not being male enough. (Larger muscle groups, more body hair, darker beard, etc.) Probably influencing this are the facts that Skyler is the feminine form of my name, and that puberty was downright cruel to me. As you say, it's not common to think of being strongly or weakly identified with your own sex, rather than just a binary "fits/doesn't fit" check.

Hence the substitution. :)

I'm afraid I haven't been active online recently, but if you live in an area with a regular in-person meetup, those can be seriously awesome. :)

Meatspace meetups sound like a good deal of fun, and possibly a faster route to being part of the community than commenting on articles that I think I have something to add. Downside is, I'm currently in Rochester New York, and unless I'm misusing the meetups page somehow, looks like the closest regular meetup is in Albany. That's a long bike ride. :) If anybody is in Rochester, by all means let me know!

Jiro didn't say appeal to you. Besides, substitute "blog host" for "government" and I think it becomes a bit clearer: both are much easier ways to deal with the problem of someone who persistently disagrees with you than talking to them. Obviously that doesn't make "don't argue with idiots" wrong, but given how much power trivial inconveniences have to shape your behavior, I think an admonition to hold the proposed heuristic to a higher standard of evidence is appropriate.

Speaking for myself, I've got a fair bit of sympathy for the concept with that substitution and a fair bit of antipathy without it. It's a lot easier to find a blog you like and that likes you than to find a government with the same qualities.

Hmm ... that, together with shminux's xkcd link, gives me an idea for a test protocol: instead of having the judges interrogate subjects, the judges give each pair of subjects a discussion topic a la Omegle's "spy" mode:

Spy mode gives you and a stranger a random question to discuss. The question is submitted by a third stranger who can watch the conversation, but can't join in.

...and the subjects have a set period of time they are permitted to talk about it. At the end of that time, the judge rates the interesting-ness of each subject's contribution... (read more)

Were I using that test case, I would be prepared with statements like "A fluid ounce is just under 30 cubic centimeters" and "A yardstick is three feet long, and each foot is twelve inches" if necessary. Likewise "A liter is slightly more than one quarter of a gallon".
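(For what it's worth, all three of those statements check out against the standard US customary definitions; a quick sanity check:)

```python
# Verifying the conversion statements above, using the exact
# statutory US customary definitions.
ML_PER_FL_OZ = 29.5735295625   # 1 US fluid ounce in milliliters
L_PER_GAL = 3.785411784        # 1 US gallon in liters

# "A fluid ounce is just under 30 cubic centimeters" (1 mL = 1 cc)
assert 29 < ML_PER_FL_OZ < 30

# "A yardstick is three feet long, and each foot is twelve inches"
assert 3 * 12 == 36  # inches in a yard

# "A liter is slightly more than one quarter of a gallon"
quarter_gallon_in_liters = L_PER_GAL / 4   # i.e. one US quart
assert quarter_gallon_in_liters < 1.0 < quarter_gallon_in_liters * 1.06
```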

But Stuart_Armstrong was right - it's much too complicated an example.

Honestly, when I read the original essay, I didn't see it as being intended as a test at all - more as an honorable and informative intuition pump or thought experiment.

In other words, agreed.

Your test seems overly complicated; what about simple estimates? Like "how long would it take to fly from Paris, France, to Paris, USA" or similar? Add in some Fermi estimates, get them to show your work, etc...

That is much better - I wasn't thinking very carefully when I invented my question.

If the human subject is properly motivated to want to appear human, they'd relax and follow the instructions. Indignation is another arena in which non-comprehending programs can hide their lack of comprehension.

I realize this, but as someone who want... (read more)

The manner in which they fail or succeed is relevant. When I ran Stuart_Armstrong's sentence on this Web version of ELIZA, for example, it failed by immediately replying:

Perhaps you would like to be human, simply do nothing for 4 minutes, then re-type this sentence you've just written here, skipping one word out of 2?

That said, I agree that passing the test is not much of a feat.

Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel - this is MichaelBishop's summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel's essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.

Similar to your lazy suggestion, challenging the subject to a novel (probably abstract-strategy) game seems like a possibly-fruitful approach.

On a similar note: Zendo-variations. I played a bit on a webcomic forum using natural numbers as koans, for example; this would be easy to execute over a chat interface, and a good test of both recall and problem-solving.

Nope; general game-playing is a well-studied area of AI; the AIs aren't great at it, but if you aren't playing them for a long time they can certainly pass as a bad human. Zendo-like "analogy-finding" has also been studied. By demanding only very structured action types, instead of a more free-flowing, natural-language-based interaction, you are handicapping yourself as a judge immensely.
Maybe just do some roleplaying, with the judge as the DM.

Very nice! I love this kind of mathematical detective-story - I'm reminded of Nate Silver's consideration of the polling firm Strategic Vision here and here - but this is far, far more blatant.

Speaking of the original Turing Test: the Wikipedia page has an interesting discussion of the tests proposed in Turing's original paper. One possible reading of that paper yields another variation on the test: play Turing's male-female imitation game, but with the female player replaced by a computer. (If this were the proposed test, I believe many human players would want a bit of advance notice to research makeup techniques, of course.) (Also, I'd want to have 'all' four conditions represented: male & female human players, male human & computer, computer & female human, and computer & computer.)

[EDIT: Jan_Rzymkowski's complaint about 6 applies to a great extent to this as well - this approach tests aspects of intelligence which are human-specific more than not, and that's not really a desirable trait.]

Suggestion: ask questions which are easy to execute for persons with evolved physical-world intuitions, but hard[er] to calculate otherwise. For example:

Suppose I have a yardstick which was blank on one side and marked in inches on the other. First, I take an unopened 12-oz beverage can and lay it lengthwise on one end of the yardstick so that hal

... (read more)
Familiarity with imperial units is hardly something I would call an evolved physical-world intuition...

It is a neat toy, and I'm glad you posted the link to it.

The reason I got so mad is that Warren Huelsnitz's attempts to draw inferences from these - even weak, probabilistic, Bayesian inferences - were appallingly ignorant for someone who claims to be a high-energy physicist. What he was doing would be like my dad, in the story from his blog post, trying to prove that gravity was created by electromagnetic forces because Roger Blandford alluded to an electromagnetic case in a conversation about gravity waves. My dad knew that wasn't a true lesson to learn f... (read more)

If my research is correct:

"Casus ubique valet; semper tibi pendeat hamus:
     Quo minime credas gurgite, piscis erit."

("Chance is powerful everywhere; let your hook always be cast: in the stream where you least expect it, there will be a fish.")

Ovid's Ars Amatoria, Book III, Lines 425-426.

I copied the text from Tuft's "Perseus" archive.

Coincidentally, I was actually heading out to meet my dad (a physics Ph.D.), and I mentioned the paper and blog post to him to get his reaction. He asked me to send him a link, but he also pointed me at Feynman's lecture on electrostatic analogs, which is based on one of those simple ideas that invites bullet-swallowing: The same equations have the same solutions.

This is one of those ideas that I get irrationally excited about, honestly. The first thing I thought of when you described these hydrodynamic experiments was the use of similitude in experimental... (read more)

Why draw strong conclusions? Let papers be published and conferences held. It's a neat toy to look at, though.


I have to go, but downvote this comment if I don't reply again in the next five hours. I'll be back.

Edit: Function completed; withdrawing comment.

[This comment is no longer endorsed by its author]

I don't think I understand the relevance of your example, but I agree on the bullet-swallowing point, especially as I am an inveterate bullet-dodger.

(That said, the experiments sound awesome! Any particular place you'd recommend to start reading?)

There don't seem to be many popularizations. This looks fun and as far as I can tell is neither lying nor bullshitting us. This is an actual published paper, for those with the maths to really check.

I think I should phrase this properly by dropping into the language of the Lord High Prophet of Bayes, E.T. Jaynes: it is often optimal to believe in some model with some probability based on the fixed, finite quantity of evidence we have available on which to condition, but this is suboptimal compared to something like Solomonoff Induction that can dovetail over all possible theories. We are allocating probability based on fixed evidence to a fixed set of hypotheses (those we understand well enough to evaluate them). For instance, given all available evidence, if you haven't heard of sub-quantum physics even at the mind-porn level, believing quantum physics to be the real physics is completely rational, except in one respect. I don't understand algorithmic information theory well enough to quantify how much probability should be allocated to "sub-Solomonoff loss" - to the possibility that we have failed to consider some explanation superior to the ones we have, despite our current best explanations adequately soaking up the available evidence as narrowed, built-up probability mass - but plainly some probability should be allocated there.

Why, particularly in the case of quantum physics? Because we've known damn well for decades that it's an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period. Reality does not contradict itself: the river of evidence flowing into General Relativity and the river of evidence flowing into quantum mechanics cannot collide and run against each other unless we idiot humans have approximated two different perspectives (cosmic scale and mic

Having come from there, I can report that the general perception is that LW-ers are not idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that's largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.

There's also a tendency to be doctrinaire among LW-ers that people may be reacting to - an obvious manifestation of this is our use of local jargon and reverential capitalizati... (read more)

Yes, very definitely so. The other thing that makes LW seem... a little bit silly sometimes is the degree of bullet swallowing in the LW canon. For instance, just today I spent a short while on the internet reading some good old-fashioned "mind porn" in the form of Yves Couder's experiments with hydrodynamics that replicate many aspects of quantum mechanics. This is really developing into quite a nice little subfield; direct physical experiments can be and are done, and it has everything you could want as a reductive explanation of quantum mechanics. Plus, it's actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones. But if you swallowed your bullet, you'll never discover it yourself. In fact, if you swallow bullets in general, I find it kind of difficult to imagine how you could function as a researcher, given that a large component of research consists of inventing new models to absorb probability mass that currently has nowhere better to go than a known-wrong model.

A good second stage is to look for techniques that were publicized and not used, and see why some techniques gained currency while others did not.

I see what you're getting at, although praying is a bad example - most people pray because their parents and community prayed, and we're looking at ways to lead people away from what their parents and community had done. The Protestant Reformation might be a better case study, or the rise of Biblical literalism, or the abandonment of the prohibition on Christians lending money at interest.

You post a link to "Disputing Definitions" as if there is no such thing as a wrong definition. In this case, the first speaker's definition of "decision" is wrong - it does not accurately distinguish between vanadium and palladium - and the second speaker is pointing this out.

I would also like to note that I have learned a number of interesting things by (a) spending an hour researching idiotic claims and (b) reading carefully thought out refutations of idiocy - like how they're called "federal forts" because the statutes of the states in which they were built include explicitly ceding the land upon which they were built to the federal government.

Load More