All of Lightwave's Comments + Replies

Covid 10/7: Steady as She Goes

if I was sufficiently sick I would want it.

The problem is that in the first few days of an infection you won't know whether you'll be sick enough to be hospitalised on, say, day 10. We don't really know who will deteriorate on day 7 and beyond, and who will recover. And if you take the drug later, it won't be as effective.

I suspect this drug will only be used for high risk groups, due to risk of side-effects and the high price (US is paying $700 per course).

Is antibody testing to assess effects of vaccines on you a good idea?

I'm a bit surprised by this. The entire logic behind booster doses for the immunocompromised (which several countries are already doing, e.g. the US/UK) is based on a lack of (or low) immune response to the vaccine, which can be confirmed by an antibody test (measuring antibodies to the spike protein). There are studies with large numbers of people where you can see the median levels of antibodies in healthy adults and compare results (e.g. here and here). These tests are quantitative and give you a number, not just a positive or negative result; some people c...

Covid 6/3: No News is Good News

2.5 times deadlier than existing strains

Where does this number come from?

Covid 1/28: Muddling Through

I'm confused by this graph: if ~30% of critically ill patients are 60+ and ~20% are 40-55, that adds up to 50%. What are the other 50%? Only the 55-60 and 0-39 age groups remain, but surely they can't be 50% of critically ill patients (the 55-60 group is too small, and the 0-39s should have something like 2 orders of magnitude fewer critically ill people than the 60+ group)?

2Randaly1yThe source article is here []. The numbers are not how much of the total the subgroups make up, they are how quickly each subgroup is growing.
Beta - First Impressions

The site is lacking breadcrumbs so it's hard to orient oneself. It's hard to follow what section of the website you're in as you dig deeper into the content. Any plans to add breadcrumbs (or some alternative)?

4habryka4yYep, I have some plans for this. The current mainline plan (which might still change) is to change the color and title of the navbar when on a user page, or when you are on a collection or when you are in the Meta section of the page. And then generally improve the degree to which the content above the fold clearly tells you where you are.
Marginal Revolution Thoughts on Black Lives Matter Movement

A black is also more likely to commit a violent crime than a white person.

Isn't it more relevant whether a black person is more likely to commit a violent crime against a police officer (during a search, etc.)? After all, the argument is that the police are responding to some perceived threat. The typical, mostly black-on-black, violent crime isn't the most relevant statistic to use here. Where are the statistics about how blacks respond to the police?

Why GiveWell can't recommend MIRI or anything like it

Funny you should mention that...

AI risk is one of the 2 main focus areas this year for the Open Philanthropy Project, which GiveWell is part of. You can read Holden Karnofsky's Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.

They consider AI risk to rank high enough on importance, neglectedness, and tractability (their 3 main criteria for choosing what to focus on) to be worth prioritizing.

Also like: here is a 4000-word evaluation of MIRI by OpenPhil. ???

Crony Beliefs

things that I am very confident are false

Could you give any example?

1entirelyuseless5yOf things that I am very confident are false which are believed by communities? Basically things like "the other communities have very little grasp on reality," when in fact they all share a large core of things in common. But when the other community describes that core in different words, they say that the words are meaningless or ignorant or false, even though in fact they are all talking about the same thing and are roughly in agreement about it. For example, when Eliezer talks about "how an algorithm feels from the inside," he is basically talking about the same thing that Thomas Nagel is talking about when he talks about things like "what it is like to be a bat." But I suspect that Eliezer would be offended by the comparison, despite its accuracy. Likewise, Eliezer's identification of AIs with their program is basically the same kind of thing as identifying a human being with an immaterial soul -- both are wrong, and in basically the same way and for the same reasons, but there is something right that both are getting at. Again, I am quite sure Eliezer would feel offended by this comparison, despite its accuracy. The same thing is true of TDT -- it is basically in agreement with a form of virtue theory or deontological ethics. But since Eliezer came to his conclusion via utilitarianism, he thinks he is right and the others are wrong. In reality they are both right, but the other people were right first. Of course this happens a bit differently with communities than it does with individuals and individual claims. I used individuals in these examples because the situation is clearer there, but there is an analogous situation with communities. This might be a selective effect -- a community preserves its own existence by emphasizing its difference with others. Consider how diverse languages develop.
Naturally there would just be a continuum of languages, with the people in the middle speaking something intermediate between the people on the two ends. But lar
3scarcegreengrass5yOh, this is much more complete, thanks. Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x risk.
Open thread, Sep. 26 - Oct. 02, 2016

He's mentioned it on his podcast. It won't be out for another 1.5-2 years I think.

Also Sam Harris recently did a TED talk on AI, it's now up.

Open thread, Sep. 26 - Oct. 02, 2016

He's writing an AI book together with Eliezer, so I assume he's on board with it.

2ThoughtSpeed5yIs that for real or are you kidding? Can you link to it?
Turning the Technical Crank

Can't we just add a new 'link' post type to the current LW? Links and local posts would both have comment threads (here on LW), the only difference is the title of the linked post would link to an outside website/resource.

Open Thread April 4 - April 10, 2016

Should we try to promote the most valuable/important (maybe older?) Less Wrong content on the front page? Currently the front page features a bunch of links and featured articles that don't seem to be organized in any systematic way. Maybe Less Wrong would be more attractive/useful to new people if they could access the best the site has to offer directly from the front page (or at least more of it, and in a systematic way)?

0Douglas_Knight6y"Featured Articles" is such a "best of," but that isn't obvious. But the other list is clearly marked "recent."
Lesswrong Potential Changes

Target: a good post every day for a year.

Why specifically 1/day? It seems a bit too much. Why not e.g. ~3/week?

0Elo6yI suppose that could work too. I would like to aim higher (towards 1/day) but maybe 3/week is more realistic.
Consciousness and Sleep

Your sensory system is still running

There are brain subsystems that are still running, but they are not necessarily ones "you" identify with. If you replaced the parts/networks of the brain that control your heart and lungs (through some molecular nanotechnology), would "you" still be you? My intuition says yes. The fact that "something is running" doesn't mean that something is you.

I know the computer metaphor doesn't work well for the brain, but imagine the system in the brain that wakes you up when you hear some sound c...

0PhilGoetz6yIn this particular case, no. Not with the page table attack. What would help would be encrypting the mapping from virtual memory to physical memory--but that would GREATLY slow down execution speed. I don't think the "homomorphic encryption" idea works as advertised in that post--being able to execute arithmetic operations on encrypted data doesn't enable you to execute the operations that are encoded within that encrypted data.
(Rational) website design and cognitive aesthetics generally- why no uptake?

empirical literature on what makes websites effective (which we've done a lot of now)

Can you share some of your sources?

In Defense of the Fundamental Attribution Error

It only takes a small extension of the logic to show that the Just World Hypothesis is a useful heuristic.

I don't see it, how is it useful?

0Salemicus7yTo expand on what OrphanWilde wrote: The Just World Hypothesis can be summarised as "you reap what you sow []." If you wish to argue that you don't "deserve" to reap what you sow (perhaps because you didn't have access to better seeds), or that it's not "just" to reap what you sow (because everyone should reap in rough equality, regardless of how they sowed), or similar, that's fine, but you aren't arguing against the Just World Hypothesis. So when we see the fruit, the Just World Hypothesis tells us: that's probably how the person sowed the seeds. And yes, there is noise, which is why it's a heuristic, not an infallible rule. But the whole reason to sow the seeds in the first place was to cause them to bear fruit. "Ye shall know them by their fruits. Do men gather grapes of thorns, or figs of thistles?" In other words, Coherent Extrapolated Volition. So to take an example from the original post - smoking. If I meet someone with lung cancer, the overwhelming likelihood is that they are responsible for their own problem, through smoking. But if I smoke and then I get lung cancer, I'll want to make excuses for myself, and will stubbornly refuse to make the connection between my own culpable past behaviour (the sowing) and my present misfortune (the reaping). People who complain about the Just World Hypothesis want me to extend this non-judgemental behaviour to everyone else. But just as with the Fundamental Attribution Error, the problem is not that I am being too harsh on other people, but that I am being too easy on myself. I am right to draw the connection between behaviour and outcomes for everyone else, and I should do the same for myself.
4OrphanWilde7yThe Just World Hypothesis holds that people get what they deserve. Because bad things aren't purely random. The person on the motorcycle with the helmet, versus the person on the motorcycle without, are not courting tragedy equally; one of them is doing a little bit to "earn" their tragedy. Likewise, Tit-for-Tat means evil people tend to be the recipients of evil in turn. I think the "Just World Hypothesis", as typically described, is largely incorrect in its use of the concept of deserving, versus the concept of having some responsibility for - but I also think most people who follow a variant of the JWH use the non-moralizing "responsibility" version, and it is largely (but not exclusively) those who oppose the Just World Hypothesis who insert moralizing, to make it seem more reprehensible. Regardless of whether they wear a helmet or not, motorcyclists don't deserve to get hit; rather, whether or not they wear a helmet determines part of their responsibility for what happens when they do. Those who believe in the Just World Hypothesis tend to analyze their behavior after something bad happens to them, and hold something they've done partially responsible, and try to correct their behavior in the future - and do the same thing to other people who have something bad happen to them. Those who oppose the hypothesis sometimes refer to this tendency as "victim blaming". Personally, I call it "willingness to accept and learn from mistakes". But then, I tend to upset the sorts of people who use phrases like "victim blaming". ETA: Retracted, because I failed to actually answer the question, and Salemicus did.
Meetup : London - Index Funds and Other Fun Stuff

Hey, is there a write-up of the UK-specific stuff for people who weren't able to attend?

1sixes_and_sevens7yHere is a one-page document [] giving a very rough overview for people in the UK. One of the London regulars has anti-recommended M&G, since their application for an M&G index tracker was a confusing and uncommunicative process. Two of us already have index trackers, with Fidelity and Legal & General respectively, and are satisfied with their service.
Confused as to usefulness of 'consciousness' as a concept

Sleep might be a Lovecraftian horror.

Going even further, some philosophers suggest that consciousness isn't even continuous, e.g. as you refocus your attention, as you blink, there are gaps that we don't notice. Just like how there are gaps in your vision when you move your eyes from one place to another, but to you it appears as a continuous experience.

6RichardKennaway8yConsciousness is complex. It is a structured thing, not an indivisible atom. It is changeable, not fixed. It has parts and degrees and shifting, uncertain edges. This worries some people.
This is why we can't have social science

The error rate in replication experiments in the natural sciences is expected to be much, much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition/medicine - it's taking us decades to figure out whether some substance/food is good or bad for you and under what circumstances. Why would you expect it to be easier to analyze human psychology and behavior?

3ChristianKl8yIf you want to know whether food is good or bad you have to look at mortality which means you might have to wait a decade. A lot of psychology experiments claim effects over much shorter timeframes.

Is Johnny Depp playing Ben Goertzel? :P

What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality?

to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency

As humans we can't change/modify ourselves too much anyway, but what if we're able to in the future? If you can pick and choose your values? It seems to me that, for such an entity, not valuing consistency is like not valuing logic. And then there's the argument that it leaves you open to Dutch-booking / blackmail.

0TheOtherDave8yYes, inconsistency leaves me open for dutch booking, which perfect consistency would not. Eliminating that susceptibility is not high on my list of self-improvements to work on, but I agree that it's a failing. Also, perceived inconsistency runs the risk of making me seen as unreliable, which has social costs. That said, being seen as reliable appears to be a fairly viable Schelling point among my various perspectives (as you say, the range is pretty small, globally speaking), so it's not too much of a problem. In a hypothetical future where the technology exists to radically alter my values relatively easily, I probably would not care nearly so much about flexibility of viewpoint as an intrinsic skill, much in the same way that electronic calculators made the ability to do logarithms in my head relatively valueless.
What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality?

Well, whether it's a "real" change may be beside the point if you put it this way. Our situation and our knowledge are also changing, and maybe our behavior should also change. If personal identity and/or consciousness are not fundamental, how should we value those in a world where any mind-configurations can be created and copied at will?

0lmm8ySo there's a view that a rational entity should never change its values. If we accept that, then any entity with different values from present-me seems to be in some sense not a "natural successor" of present-me, even if it remembers being me and shares all my values. There seems to be a qualitative distinction between an entity like that and upload-me, even if there are several branching upload-mes that have undergone various experiences and would no doubt have different views on concrete issues than present-me. But that's just an intuition, and I don't know whether it can be made rigorous.
What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality?

we value what we value, we don't value what we don't value, what more is there to say?

I'm confused about what you mean by this. If there weren't anything more to say, then nobody would/should ever change what they value? But people's values change over time, and that's a good thing. For example, in medieval/ancient times people didn't value animals' lives and well-being (as much) as we do today. If a medieval person tells you "well, we value what we value, I don't value animals, what more is there to say?", would you agree with him and let him go on t...

1lmm8yIs that an actual change in values? Or is it merely a change of facts - much greater availability of entertainment, much less death and cruelty in the world, and the knowledge that humans and animals are much more similar than it would have seemed to the medieval worldview?
0TheOtherDave8yFair enough. Agreed that if someone expresses (either through speech or action) values that are opposed to mine, I might try to get them to accept my values and reject their own. And, sure, having set out to do that, there's a lot more to be relevantly said about the mechanics of how we hold values, and how we give them up, and how they can be altered. And you're right, if our values are inconsistent (which they often are), we can be in this kind of relationship with ourselves... that is, if I can factor my values along two opposed vectors A and B, I might well try to get myself to accept A and reject B (or vice-versa, or both at once). Of course, we're not obligated to do this by any means, but internal consistency is a common thing that people value, so it's not surprising that we want to do it. So, sure... if what's going on here is that byrnema has inconsistent values which can be factored along a "privilege my own identity"/"don't privilege my own identity" axis, and they net-value consistency, then it makes sense for them to attempt to self-modify so that one of those vectors is suppressed. With respect to my statement being confusing... I think you understood it perfectly, you were just disagreeing -- and, as I say, you might well be correct about byrnema. Speaking personally, I seem to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency. "Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes." Of course, I do certainly have both values, and (unsurprisingly) the parts of my mind that align with the latter value seem to believe that I ought to be more consistent about this, while the parts of my mind that align with the former don't seem to have a problem with it. I find I prefer being the parts of my mind that align with the former; we get along better.
More "Stupid" Questions

By the same logic, eating your favorite food because it tastes good is also wireheading.

0drethelin8yWell no because you have to eat SOMETHING. You could just not listen to music.
Instrumental rationality/self help resources

Better, instrumentally, to learn to handle the truth.

It really depends on your goals/goal system. I think the wiki definition is supposed to encompass possible non-human minds that may have some uncommon goals/drives, like a wireheaded clippy that produces virtual paperclips and doesn't care whether they are in the real or virtual world, so it doesn't want/need to distinguish between them.

0RichardKennaway9yI really do not care about hypothetical entities that have the goal of being ignorant, especially constructions like wireheaded clippies. It's generally agreed here that wireheading is a failure mode. So is the valorisation of ignorance by romanticism [].
"Stupid" questions thread

You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.

Open Thread, June 16-30, 2013

I looked through some of them; there's a lot of theory and discussion, but I'm really just interested in a basic step-by-step guide on what to do.

From Meditation, insight, and rationality (Part 2 of 3):

Basic method: Sit down in a place where there are few distractions, and pick an object to focus one's attention on. The most popular objects are the feeling of breath at the tip of the nostrils / upper lip, and the motion of the abdomen as one breathes in and out. (In this description I'll assume you're using the latter.) Begin by trying to clearly perceive the feeling of the abdomen expanding and contracting; when it expands and you perceive it clearly, attach the label 'in' to that perception, and

...
Open Thread, June 16-30, 2013

So I'm interested in taking up meditation, but I don't know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?

0Risto_Saarelma9yI found Daniel Ingram's Mastering the Core Teachings of the Buddha [] a fun read.
8[anonymous]9yMindfulness in Plain English [] is a good introduction to (one kind of) meditation practice. It seems like most interested people end up practicing concentration or insight meditation by default (as indeed you will, if you read and follow the book). I would also recommend eventually looking into loving-kindness meditation. I've been trying it for a couple of weeks and I think it might be much more effective for someone who just wants a tool to improve quality of life (rather than wanting to be enlightened or something).
0tofu2579yI tried zazen for a few months: I liked it and decided to start it again just this week. Here is straightforward advice on what to do: [] If you don't want to watch the long youtube video, read the following, then skip to 8:20 where he explains how to think/what to do with your mind:
* How to sit your body: cross-legged or lotus - but lotus requires flexibility and isn't necessary. Straight spine, back and neck. Rest your hands to make a ring shape. Face a wall and shut your eyes. Rock side to side a little, then stop straight.
* How to think: the first few times it's very difficult to let your mind free itself of thoughts/chatter, so a way to practice this is counting down slowly from 10, restarting if you stray from counting onto thinking about something else.
4Qiaochu_Yuan9yHave you seen the previous LW posts on the subject?
Three more ways identity can be a curse

"Regression to the mean" as used above is basically using a technical term to call someone stupid.

Well I definitely wasn't implying that. I actually wanted to discuss the statistics.

Three more ways identity can be a curse

Why? I couldn't think of a way to make this comment without it sounding somewhat negative towards the OP, so I added this as a disclaimer, meaning that I want to discuss the statistics, not to insult the poster.

2loup-vaillant9yNevertheless, an above-average post is still evidence for an above-average poster. It's also her first post. She might very well "get better" in the future, as she put it. Sure, I wouldn't count on it, but we still have a good reason to look forward to reading her future posts.

I hate to sound negative

Somehow, I doubt this.

-2IlyaShpitser9yThat doesn't make sense. This post may well be below OP's mean post quality.
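The statistical question being raised here (does one good post predict more good posts, or mostly regression to the mean?) can be made concrete with a small simulation. All modeling choices below (skill and noise both standard normal, a selection cutoff of 1) are illustrative assumptions, not anything from the thread:

```python
# A small simulation of the regression-to-the-mean question above.
# Model (an illustrative assumption, not from the thread): each post's
# quality = the poster's latent skill + independent noise, both standard normal.
import random

random.seed(0)
N = 100_000

skills = [random.gauss(0, 1) for _ in range(N)]
first_posts = [s + random.gauss(0, 1) for s in skills]
second_posts = [s + random.gauss(0, 1) for s in skills]

# Condition on an unusually good first post (quality > 1).
selected = [i for i in range(N) if first_posts[i] > 1.0]

avg_first = sum(first_posts[i] for i in selected) / len(selected)
avg_skill = sum(skills[i] for i in selected) / len(selected)
avg_second = sum(second_posts[i] for i in selected) / len(selected)

# The selected posters' second posts regress toward the mean
# (avg_second < avg_first) yet stay well above the population average
# of 0: an above-average post really is evidence of an above-average
# poster, just weaker evidence than it looks.
print(f"first: {avg_first:.2f}, skill: {avg_skill:.2f}, second: {avg_second:.2f}")
```

Both points the commenters make show up at once: the second posts' average falls well below the selected first posts' average, while still sitting well above the population mean.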
A thought-process testing opportunity

They probably would have flown off had he twisted it faster.

4private_messaging9yYes, but then one could predict - from the abundance of wiring in the background - that he would not twist it fast. (I'd seen the video before this thread.)
An attempt to dissolve subjective expectation and personal identity

Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity

I think the point is actually similar to this discussion, which also somewhat confuses me.

Discussion: Which futures are good enough?

figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug

That's MFAI's job. Living on the "highest level" also has the same problem, you have to protect your region of the universe from anything that could "de-optimize" it, and FAI will (attempt to) make sure this doesn't happen.

Discussion: Which futures are good enough?

I, on the other hand, (suspect) I don't mind being simulated and living in a virtual environment. So can I get my MFAI before attempts to build true FAI kill the rest of you?

2ikrase9yPossible compromise: Have there be some way of revealing the truth to those who want the truth badly enough. I am extremely hostile to this idea. (I'm not dead-set against forced uploading or even hidden forced uploading (though I would prefer to keep the interface to the base universe open) but I cannot tolerate this end of interpersonal entanglement.)
An attempt to dissolve subjective expectation and personal identity

Not really. You can focus your utility function on one particular optimization process and its potential future execution, which may be appropriate given that the utility function defines the preference over outcomes of that optimization process.

Well, you could focus your utility function on anything you like anyway; the question is why, under utilitarianism, it would be justified to value this particular optimization process. If personal identity were fundamental, then you'd have no choice: conscious existence would be tied to some particular identity. B...

0Elithrion9yI wrote an answer, but upon rereading, I'm not sure it's answering your particular doubts. It might though, so here: Well, if we're talking about utilitarianism specifically, there are two sides to the answer. First, you favour the optimization-that-is-you more than others because you know for sure that it implements utilitarianism and others don't (thus having it around longer makes utilitarianism more likely to come to fruition). Basically the reason why Harry decides not to sacrifice himself in HPMoR. And second, you're right, there may well be a point where you should just sacrifice yourself for the greater good if you're a utilitarian, although that doesn't really have much to do with dissolution of personal identity. But I think a better answer might be that: You do not, in fact, have the choice. Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity, and there is no additional motivation for doing so. If you mean something similar to Eliezer writing "how do I know I won't be Britney +5 five seconds from now" in the original post, that question actually relies on a concept of personal identity and is undefined without it. There's not really a classical "you" that's "you" right now, and five seconds from now there will still be no "you" (although obviously there's still a bunch of molecules following some patterns, and we can assume they'll keep following similar patterns in five seconds, there's just no sense in which they could become Britney).
Memetic Tribalism

Well, shit. Now I feel bad, I liked your recent posts.

8[anonymous]9yI'll make a quick exception to cut off your anxiety. Don't feel bad, I need a break from LW; your comment and Lent just gave me a good excuse. I'm still writing though. I'll be back in April with a good stack of posts.
Memetic Tribalism

I'm now quite skeptical that my urge to correct reflects an actual opportunity to win by improving someone's thinking,

Shouldn't you be applying this logic to your own motivations to be a rationalist as well? "Oh, so you've found this blog on the internet and now you know the real truth? Now you can think better than other people?" You can see how it can look from the outside. What would the implication for yourself be?

1Viliam_Bur9yWe should measure our winning, somehow, and see whether reading LW increases it. Sure, this answer just brings a new set of questions. Such as: what exactly should we measure? If we use something as an approximation, what if it becomes a lost purpose? If we change our method of measuring later, what if we are just rationalizing conveniently? (We can create an illusion of infinite growth just by measuring two complementary values X and Y, always focusing on the one which grows at the given moment.) I would say that a person reading LW for longer time should be able to list specific improvements in their life. Improvements visible from outside; that is, what they do differently, not how they think or speak differently. That is the difference from the outside. If there is no such improvement, that would suggest it is time to stop reading; or at least stop reading the general discussion, and focus on stuff like Group Rationality Diary. (My personal excuse is that reading LW reduces the time spent reading other websites. Debates on other websites suddenly feel silly. And the improvement is that reading other websites often made me angry, but reading LW does not mess with my emotions. -- I wish I could say something better, but even this is better than nothing. Of course it does not explain why reading LW would be better than abstaining from internet. Except that abstaining from internet seems unlikely; if I stopped reading LW, I would probably return to the websites I used to read previously.)
9[anonymous]9yThis comment is thoroughly discouraging to me as it pokes at some open wounds that I'm working on. Therefore I'm quitting LW for Lent []. Kick my ass if I'm back before April Fools' Day (except for meetup posts).
A Little Puzzle about Termination

On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent can an AI get without having the need (or ability) to go meta.

1[anonymous]9yAlso true. Indeed, this puzzle is all about resolving confusion between object and meta level(s); hopefully no one here at LW endorses the view that a (sufficiently well programmed) AI is incapable of going meta, so to speak.
Ideal Advisor Theories and Personal CEV

Given that a parliament of humans (where they vote on values) is not accepted as a (final) solution to the interpersonal value / well-being comparison problem, why would a parliament be acceptable for intrapersonal comparisons?

Open Thread, November 16–30, 2012

It seems like people sort of turn into utility monsters - if people around you have a strong opinion on a certain topic, you'd better have a strong opinion too, or else yours won't carry as much "force".

0DanArmak9yIf we haven't decided what morality to use yet, then how are we making moral decisions now, and how are we going to decide this later? I think that what you might call "the function that we'll use to decide our morality later on" is what I call "my morality both now and later". Or you might simply mean our morality will keep changing over time (because we will change, and the environment and its moral challenges will also change). That's certainly true.
Rationality versus Short Term Selves

With regard to the singularity, and given that we haven't solved 'morality' yet, one might just value "human well-being" or "human flourishing" without referring to a long-term self-concept. I.e. you might just care about a future 'you', even if that person is actually a different person. As a side effect you might also equally care about everyone else in the future too.

-2DanArmak9yI'm bothered by the apparent assumption that morality is something that can be "solved".
Open Thread, October 16-31, 2012

Right, but I want to use a closer-to-real-life situation or example that reduces to the Wason selection task (and that people fail at), and use that as the demonstration, so that people can see themselves fail in a real-life situation rather than in a logical puzzle. People already realize they might not be very good at generalized logic/math; I'm trying to demonstrate that the general logic applies to real life as well.

Open Thread, October 16-31, 2012

Well, the thing is that people actually get this right in real life (e.g. with the rule 'to drink you must be over 18'). I need something that occurs in real life that people fail at.

0Vaniver9yNo, people are more likely to get it right in real life. Some fraction of your audience will get it wrong, even with ages and drinks.
1sixes_and_sevens9yThey get it correct when it's in an appropriate social context, not simply because it's happening in real life. If it didn't happen in real life, confirmation bias wouldn't be a real thing.
Open Thread, October 16-31, 2012

I'm planning on doing a presentation on cognitive biases and/or behavioral economics (Kahneman et al.) in front of a group of university students (20-30 people). I want to start with a short experiment / demonstration (or two) that will demonstrate to the students that they are, in fact, subject to some bias or failure in decision making. I'm looking for suggestions on what experiment I can perform within 30 minutes (it can be longer if it's an interesting and engaging task, e.g. a game); the important thing is that the thing being demonstrated has to be releva... (read more)

4Morendil9yI've done this in a few small groups, using:
* the "this is an attention test" video (aka "how many passes does the team in white make") - always a hit
* a calibration exercise just as Larks suggests below, which works pretty well
* the Wason card task - some people find it unconvincing "because it's logic" (!)
* a "learned helplessness" experiment using impossible anagrams - works so-so
* a "priming" experiment where two groups have a phrase-building task with different priming words - my version of that sucks, need to rework it
* I also show the Spinning Dancer often
3maia9yFor something very brief, anchoring bias is easy to demonstrate and fairly dramatic. I tried this on a friend a couple weeks ago, anchoring her on 1 million people as the population of Ghana; she guessed 900,000. Turned out to be 25 million.
8Larks9yGet people to give 90% confidence intervals on 10 questions, and then at the end ask "Ok, so who got all 10 within their intervals. 9? That's what you should have got... ok, 8? Still no-one? Ok, how about 7?"
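The scoring step of that calibration exercise can be sketched in a few lines. This is a minimal illustration, not anything from the comment itself: the interval and answer values below are made-up placeholders, and `calibration_score` is a hypothetical helper name.

```python
# Minimal sketch of scoring a calibration exercise: each participant gives a
# 90% confidence interval (low, high) for each of 10 questions; we count how
# many intervals contain the true answer. A well-calibrated person should
# capture roughly 9 out of 10.
def calibration_score(intervals, true_values):
    """Count how many (low, high) intervals contain the corresponding true value."""
    return sum(low <= truth <= high for (low, high), truth in zip(intervals, true_values))

# Hypothetical example: 10 intervals with the true answers alongside.
intervals = [(0, 10), (5, 50), (100, 200), (1, 3), (0, 1000),
             (20, 30), (7, 9), (0, 5), (50, 60), (1, 100)]
truths = [4, 60, 150, 2, 500, 25, 8, 3, 55, 42]
print(calibration_score(intervals, truths))  # 9 - one interval missed its answer
```

The point of the exercise is that most audiences score well below 9, which demonstrates overconfidence directly on their own answers.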
1sixes_and_sevens9yThe Wason selection task [] is a good go-to example of confirmation bias.
1Barry_Cotter9yConfirmation bias: the triplet number test where the rule is "any triplet where the second number is greater than the first and the third greater than the second". Original credit (edit: for my exposure) to Eliezer in HPMoR, but I thought of it because that was what Yvain did at a meetup I was at.
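The hidden rule in that triplet test is simply "strictly ascending", which can be sketched as a one-line check (the function name and sample triplets below are illustrative, not from the original demonstration):

```python
# Sketch of the 2-4-6 task's hidden rule: a triplet fits if the second number
# is greater than the first and the third greater than the second, i.e. the
# triplet is strictly ascending. Subjects shown the seed (2, 4, 6) tend to
# test only confirming guesses and never discover how permissive the rule is.
def fits_rule(a, b, c):
    """Return True if the triplet is strictly ascending."""
    return a < b < c

print(fits_rule(2, 4, 6))    # True  - the seed triplet
print(fits_rule(4, 8, 12))   # True  - a typical "confirming" guess
print(fits_rule(1, 2, 100))  # True  - also fits, though it breaks most hypotheses
print(fits_rule(6, 4, 2))    # False - descending
```

The demonstration works because disconfirming guesses (like a descending triplet) are the fast way to find the rule, yet almost no one tries them.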
Is xkcd "Think Logically" talking about this site?

I don't think this deserves its own top level discussion post and I suspect most of the downvotes are for this reason. Maybe use the open thread next time?

9ArisKatsaris9yI downvoted because one of the chiefly repeated themes of the sequences and LW is that rationality is about winning, not about following your preconceptions about what logic means to your defeat, nor about making excuses. It's insulting and rather ignorant that summerstay associated LW with the type of person Munroe was mocking. That type of person is pretty much the opposite of what LW is about.
9Alejandro19yI agree that links to comics should better go in the open thread, but this explanation for the downvotes seems incorrect--a search for "xkcd" and "SMBC" shows many posts consisting of links to comics, most of them upvoted. This one was downvoted because the suggestion that the comic references LW was deemed wrong. And I think the community's reaction was rather too harsh--it was an honest mistake by the poster, and while I can understand someone downvoting it when it had zero or few downvotes, I cannot understand someone piling on after it had been already heavily downvoted and taking it to its present value of -15, a karma score I associate with trolls, spam, personal abuse or political rants, not an honest mistake.
5summerstay9yThanks, I'll try that.