All of Val's Comments + Replies

Popular religions suggest extrapolated volition is non-existence and wireheading

I didn't say I had an answer. I only said it can be an interesting dilemma.

Popular religions suggest extrapolated volition is non-existence and wireheading

That's true, but the changes a strong AI would make would probably be completely irreversible and unmodifiable.

Is that different to how humans are the dominant planetary species?
Popular religions suggest extrapolated volition is non-existence and wireheading

This brings up an interesting ethical dilemma. If strong AI ever becomes possible, it will probably be designed with the values of what you described as a small minority. Does this small minority have the ethical right to enforce upon the majority a new world which will be against their values?

The entire concept of CEV is meant to address this question. []
Do you (or we) have the ethical right to enforce the current world, the majority of which is against our values (as measured by the amount of complaining, at least)?
It wouldn't be the first time that a small minority enacted the change it wanted. The universe is not ethical or anything.
HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family

I usually look out for the surveys, but until I opened this article I never even knew there was a survey for this year... so yeah, poor advertising.

New business opportunities due to self-driving cars

"services that go visit the customer outcompete ones that the customer has to go visit" - and what does this have to do with self-driving cars? Whether the doctor has to actively drive the car to travel to the patient, or can just sit there in the car while the car drives all the way, the same time is still lost to the travel, and the same fuel is still used up. A doctor or a hairdresser would be able to spend significantly less time with customers if most of the working day were taken up by traveling. And what about all the tools which ... (read more)

Yes. But a significant part of the job of a doctor is paperwork (filing stuff for insurance companies etc.) and she can do that while the car drives itself. If she had to hire a driver (and have her sit idly while she's with a patient), the driver would be the most expensive part of her vehicle, just like the taxi driver is the most expensive part of the taxi. If she's the kind of doctor that can carry all her equipment inside that car (i.e. not a radiologist 😉) she might even be able to abolish her office and waiting room entirely, for extra savings.

No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens, and their showers would have to be way nicer than typical RV showers.
Mini map of s-risks

I know the first one has been mentioned on this site; I've read about it plenty of times, but it was not named as such. Therefore, if you use a rare term (especially one you made up yourself), it's advisable to also explain what it means.

Mini map of s-risks

Could you please put some links to "Hacker's joke" and "Indexical blackmail"? Both use words common enough to not yield obvious results for a google search.

Indexical blackmail was discussed somewhere on LessWrong; I can't find the link. The idea is that an AI in a box creates many copies of me and informs me about it. Because of this I can't be sure that I am not one of those copies, and thus I will release it from the box (or face torture with probability 999 in 1000). It is based on the idea of "indexical uncertainty", which is googlable, for example, here: []

Hacker's joke is a hypothetical situation in which the first and last AI creator is just a random 15-year-old boy who wants to play with the AI by giving it stupid goals. Nothing to google here.
Any Christians Here?

Another Christian here, raised as a Calvinist, but I consider myself more of a non-denominational, ecumenical one, with some very slight deist tendencies.

I don't want to sound rude, but I don't know how to formulate it in a better way: if you think you have to choose between Christianity and science, you have very incomplete information about what Christianity is about, and also incomplete knowledge about the history of science itself. I wonder how many who call themselves Bayesians know that Bayes was a very devout Christian, similar to many other founder... (read more)

The real god is using you to make these people think. And it is using me to make you think. Yours thoughtlessly, metatroll
Open thread, May 15 - May 21, 2017

If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 100 dollars for yourself.

Not 99 dollars?

Whoops! Fixed. Thank you.
The 2017 Effective Altruism Survey - Please Take!

Anyone who is reading this should take this survey, even if you don't identify as an "effective altruist".

Why? The questions are centered not only on effective altruists, but also on left- or far-left-leaning ideologies. I stopped filling it out when it assumed that only movements of that single political spectrum count as social movements.

How AI/AGI/Consciousness works - my layman theory

Even with the limited AGI with very specific goals (build 1000 cars) the problem is not automatically solved.

The AI might deduce that if humans still exist, there is a higher than zero probability that a human will prevent it from finishing the task, so to be completely safe, all humans must be killed.

Or it will deduce that there is an even higher probability that either (1) it will fail at killing humans and be turned off itself, or (2) encounter problems for which it needs or would largely benefit from human cooperation.
Allegory On AI Risk, Game Theory, and Mithril

Those "very real, very powerful security regimes around the world" are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.

And if you underestimate how much of a threat a mere "computer" could be, read the "Friendship is Optimal" stories.

I've read the sequences on friendliness here and find them completely unconvincing, with a lack of evidence and a one-sided view of the problem. I'm not about to start generalizing from fictional evidence.

I'm also not sure I agree with your assessment of the examples you give. There are billions of people who would like to live in first world countries but don't. I think immigration controls have been particularly effective if there are only a few million people crossing borders illegally in a world of 7 billion. And most of the immigration issues being faced by the world today, such as Syrian refugees, are about asylum-seekers who are in fact being permitted, just in larger numbers than secondary systems were designed to support. Also, the failure modes are different. If you let the wrong person in, what happens? Statistically speaking, nothing of great consequence.

Crime waves? We are currently at one of the lowest periods of violence per capita. I think the powers that be have been doing quite a good job, actually.
How to talk rationally about cults

This is a well-presented article; even though most (or maybe all) of the information is easily available elsewhere, it is a well-written summary. It also includes aspects which are not talked about much, or which are often misunderstood. Especially the following one:

Debating the beliefs is a red herring. There could be two groups worshiping the same sacred scripture, and yet one of them would exhibit the dramatic changes in its members, while the other would be just another mainstream faith with boring compartmentalizing believers; so the differen

... (read more)
Open thread, Jan. 02 - Jan. 08, 2017

This comment was very insightful, and made me think that the young-earth creationist I talked about had a similar motivation. Despite this outrageous argument, she is a (relatively speaking) smart and educated person. Not academic-level, but not grown-up-on-the-streets level either.

Open thread, Jan. 02 - Jan. 08, 2017

I always thought the talking snakes argument was very weak, but being confronted by a very weird argument from a young-earth creationist provided a great example for it:

If you believe in evolution, why don't you grow wings and fly away?

The point here is not about the appeal to ridicule (although it contains a hefty dose of that too). It's about a gross misrepresentation of a viewpoint. Compare the following flows of reasoning:

  • Christianity means that snakes can talk.
  • We can experimentally verify that snakes cannot talk.
  • Therefore, Christianity is fals
... (read more)
I know someone who told me that she hoped President Trump wouldn't successfully legalize rape. I opined that, perhaps, that might not be on his itinerary. She responded that, of course it was, and proved it thus: She is against it. She is against him. Therefore he is for it.

I bring this up to sort of angle at 'broadening' the talking snakes point. The way people do arguments is kind of how they do Boggle. Find one thing that is absolutely true, and imply everything in its neighborhood. One foot of truth gives you one mile of argument. As soon as you have one true thing, then you can retreat to that if anyone questions any part of the argument. Snakes can't talk, after all.
Open thread, Jan. 02 - Jan. 08, 2017

Isn't this very closely related to the Dunning-Kruger effect?

Seems quite different to me. The D-K effect is "you overestimate how good you are at something", while what I describe does not even involve a belief that you are good at the specific thing, only that -- despite knowing nothing about it on the object level -- you still have the meta-level ability of estimating how difficult it is "in principle".

An example of what I meant would be a manager in an IT company who has absolutely no idea what "fooing the bar" means, but feels quite certain that it shouldn't take more than three days, including the analysis and testing. An example of D-K, on the other hand, would be someone who writes horrible code but believes himself to be the best programmer ever. (And after looking at other people's code, keeps the original conviction, because the parts of the code he understood he could obviously write too, and the parts of the code he didn't understand are obviously written wrong.)
Richard Korzekwa 6y
I may be misunderstanding the connection with the availability heuristic, but it seems to me that you're correct, and this is more closely related to the Dunning-Kruger effect. What Dunning and Kruger observed was that someone who is sufficiently incompetent at a task is unable to distinguish competent work from incompetent work, and is more likely to overestimate the quality of their own work compared to others, even after being presented with the work of others who are more competent. What Viliam is describing is the inability to see what makes a task difficult, due to unfamiliarity with what is necessary to complete that task competently. I can see how this might relate to the availability heuristic; if I ask myself "how hard is it to be a nurse?", I can readily think of encounters I've had with nurses where they did some (seemingly) simple task and moved on. This might give the illusion that the typical day at work for a nurse is a bunch of (seemingly) simple tasks with patients like me.
If Atheists Had Faith

I'm not surprised Dawkins makes a cameo in it. The theist in the discussion is a very blunt strawman, just as Dawkins usually likes to invite the dumbest theists he can find, who say the stupidest things about evolution or global warming, thereby allegedly proving all theists wrong.

I'm sorry if I might have offended anyone; I know many readers here are fans of Dawkins. However, I have to state that although I have no doubts about the value of his scientific work and his competence in his field, he does make a clown of himself with all those strawman attacks against theism.

What do you mean by straw man, exactly? This isn't meant to be the most philosophically defensible theist. Closer, perhaps, to the most common kind of theist.
What do you actually do to replenish your willpower?

For many people, religion helps a lot in replenishing willpower. Although, from what I've observed, it's less about stopping procrastination, and more about not despairing in a difficult or depressing situation. I might even safely guess that for a lot of believers this is among the primary causes of their beliefs.

I know that the rate of religious belief on this site is significantly below the offline average. I didn't want to convince anyone of anything; I just pointed out that for many people it helps. Maybe by acknowledging this fact we might understand why.

Agreed, and my proposed mechanism of action is a stance shift (more in the sense of how Mark uses stance in folding, or Chapman does) that seems to be the difference between believing that things are/will basically turn out okay vs. being random and largely not in our control. In much the same way that having a fake button that the person thinks will turn off the annoying noise lets them tolerate it longer, having a fake meaning button keeps setbacks outside of one's locus of control from sapping motivation.
I've noticed something even more general: people who have a well-defined philosophy of life seem more motivated and resilient to setbacks or tragedy than those who lack such a self-narrative. And this appears to be the case even for philosophies of life whose tenets contradict (or at least stand in strong tension with) each other in important ways: Christianity, Objectivism, Buddhism, Stoicism, etc. This is pure anecdote, and obviously the people I come in contact with are not even close to a random sample of humanity, so I'd very much like to be pointed towards a more systematic study of the phenomenon (or lack thereof).
Open thread, Oct. 24 - Oct. 30, 2016

we'd only really need the 5 big crops + plants for photosynthesis, insects and pollinators in order to survive and thrive

Time and again it has turned out that we underestimated the complexity of the biosphere. And time and again our meddling backfired horribly.

Even if we were utterly selfish and had no moral objections, wiping out all but a handful of "useful" species would almost certainly lead to unforeseen consequences ending in the total destruction of the planet's biosphere. We did not yet manage to fully map the role each species pla... (read more)

More like leading to a temporary collapse to a lower level of complexity (including much less if any in the way of humans) until all the available niches were re-filled by radiating evolution from the surviving forms.
Open thread, Oct. 24 - Oct. 30, 2016

True, it is not implausible that a non-hostile alien civilization could arrive who are more efficient than us, and in the long term would out-compete and out-breed us.

Such non-hostile assimilation is not unheard of in real life. It is happening now (or at least claimed by many to be happening) in Europe, both in the form of the migrant crisis and also in the form of smaller countries fearing that their cultural identities and values are being eroded by the larger, richer countries of the union.

Open thread, Oct. 24 - Oct. 30, 2016

I'm surprised to find such rhetoric on this site. There is an image now popularized by certain political activists and ideologically-driven cartoons, which depict the colonization of the Americas as a mockery of the D-Day landing, with peaceful Natives standing on the shore and smiling, while gun-toting Europeans jump out of the ships and start shooting at them. That image is even more false than the racist depictions in the late 19th century glorifying the westward expansion of the USA while vilifying the natives.

The truth is much more complicated than t... (read more)

You misunderstood my point. The Europeans did not "proceed with a controlled extermination of the population". Yet, what happened to that population? You don't need to start with a deliberate decision to exterminate in order to end up with almost none of the original population. Sometimes you just need to not care much.
Open thread, Oct. 24 - Oct. 30, 2016

If we developed practical interstellar travel, and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because it would not fit into our values to consider exterminating them as the primary choice. And how did we develop our values like this? I guess at least in some part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction, and even those which got extinc... (read more)

If we were rational, we would stop their continued self-directed development, because having a rapidly advancing alien civilisation with goals different to ours is a huge liability. So maybe we would not wipe them out, but we would not let them continue on as normal.
To me that's not a culture, but a bias (the hunter-gatherer bias). There are thousands of animal species serving no real purpose for our cause, and still we slow down our growth because of concerns regarding their survival. Not only that, but after having analyzed our daily values and necessities it becomes perfectly crystal clear how we'd only really need the 5 big crops + plants for photosynthesis, insects and pollinators in order to survive and thrive. Plus we would be able to support many more people! Imagine a planet where 15 billion humans live, and each and every one of them consumes 2700 kcal/day and contributes to the world's economy, because nobody has to suffer hunger anymore... that would be possible if we got rid of wastes and inefficiencies.

So in my opinion, if we ever find other forms of intelligent life and we can't trade with them, eat them, learn from them or acquire knowledge studying them, then yes, I am all up for bombing them, just as I am all up for (and I know many will hate me for this :-D) running a railway + HVDC line through the giant panda's territory, or finally getting rid of domesticated animals like cows, which convert calories and proteins from grains so poorly.

Also, I agree with @woodchopper: we should stop sending messages literally "Across the Universe" in order to avoid perishing. Another approach we might use in the remote future could be only using old technologies to broadcast a "hello signal", stuff we've long moved on from, so we could try to select for civilizations which are way behind us technologically, so we could sort of be in control of their destiny like your usual anthill. But even then it could be a trap, or they might catch up during the time necessary to make the trip, or they could be monitored by some other advanced civilization which is not monitoring us, so we would just signal our presence to them as well...
Did you ask the Native Americans whether they hold a similar opinion?
Article on IQ: The Inappropriately Excluded

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this is probably mainly noise.

Indeed. If an IQ test claims to provide accurate scores outside of the 70 to 130 range, you should be suspicious.

There are so many misunderstandings about IQ in the general population, ranging from claims like "the average IQ is now x" (where x is different from 100), to claims of a famous scientist having had an IQ score over 200, and claims of "some scientists estimating" the IQ of a computer, an animal, or a fictio... (read more)

I think you are just being pedantic. When people say something like "the Flynn effect has raised the average IQ by 10 points over the last 50 years", they mean that the average person today would score 10 points higher on a 1950s IQ test. See also the value of money, which also changes over time due to inflation. When people say "a dollar was worth more 50 years ago", you don't reply "nuh uh, a dollar has always been worth exactly one dollar."

As for estimating the IQ of a computer or an animal, I agree it's impossible to do any kind of serious estimate. But I don't think the idea of a linear scale of intelligence is inherently meaningless, so you could give a very rough estimate of where nonhuman intelligences would fall on it, and where that would put them relative to humans with such and such IQ.
The progressive case for replacing the welfare state with basic income

Also, many people on this site seem to have come from a liberal / libertarian upbringing, where it is a very popular idea to believe in. The survey supports this by showing support for BI within each political group.

[Link] How the Simulation Argument Dampens Future Fanaticism

Isn't the "Do I live in a simulation?" question practically indistinguishable from the question "does God exist?", for a sufficiently flexible definition for "God"?

For the latter, there are plenty of ethical frameworks, as well as incentives for altruism, developed during the history of mankind.

On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we look at a different question where we redefine it to be the first question all over again?
Can you expand? This confused me.
No not really. There is plausible reasoning to believe simulations will someday exist in our future (or if we are in the simulation, our past). I don't think there is much reason to believe in a creator otherwise, and certainly not the very specific ones that major religions believe.
The call of the void

And it seems the community is not interested enough to counter the ten or so accounts which do this... :(

At this moment, the post is at -4 karma 44% positive, that is about 19 downvotes and 15 upvotes. The active part of the community is not large enough to provide significantly more upvotes. Just look at how much karma an average article gets. (And even if the community would be larger, if Eugine's sockpuppets are automated, it wouldn't ultimately make any difference.)

It's more like 20+. And the community is not active enough to fight back. Once a post is invisible to a large fraction of the community, there are significantly fewer people able to fight.

The call of the void

There is something I don't understand. Are people voting now on the person instead of the article? I see that all of Elo's recent activity is massively down-voted, and some of the posts might have deserved it. But certainly not all. I'm just curious whether if this post has been written by someone else, would it have been similarly down-voted.

It might not be among the core principles of this site, but it's certainly not an uninteresting topic.

Eugine has added Elo to his list of targets (I've been one for ages and if this comment doesn't have negative karma when you're reading it, come back in a day or two and it almost certainly will) and his army of sockpuppets is downvoting Elo's stuff into oblivion. Why? Because Elo is likely to be involved when Eugine's sockpuppet army is finally expelled from Less Wrong, and Eugine doesn't like that.
Inefficient Games

In this case, we should really define "coercion". Could you please elaborate on what you meant by that word?

One could argue that if someone holds a gun to your head and demands your money, it's not coercion, just a game where the expected payoff of not giving the money is smaller than the expected payoff of handing it over.

(Of course, I completely agree with your explanation about taxes. It's just the usage of "coercion" in the rest of your comment which seems a little odd)

I originally used 'fiat' instead of 'coercion'. I was just trying to make sure we don't miss other possibilities besides regulations for solving problems like these.
I do not think that Gram_Stone is making the claim that fining or jailing those who do not pay their taxes is not coercion. Instead, I think that he is arguing that it is not the coercion per se that results in most people paying their taxes, but rather that (due to the coercion) failing to pay taxes does not have a favorable payoff, and that it is the unfavorable payoff that causes most people to pay their taxes. So, if there were some way to create favorable payoffs for desirable behavior without coercion, then this would work just as well as does using coercion. Gram_Stone, please correct me if that is not accurate. Also, do you have any ideas as to how to make voluntary payment of taxes have a favorable payoff without using coercion?
"Is Science Broken?" is underspecified

Parenting might be even worse, with plenty of contradictions between self-proclaimed experts, one claiming something is very important to do, the other claiming you must never do it under any circumstances.

"Is Science Broken?" is underspecified

Has anyone heard about the book "The Egg-Laying Dog" by Beck-Bornholdt? I don't know of an English translation; I freely translated the title from German. It is a book about fallacies in statistics and research, especially in medicine, written in a style comprehensible to the layman.

It discusses at great length the problems plaguing modern research (well, the research of the 1990's when the book was written, but I doubt that very much has changed). For example, the required statistical significance for a publication is much more relaxed t... (read more)

New Pascal's Mugging idea for potential solution

Let's be conservative and say the ratio is 1 in a billion.


Why not 1 in 10? Or 1 in 3^^^^^^^^3?

Choosing an arbitrary probability has a good chance of leading us unknowingly into circular reasoning. I've seen too many cases of using, for example, Bayesian reasoning about something we have no information about, which went like "assuming the initial probability was x", getting some result after a lot of calculations, and then defending the result as accurate because the Bayes rule was applied, so it must be infallible.
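To make the prior-sensitivity worry concrete, here is a minimal sketch (the likelihood ratio of 10 and the three priors are made-up illustration values, not from any of the discussions above): applying Bayes' rule correctly to the same evidence still yields wildly different posteriors depending on the arbitrary starting probability.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Same evidence (likelihood ratio 10), three arbitrary priors:
for prior in (0.5, 0.01, 1e-9):
    print(prior, posterior(prior, likelihood_ratio=10))
# 0.5  -> ~0.909, 0.01 -> ~0.092, 1e-9 -> ~1e-8
```

The calculation is infallible only in the sense that it faithfully propagates whatever number you fed it.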

It's arbitrary, but that's OK in this context. If I can establish that this works when the ratio is 1 in a billion, or lower, then that's something, even if it doesn't work when the ratio is 1 in 10. Especially since the whole point is to figure out what happens when all these numbers go to extremes--when the scenarios are extremely improbable, when the payoffs are extremely huge, etc. The cases where the probabilities are 1 in 10 (or arguably even 1 in a billion) are irrelevant.
Rationality test: Vote for trump

And why should we be utility maximization agents?

Assume the following situation. You are very rich. You meet a poor old lady in a dark alley who carries a purse with her, with some money which is a lot from her perspective. Maybe it's all her savings, maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find it out and you get to keep that money. Would you do it? As a utility maximization agent, based on what you just wrote, you should.

Would you?

Surely you 'should' only do something like this iff acquiring this amount of money has a higher utility to you than not ruining this lady's day. Which, for most people, it doesn't. Since you're saying 'you are very rich' and 'some money which is a lot from her perspective', you seem to be deliberately presenting gaining this money as very low utility, which you seem to assume should logically still outweigh what you seem to consider the zero utility of leaving the lady alone. But since I do actually give a duck about old ladies getting home safely (and, for that matter, about not feeling horribly guilty), mugging one has a pretty huge negative utility.
Have you read the LW sequences? Because like gjm explained, your question reveals a simple and objective misunderstanding of what utility functions look like when they model realistic people's preferences.
Only if your utility function gives negligible weight to her welfare. Having a utility function is not at all the same thing as being wholly selfish. (Also, your scenario is unrealistic; you couldn't really be sure of not getting caught. If you're very rich, the probability of getting caught doesn't have to be very large to make this an expected loss even from a purely selfish point of view.)
Rationality test: Vote for trump

There are some people who think punishment and reward work linearly.

If I remember correctly (please correct me if I'm wrong), even Eliezer himself believes that if we assign a pain value in the single digits to very slightly pinching someone so they barely feel anything, and a pain value in the millions to torturing someone with the worst possible torture, then we should choose torturing a thousand people over slightly pinching half of the planet's inhabitants, if our goal is to minimize suffering. With such logic, you could assign rewards and punishments to anything, and calculate pretty strange things out of that.

Crazy Ideas Thread

Another problem would be that unless this system suddenly and magically got applied to the whole world, it would not be competitive. It can't grow from a small set of members, because the limits it imposes would hinder those who would have contributed the most to the size and power of the economy. By shrinking your economy, you become less competitive against those who don't adopt the new system.

Crazy Ideas Thread

I fear some people will quickly learn how to game the system. No wonder our current society is so complicated; every time a group came up with a simple and brilliant way to create the perfect utopia, it failed miserably.

(also, try selling your idea to the average voter, I would love to see their faces when you mention "logarithm of total social product")

Sure, telling people that logarithms are involved will probably not help :-) Also, oversimplification probably wouldn't work either. One key point is that - at a suitable level of abstraction - you can actually prove invariants of the system, like limits to individual income/property/power. Invariants that you might or might not want to have.

 - publish, discover, and discuss rational fiction

Cars in the 1930's didn't have such crumple zones as modern cars do. Also, in the city they don't move as fast as on the freeway. Even a small difference might decide between life and death.

I would suggest giving the story the benefit of the doubt. It must stay at least somewhat true to the style of the comics, but at the same time explore the world in a more serious and realistic tone. And it manages that quite well, it's worth reading.

Open Thread May 30 - June 5, 2016

Imagine that you are literally the first organism who by random mutation achieved a gene for "helping those who help you"

Not all information is encoded genetically. Many kinds of information have to be learned from the parents or from society.

Open Thread May 23 - May 29, 2016

One problem I can see at first glance is that the article doesn't read like a Wikipedia article, but like a textbook or part of a publication. The goal of a Wikipedia article should be for a wide audience to understand the basics of something, not a treatise only experts can comprehend.

What you wrote seems to be an impressive work, but it should be simplified (or at least the introduction of it), so that even non-experts can have a chance to at least learn what it is about.

I don't think this is true. Wikipedia is a collection of knowledge, not a set of introductory articles. See e.g. the Wikipedia pages on intermediate-to-high statistical concepts and techniques, e.g. copulas [].
Hedge drift and advanced motte-and-bailey

It's not only in the social sciences that this phenomenon is common. The most striking examples I've seen were in medicine. An article is published, for example "supplement xyz slightly reduces a few of the side effects encountered during radiotherapy used in cancer treatment", which is then reported in the media and on social networks as "What the medical industry doesn't want you to know: supplement xyz instantly cures all forms of cancer!". And often there is a link to the original publication, but people still believe it and forward it. And what's even more sad, probably many people then buy that supplement and don't seek medical help, believing that it alone will help.

How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience?

If this were enough to prove the effectiveness of rain-dancing, then we would develop 30 different styles of rain-dance, test each of them, and with very high probability get p<0.05 on at least one of them.
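A quick back-of-the-envelope calculation makes the point concrete, assuming 30 independent tests of a completely ineffective intervention at the usual 5% significance level:

```python
# Probability that at least one of 30 independent tests of a
# completely ineffective rain-dance yields p < 0.05 by pure chance.
alpha = 0.05    # per-test significance threshold
n_tests = 30    # number of rain-dance styles tried

p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"{p_at_least_one:.3f}")  # roughly 0.785
```

So under these assumptions there is roughly a 79% chance of "proving" that at least one rain-dance style works.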

Sadly, the medical literature is full of such publications, because publishing new ideas is rewarded more than reproducing already published experiments.

How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience?

Since then I found a partially relevant, but very simple and effective "puzzle".

There are four cards on the desk in front of you. It is known that every card has a numerical digit on one side and a letter from the English alphabet on the other.

You have to verify the theory that "if one side of a card has a vowel, the other side has an even number", and you are only allowed to flip two cards.

The cards in front of you are:

A T 7 2

Which cards will you flip?

(I wrote partially relevant because this is not an example for an unfalsi... (read more)
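For what it's worth, the puzzle (the classic Wason selection task) can be checked mechanically: a card needs flipping exactly when its hidden side could falsify the rule. A minimal sketch (spoiler: it prints the answer):

```python
# Which visible faces could hide a counterexample to
# "if one side has a vowel, the other side has an even number"?
VOWELS = set("AEIOU")

def must_flip(face: str) -> bool:
    if face.isalpha():
        # A vowel might hide an odd number -> must check.
        # A consonant can't violate the rule either way.
        return face.upper() in VOWELS
    # An odd number might hide a vowel -> must check.
    # An even number satisfies the rule no matter what's behind it.
    return int(face) % 2 == 1

cards = ["A", "T", "7", "2"]
print([c for c in cards if must_flip(c)])  # ['A', '7']
```

The common mistake is flipping the 2 instead of the 7: the 2 can only confirm the rule, never falsify it.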

Actually, I think most people will misunderstand the theory they have to verify or falsify. However, evidently people's ability to solve this puzzle depends hugely on the way it is formulated.
How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience?

I agree, but I see a connection to falsifiability in that most people don't even try to falsify their theories in this game, even when it would be possible.

A much better example than the 2-4-6 game would be one where the most obvious hypothesis was unfalsifiable.

Continuity of Consciousness. Are you the same person you were before you went to sleep last night? Were you created five minutes ago?
Quantum immortality?
How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience?

This and Russell's teapot are just unverifiable claims, not attempts to understand how a system works that fail because we committed an innocent mistake.

Besides, they have strong ideological undertones, so all they would manage to do is flatter the egos of those who agree with their ideological implications and anger those who don't. They won't really convince anyone.

You didn't mention what kind of audience it was. For some it would be an appropriate example. What about the second example?
The Sally-Anne fallacy

I have often encountered (when discussing politics, theology, or similarly subjective topics) a fallacy which is similar to this one, or can perhaps be seen as its reverse.

  • A: ice is hot, therefore 2+2=4
  • B: No, ice is not hot, but even if it was, it still wouldn't be a good proof for 2+2=4
  • A: So you don't believe in the obvious truth that 2+2=4?

Also, sometimes A might try to prove 2+2=5 with the same strategy.

Consider having sparse insides

Not necessarily. One might sincerely believe in the core values promoted by Christianity ("Do unto others as you would have them do unto you") without being a biblical literalist. Christianity includes a wide spectrum of views, not only how some people define it, which might even be just a parody of Christianity.

To summarize: I don't know her, so I cannot judge whether she's lying for a social benefit or not, but I find it plausible that she isn't lying, or at least does not behave like this solely as a facade for social benefit.

Also, I suspect that SquirrellnHell's friend probably has more respect for Christianity than SquirrellnHell does, even if she does not manifest that additional respect in the context of conversations between them (when she might be motivated to match SquirrellnHell's own attitude more closely.)
Open Thread April 4 - April 10, 2016

You are right, I meant bihacking, my mistake.

My concern was based on observing how the word -phobia (especially in the cases of homophobia and xenophobia) is increasingly applied to cases of mild dislike, or even to failing to show open support.

I agree that -phobia gets applied much more broadly than my etymological sensitivities would prefer, and I expect that (unfortunately) to continue. But what I find unlikely isn't anything to do with word usage; I just don't expect that any time in the near future it will be widely held that you mistreat any group by not going out of your way to make yourself want to have sex with them. I could be wrong, of course. And, as I already said, I'm sure there are some people who hold that kind of position even now. But it doesn't seem to me like the kind of silliness that would ever attract a lot of support.
Open Thread April 4 - April 10, 2016

I fear a time will come when people who don't want to try polyhacking [edit: bihacking] will be labeled as homophobic. And that will just further dilute the term.

When you write "polyhacking", do you actually mean "bihacking"? If not, what you say you fear seems to me a very odd thing to fear.

Actually, I would be quite surprised if (within, let's say, the next 40 years, and assuming no huge technological changes that would affect this) heterosexuality + unwillingness to try to become bi were enough to get anyone widely labelled as homophobic. (I'm sure there are already people who would apply that label, but not enough to have much impact.)

[EDITED to add:] Just to clarify, the point of the second paragraph is that I find Val's fear not-terribly-plausible even if "bihacking" is what s/he meant.
Lesswrong 2016 Survey

Besides saying that I have taken the survey...

I would also like to mention that predicting the probabilities of unobservable concepts was the hardest part for me. Of course, there are some I believe in more than others, but still, any probability besides 0% or 100% seems really strange to me. For something like being in a simulation, if I believed it but had some doubts I would say 99%, and if I didn't believe it but was open to it I would say 1%; both seem so arbitrary and odd to me. 1% is really huge in the scope of very probable o... (read more)

Consider having sparse insides

Please explain what you mean by saying "it is easier to...".

Judging by the examples, the opposite seems much easier to me, if we define easiness as how easy it is to identify with a view, select a view, or represent a view among other people.

Or do you instead use the term to mean "it will be more useful for me"? For the average person, it is much easier to identify with a label, because it signifies loyalty to a well-defined group of people, which can lead to benefits within that group.

Saying "I'm a democrat" or &q... (read more)

One of my friends, whose meta beliefs about religion etc. match pretty closely with mine, goes on calling herself "Christian". There's literally nothing Christian about her, just the label. And it works. She is getting all the social benefits of actually being Christian, without believing any of the bullshit. This blows my mind, and yet it is how social groups work.
What makes buying insurance rational?

Insurance for small consumer products is not rational for the buyer, for the very reasons presented in the question. If you can afford the loss of the item, it's better not to buy insurance and simply buy the item again if it is lost or destroyed. The reason insurance companies still make money from extended warranties on consumer products is that they have good marketing and people are not perfectly rational. Gambling, lotteries, etc. exist for the same reasons, despite having a negative expected value.

However, if you cannot afford ... (read more)
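The expected-value argument above can be sketched with toy numbers (the prices, probabilities, and the choice of log utility are all illustrative assumptions, not data):

```python
import math

# Toy numbers: a $500 gadget, 5% chance of failure within the
# warranty period, $60 extended warranty.
price, p_fail, premium = 500.0, 0.05, 60.0

# Expected monetary value of buying the warranty vs. self-insuring:
ev_warranty = -premium + p_fail * price  # -60 + 25 = -35, negative
print(f"expected value of warranty: {ev_warranty:.0f}")

# For a loss you *can't* comfortably afford, concave utility (here
# log of remaining wealth) can flip the decision: insuring a
# $200,000 house against a 0.2% chance of total loss for $600.
wealth, house, p_loss, house_premium = 250_000.0, 200_000.0, 0.002, 600.0

def expected_utility(insured: bool) -> float:
    if insured:
        return math.log(wealth - house_premium)
    return (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - house)

# Insurance wins in utility terms even though its monetary EV
# (-600 + 0.002 * 200000 = -200) is negative.
print(expected_utility(True) > expected_utility(False))  # True
```

The point is the same one the comment makes: insurance is always negative in expected money (otherwise the insurer would go bankrupt), but it can still be positive in expected utility when the uninsured loss would be catastrophic.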
