All of Untermensch's Comments + Replies

[Link] Is the Endowment Effect Real?

If I am given a thing, like a mug, I now have one more mug than I had before. My need for mugs has therefore decreased. If I am to sell the mug, I must examine how much I will need the mug after it is gone and place a price on that loss of utility. If I am buying a mug, I must assess how much I will need it once I have it and place a price on that increase in utility. If the experiment is not worded carefully then the thought process could go along the lines of...

I have 2 mugs, and often take a tea break with my mate Steve. To sell one of those mugs wou... (read more)

This all depends on the valuation elicitation process, which is pretty clever and, assuming that the subjects are acting rationally, does in fact elicit true values - at least as implemented in the paper I linked. As the paper goes into, others unknowingly tweak this process a bit and change the incentive structure, and then (surprise) they get WTP-WTA gaps.
Wanted: "The AIs will need humans" arguments

I have a couple of questions about this subject...

Does it still count if the AI "believes" that it needs humans when it, in fact, does not?

For example, does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer," and that if it takes out the human race in any way, it will be shut down/tortured/assigned highly negative utility by said overseer?

Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?

This argument seems to be co... (read more)

We mention the "layered virtual worlds" idea, in which the AI can't be sure of whether it has broken out to the "top level" of the universe or whether it's still contained in an even more elaborate virtual world than the one it just broke out of. Come to think of it, Rolf Nelson's simulation argument attack would probably be worth mentioning, too.
It's probably been thought of here and other places before, but I just thought of the "Whoops AI" -- a superhuman AGI that accidentally or purposefully destroys the human race, but then changes its mind and brings us back as a simulation.
Newcomb's Problem and Regret of Rationality

Sorry, I am having difficulty explaining, as I am not sure what it is I am trying to get across; I lack the words. I am having trouble with the use of the word "predict", as it could imply any number of methods of prediction, and some of those methods change the answer you should give.

For example, if it was predicting by the colour of the player's shoes, it may have a micron over 50% chance of being right, and just happened to have been correct the 100 times you heard of. In that case one should take boxes A and B; if, on the other hand, it was a visitor from a highe... (read more)

This seems a bizarre way of thinking about it, to me. It's as though you'd said "suppose there's someone walking past Sam in the street, and Sam can shoot and kill them; ought Sam do it?" and I'd replied "well, I need to know how reliable a shot Sam is. If Sam's odds of hitting the person are low enough, then it's OK. And that depends on the make of gun, and how much training Sam has had, and..." I mean, sure, in the real world, those are perhaps relevant factors (and perhaps not). But you've already told me to suppose that Sam can shoot and kill the passerby. If I assume that (which in the real world I would not be justified in simply assuming without evidence), the make of the gun no longer matters.

Similarly, I agree that if all I know is that Omega was right in 100 trials that I've heard of, I should lend greater credence to the hypothesis that there were >>100 trials, the successful 100 were cherry-picked, and Omega is not a particularly reliable predictor. This falls into the same category as assuming Omega is simply lying... sure, it's the highest-expected-value thing to do in an analogous situation that I might actually find myself in, but that's different from what the problem assumes.

The problem assumes that I know Omega has an N% prediction rate. If I'm going to engage with the problem, I have to make that assumption. If I am unable to make that assumption, and instead make various other assumptions that are different, then I am unable to engage with the problem. Which is OK... engaging with Newcomb's problem is not a particularly important thing to be able to do. If I'm unable to do it, I can still lead a fulfilling life.
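The role N plays can be made concrete with a quick expected-value sketch. This assumes the standard payoffs (a visible $1,000 in box A, and $1,000,000 in box B iff Omega predicted one-boxing) and that Omega's accuracy p is the same against both kinds of chooser; the specific numbers are illustrative, not from the discussion above.

```python
# Expected value of one-boxing vs. two-boxing in Newcomb's problem,
# under the standard payoffs: $1,000 always in box A, $1,000,000 in
# box B iff Omega predicted you would take only box B.
# p = Omega's prediction accuracy (assumed symmetric for both choices).

def one_box_ev(p):
    # You get the $1,000,000 only if Omega correctly predicted one-boxing.
    return p * 1_000_000

def two_box_ev(p):
    # You always get the $1,000; box B is full only if Omega was wrong.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.50, 0.5005, 0.99):
    print(p, one_box_ev(p), two_box_ev(p))
```

Under these assumptions one-boxing overtakes two-boxing once p exceeds about 0.5005, which is why "a micron over 50%" and "a reliable predictor" lead to different answers.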
Newcomb's Problem and Regret of Rationality

Thank you. Depersonalising the question makes it easier for me to think about. If "do you take one box or two" becomes "should one take one box or two"... I am still confused. I'm confident that just box B should be taken, but I think that I need information that is implied to exist but is not presented in the problem to be able to give a correct answer. Namely the nature of the predictions Omega has made.

With the problem as stated I do not see how one could tell if Omega got lucky 100 times with a flawed system, or if it has a deterministic or causalit... (read more)

Mm. I'm not really understanding your thinking here.
Newcomb's Problem and Regret of Rationality

Thanks, that does help a little, though I should say that I am pretty sure I hold a number of irrational beliefs that I am yet to excise. "Omega literally implanted the idea into my head" is a different thought experiment from "Omega turned out to be predicting," which is different again from "Omega said that it predicted the result," etc. Until I know how and why I know it is predicting the result, I am not sure how I would act in the real case. How Omega told me that I was only allowed to pick boxes A and B or just B may or may not be helpful, but either way not as ... (read more)

Fair enough. For my own part, I find that I often act on my beliefs in a situation without stopping to consider what my basis for those beliefs is, so it's not too difficult for me to imagine acting on my posited beliefs about Omega's predictive ability while ignoring the question of where those beliefs came from. I simply accept, for the sake of the exercise, that I do believe it and act accordingly.

Another way of looking at it you might find helpful is to leave aside altogether the question of what I would or wouldn't do, and what I can and can't believe, and instead ask what the right thing to do would be were this the actual situation. E.g., if you give me a device that is indistinguishable from a revolver, but is designed in such a way that placing it to my temple and pulling the trigger doesn't put a bullet in my skull but instead causes Vast Quantities of Really Good Stuff to happen, the right thing to do is put the device to my temple and pull the trigger. I won't actually do that, because I have no way of knowing what the device actually does, but whether I do it or not, it's the right thing to do.
Newcomb's Problem and Regret of Rationality

The difficulty I am having here is not so much that the stated nature of the problem is not real as that it asks one to assume one is irrational. With a .999999999c spaceship, it is not irrational to assume one is in a trolley on a spaceship if one is in a trolley on a spaceship. There is not enough information in the Omega puzzle, as it assumes you, the person it drops the boxes in front of, know that Omega is predicting, but does not tell you how you know that. As the mental state 'knowing it is predicting' is fundamental to the puzzle, not k... (read more)

I agree that I can't imagine any justified way of coming to believe Omega has the properties that I am presumed to believe Omega to have. So, yes, the thought experiment either assumes that I've arrived at that state in some unjustified way (as you say, assume I'm irrational, at least sometimes) or that I've arrived at it in some justified way I currently have no inkling of (and therefore cannot currently imagine).

Assuming that I'm irrational sometimes, and sometimes therefore arrive at beliefs that aren't justified, isn't too difficult for me; I have a lot of experience with doing that. (Far more experience than I have with riding a trolley on a spaceship, come to that.) But, sure, I can see where people whose experience doesn't include that, or whose self-image rejects it regardless of their experience, or who otherwise have trouble imagining themselves arriving at beliefs that aren't rationally justified, might balk at that step.

If by "best choice" we mean the choice that has the best possible results, then in this case we either cannot make the best choice except by accident, or we always make the best choice, depending on whether the things that didn't in fact happen were possible before they didn't happen, which there's no particular reason to believe. If by "best choice" we mean the choice that has the highest expected value given what we know when we make it, then we make the best choice by evaluating what we know.
Newcomb's Problem and Regret of Rationality

Sorry, I'm new here; I am having trouble with the idea that anyone would consider taking both boxes in a real-world situation. How would this puzzle be modeled differently, versus how would it look different, if it were Penn and Teller flying Omega?

If Penn and Teller were flying Omega, then they would have been able to produce exactly the same results as seen, without violating causality, time travelling, or perfectly predicting people, by just cheating and emptying the box after you choose to take both.

Given that "it's cheating" is a significant... (read more)

Yeah, this comes up a lot. My usual way of approaching it is to acknowledge that the thought experiment is asking me to imagine being in a particular epistemic state, and then asking me for my intuitions about what I would do, and what it would be right for me to do, given that state. The fact that the specified epistemic state is not one I can imagine reaching is beside the point. This is common for thought experiments. If I say "suppose you're on a spaceship traveling at .999999999c, and you get in a trolley inside the ship that runs in the direction the ship is traveling at 10 m/s, how fast are you going?" it isn't helpful to reply "No such spaceship exists, so that condition can't arise." That's absolutely true, but it is beside the point.
It's simply one of the rules of the thought experiment. If you bring in the hypothesis that Omega is cheating, you are talking about a different thought experiment. That may be an interesting thought experiment in its own right, but it isn't the thought experiment under discussion, and the solution you are proposing to your thought experiment is not a solution to Newcomb's problem.
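Incidentally, the spaceship-trolley question does have a definite answer under special relativity: velocities combine as u = (v + w)/(1 + vw/c²), so the total stays below c. A minimal sketch, using the numbers from the comment:

```python
# Relativistic velocity addition for the trolley-on-a-spaceship aside.
c = 299_792_458.0        # speed of light, m/s
v = 0.999999999 * c      # ship's speed relative to an outside observer
w = 10.0                 # trolley's speed relative to the ship, m/s

u = (v + w) / (1 + v * w / c**2)   # combined speed seen from outside
print(u < c)   # True: still (just) below the speed of light
```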
[SEQ RERUN] The Dilemma: Science or Bayes?

I agree with the terms. For the sake of explanation, by "magical thinker" I was thinking along the lines of young, non-science-trained children, or people who have either no knowledge of or no interest in the scientific method. Ancient Greek philosophers could come under this label if they never experimented to test their ideas. The essence is that they theorise without testing their theory.

In terms of the task, my first idea was the marshmallow test from a TED lecture: "make the highest tower you can that will support a marshmallow on top from dry spagh... (read more)

At the risk of repeating myself... it depends on the properties of the marshmallow-test task. If it is such that my intuitions about it predict the actual task pretty well, then I should just start taping pasta. If it is such that my intuitions about it predict the task poorly, I might do better to study the system... although if there's a time limit, that might not be a good idea either, depending on the time limit and how quickly I study.
Why do people ____?

Good point, I do not, but I find it strange that people, myself included, practice at enjoying something when there are plenty of things that are enjoyable from the start. Especially when starting an acquired taste is often quite uncomfortable. I salute the mind that looked at a tobacco plant, smoked it, coughed its lungs out, and then kept doing it till it felt good.

Why do people ____?

Why do people take the time to develop "acquired tastes"? Somehow "that was an unpleasant experience" becomes "I will keep doing it until I like it."

My guess is social conditioning, but then how did it become popular enough for that to be a factor?

Do you expect everything that's possibly enjoyable to be enjoyable immediately?
Some things I decided to like (when I was young) in order to "be more grown up." (Liquor, coffee, classical music, opera) Or because cool people or people I admired were doing it (smoking a tobacco pipe, philosophy, math). Some things to add variety to my life, just like MixedNuts. For example, learning to appreciate and distinguish between different types of wine, teas, cheeses, classical music. Some because I thought they were good for me, so I might as well like them. (Yogurt, sushi)

I do it because I love variety and thus value having more possible pleasant experiences to have.

[SEQ RERUN] The Dilemma: Science or Bayes?

Well said. In considering your response I notice that a process P, as part of its cost E, has room to include the cost of learning the process if necessary, something that was concerning me.

I am now considering a more complicated case.

You are in a team of people of which you are not the team leader. Some of the team are scientists, some are magical thinkers, you are the only Bayesian.

Given an arbitrary task which can be better optimised using Bayesian thinking, is there a way of applying a "Bayes patch" to the work of your teammates so that they... (read more)

I expect it depends rather a lot on the nature of the problem, and on just what exactly we mean by "science," "magical thinking," and "Bayes". I find, thinking about your question, that I'm not really sure what you mean by these terms. Can you give me a more concrete example of what you have in mind? That is, OK, there's a team comprising A, B, and C. What would lead me to conclude that A is a "magical thinker", B is a "Bayesian," and C is a "scientist"?

For my own part, I would say that the primary difference has to do with how evidence is evaluated. For example, I would expect A, in practice, to examine the evidence holistically and arrive at intuitive conclusions about it, whereas I would expect B and C to examine the evidence more systematically. In a situation where the reality is highly intuitive, I would therefore expect A to arrive at the correct conclusion with confidence quickly, and B and C to confirm it eventually. In a situation where the reality is highly counterintuitive, I would expect A to arrive at the wrong conclusion with confidence quickly, while B and C become (correctly) confused.

For example, I would expect B and C, in practice, to try to set up experimental conditions under which all observable factors but two (F1 and F2) are held fixed, and F1 is varied and F2 measured and correlations between F1 and F2 calculated. In a situation where such conditions can be set up, and strong correlations are observed between certain factors, I would expect C to arrive at correct conclusions about causal links with confidence slowly, and B to confirm them even more slowly. In a situation where such conditions cannot be set up, or where no strong correlations are observed between evaluated factors, I would expect C to arrive at no positive conclusions about causal links, and B to arrive at weak positive conclusions about causal links.

Are these expectations consistent with what you mean by the terms?
[SEQ RERUN] The Dilemma: Science or Bayes?

Science is simple enough that you can sic a bunch of people on a problem with a crib sheet and an "I can do science, me" attitude, and get a good enough answer early. The mental toolkit for applying Bayes is harder to give to people. I am right at the beginning, approaching from a mentally lazy, slightly psychological, engineering background; the first time I saw the word Bayes was in a certain Harry Potter fanfic a week or so ago. I failed the insightful tests in the early sequences, and caught myself noticing I was confused and not doing anything ... (read more)

Sure. More generally, if I don't want to optimize X, but merely want to satisfy some threshold T for X, then I don't really care what the optimal way of doing X is in general; I care what way of doing X gets me across T most cheaply. If getting across T using process P1 costs effort E1 from where I am now, and P2 costs E2, and E2 > E1, and I don't care about anything else, I should choose P1.

The catch is, like a lot of humans, I also have a tendency to overestimate both the effectiveness of whatever I'm used to doing and the costs of changing to something else. So it's very easy for me to dismiss P2 on the grounds of an argument like the above even in situations where E1 > E2, or where it turns out that I do care about other things, or both.

There are some techniques that help with countering that tendency. For example, it sometimes helps to ask myself from time to time whether, if I were starting from scratch, I would choose P1 or P2. (E.g. "if I were learning to type for the first time, would I learn Dvorak or Qwerty?") Asking myself that question lets me at least consider which process I think is superior for my purposes, even if I subsequently turn around and ignore that judgment due to status-quo bias. That isn't great, but it's better than failing to consider which process I think is better.
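The threshold-versus-optimum point can be sketched as a satisficing rule: among the processes that clear the bar T, take the cheapest. The process names, effort numbers, and quality scale below are all made up for illustration.

```python
# Satisficing: pick the cheapest process whose quality clears threshold T,
# rather than the highest-quality process overall. All numbers illustrative.

def cheapest_satisficer(processes, threshold):
    good_enough = [p for p in processes if p["quality"] >= threshold]
    if not good_enough:
        return None
    return min(good_enough, key=lambda p: p["effort"])

processes = [
    {"name": "P1", "effort": 3, "quality": 7},  # familiar, cheap, decent
    {"name": "P2", "effort": 8, "quality": 9},  # better, but costlier to adopt
]

print(cheapest_satisficer(processes, 7)["name"])  # P1 clears T=7 more cheaply
print(cheapest_satisficer(processes, 8)["name"])  # only P2 clears T=8
```

Raising the threshold is what flips the answer: the status-quo process wins only as long as "good enough" really is good enough.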
Focus Your Uncertainty

Edit - I didn't read the premises correctly. I missed the importance of the bit "Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can't help you here—all three events are plausible. Fittability to your pet market theory doesn't tell you how to divide your time. There's an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited."

The t... (read more)

Sudeep Kumar · 5y
When unsure of which outcome you will need to excuse, what you are looking for is not for the "most likely to be needed" excuse to be "really good" but for every excuse to be "as good as possible." This might depend on the probability distribution across possible events. If the probability of all 3 outcomes is similar (33.3% each), it might make sense to make each excuse as good as possible. But when one of the outcomes is really likely (say 85%+), you can start to think about adopting the "most likely needed excuse to be really good" strategy. Playing too defensively might guarantee to save you from embarrassment no matter what, but you can consider being greedy too.
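One way to frame the two strategies is to treat the 100 minutes as a budget split across the outcomes; splitting in proportion to probability is the natural middle ground between "defensive" and "greedy". A minimal sketch using the 33.3% and 85% figures from the comment (the distributions are illustrative):

```python
# Split a fixed preparation budget across outcomes in proportion to
# their probabilities. Distributions here are illustrative.

def allocate(minutes, probs):
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return [minutes * p for p in probs]

print(allocate(100, [1/3, 1/3, 1/3]))     # near-uniform: polish all three excuses
print(allocate(100, [0.85, 0.10, 0.05]))  # skewed: most time on the favourite
```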