All of Manfred's Comments + Replies

This site isn't too active - maybe email someone from CFAR directly?

I've mailed CFAR ( []) or should I have mailed people directly?

Man, this interviewer sure likes to ask dense questions. Bostrom sort of responded to them, but things would have gone a lot smoother if LARB guy (okay, Andy Fitch) had limited himself to one or two questions at a time. Still, it's kind of shocking the extent to which Andy "got it," given that he doesn't seem to be specially selected - instead he's a regular LARB contributor and professor in an MFA program.

Hm, the format is interesting. The end product is, ideally, a tree of arguments, with each argument having an attached relevance rating from the audience. I like that they didn't try to use the pro and con arguments to influence the rating of the parent argument, because that would be too reflective of audience composition.


Infinity minus one isn't smaller than infinity, so subtraction isn't useful in that way.

The thing being added or subtracted is not the mere number of hypotheses, but a measure of the likelihood of those hypotheses. We might suppose an infinitude of mutually exclusive theories of the world, but most of them are extremely unlikely - for any degree of unlikeliness, there are an infinity of theories less likely than that! A randomly-chosen theory is so unlikely to be true, that if you add up the likelihoods of every single theory, they add up to a number less than in... (read more)

This was talking about set sizes, which is what I replied about. You can't quantify your fallibility in the sense of knowing how likely you are to be mistaken in an unexpected way. That's not possible.

I think this neglects the idea of "physical law," which says that theories can be good when they capture the dynamics and building-blocks of the world simply, even if they are quite ignorant about the complex initial conditions of the world.

Sure. This is true of all maps and models. As simple as possible, but no simpler. That simplicity ALWAYS comes with a loss of fidelity to the actual state of the universe.

Can't this be modelled as uncertainty over functional equivalence (or over input-output maps)?

Hm, that's an interesting point. Is what we care about just the brute input-output map? If we're faced with a black-box predictor, then yes, all that matters is the correlation even if we don't know the method. But I don't think any sort of representation of computations as input-output maps actually helps account for how we should learn about or predict this correlation - we learn and predict the predictor in a way that seems like updating a distribution over... (read more)

Interesting that resnets still seem state of the art. I was expecting them to have been replaced by something more heterogeneous by now. But I might be overrating the usefulness of discrete composition because it's easy to understand.

Plausibly? LW2 seems to be doing okay, which is gonna siphon off posts and comments.

The dust probably is just dust - the preferential scattering of blue light over red is the same reason the sky is blue and the sun looks red at sunset (Rayleigh scattering / Mie scattering). It comes from scattering off of particles smaller than a few times the wavelength of the light - so if visible light is being scattered less than UV, we know that lots of the particles are of size smaller than ~2 µm. This is about the size of a small bacterium, so dust with interesting structure isn't totally out of the question, but still... it's probably just dust.
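As a sanity check on that wavelength dependence: Rayleigh scattering strength goes as 1/λ⁴, so the blue/red contrast is easy to estimate. A minimal sketch (the 450 nm and 650 nm values are just illustrative choices, not from the comment):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4,
# so shorter (bluer) wavelengths scatter much more strongly.
def rayleigh_ratio(lambda_a_nm: float, lambda_b_nm: float) -> float:
    """How much more strongly wavelength a scatters than wavelength b."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Blue (~450 nm) vs red (~650 nm) light:
print(f"blue/red scattering ratio: {rayleigh_ratio(450, 650):.2f}")
```

That factor of roughly four is why scattered skylight looks blue while the directly transmitted light at sunset looks red.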

I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It's perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if they matched the behavior of a flesh and blood human almost perfectly, and could output to you via text channel and output things like &qu... (read more)

The working of a computer is not unimaginably complicated. Its basis is quite straightforward really. As I said in my answer to MrMind below “As Searle points out the meaning of zeros, ones, logic gates etc. is observer relative in the same way money (not the paper, the meaning) is observer relative and thus ontologically subjective. The electrons are indeed ontologically objective but that is not true regarding the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this [] (from 9:12).”. In our debate I am holding the position that there can not be a simulation of consciousness using the current architectural basis of a computer. Searle has provided a logical argument. In my quotes above I show that the state of neuroscience does not point towards a purely digital brain. What is your evidence?

Neat paper about the difficulties of specifying satisfactory values for a strong AI. h/t Kaj Sotala.

The design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. [] Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results.

... (read more)
Note that these three things (standing, measurement, and aggregation) are unsolved for human moral decisionmaking as well.

Yeah, whenever you see a modifier like "just" or "merely" in a philosophical argument, that word is probably doing a lot of undeserved work.

I don't, and maybe you've already been contacted, but you could try contacting him on social sites like this one (user paulfchristiano) and Medium, etc. Typical internet stalking skillset.

Ah, you mean to ask if the brain is special in a way that evades our ability to construct an analogy of the Chinese room argument for it? E.g. "our neurons don't individually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry, therefore there is nothing in my body that understands English."

I think such an argument is a totally valid imitation. It doesn't necessarily bear on the Chinese room itself, which is a more artificial case, but it certainly applies to AI in general.

Hmm... I do not think that is what I mean, no. I lean towards agreeing with Searle's conclusion but I am examining my thought process for errors. Searle's argument is not that consciousness is not created in the brain. It is that it is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates etc.) as the AI community thought (and thinks). He does not deny that we might discover the architecture of the brain in the future. All he does is demonstrate through analogy how syntactic operations work. In the Chinese gym rebuttal the issue is not really addressed. There is no denial by Searle that the brain is a system, with subcomponents, through which structure consciousness emerges. That is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation. Since the neuroscience does not support the digital information processing view, where is the certainty of the community coming from? Am I missing something fundamental here?
"our neurons don't indiviually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry" The question is what the word "just" means in that sentence. Ordinarily it means to limit yourself to what is said there. The implication is that your behavior is explained by those simple laws, and not by anything else. But as I pointed out recently, having one explanation does not exclude others. So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals, or in other ways. In other words, the argument is false because the word "just" here implies something false.

You say impressions, but I'm assuming this is just the "things I want changed" thread :)

Vote button visibility and responsiveness is a big one for me. Ideally, it should require one click, be disabled while it messages the server, and then change color much more clearly.

On mobile, the layout works nicely, but load / render times are too long (how much javascript is necessary to serve text? Apparently, lots) and the text formatting buttons take up far too much space.

First time, non-logged in viewers should probably not see the green messaging blob... (read more)

It took me some time to notice that the up-down buttons are not for some kind of chapter back/forth navigation but for voting...

Well, it really is defined that way. Before doing math, it's important to understand that entropy is a way of quantifying our ignorance about something, so it makes sense that you're most ignorant when (for discrete options) you can't pick out one option as more probable than another.

Okay, on to using the definition of entropy as the sum over event-space of -P log(P) for all the events. E.g. if you only had one possible event, with probability 1, your entropy would be -1·log(1) = 0. Suppose you had two events with different probabilities. If you changed the ... (read more)
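A minimal sketch of that definition in Python (base-2 logs, so entropy comes out in bits):

```python
import math

def entropy(probs):
    """Shannon entropy: -sum of P * log2(P) over all events."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0]))        # one certain event: zero entropy
print(entropy([0.5, 0.5]))   # maximal ignorance over two events: 1 bit
print(entropy([0.9, 0.1]))   # lopsided distribution: less than 1 bit
```

As the comment says, the uniform distribution maximizes this: shifting probability mass toward one option always lowers the entropy.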

Moderation is basically the only way, I think. You could try to use fancy pagerank-anchored-by-trusted-users ratings, or make votes costly to the user in some way, but I think moderation is the necessary fallback.

Goodhart's law is real, but people still try to use metrics. Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.

Which is why there should be a way to vote on users, not content; the quantity of unevaluated content shouldn't divide the signal. This would matter if the primary mission succeeds and there is actual conversation worth protecting.
People use name recognition in practice, works pretty well.

The only thing I don't like about the "2017 feel" is that it sometimes feels like you're just adrift in the text, with no landmarks. Sometimes you just want guides to the eye, and landmarks to keep track of how far you've read!

I haven't run into that problem, but I'm reading from my phone, and Chrome tracks where I've scrolled to.

I also agree that HPMOR might need to go somewhere other than the front page. From a strategic perspective, I somehow want to get the benefits of HPMOR existing (publicity, new people finding the community) without the drawbacks (it being too convenient to judge our ideas by association).

I am somewhat conflicted about this. HPMOR has been really successful at recruiting people to this community (HPMOR is the path by which I ended up here), and according to last year's survey about 25% of people who took the survey found out about LessWrong via HPMOR. I am hesitant to hide our best recruitment tool behind trivial inconveniences. One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

I think votes have served several useful purposes.

Downvotes have been a very good way of enforcing the low-politics norm.

When there's lots of something, you often want to sort by votes, or some ranking that mixes votes and age. Right now there aren't many comments per thread, but if there were 100 top-level comments, I'd want votes. Similarly, as a new reader, it was very helpful to me to look for old posts that people had rated highly.
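For concreteness, a ranking that mixes votes and age often looks something like the Hacker News-style formula below. This is purely illustrative, not anything this site actually uses, and the gravity exponent 1.8 is an arbitrary choice:

```python
def rank_score(votes: int, age_hours: float, gravity: float = 1.8) -> float:
    """Score that rewards votes but decays as the post ages."""
    return votes / (age_hours + 2) ** gravity

# A fresh post with a few votes can outrank an old post with many:
fresh = rank_score(votes=3, age_hours=1)
old = rank_score(votes=100, age_hours=48)
print(fresh > old)  # True
```

Tuning the gravity exponent trades off freshness against accumulated votes: higher gravity pushes old posts down faster.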

How are you going to prevent gaming the system and collusion?

Goodhart's law: you can game metrics, you can't game targets. Quality speaks for itself.

And what if the universe is probably different for the two possible copies of you, as in the case of the Boltzmann brain? Presumably you have to take some weighted average of the "non-anthropic probabilities" produced by the two different universes.

Re: note. This use of SSA and SIA can also be wrong. If there is a correct method for assigning subjective probabilities to what S.B. will see when she looks at outside, it should not be an additional thing on top of predicting the world, it should be a natural part of the process by which S.B. predict... (read more)

That's not quite what I was talking about, but I managed to resolve my question to my own satisfaction anyhow. The problem of conditionalization can be worked around fairly easily.

Suppose that there is a 50% chance of there being a Boltzmann brain copy of you

Actually, the probability that you should assign to there being a copy of you is not defined under your system - otherwise you'd be able to conceive of a solution to the sleeping beauty problem - the entire schtick is that Sleeping Beauty is not merely ignorant about whether another copy of her exist... (read more)

Non-anthropic ("outside observer") probabilities are well defined in the sleeping beauty problem - the probability of heads/tails is exactly 1/2 (most of the time, you can think of these as the SSA probabilities over universes - the only difference being in universes where you don't exist at all). You can use a universal prior or whatever you prefer; the "outside observer" doesn't need to observe anything or be present in any way. I note that you need these initial probabilities in order for SSA or SIA to make any sense at all (pre-updating on your existence), so I have no qualms claiming them for ADT as well.

Moral value is not an "intrinsic property" of a mathematical structure - aliens couldn't look at this mathematical structure and tell that it was morally important. And yet, whenever we compute something, there is a corresponding abstract structure. And when we reason about morality, we say that what is right wouldn't change if you gave us brain surgery, so by morality we don't mean "whatever we happen to think," we mean that abstract structure.

Meanwhile, we are actual evolved mammals, and the reason we think what we do about morality i... (read more)

I was making a comment on the specific points of dogiv, but the discussion is about trying to discover whether morality 1) has an objective basis or is completely relative, and 2) has a rational/computational basis or not. Is it that you don't care about approaching truth on this matter, or that you believe you already know the answer? In any case my main point is that Jordan Peterson's perspective is (in my opinion) the most rational, cohesive and evidence-supported one available, and I would love to see the community taking the time to study it, understand it and try to dispute it properly. Nevertheless, I know not everyone has the time for that, so if you expand on your perspective on this 'abstract structure' and its basis we can debate :)

Since we are in the real world, it is a possibility that there is a copy of me, e.g. as a Boltzmann brain, or a copy of the simulation I'm in.

Does your refusal to assign probabilities to these situations infect everyday life? Doesn't betting on a coin flip require conditioning on whether I'm a Boltzmann brain, or am in a simulation that replaces coins with potatoes if I flip them? You seem to be giving up on probabilities altogether.

Suppose that there is a 50% chance of there being a Boltzmann brain copy of you - that's fine, that is a respectable probability. What ADT ignores are questions like "am I the Boltzmann brain or the real me on Earth?" The answer to that is "yes. You are both. And you currently control the actions of both. It is not meaningful to ask 'which' one you are." Give me a preference and a decision, and that I can answer, though. So the answer to "what is the probability of being which one" is "what do you need to know this for?"

This seems like a question about aesthetics - this choice won't change my experience, but it will change what kind of universe I live in. I think I'd choose duplication - I put a pretty low value on tiling the universe with conscious experience, but it's larger than zero.

I totally agree. Perceived differences in kind here are largely due to the different methods we use to think about these things.

For the triangle, everybody knows what a triangle is, we don't even need to use conscious thought to recognize them. But for the key, I can't quite keep the entire shape in my memory at once, if I want to know if something is shaped like my front door key, I have to compare it to my existing key, or try it in the lock.

So it naively seems that triangleness is something intrinsic (because I perceive it without need for thought), whi... (read more)

What is the analogue of a sum that you're thinking about? Ignoring how the little pieces are defined, what would be a cool way to combine them? For example, you can take the product of a series of numbers to get any number; that's pretty cool. And then you can convert a series to a continuous function by taking a limit, just like an integral, except rather than the limit going to really small pieces, the limit goes to pieces really close to 1.

You could also raise a base to a series of powers to get any number, then take that to a continuous limit to get an in... (read more)
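For what it's worth, the "product with pieces close to 1" limit has a standard form, the product integral, which reduces to an ordinary integral of the logarithm (a sketch of the usual construction, not something the comment commits to):

```latex
% Partition [a,b] into steps of width \Delta x, multiply factors
% f(x_i)^{\Delta x}, and take the limit as the pieces shrink:
\prod_a^b f(x)^{dx}
  \;=\; \lim_{\Delta x \to 0}\, \prod_i f(x_i)^{\Delta x}
  \;=\; \exp\!\left( \int_a^b \ln f(x)\, dx \right)
```

So a product over pieces close to 1 is exactly an exponentiated integral, which is why the base-raised-to-powers version in the comment above lands in the same place.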

Someone has probably thought of this already, but if we defined an integration analogue where larger and larger logarithmic sums cause their exponentiated, etc. value to approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: Each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won't be exhaustive in some absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We can then assign various differential formulas to different classes of states of affairs. (That is the context in which this came up. The specific situation is more technically convoluted.)

In problems where multiple different agents in the universe "could be you," (i.e. share information), you really don't have to do anything fancy. Just assign equal probability to all agents in the entire universe who, as far as you know, can match your current state of information.

If there are two copies of Earth, and, hypothetically, only these two copies of me in the universe, I assign 50% probability to being each. This stays the same whether these Earths are at different points in space, or at different times, or sped up or slowed down or pla... (read more)

I will be frank. This sounds like a lame deal for anyone who takes you up on the offer. "My physics is shit, but I have a great idea for a new theory of gravity. PM me if you are a professional physicist and want to coauthor a paper." "My writing is shit, but I have a clever idea for a story and would like someone to write it for me."

First you should do 90% of the work you know about; then maybe you can find a professional to do the last 10% plus the things you didn't know about. Read the relevant philosophy! Go read Wikipedia, read ... (read more)

Across the frozen sea around most of Antarctica, even in the summertime?

I'm not sure if you're actually curious, or if you think this is a "gotcha" question.

Here's a picture. As the glacier flows outward (here's measured flow rates), it begins floating on the sea and becomes an ice shelf, which then loses mass to the ocean through melting and breaking up into pieces, which then melt. This ice shelf is thick (100m - 1 km scale), because it's a really thick sheet of ice being pushed out into the water by gravity. It then encounters the sea ice, wh... (read more)

That picture is silly. The deep-cold freshwater continental ice flowing into the ocean and melting there in the icy waters, but the 1-4 meters thick salty ice survives the Antarctic summer? Actually, there are a few places on Antarctica, where glaciers flow into the ocean, but not very fast at all. And where is the heat to melt -40 degrees cold ice, 2000 cubic kilometers per summer? It is not only the question of the heat but the question of the heat transfer. I think, most people still believe that picture anyway. Most people here, I guess, too.

If the glacier is flowing off of the continent into the sea, then sea ice is in an equilibrium between melting at the edges and bottom and being replenished at the middle.

demands huge melting we don't see

"See" how? It seems to me that you don't have an involved understanding of the melting of glaciers. If we could measure the mass of the Antarctic glacier straightforwardly, then I'm sure we'd agree on the meaning of changes in that mass. But if we don't see the particular melting process you expect, perhaps you're just expectung the wrong pro... (read more)

Across the frozen sea around most of Antarctica, even in the summertime? No conspiracy, I agree. Some lack of basic arithmetic skills only.


Glaciers don't have to form icebergs in order to melt. A glacier can just melt where it meets the sea.

Almost 3 Amazons still missing for the 6 meters sea rise in a century

You know, now that you mention it, 6 meters sure is a lot. Where did you get that number from? See p. 1181 for IPCC projections.

How many liters per meter per second in icy waters? After the sea ice has already melted away? Which it never does in most places? Told you, An Inconvenient Truth by Al Gore. Much smaller numbers, popular now, still demand huge melting we don't really see.

How about glacial flow? Ice doesn't move fast, but it does move. It can postpone melting until it's in contact with seawater. What do you think the ratio of mass moved by rivers vs. glaciers is in Antarctica?

A solid state river, promptly melting in the icy, ice covered ocean, is even less plausible than a large watery river. Don't you think so?
That's about 0.4 Amazon. The precipitation alone compensates for most of this. Almost 3 Amazons are still missing for the 6 meters of sea rise in a century. Besides ... 10 million icebergs per year? Over a few summer months? Highly unrealistic.

Where and how some people see three Amazons on Antarctica, is a mystery to me. The amount of ice falling directly into the sea, is quite pathetic, as well.

The Amazon begins distributed across Brazil, as occasional drops of rain. Then it comes together because of the shape and material of the landscape, and flows into streams, which join into rivers, which feed one big river. If global warming is causing Antarctica to lose mass, do you expect the same thing to happen in Antarctica, with meltwater beginning distributed across the surface, and then collecting into rivers and streams?

Yes. How else could it be?

Why do we care about acausal trading with aliens to promote their acting with "moral reflection, moral pluralism," etc.?

Thanks for the comment!

W.r.t. moral reflection: Probably many agents put little intrinsic value on whether society engages in a lot of moral reflection. However, I would guess that as a whole the set of agents having a similar decision mechanism as I have do care about this significantly and positively. (Empirically, disvaluing moral reflection seems to be rare.) Hence, (if the basic argument of the paper goes through) I should give some weight to it.

W.r.t. moral pluralism: Probably even fewer agents care about this intrinsically. I certainly don't care about it intrinsically. The idea is that moral pluralism may avoid conflict or create gains from "trade". For example, let's say the aggregated values of agents with my decision algorithm contain two values A and B. (As I argue in the paper, I should maximize these aggregated values to maximize my own values throughout the multiverse.) Now, I might be in some particular environment with agents who themselves care about A and/or B. Let's say I can choose between two distributions of caring about A and B: either each of the agents cares about A and B, or some care only about A and the others only about B. The former will tend to be better if I (or rather the set of agents with my decision algorithm) care about A and B, because it avoids conflicts, makes it easier to exploit comparative advantages, etc.

Note that I think neither promoting moral reflection nor promoting moral pluralism is a strong candidate for a top intervention. Multiverse-wide superrationality just increases their value relative to what, say, a utilitarian would think about these interventions. I think it's a lot more important to ensure that AI uses the right decision theory. (Of course, this is important anyway, but I think multiverse-wide superrationality drastically increases its value.)

I think writing something like this is a bit like a rite of passage. So, welcome to LW :P

When we talk about someone's values, we're using something like Dan Dennett's intentional stance. You might also enjoy this LW post about not applying the intentional stance.

Long story short, there is no "truly true" answer to what people want, and no "true boundary" between person and environment, but there are answers and boundaries that are good enough for what people usually mean.

Thanks so much for replying! I'm still reading about Dan Dennett's intentional stance now, so I won't address that right now. But in terms of /not/ applying the intentional stance, I think we can be considered different from the "blue minimizer", since the blue minimizer assumes it has no access to its source code - we do actually have access to our source code, so we can see what laws govern us. Since we "want" to do things, we should be able to figure out why we "want" anything, or really, why we "do" anything.

To be clear, are you saying that instead of the equations being X="good points" and Y="good points" and the law being "maximize good points", the law might just be DO X and Y? If so, I still don't think things like "survival" and "friendship" are terminal values or laws of the form "SURVIVE" and "MAKE FRIENDS". When these two are in conflict we are still able to choose a course of action, therefore there must be some lower-level law that determines the thing we "want" to do (or more accurately, just do, if you don't want to assign intention to people).

I also want to address your point that there are answers and boundaries good enough for what people usually mean - I think what we should really be going for is "answers and boundaries good enough to get what we really /want/." A common model of humans in this community is of somewhat effective optimizers over a set of terminal values; if that's really true, then in order to optimize our terminal value(s) we should be trying to know them, and as I said, I think the current idea that we can have multiple changeable terminal values contradicts the definition of a terminal value.

Well, if the acronym "POMDP" didn't make any sense, I think we should start with a simpler example, like a chessboard.

Suppose we want to write a chess-playing AI that gets its input from a camera looking at the chessboard. And for some reason, we give it a button that replaces the video feed with a picture of the board in a winning position.

Inside the program, the AI knows about the rules of chess, and has some heuristics for how it expects the opponent to play. Then it represents the external chessboard with some data array. Finally, it has some... (read more)

To our best current understanding, it has to have a model of the world (e.g. as a POMDP) that contains a count of the number of paperclips, and that it can use to predict what effect its actions will have on the number of paperclips. Then it chooses a strategy that will, according to the model, lead to lots of paperclips.

Such an AI won't want to fool itself because, according to basically any model of the world, fooling yourself does not result in more paperclips.

"according to basically any model of the world, fooling yourself does not result in more paperclips." Paul Almond at one time proposed that every interpretation of a real thing is a real thing. According to that theory, fooling yourself that there are more paperclips does result in more paperclips (although not fooling yourself also has that result.)
But what does the code for that look like? It looks like maximize(# of paperclips in world), but how does it determine (# of paperclips in world)? You just said it has a model. But how can it distinguish between real input that leads to the perception of paperclips and fake input that leads to the perception of paperclips?

I think I don't know the solution, and if so it's impossible for me to guess what he thinks if he's right :)

But maybe he's thinking of something vague like CIRL, or hierarchical self-supervised learning with generation, etc. But I think he's thinking of some kind of recurrent network. So maybe he has some clever idea for unsupervised credit assignment?

Cool insight. We'll just pretend a constant density of 3M/(4πR³).

This kind of integral shows up all the time in E and M, so I'll give it a shot to keep in practice.

You simplify it by using the law of cosines, to turn the vector subtraction 1/|r-r'|^2 into 1/(|r|^2+|r'|^2-2|r||r'|cos(θ)). And this looks like you still have to worry about integrating two things, but actually you can just call r' due north during the integral over r without loss of generality.

So now we need to integrate 1/(r^2+|r'|^2-2r|r'|cos(θ)) r^2 sin(θ) dr dφ dθ. First take your free 2π from... (read more)
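The polar-angle piece of that integral goes through with the substitution u = cos θ (writing the law-of-cosines denominator with its conventional minus sign):

```latex
\int_0^\pi \frac{\sin\theta\, d\theta}{r^2 + r'^2 - 2 r r' \cos\theta}
  = \int_{-1}^{1} \frac{du}{r^2 + r'^2 - 2 r r' u}
  = \frac{1}{2 r r'} \ln\!\frac{(r+r')^2}{(r-r')^2}
  = \frac{1}{r r'} \ln\!\frac{r+r'}{\lvert r-r'\rvert}
```

Note the logarithm blows up as r' → r, which is exactly the divergence discussed in the reply below.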

EDIT: On second thoughts most of the following is bullshit. In particular, the answer clearly can't depend logarithmically on R. I had a long train journey today so I did the integral! And it's more interesting than I expected, because it diverges! I got the answer (GM^2/R^2)(9/4)(log(2)-43/12-log(0)). Of course I might have made a numerical mistake somewhere, in particular the number 43/12 looks a bit strange. But the interesting bit is the log(0). The divergence arises because we've modelled matter as a continuum, with parts of it getting arbitrarily close to other parts. To get an exact answer we would have to look at how atoms are actually arranged in matter, but we can get a rough answer by replacing the 0 in log(0) by r_min/R, where r_min is the average distance between atoms. In most molecules the bond spacing is somewhere around 0.1 nm. So r_min ~ 10^-10 m, and R = 6.37*10^6 m, so log(r_min/R) ~ -38.7, which is more significant than the log(2)-43/12 = -2.89. So we can say that the total is about 38.7*(9/4)*GM^2/R^2, which is 87GM^2/R^2 or 5.1*10^27. [But after working this out I suddenly got worried that some atoms get even closer than that. Maybe when a cosmic ray hits the earth it does so with such energy that it gets really really close to another nucleus, and then the gravitational force between them dominates the rest of the planet put together. Well, the strongest cosmic ray on record is the Oh-My-God particle [] with energy 48 J. So it would have produced a spacing of about ħc/48, which is about 6.6×10^-28 m. But the mass of a proton is about 10^-27 kg, so Gm^2/r^2 is about G, and this isn't as significant as I feared.]

To spell it out:

Beauty knows that the limiting frequency (which, when known, is equal to the probability) of the coin flips she sees right in front of her will be equal to one-half. That is, if you repeat the experiment many times (plus a little noise to determine coin flips), then you get equal numbers of the events "Beauty sees a fair coin flip and it lands Heads" and "Beauty sees a fair coin flip and it lands Tails." Therefore Beauty assigns 50/50 odds to any coin flip she actually gets to see.

You can make an analogous argument from symm... (read more)

According to SSA beauty should update credence of H to 2/3 after learning it is Monday.

I always forget what the acronyms are. But the probability of H is 1/2 after learning it's Monday, and any method that says otherwise is wrong, exactly by the argument that you can flip the coin on Monday right in front of SB, and if she knows it's Monday and thinks it's not a 50/50 flip, her probability assignment is bad.
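This frequency argument is easy to check with a quick simulation, under the standard protocol (one awakening on Heads, two on Tails) — a sketch, not anyone's endorsed model:

```python
import random

def simulate(trials=100_000, seed=0):
    """Count Heads-frequency among Monday awakenings vs. all awakenings."""
    rng = random.Random(seed)
    monday_heads = monday_total = 0
    awake_heads = awake_total = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        days = ["Mon"] if heads else ["Mon", "Tue"]
        for day in days:
            awake_total += 1
            awake_heads += heads
            if day == "Mon":
                monday_total += 1
                monday_heads += heads
    return monday_heads / monday_total, awake_heads / awake_total

mon_frac, awake_frac = simulate()
print(round(mon_frac, 2))    # ≈ 0.5: given it's Monday, Heads half the time
print(round(awake_frac, 2))  # ≈ 0.33: over all awakenings, Heads a third of the time
```

Both camps agree on these frequencies; the dispute is over which one "credence on awakening" should track.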

Yes, that's why I think to this day Elga's counter argument is still the best.
I don't see any argument there.

He proposes the coin toss could happen after the first awakening. Beauty's answer ought to remain the same regardless of the timing of the toss. A simple calculation tells us his credence of H must be 1/3. As SSA dictates, this is also Beauty's answer. Now Beauty is predicting that a fair coin toss yet to happen will most likely land on T. This supernatural predicting power is conclusive evidence against SSA.

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now... (read more)

Thank you for the reply. I really appreciate it since it reminds me that I have made a mistake in my argument. I didn't say SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present and *future*). I think Elga's argument is that Beauty's credence should not depend on the exact time of the coin toss. It seems reasonable to me since the experiment can be carried out exactly the same way no matter whether the coin is tossed on Sunday or Monday night. According to SSA, Beauty should update her credence of H to 2/3 after learning it is Monday. If you think Beauty should give 1/2 if she finds out the coin is tossed on Monday night, then her answer would depend on the time of the coin toss, which to me seems a rather weak position. Regarding a betting-odds argument: I have given a frequentist model in part I which uses betting odds as part of the argument. In essence, Beauty's break-even odds are at 1/2 while the selector's are at 1/3, which agrees with their credences.
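For what it's worth, the SSA update described here is a two-line Bayes computation — a sketch of the standard SSA bookkeeping, not an endorsement of it:

```python
def ssa_credence_given_monday(p_heads=0.5):
    # Under SSA, "which awakening am I?" is uniform over the awakenings
    # in one's own world: P(Monday | H) = 1 (the only awakening),
    # P(Monday | T) = 1/2 (one of two awakenings).
    p_mon_h, p_mon_t = 1.0, 0.5
    num = p_heads * p_mon_h
    return num / (num + (1 - p_heads) * p_mon_t)

print(ssa_credence_given_monday())  # 2/3
```

Starting from the halfer's P(H) = 1/2, conditioning on "it is Monday" pushes the credence up to 2/3 — which is exactly the step the parent comments are disputing.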

Sorry for the slow reply.

The 8 rooms are definitely the unbiased sample (of your rooms with one red room subtracted).

I think you are making two mistakes:

First, I think you're too focused on the nice properties of an unbiased sample. You can take an unbiased sample all you want, but if we know information in addition to the sample, our best estimate might not be the average of the sample! Suppose we have two urns, urn A has 10 red balls and 10 blue balls, while urn B has 5 red balls and 15 blue balls. We choose an urn by rolling a die, such that we have a 5... (read more)

No problem, always good to have a discussion with someone serious about the subject matter. First of all, you are right: statistical estimation and expected value in Bayesian analysis are different. But that is not what I'm saying. What I'm saying is that in a Bayesian analysis with an uninformed (uniform) prior, the case with the highest probability should be the unbiased statistical estimate (it is not always so because of round-offs etc.). In the two-urns example, I think what you meant is that using the sample of 4 balls, a fair estimate would be 5 reds and 15 blues as in the case of B, but Bayesian analysis would give A as more likely? However, this disagreement is due to the use of an informed prior: you already know we are more likely to draw from A right from the beginning. Without knowing this, Bayesian analysis would give B as the most likely case, the same as the statistical estimate.

Definitely something smaller than 100%. Just because Beauty thinks r=81 is the most likely case doesn't mean she thinks it is the only case. But that is not what the estimation is about. Maybe this question would be more relevant: if after opening 8 doors they are all red and Beauty has to guess R, what number should she guess (to be most likely correct)?
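The two-urns disagreement is easy to make concrete. A sketch with stated assumptions (since the earlier comment is truncated, I'm assuming draws with replacement, a sample of 1 red and 3 blue, and an illustrative 5/6 prior on urn A for the "informed" case — none of these specifics are from the original comments):

```python
from math import comb

def posterior_a(prior_a, n_red, n_blue, p_red_a=0.5, p_red_b=0.25):
    """Posterior probability of urn A (10 red / 10 blue) vs. urn B
    (5 red / 15 blue) after drawing, with replacement, n_red red
    and n_blue blue balls."""
    n = n_red + n_blue
    like_a = comb(n, n_red) * p_red_a**n_red * (1 - p_red_a)**n_blue
    like_b = comb(n, n_red) * p_red_b**n_red * (1 - p_red_b)**n_blue
    num = prior_a * like_a
    return num / (num + (1 - prior_a) * like_b)

# Uniform prior: a sample of 1 red, 3 blue favors urn B,
# matching the "statistical estimate".
print(posterior_a(0.5, 1, 3))    # ≈ 0.372, i.e. P(B) ≈ 0.628
# An informed 5/6 prior on A flips the conclusion.
print(posterior_a(5 / 6, 1, 3))  # ≈ 0.748
```

So whether the Bayesian answer matches the frequentist point estimate does come down to the prior, as the comment says.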

The HoTT book is pretty readable, but I'm not in a position to evaluate its actual goodness.

In your example, I think Bob is doing something unrelated to rationalist Taboo.

In the actual factual game of Taboo, you replace a word with a description that is sufficient to tell your team what the original word is. In rationalist Taboo, you replace a word with a description that is sufficient to convey the ideas you were trying to convey with the original word.

So if Bob tries to taboo "surprise" as "the feeling of observing a low-probability event," and Alice says "A license plate having any particular number is low p... (read more)

And yet, people, when giving examples of selfishness, don't just sample the entirety of human behavior. They point out a specific sort of behavior. Or when naming optimization functions, they might call one function "greedy," even though all functions tautologically do what they do. So clearly people have some additional criteria for everyday use of the word not captured by the extremely simple definition in this post.

First, I checked out the polling data on interracial marriage. Every 10 years the approval rating has gone up by ~15 percentage points. I couldn't find a concise presentation of the age-segregated data from now vs. in the past, but 2007 and 1991 were available, and they look consistent with over 80% of the opinion change being due to old people dying off. This surprised me, I expected to see more evidence of people changing their mind.

Now look at gay marriage. It's gained ~18 points per 10 years. This isn't too different from 15, so maybe this is peopl... (read more)

"Refute" is usually not an objective thing - it's a social thing. You can probably prove to yourself that pi=3 is false, but if you write "pi=3" on a sheet of paper, no argument will make the ink rearrange itself to be correct.

This is one of the problems with a falsificationist idea of scientific progress, where we never prove theories true but make progress by proving them false. If evidence against a theory appears (e.g. the ability to see different stars from different parts of the earth might be thought of as "refuting" th... (read more)

Criticism is a much wider concept than falsification. You can criticise a theory for having too many patches to work around apparent problems.