Not sure if this is what KevinGrant was referring to, but this article discusses the same phenomenon:

http://rosettaproject.org/blog/02012/mar/1/language-speed-vs-density/

You say you are rejecting Von Neumann utility theory. Which axiom are you rejecting?

https://en.wikipedia.org/wiki/Von_Neumann–Morgenstern_utility_theorem#The_axioms


The axiom of independence. I did mention this in the post.


The last time this came up, the answer was:
This is, as pointed out there, not one of the axioms.


I think this is pretty cool and interesting, but I feel compelled to point out that all is not as it seems:

It's worth noting, though, that only the evaluation function is a neural network. The search, while no longer iteratively deepening, is still recursive. Also, the evaluation function is not a pure neural network: it includes a static exchange evaluation.

It's also worth noting that doubling the amount of computing time usually increases a chess engine's rating by about 60 points. International masters usually have a rating below 2500. Though this is sketc...


Although it's not *better* than existing solutions, it's a cool example of how good results can be achieved in a relatively automatic way - by contrast, the evaluation functions of the best chess engines have been carefully engineered and fine-tuned over many years, at least sometimes with assistance from people who are themselves master-level chess players. On the other hand this neural network approach took a relatively short time and could have been applied by someone with little chess skill.

edit: Reading the actual paper, it does sound like a certain amo...


Just to clarify: I feel that what you're basically saying is that often what is called the base-rate fallacy is actually the result of P(E|!H) being too high.

I believe this is why Bayesians usually talk not in terms of P(H|E) but instead use Bayes Factors.

Basically, to determine how strongly ufo-sightings imply ufos, don't just look at P(ufos | ufo-sightings). Instead, look at P(ufo-sightings | ufos) / P(ufo-sightings | no-ufos).

This ratio is the Bayes factor.
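As a minimal sketch (with made-up numbers, purely for illustration): the Bayes factor is the likelihood ratio P(E|H) / P(E|!H), and multiplying the prior odds by it gives the posterior odds.

```python
# All numbers here are hypothetical, chosen only to illustrate the mechanics.
p_sightings_given_ufos = 0.8     # P(E|H): how often sightings occur if ufos exist
p_sightings_given_no_ufos = 0.2  # P(E|!H): hoaxes, weather balloons, etc.

# Bayes factor = likelihood ratio P(E|H) / P(E|!H)
bayes_factor = p_sightings_given_ufos / p_sightings_given_no_ufos

# Posterior odds = prior odds * Bayes factor
prior_odds = 1e-6                # hypothetical prior odds in favor of ufos
posterior_odds = prior_odds * bayes_factor

print(bayes_factor)              # 4.0: the sightings multiply the odds by 4
print(posterior_odds)
```

Note that even a sizable Bayes factor leaves the posterior odds tiny when the prior odds are tiny, which is exactly the base-rate point.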


Thank you for your feedback.
Yes, I'm aware of likelihood ratios (and they're awesome, especially for log-odds). An earlier draft of this post ended at "the correct method for answering this query involves imagining world-where-H-is-true, imagining world-where-H-is-false and comparing the frequency of E between them", but I decided against it. And well, if some process involves X and Y, then it is correct (but maybe misleading) to say that it involves just X.
My point was that "what does it resemble?" (a process where you go E -> H) is fundamentally different from "how likely is that?" (a process where you go H -> E). If you calculate a likelihood ratio using the degree-of-resemblance instead of the actual P(E|H), you will get the wrong answer.
(Or maybe thinking about likelihood ratios will force you to snap out of representativeness heuristic, but I'm far from sure about it)
I think that I misjudged the level of my audience (this post is an expansion of a /r/HPMOR/ comment) and didn't make my point (that probabilistic thinking is more correct when you go H->E instead of vice versa) visible enough. Also, I was going to blog about likelihood ratios later (in terms of H->E and !H->E), so again, wrong audience.
I now see some ways in which my post is a debacle, and maybe it makes sense to completely rewrite it. So thank you for your feedback again.


I'm currently in debate, and this is one of the (minor) things that annoy me about it. The reason I can still enjoy debate (as a competitive endeavor) is that I treat it more like a game than an actual pursuit of truth.

I am curious, though, whether you think this actively harms people's ability to reason or whether it just provides more numerous examples of how most people reason - i.e., is this primarily a sampling problem?


The best debaters I knew personally really identified with debate. Two of them went on to coach debate in college. The better you are at debate, the better you think you are at arguing. For them, believing that policy debate and real logical argumentation are substantially different things would imply that the thing they're good at is a mere game, rather than a pragmatic, generalizable skill. Psychologically, there's a powerful motivation to want policy debate to be more real.
So I think it's harmful to the degree that the debater doesn't keep in mind the artificiality of the format.
All that said, it's fun, it teaches you to think and speak on your feet, you get to socialize with other likeminded people, and you learn a thousand times more about politics and history than you would learn in any other class.


Could we ever get evidence of a "read-only" soul? I'm imagining something that translates biochemical reactions associated with emotions into "actual" emotions. Don't get me wrong, I still consider myself an atheist, but it seems to me that how strongly one believes in a soul that is only affected by physical reality is based purely on their prior probability.


Sounds like epiphenomenalism.


Isn't that how it works right now? I mean, we actually "feel sad" and we actually "have thoughts".


In a sense, that's what ordinary materialists believe in: Oh look, here's this system which happens to be an instance of conscious thought! That's part of a soul!
It's just pattern-recognition, but that's all a read-only soul is.


Thanks for taking the time to contribute!

I'm particularly interested in "Goals interrogation + Goal levels".

Out of curiosity, could you go a little more in-depth regarding what "How to human" would entail? Is it about social functioning? first aid? psychology?

I'd also be interested in "Memory and Notepads", as I don't really take notes outside of classes.

With "List of Effective Behaviors", would that be behaviors that have scientific evidence for achieving certain outcomes ( happiness, longevity, money, etc.), or wou...


.5 The last one is essentially the void, but it also connects with steelman/strawman fallacies.
.4 The list of effective behaviours is more anecdotal. As you will probably find with a subset of behaviours, they only work for some people; e.g., keeping track of food intake might be easier for one person than another. It would be difficult to write an exhaustive list, but it's a lot easier to write a partial list of behaviours I have picked up that now make me more effective than before, which opens up the possibility of more people testing them and incorporating the concepts.
.3 I will write it up.
.2 How to human: thinking about it from basic levels, Maslow up. Consciousness, breathing, blinking, eating, sleeping; some basic checks you might want to do to confirm that your simple needs are met before taking on complicated tasks.
.1 I will write it up sooner :)
Thanks for the comments!


Not sure if this is obvious or just wrong, but isn't it possible (even likely?) that there is no way of representing a complex mind that is useful enough to allow an AI to usefully modify itself? For instance, if you gave me complete access to my source code, I don't think I could use it to achieve any goals, as such code would be billions of lines long. Presumably there is a logical limit on how far one can usefully compress one's own mind in order to reason about it, and it seems reasonably likely that such compression will be too limited to allow a singularity.


There are certainly ways you can usefully modify yourself - for example, giving yourself a heads-up display. However, I'm not sure how much that would end up increasing your intelligence. You could get runaway super-intelligence if every improvement increases the best mind current!you can make by at least that much, but if it increases it by less than that, it won't run away.


The ability to reason about large amounts of code seems to be more a memory and computation speed problem, than a logic problem. Computers already seem to be better than humans on these counts, so it seems like they may be better at understanding large pieces of code, once we have the whole "understanding" thing solved.

What I mean by "essentially ignore" is that if you are (for instance) offered the following bet you would probably accept: "If you are in the first 100 rooms, I kill you. Otherwise, I give you a penny."

I see your point regarding the fact that updating using Bayes' theorem implies your prior wasn't 0 to begin with.

I guess my question is now whether there are any extended versions of probability theory. For instance, Kolmogorov probability reverts to Aristotelian logic for the extremes P=1 and P=0. Is there a system of thought that revers ...


They have already been pointed out to you: either extend probability theory to use some kind of measure (Jaynes' solution), or use only distributions that have a definite limit when extended to the infinite case, or use infinitely small quantities.

From a decision-theory perspective, I should essentially just ignore the possibility that I'm in the first 100 rooms - right?

Similarly, suppose I'm born in a universe with infinitely many such rooms and someone tells me to guess whether my room number is a multiple of 10 or not. If I guess correctly, I get a dollar; otherwise I lose a dollar.

Theoretically there are as many multiples of 10 as non-multiples (both being equinumerous with the integers), but if we define rationality as the "art of winning", then shouldn't I guess "not a multiple of 10"? I admit that my...


Well, what do you mean by "essentially ignore"? If you're asking whether I should assign no substantial credence to the possibility, then yeah, I'd agree. If you're asking whether I should assign literally zero credence to the possibility, so that there are no possible odds, no matter how ridiculously skewed, at which I would accept a bet that I am in one of those rooms... well, now I'm no longer sure. I don't exactly know how to go about setting my credences in the world you describe, but I'm pretty sure assigning probability 0 to every single room isn't it.
Consider this: Let's say you're born in this universe. A short while after you're born, you discover a note in your room saying, "This is room number 37". Do you believe you should update your belief set to favor the hypothesis that you're in room 37 over any other number? If you do, it implies that your prior for the belief that you're in one of the first 100 rooms could not have been 0.
(But, on the other hand, if you think you should update in favor of being in room x when you encounter a note saying "You are in room x", no matter what the value of x, then you aren't probabilistically coherent. So ultimately, I don't think intuition-mongering is very helpful in these exotic scenarios. Consider my room 37 example as an attempt to deconstruct your initial intuition, rather than as an attempt to replace it with some other intuition.)
Perhaps, but reproducing this result doesn't require that we consider every room equally likely. For instance, a distribution that attaches a probability of 2^(-n) to being in room n will also tell you to guess that you're not in a multiple of 10. And it has the added advantage of being a possible distribution. It has the apparent disadvantage of arbitrarily privileging smaller numbered rooms, but in the kind of situation you describe, some such arbitrary privileging is unavoidable if you want your beliefs to respect the Kolmogorov axioms.
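As a quick sketch of that 2^(-n) prior (the truncation at n = 500 is mine, purely for computability): the probability that the room number is a multiple of 10 is the geometric sum 2^-10 + 2^-20 + ... = 1/1023, far below 1/2, so this prior also tells you to guess "not a multiple of 10".

```python
# Prior: P(room = n) = 2**(-n) for n = 1, 2, 3, ...  (a proper distribution: it sums to 1).
# P(multiple of 10) = sum over k >= 1 of 2**(-10*k) = (1/1024) / (1 - 1/1024) = 1/1023.
# The truncation at n = 500 is harmless: the omitted tail is astronomically small.
p_multiple_of_10 = sum(2.0 ** (-n) for n in range(1, 501) if n % 10 == 0)

print(p_multiple_of_10)  # roughly 0.000978, i.e. about 1/1023
```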

I (now) understand the problem with using a uniform probability distribution over a countably infinite event space. However, I'm kind of confused when you say that the example doesn't exist. Surely it's not logically impossible for such an infinite universe to exist. Do you mean that probability theory isn't expressive enough to describe it?


When I say the probability distribution doesn't exist, I'm not talking about the possibility of the world you described. I'm talking about the coherence of the belief state you described. When you say "The probability of you being in the first 100 rooms is 0", it's a claim about a belief state, not about the mind-independent world. The world just has a bunch of rooms with people in them. A probability distribution isn't an additional piece of ontological furniture.
If you buy the Cox/Jaynes argument that your beliefs must adhere to the probability calculus to be rationally coherent, then assigning probability 0 to being in any particular room is not a coherent set of beliefs. I wouldn't say this is a case of probability theory not being "expressive enough". Maybe you want to argue that the particular belief state you described ("Being in any room is equally likely") is clearly rational, in which case you would be rejecting the idea that adherence to the Kolmogorov axioms is a criterion for rationality. But do you think it is clearly rational? On what grounds?
(Incidentally, I actually do think there are issues with the LW orthodoxy that probability theory limns rationality, but that's a discussion for another day.)


There are different levels of impossible.

Imagine a universe with an infinite number of identical rooms, each of which contains a single human. Each room is numbered outside: 1, 2, 3, ...

The probability of you being in the first 100 rooms is 0 - if you ever have to make an expected utility calculation, you shouldn't even consider that chance. On the other hand, it is definitely possible in the sense that some people are in those first 100 rooms.

If you consider the probability of you being in room Q, this probability is also 0. However, it (intuitively) feels "more" impossible.

I don't really think this line of thought leads anywhere interesting, but it definitely violated my intuitions.


As others have pointed out, there is no uniform probability distribution on a countable set. There are various generalisations of probability that drop or weaken the axiom of countable additivity, which have their uses, but one statistician's conclusion is that you lose too many useful properties. On the other hand, writing a blog post to describe something as a lost cause suggests that it still has adherents. Googling /"finite additivity" probability/ turns up various attempts to drop countable additivity.
Another way of avoiding the axiom is to reject all infinities. There are then no countable sets to be countably additive over. This throws out almost all of current mathematics, and has attracted few believers.
In some computations involving probabilities, the axiom that the measure over the whole space is 1 plays no role. A notable example is the calculation of posterior probabilities from priors and data by Bayes' Theorem:
Posterior(H|D) = P(D|H) Prior(H) / Sum_H' ( P(D|H') Prior(H') )
(H, H' = hypothesis, D = data.)
The total measure of the prior cancels out of the numerator and denominator. This allows the use of "improper" priors that can have an infinite total measure, such as the one that assigns measure 1 to every integer and infinite measure to the set of all integers.
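To illustrate the cancellation, here is a toy computation (my own sketch, with a made-up Gaussian-shaped likelihood, and the hypothesis range truncated only so the sum is finite) using that improper prior of weight 1 on every integer:

```python
import math

def likelihood(d, h):
    # Hypothetical model: P(D|H) proportional to a Gaussian in (d - h).
    return math.exp(-0.5 * (d - h) ** 2)

d = 3.2                          # made-up observed data
hypotheses = range(-1000, 1001)  # truncated only to make the sum computable
prior_weight = 1.0               # improper prior: weight 1 on every integer

unnormalized = {h: likelihood(d, h) * prior_weight for h in hypotheses}
z = sum(unnormalized.values())   # the (infinite) total prior measure cancels here
posterior = {h: w / z for h, w in unnormalized.items()}

best = max(posterior, key=posterior.get)
print(best)                      # 3: the posterior peaks at the integer nearest d
```

The posterior is perfectly proper (it sums to 1) even though the prior's total measure is infinite in the untruncated limit.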
There can be a uniform probability distribution over an uncountable set, because there is no requirement for a probability distribution to be uncountably additive. Every sample drawn from the uniform distribution over the unit interval has a probability 0 of being drawn. This is just one of those things that one comes to understand by getting used to it, like square roots of -1, 0.999...=1, non-euclidean geometry, and so on.


This is an old problem in probability theory, and there are different solutions.
Probability theory was developed first for finite models, so it's natural that its extension to infinite models can be done in a few different ways.


There is no such thing as a uniform probability distribution over a countably infinite event space (see Toggle's comment). The distribution you're assuming in your example doesn't exist.
Maybe a better example for your purposes would be picking a random real number between 0 and 1 (this does correspond to a possible distribution, assuming the axiom of choice is true). The probability of the number being rational is 0, the probability of it being greater than 2 is also 0, yet the latter seems "more impossible" than the former.
Of course, this assumes that "probability 0" entails "impossible". I don't think it does. The probability of picking a rational number may be 0, but it doesn't seem impossible. And then there's the issue of whether the experiment itself is possible. You certainly couldn't construct an algorithm to perform it.


I opine that you are equivocating between "tends to zero as N tends to infinity" and "is zero". This is usually a very bad idea.


Measure theory is a tricky subject. Also consider https://twitter.com/ZachWeiner/status/625711339520954368 .


Your math has some problems. Note that, if p(X=x) = 0 for all x, then the sum over X is also zero. But if you're in a room, then by definition you have sampled from the set of rooms: the probability of selecting a room is one. Since the probability of selecting 'any room from the set of rooms' is both zero and one, we have established a contradiction, so the problem is ill-posed.


I'm tentatively interested. I live about an hour east of Madison, but as a college student this is really only relevant during the summer. I'll take a look at potential (cheap) transportation.


Thanks for posting! I'm working on getting a Facebook group up, and all events will be posted there. https://www.facebook.com/groups/783506698431372/
Check there for events and see if any of it works for you. If you don't mind my asking, where do you live? I'm very fond of driving and could possibly transport you for a weekend visit.

Interesting. Do you have any idea why this results in a paradox, but not the corrigibility problem in general?


One common way to think about utilitarianism is to say that each person has a utility function and whatever utilitarian theory you subscribe to somehow aggregates these utility functions. My question, more-or-less, is whether an aggregating function exists that says that (assuming no impact on other sentient beings) the birth of a sentient being is neutral. My other question is whether such a function exists where the birth of the being in question is neutral if and only if that sentient being would have positive utility.

EDIT: I do recall that a similar-se...


If you try to do that, you get a paradox where, if A is not creating anyone, B is creating a new person and letting them lead a sad life, and C is creating a new person and letting them lead a happy life, then U(A) = U(B) < U(C) = U(A). You can't say that it's better for someone to be happy than sad, but both are equivalent to nonexistence.
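The contradiction is easy to see by trying to satisfy all three constraints with actual numbers (a trivial sketch; the specific values are arbitrary):

```python
# Constraints from the paradox:
#   U(A) == U(B): creating a sad person is as good as creating no one
#   U(A) == U(C): creating a happy person is as good as creating no one
#   U(B) <  U(C): a happy life is better than a sad one
U_A = 0.0    # arbitrary baseline utility for creating no one
U_B = U_A    # forced by the first constraint
U_C = U_A    # forced by the second constraint

print(U_B < U_C)  # False: the third constraint can never hold
```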


I think the probability of you popping into existence again is (1) very small and (2) dependent on how you define your "self." Would you consider an atom-for-atom copy of you to be "you"? How about an uploaded copy? etc. The simple fact is that physicists have constructed a very simple model of the universe that hasn't been wrong yet and so is very likely to be correct in the vast majority of situations; your existence should be one of them. Faith in the accepted model of the universe constructed by modern physicists can be justified ...


The prospect of influencing the future doesn't excite you?

I had typed up an eloquent reply to address these issues, but instead wrote a program that scored uniform priors vs 1/x^2 priors for this problem. (Un)fortunately, my idea performs consistently (slightly) worse under the p*log(p) metric. So, you are correct in your skepticism. Thank you for the feedback!


I think such a tree would depend in large part on what approach one wants to take. Do you want to learn probability to get a formal foundation of probabilistic reasoning? As far as I know, no other rationality skill is required to do this, but a good grasp of mathematics is. On the other hand, very few of the posts in the main sequences (http://wiki.lesswrong.com/wiki/Sequences#Major_Sequences) require probability theory to understand. So, in a sense, there is very little cross-dependency between mathematical understanding of probability and the rationalit...


As others have stated, obligation isn't really part of utilitarianism. However, if you really wanted to use that term, one possible way to incorporate it would be to ask what the xth percentile of people would do in this situation (where people are ranked in terms of expected utility), given that everyone has the same information, and use that as the boundary for the label "obligation."

As an aside, there is a thought experiment called the "veil of ignorance." Although it is not, strictly speaking, called utilitarianism, you can view it that wa...

Since this seems like a question at the center of this whole thing, I just wanted to double check using other sources. Using [this](https://www.omnicalculator.com/health/tdee) calculator with the "little/no exercise" setting, I see

Female; 36; 5'4"

Weight: 65.5 kg -> 78 kg

TDEE: 1596 -> 1746

Diff: 150 kcal/d

If we set it to "moderate exercise", the gap increases to 194 kcal/d

[This](https://www.niddk.nih.gov/bwp) calculator yields similar results

So, pretty much in line with your conclusion.
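For what it's worth, these figures are reproduced exactly by the Mifflin-St Jeor equation with the usual activity multipliers (1.2 for little/no exercise, 1.55 for moderate exercise). That this is what the calculators use internally is my assumption, not something either site states:

```python
def bmr_female(weight_kg, height_cm, age):
    # Mifflin-St Jeor basal metabolic rate for women
    return 10 * weight_kg + 6.25 * height_cm - 5 * age - 161

height_cm = 64 * 2.54  # 5'4" in centimeters

for factor, label in [(1.2, "little/no exercise"), (1.55, "moderate exercise")]:
    before = bmr_female(65.5, height_cm, 36) * factor
    after = bmr_female(78.0, height_cm, 36) * factor
    print(f"{label}: {round(before)} -> {round(after)}, diff {round(after - before)}")
```

This prints 1596 -> 1746 (diff 150) for the sedentary setting and a diff of 194 for moderate exercise, matching both calculator results above.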