Bob Jacobs

Comments

A Toy Model of Hingeyness

It might be interesting to distinguish between "personal hingeyness" and "utilitarian hingeyness". Humans are not utilitarians, so we care mostly about what happens in our own lives: when we die, our personal tree stops and we can't gain more hinges. "Utilitarian hingeyness", by contrast, continues, since it describes all possible utility. I made this model with population ethics in mind, but you could totally use the same concept for your personal life; the most hingey time for you and the most hingey time for everyone will then be different.

I'm not sure I understand your last paragraph, because you didn't clarify what you meant by the word "hingeyness". If you meant "the range of total utility you can potentially generate" (aka hinge broadness) or "the amount by which that range shrinks" (aka hinge reduction), then it is possible to draw a tree where the first tick of an 11-tick tree has just as broad a range as an option in the 10th tick. So the hinge broadness and the hinge reduction can be just as big in the 10th tick as in the 1st, but not bigger. I don't think you're talking about "hinge shift", but maybe you meant hinge precipiceness instead, in which case yes, that can totally be bigger in the 10th tick.
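To make that concrete, here's a quick sketch of my own (taking "broadness" to mean the width of the range of total path utility reachable from a node, per the definition above): an 11-tick tree that only branches at the 10th tick, so the 10th-tick option is exactly as broad as the root.

```python
# Nodes are (utility, children) tuples; leaves have no children.

def totals(node, acc=0):
    """All possible total utilities of paths from `node` to an ending."""
    utility, children = node
    acc += utility
    if not children:
        return [acc]
    return [t for child in children for t in totals(child, acc)]

def broadness(node):
    """Width of the range of reachable total utility (hinge broadness)."""
    ts = totals(node)
    return max(ts) - min(ts)

# An 11-tick tree: nine 0-utility ticks in a chain, then a node at tick 10
# that branches into endings worth -100 and +100 at tick 11.
tick10 = (0, [(-100, []), (100, [])])
root = tick10
for _ in range(9):          # wrap tick 10 in nine more 0-utility ticks
    root = (0, [root])

print(broadness(root), broadness(tick10))  # 200 200 -- equally broad
```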

A Toy Model of Hingeyness

If in the first image we replace the 0 with a -100 (much wider), what happens? The number of endings for 1 is still larger than for 3. The number of branches for 1 is still larger than for 3. The range of possible ending utilities is [-100 to 8] for 1 and [-100 to 6] for 3 (smaller). The range of total utility you could generate over the future branches is [1->3->-100 = -96 up to 1->2->8 = 11] for 1 and [3->-100 = -97 up to 3->6 = 9] for 3 (smaller). Is this a good example of what you're trying to convey? If not, could you maybe draw an example tree to show me what you mean?
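Here's a small sketch checking those numbers, with the first image's tree reconstructed from the figures quoted in these comments (root 1 with children 2 and 3; 2's endings are 7 and 8; 3's endings are the replaced -100 and 6):

```python
node2 = (2, [(7, []), (8, [])])
node3 = (3, [(-100, []), (6, [])])
node1 = (1, [node2, node3])

def endings(node):
    """Utilities of the endings (leaves) reachable from `node`."""
    u, children = node
    return [u] if not children else [e for c in children for e in endings(c)]

def totals(node, acc=0):
    """All possible total utilities of paths from `node` to an ending."""
    u, children = node
    acc += u
    return [acc] if not children else [t for c in children for t in totals(c, acc)]

print(sorted(endings(node1)))                  # [-100, 6, 7, 8] -> range [-100, 8]
print(sorted(endings(node3)))                  # [-100, 6]       -> range [-100, 6]
print(min(totals(node1)), max(totals(node1)))  # -96 11
print(min(totals(node3)), max(totals(node3)))  # -97 9
```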

A Toy Model of Hingeyness

Ending in negative numbers wouldn't change anything. The number of endings will still shrink, the number of branches will still shrink, the range of possible ending utilities will still shrink or stay the same width, and the range of total utility you could generate over the future branches will also shrink or stay the same width. Try it! Replace any number in any of my models with a negative number, or draw your own model, and see what happens.
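If you'd rather not draw trees by hand, here's a brute-force version of "try it" (my own sketch): generate random trees with negative utilities allowed and check that stepping to any child never widens either range.

```python
import random

def make_tree(depth):
    """A random (utility, children) tree; negative utilities included."""
    u = random.randint(-100, 100)
    if depth == 0:
        return (u, [])
    return (u, [make_tree(depth - 1) for _ in range(random.randint(1, 3))])

def endings(node):
    u, cs = node
    return [u] if not cs else [e for c in cs for e in endings(c)]

def totals(node, acc=0):
    u, cs = node
    acc += u
    return [acc] if not cs else [t for c in cs for t in totals(c, acc)]

def width(xs):
    return max(xs) - min(xs)

for _ in range(1000):
    stack = [make_tree(4)]
    while stack:
        parent = stack.pop()
        for child in parent[1]:
            assert width(endings(child)) <= width(endings(parent))
            assert width(totals(child)) <= width(totals(parent))
            stack.append(child)
print("ranges only ever shrink or stay the same")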

A Toy Model of Hingeyness

If we draw a tree of all possible timelines (and there is an end to the tree), the earlier choices will always have more branches sprouting from them. If we are purely looking at the possible endings, then the 1 in the first image has 4 possible endings, but 2 only has 2 possible endings. If we're looking at branches, then the 1 has 6 possible branches, while 2 only has 2. If we're looking at ending utility, then 1 has a range of [0-8] while 2 only has [7-8]. If we're looking at the range of total utility you can experience, then 1 ranges from 1->3->0 = 4 utility all the way up to 1->2->8 = 11 utility, while 2 only ranges from 2->7 = 9 to 2->8 = 10.
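All four quantities can be checked in a few lines; here's a sketch over the same reconstructed tree (root 1 with children 2 and 3; 2's endings are 7 and 8, 3's are 0 and 6):

```python
node2 = (2, [(7, []), (8, [])])
node3 = (3, [(0, []), (6, [])])
node1 = (1, [node2, node3])

def endings(node):
    u, cs = node
    return [u] if not cs else [e for c in cs for e in endings(c)]

def branches(node):
    """Number of edges in the subtree below `node`."""
    _, cs = node
    return len(cs) + sum(branches(c) for c in cs)

def totals(node, acc=0):
    u, cs = node
    acc += u
    return [acc] if not cs else [t for c in cs for t in totals(c, acc)]

for name, n in [("1", node1), ("2", node2)]:
    e, t = endings(n), totals(n)
    print(name, len(e), branches(n), [min(e), max(e)], [min(t), max(t)])
# 1: 4 endings, 6 branches, ending range [0, 8], total range [4, 11]
# 2: 2 endings, 2 branches, ending range [7, 8], total range [9, 10]
```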

When we talk about the utility of endings it is possible that the range doesn't change. For example:

(I can't post images in comments so here is a link to the image I will use to illustrate this point)

Here the range of ending utilities that tick 1 (the first 10) has is [0-10], and the range of endings that the first 0 (tick 2) has is also [0-10], which is the same. Of course the probabilities have changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.

Now the width of the range of total utility you could potentially experience can also stay the same. For example, the lowest total utility tick 1 can experience is 10->0->0 = 10 and the highest is 10->0->10 = 20, a difference of 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 and the highest is 0->10 = 10, which is once again a difference of 10 utility. The probabilities have changed (ending with a weird number like 19 is impossible from tick 2), and the range has also shifted downwards from [10-20] to [0-10], but the width stays the same.
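Since the linked image isn't visible here, the sketch below uses a minimal stand-in tree consistent with the description (a 10 at tick 1 whose children include a 0 at tick 2 with endings 0 and 10, plus a sibling branch containing the ending of 1 that becomes unreachable after tick 2):

```python
tick2 = (0, [(0, []), (10, [])])
root  = (10, [tick2, (0, [(1, []), (10, [])])])

def totals(node, acc=0):
    u, cs = node
    acc += u
    return [acc] if not cs else [t for c in cs for t in totals(c, acc)]

def width(xs):
    return max(xs) - min(xs)

print(min(totals(root)), max(totals(root)), width(totals(root)))     # 10 20 10
print(min(totals(tick2)), max(totals(tick2)), width(totals(tick2)))  # 0 10 10
# Same width (10), but the range has shifted down from [10, 20] to [0, 10].
```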

It just occurred to me that some people may find the shift in range also important for hingeyness. Maybe call that 'hinge shift'?

Crucially, in none of these definitions is it possible to end up with a wider range later down the line than when you started.

Bob Jacobs's Shortform

I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment and I couldn't stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem. Please solve it, it keeps me up at night.

Multitudinous outside views

Thanks for replying to my question, but although this was nicely written it doesn't really solve the problem. So I'm putting up a $100 bounty for anyone on this site (or outside it) who can solve this problem by the end of next year. (I don't expect it will work, but it might motivate some people to start thinking about it).

Calibration Practice: Retrodictions on Metaculus

I've touched on this before, but it would be wise to take your meta-certainty into account when calibrating. It wouldn't be hard for me to claim 99.9% accurate calibration by just making a bunch of very easy predictions (an extreme example would be buying a bunch of different dice and making predictions about how they're going to roll). My post goes into more detail, but TL;DR: by trying to predict how accurate your predictions are going to be, you can start to distinguish between "harder" and "easier" phenomena. This makes it easier to compare different people's calibration and lets you check how good you really are at making predictions.
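A toy illustration of the failure mode (my own construction, not from the linked post): padding a track record with trivially easy predictions makes overall calibration look excellent, which is why grouping predictions by meta-certainty gives a fairer picture.

```python
import random

random.seed(0)

def calibration_error(preds):
    """Mean gap between stated probability and actual frequency.
    `preds` is a list of (stated_probability, outcome) pairs."""
    stated = sum(p for p, _ in preds) / len(preds)
    actual = sum(o for _, o in preds) / len(preds)
    return abs(stated - actual)

# 1000 "dice" predictions: the true probability (5/6) is known exactly.
easy = [(5/6, random.random() < 5/6) for _ in range(1000)]

# 50 "hard" predictions: stated 80%, but the real-world rate is only 55%.
hard = [(0.80, random.random() < 0.55) for _ in range(50)]

print(f"easy only: {calibration_error(easy):.3f}")   # near zero
print(f"hard only: {calibration_error(hard):.3f}")   # large miss
print(f"mixed:     {calibration_error(easy + hard):.3f}")  # deceptively good
```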

mAIry's room: AI reasoning to solve philosophical problems

I can also "print my own code": if I made a scan with some future version of MRI, I could give you all the information necessary to understand (that version of) me, but as soon as I look at it my neurological patterns change. I'm not sure what you mean by "add something to it", but I could also give you a copy of my brain scan and add something to it. Humans and computers can of course know a summary of themselves, but never the full picture.

mAIry's room: AI reasoning to solve philosophical problems

An annoying philosopher would ask whether you could glean knowledge of your "meta-qualia", aka what it consciously feels like to experience what something feels like. The problem is that fully understanding our own consciousness is sadly impossible. If a computer discovers that it has stored a picture of a dog at a certain location on its hardware, it must store that information somewhere else; but if it subsequently tries to know everything about itself, it must also store that knowledge of the knowledge of the picture's location somewhere else, which it must then also learn. This repeats in a loop until the computer crashes. An essay can fully describe most things but not itself: "The author starts the essay by writing that he starts the essay by writing that...". So, annoyingly, there will always be experiences that are mysterious to us.
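The regress is easy to reproduce literally; here's a toy sketch of my own where each piece of self-knowledge is itself new knowledge that must be known:

```python
def know_thyself(description):
    # To fully know yourself you must also know that you know this,
    # which is itself new knowledge that must be known, and so on...
    return know_thyself({"i_know": description})

try:
    know_thyself("a picture of a dog is stored at location 0x42")
except RecursionError:
    print("the computer crashes: full self-knowledge never terminates")
```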

Billionaire Economics

I was not referring to the 'billionaires being universally evil', but to the 'what progressives think' part.
