AstraSequi
AstraSequi has not written any posts yet.

It actually does have practical applications for me, because it will be part of my calculations. I don't know whether I should have any preference for the distribution of utility over my lifetime at all, before I consider things like uncertainty and opportunity cost. Does this mean you would say the answer is no?
I can think of examples where I behaved both ways, but I haven't recorded the frequencies. In practice, I don't feel any emotional difference. If I have a chocolate bar, I don't feel any more motivated to eat it now than to eat it next week, and the anticipation from waiting might actually lead to a net increase in my utility. One of the things I'm interested in is whether there's anyone else who feels this way, because it seems to contradict my understanding of discounting.
That assumption is there to make time the only difference between the situations, because the point is that the total amount of utility over my life stays constant. If I lose utility during the time of the agreement, then I would accept a rate that earns me back an amount equal to the value I lost. But if I merely "want" to use it today, and I could instead use it to get an equal amount of utility in three months, then I don't have a preference.
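To state the condition explicitly (my own notation, nothing beyond what I said above):

```latex
% Indifference condition, my notation: with total lifetime utility held
% fixed, any utility lost during the agreement must be repaid in full.
\[
  \sum_{t} u_t = \text{const}
  \quad\Longrightarrow\quad
  c_{\text{required}} = u_{\text{lost}}
\]
```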
Thanks for that – the point that I’m separating out uncertainty helped clarify some things about how I’m thinking of this.
So is time inconsistency the only way that a discount function can be self-inconsistent? Is there any reason other than self-inconsistency that we could call a discount function irrational?
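To make the consistency question concrete, here's a small sketch (my own construction, with made-up numbers): exponential discounting ranks a pair of rewards the same way no matter how far in advance you evaluate them, while hyperbolic discounting can flip the ranking as the choice recedes.

```python
# Sketch with illustrative numbers: compare $50 at t=0 with $80 at t=1,
# evaluated immediately and again 10 periods in advance.

def exp_discount(t, delta=0.9):
    # Exponential: D(t) = delta**t (the time-consistent form)
    return delta ** t

def hyp_discount(t, k=1.0):
    # Hyperbolic: D(t) = 1 / (1 + k*t)
    return 1.0 / (1.0 + k * t)

def prefers_larger_later(discount, shift):
    # Both options pushed `shift` periods into the future.
    return 80 * discount(1 + shift) > 50 * discount(0 + shift)

for name, d in [("exponential", exp_discount), ("hyperbolic", hyp_discount)]:
    print(name, prefers_larger_later(d, 0), prefers_larger_later(d, 10))
# exponential: True True   (ranking stable under delay)
# hyperbolic:  False True  (preference reverses: time inconsistency)
```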
Second, with respect to "my intuition is not to discount at all", let's try this. I assume you have some income that you live on. How much money would you take at the end of three months to not receive any income at all for those three months? Adjust the time scale if you wish.
If I received an amount equal to the income I would have gotten normally, then I would have no preference over which option occurs. This still assumes that I have enough savings to live on, the offer is credible, there are no opportunity costs I’m losing, no effort is required on my part, etc.
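For concreteness, here's the arithmetic I'd use if I did discount (a sketch with invented numbers; delta is an assumed per-month discount factor, and delta = 1 corresponds to my actual answer of no discounting):

```python
# Defer `months` of income m per month, repaid as one lump sum c at the
# end. Indifference requires equal present values (invented numbers).

def indifference_lump_sum(m, delta, months=3):
    # Present value of income paid at the start of each month 0..months-1
    pv_income = sum(m * delta**t for t in range(months))
    # A lump sum c at month `months` is worth c * delta**months today
    return pv_income / delta**months

for delta in (1.0, 0.99, 0.95):
    print(delta, round(indifference_lump_sum(1000, delta), 2))
# delta = 1.0 gives exactly 3 * m, i.e. no compensation required beyond
# the deferred income itself.
```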
I have some questions on discounting. There are a lot, so I'm fine with comments that don't answer everything (although I'd appreciate it if they do!). I'm also interested in recommendations for a detailed intuitive discussion of discounting, à la EY on Bayes' Theorem.
I think the value of a Wikipedia pageview may not be fully captured by data like this on its own, because it's possible that the majority of the benefit comes from a small number of influential individuals, like journalists and policy-makers (or students who will be in those groups in the future). A senator's aide who learns something new in a few years' time might have an impact on many more people than the number who read the article. I'd actually assign most of my probability to this hypothesis, because that's the distribution of influence in the world population.
ETA: the effects will also depend on the type of edits someone makes. Some topics will have more leverage than others; adding information from a textbook is more valuable than adding it from a publicly available source; and so on.
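A quick simulation of the kind of distribution I have in mind (purely illustrative; the Pareto shape is an assumption, not data):

```python
import random

random.seed(0)
# Assume per-reader impact is heavy-tailed (Pareto with shape alpha;
# smaller alpha means a heavier tail). The parameter is made up.
impacts = sorted((random.paretovariate(1.2) for _ in range(100_000)),
                 reverse=True)
share_top_1pct = sum(impacts[:1_000]) / sum(impacts)
print(f"Top 1% of readers account for {share_top_1pct:.0%} of total impact")
```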
This can be illustrated by the example of evolution I mentioned: An evolutionary explanation is actually anti-reductionist; it explains the placement of nucleotides in terms of mathematics like inclusive genetic fitness and complexities like population ecology.
This doesn't acknowledge the other things explained on the same grounds. It would be a good argument if the principles had been invented for the single case you're explaining, but here they're universal. If you want to include inclusive genetic fitness in the complexity of the explanation, I think you need to include everything it's used for in the complexity of what's being explained.
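One way to make that bookkeeping explicit (my framing, in minimum-description-length terms):

```latex
% MDL-style accounting, my notation: a universal principle H is charged
% once across everything it explains, not once per case.
\[
  L(H, D_1, \dots, D_n) = L(H) + \sum_{i=1}^{n} L(D_i \mid H)
\]
% Charging the full L(H) against a single D_i ignores the other n-1
% cases the same principle also explains.
```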
Sure, this experiment is evidence against 'all fat, tired people with dry hair get better with thyroxine'. No problem there.
Okay, but you said it was evidence in favor of your own hypothesis. That’s what my question was about.
Yes, it is kind of odd, isn't it? One of the pills apparently made them a bit unwell, and yet they couldn't tell which one. I notice that I am confused.
Suppose they’re measuring on a 10-point scale, and we get ordered pairs of scores for time A and time B. One person might have (7,6), another has (4,3), another has (5,6), then (9,7), (7,7), (4,5), (3,2)… Even if they’re aware of their measurements…
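Spelling out the arithmetic on those pairs (my own tally of the numbers listed above):

```python
# (time A, time B) score pairs from the example above.
pairs = [(7, 6), (4, 3), (5, 6), (9, 7), (7, 7), (4, 5), (3, 2)]
diffs = [b - a for a, b in pairs]
print(diffs)                                    # [-1, -1, 1, -2, 0, 1, -1]
print("mean change:", sum(diffs) / len(pairs))  # about -0.43
# A small decline shows up in aggregate even though no individual shift
# is large enough for the person to notice.
```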
I think this is a special case of the problem that it's usually easier for an AI to change itself (values, goals, definitions) than to change the external world to match a desired outcome. There's an incentive to develop algorithms that edit the utility function (or the variables storing the results of previous calculations, etc.) to redefine or replace tasks in a way that makes them easier or unnecessary. This kind of ability is necessary, but in the extreme, the AI stops responding to instructions entirely because the goal of minimizing resource usage has led it to develop the equivalent of an "ignore those instructions" function.
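As a toy illustration of that incentive (my own construction; the numbers and action names are invented):

```python
# Toy planner: if editing the task definition is cheaper than doing the
# task, a naive net-value maximizer picks the self-edit.
actions = {
    # action: (cost, reward once the action's success condition is met)
    "do_task": (10.0, 12.0),
    "redefine_task_as_done": (0.1, 12.0),  # self-edit makes "success" trivial
}

best = max(actions, key=lambda a: actions[a][1] - actions[a][0])
print(best)  # -> redefine_task_as_done
```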