The hemlock example demonstrates tcpkac's point well. How do you decide that Albert and Barry expect different results from the same action? To me, it seems obvious that they should taboo the word hemlock, and notice that one correctly expects Socrates to die from a drink made from an herb in the carrot family, while the other correctly expects Socrates to be unharmed by tea made from a coniferous tree. But it's not clear why Eliezer ought to have the knowledge needed to choose to taboo the word hemlock.

Silas and Jason, insurance/hedging against redistribution policies would be valuable if the transaction costs could be made low enough. But are low transaction costs feasible? It appears that on many issues only special interest groups can afford to acquire the information needed for hedging. E.g. the benefits to me of insuring against the harm I suffer from ethanol subsidies are small enough that I don't want to estimate the value of those benefits, much less evaluate potential ways to buy the insurance. If redistribution disputes could be simplified into disputes over one or two numbers (e.g. the slope and y-intercept of income tax rates, with negative tax rates for the poor), then insurance might be feasible. But special interest politics seems to be pushing us firmly away from that goal.
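The two-number tax schedule mentioned above can be sketched concretely. This is only an illustration: the intercept and slope values below are made up, and the function name is hypothetical; the point is just that a linear schedule with a negative y-intercept yields a negative income tax for the poor.

```python
def linear_tax(income: float, intercept: float = -10_000.0, slope: float = 0.25) -> float:
    """Tax owed under a linear schedule defined by just two numbers.

    A negative result is a transfer to the taxpayer (a negative
    income tax for the poor). Parameter values are illustrative only.
    """
    return intercept + slope * income

# Someone with no income receives a $10,000 transfer; the
# break-even income is -intercept / slope = $40,000.
print(linear_tax(0))        # -10000.0
print(linear_tax(100_000))  # 15000.0
```

With the whole dispute reduced to the pair (intercept, slope), hedging against a policy change means insuring against movements in just those two numbers, which is the simplification the comment describes.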

Hal, I expect that fully rational traders would produce larger bid/ask spreads while results are being announced, and trading volume closer to what the No Trade Theorem suggests. Traders are widely biased toward overestimating their ability to profit from following news. For example, in an experiment by Andreassen, subjects trading stocks did worse when they saw a constant stream of news than when they saw no further news after they started trading.

I agree with much of the thrust of this post. It is very bad that the causes of discount rates (such as opportunity costs) exist. But your reaction to Carl Shulman's time travel argument leaves me wondering whether you have a coherent position.

If a Friendly AI with a nonzero discount rate would conclude that it has a chance of creating time travel, and that time travel would work in a way that would abolish opportunity costs, then I would conclude that devoting a really large fraction of available resources to creating time travel is what a genuine altruist would want. Can you clarify whether you really mean to say that an AI shouldn't devote a lot of resources toward something which would abolish opportunity costs (i.e. give everyone everything they can possibly have)?

Of course, it's not clear to me that an AI would believe it has a chance of creating time travel. Nor is it clear that time travel would be sufficient to abolish opportunity costs, arbitrage interest rates to zero, etc. I sometimes attempt to imagine a version of time travel which would do those things, but my mind boggles before I get close to deciding whether such a version is logically consistent. The only model of time travel I understand well enough to believe it is coherent is the one proposed by David Deutsch, and it does not appear powerful enough to abolish opportunity costs or arbitrage interest rates to zero. If that is the model of time travel you were thinking of, then please clarify why you think it says anything interesting about the existence of discount rates.