# 12

Personal Blog

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.

So why do many people seem to care about policy that affects far future folk? I suspect our paternalistic itch pushes us to control the future, rather than to enrich it. We care that the future celebrates our foresight, not that they are happy.
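The compounding claim above can be checked directly. A minimal sketch, using Hanson's own figures (2% per year, 12,000 years, a 1/1000 chance the far-future recipients ever get it):

```python
# Sketch checking the arithmetic: 2%/year compounded over 12,000 years,
# discounted by a 1/1000 chance of delivery (Hanson's figures).
import math

years = 12_000
annual_return = 0.02
prob_delivered = 1 / 1000

# Work in log10 to avoid overflowing a float.
log10_gross = years * math.log10(1 + annual_return)
log10_expected = log10_gross + math.log10(prob_delivered)

print(f"gross multiplier    ~ 10^{log10_gross:.1f}")    # ~10^103
print(f"expected multiplier ~ 10^{log10_expected:.1f}")  # ~10^100, a googol
</imports>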

In the comments some people gave counterarguments. For those in a rush, the best ones are Toby Ord's. But I didn't buy any of the counterarguments to the extent that would be necessary to counter the 10^100. I have some trouble conceiving of what could beat a consistent argument by a googol fold.

Things that changed my behavior significantly over the last few years have not been many, but I think I'm facing one of them. Understanding biological immortality was one: it meant 150,000 non-deaths per day. Understanding the posthuman potential was another. Then came the 10^52 potential lives lost in case of X-risk, or, if you are conservative and think only biological stuff can host moral lives, 10^31. You can argue about which movie you'll watch, which teacher would be best to have, whom you should marry. But (if consequentialist) you can't argue your way out of 10^31 or 10^52. You won't find a counteracting force that exactly matches, or really reduces the value of future stuff by

3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999  fold

which is still way less than 10^52.

You may find a fundamental and qualitative counterargument "actually I'd rather future people didn't exist", but you won't find a quantitative one. Thus I spend a lot of time on X-risk related things.

Back to Robin's argument: so unless someone gives me a good argument against investing some money in the far future (and I can discover some technique for doing it that gives it at least a one-in-a-million chance of working), I'll set aside a block of money X and a block of time Y, and will invest in future people 12 thousand years from now. If you don't think you can beat 10^100, join me.

And if you are not in a rush, read this also, for a bright reflection on similar issues.


But I didn't buy any of the counterarguments to the extent that would be necessary to counter the 10^100.

I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs

First things:

Businesses and organizations suffer extremely high mortality rates; one estimate puts it at a 99% chance of mortality per century. (This ignores existential risks and lucky aversions like nuclear warfare, and so is an underestimate of the true risks.) So any perpetuity's probability of surviving the full 120 centuries is 0.01^120 ≈ 10^-240. That's a good chunk of the reason not to bother with long-term trusts right there! We can confirm this empirically by observing that there were what must have been many scores of thousands of waqfs in the Islamic world - perpetual charities - and very few survived or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on that topic.) Similarly, we can observe that despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen into decay/disuse/Christian-Muslim expropriation/vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, between a resurgent Hinduism and Muslim encroachment. We can also point out that many estimates ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not mean them to do - the American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein of conservative criticism (some of it summarized in Cowen's Good and Plenty) of American nonprofits like the Ford Foundation pointing out the 'liberal capture' of originally conservative institutions, which obviously defeats the original point.
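The survival arithmetic can be sketched in a few lines; the 99%-per-century figure is the estimate quoted above:

```python
# Survival arithmetic: a 99% chance of organizational death per century,
# compounded over the 120 centuries (12,000 years) in question.
import math

survival_per_century = 0.01   # i.e. 99% mortality per century
centuries = 120

p_survive = survival_per_century ** centuries
print(p_survive)  # ~1e-240; any stray trailing digits are float noise

# Cleaner in log10 terms: 120 * log10(0.01) = -240 exactly.
print(centuries * math.log10(survival_per_century))
```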

(BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...)

Survivorship bias as a major factor in overestimating risk-free returns over time is well-known, and a new result came out recently, actually. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, the Weimar and Nazi Germanies, Imperial Japan, all countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few invocations recently of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%.

The risk per year is related to the size of the endowment/investment; as has already been pointed out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being wasted or stolen legally. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated precisely because it had so effectively gained wealth and invested it (property rights in England otherwise having been remarkably secure over the past millennium). Not to mention the Vatican States or its holdings elsewhere. The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to war and extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates.

And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under a novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and systematically makes the future riskier than the past. Investing in a perpetuity, itself investing in ordinary commercial transactions, does little to help except possibly some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates and sluggish growth and 'too much cash chasing safe investments', would deprecate even this).

Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving peoples' lives can have compounding effects into the distant future?), and so on.

So to recap:

1. organizational mortality is extremely high
2. financial mortality is likewise extremely high; and both organizational & financial mortality are relevant
3. all estimates of risk are systematically biased downwards, with the survivorship-bias results indicating that at least one of these biases is very large
4. risks for organizations and finances increase with size
5. opportunity cost is completely ignored

Any of these except perhaps #3 could be sufficient to defeat perpetuities, and combined, I think the case for perpetuities is completely non-existent.

Philip Trammell has criticized my comment here: https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf#page=33 He makes 3 points:

1. Perhaps the many failed philanthropies were not meant to be permanent?

First, they almost certainly were. Most philanthropies were clan- or religion-based. Things like temples and monasteries are meant to be as eternal as possible. What Buddhist monastery or Catholic cathedral was ever set up with the idea that it'd wind everything up in a century or two? What dedication of a golden tripod to the Oracle at Delphi was done with the idea that they'd be done with the whole silly paganism thing in half a millennium? What clan compound was created by a patriarch not hoping to be commemorated and his grave honored for generations without end? Donations were inalienable, and often made with stipulations like a mass being said for the donor's soul once a year forever or until the Second Coming, whichever happened first. How many funny traditions or legal requirements at Oxford or Cambridge, which survive due to a very unusual degree of institutional & property-right continuity in England, came with expiration dates or entailments which expired? (None come to mind.) The Islamic world went so far as to legally remove any option of being temporary! To the extent that philanthropies are not encumbered today, it's not for any lack of desire by philanthropists (as charities constantly complain & dream of 'unrestricted' funds), but because of legal systems refusing to enforce them via the dead hand doctrine, disruption of property rights, and creative destruction. My https://www.gwern.net/The-Narrowing-Circle is relevant, as is Fukuyama's The Origins of Political Order, which makes clear what a completely absurd thing that is to suggest of places like Rome or China.

Second, even if they were not, most of them did not expire by reaching scheduled expiration dates, showing that existing structures are inadequate even to the task of lasting just a little while. Trammell seems to believe there is some sort of silver-bullet institutional structure that might allow a charity to accumulate wealth for centuries or millennia, if only the founders purchased the 1000-year charity plan instead of cheaping out by buying the limited-warranty 100-year charity plan. But there isn't.

2. I'm not sure how to summarize his second point, so I'll quote it:

Second, it is misleading to cite the large numbers of failed philanthropic institutions (such as Islamic waqfs) which were intended to be permanent, since their closures were not independent. For illustration, if a wave of expropriation (say, through a regional conquest) is a Poisson process with λ = 0.005, then the probability of a thousand-year waqf is 0.7%. Splitting a billion-dollar waqf into a billion one-dollar waqfs, and observing that none survive the millennium, will give the impression that "the long-term waqf survival rate is less than one in one billion".

I can't see how this point is relevant. Aside from his hypothetical not being the case (the organizational death statistics are certainly not based on any kind of fission like that), if a billion waqfs all manage to fail, that is a valid observation about the durability of waqfs. If they were split apart, then they all had separate managers/staff, separate tasks, separate endowments etc. There will be some correlation, and this will affect, say, confidence intervals - but the percentage is what it is.
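The correlated-failures illustration can be made concrete. The λ = 0.005 rate and the billion-waqf split are Trammell's hypothetical numbers, not historical estimates:

```python
# Trammell's illustration made concrete: expropriation as a Poisson
# process with rate lam = 0.005/year, shared by every waqf in a region
# (the numbers are hypothetical, taken from his example above).
import math

lam = 0.005
years = 1_000

p_survive = math.exp(-lam * years)   # per-waqf millennium survival
print(f"{p_survive:.4f}")            # ~0.0067, i.e. the ~0.7% quoted

# If one conquest wipes out all waqfs at once, then with probability
# ~99.3% a billion of them all fail together -- so zero survivors out of
# a billion is still compatible with a ~0.7% per-waqf survival chance.
p_all_fail_shared = 1 - p_survive
p_all_fail_independent = (1 - p_survive) ** 10**9   # underflows to 0.0
print(p_all_fail_shared, p_all_fail_independent)
```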

3. His third point argues that the risk needs to grow with size for perpetuities to be bad ideas.

This doesn't seem right either. I gave many reasons quite aside from that against perpetuities, and his arguments against the very plausible increasing of risk aren't great either (pogroms vs the expropriation of the Church? but how can that be comparable when by definition the net worth of the poor is near-zero?).

A handful of relatively recent attempts explicitly to found long-term trusts have met with partial success (Benjamin Franklin) or comical failure (James Holdeen). Unfortunately, there have not been enough of these cases to draw any compelling conclusions.

I'd say there's more than enough when you don't handwave away millennia of examples.

Incidentally, I ran into another failure of long-term trusts recently: Wellington R. Burt's estate trustees managed to, over almost a century of investment in the USA during possibly the greatest sustained total economic growth in all of human history, with only minor disbursements and some minor legal defeats, no scandals or expropriation or anything, nevertheless realize a real total loss of around 75% (turning the then-inflation-adjusted equivalent of ~\$400m into ~\$100m).

Hi gwern, thanks for the reply.

I think you might be misunderstanding my points here. In particular, regarding point 2, I'm not suggesting that the waqfs split, or that anything at all like that might have happened. The “split waqfs” point is just meant to illustrate the fact that, when waqf failures are correlated for whatever reason, arbitrarily many closures with zero long-term survivors can be compatible with a relatively low annual hazard rate. The failure of a billion waqfs would be a valid observation, but it would be an observation compatible with the belief that the probability that a new waqf survives a millennium is non-negligible.

In any event--I should probably have reached out to you sooner, sorry about that! Now unfortunately I'll be too busy to discuss this more until June, but let me know if you're interested in going over all three points (and anything else regarding the value of long-term philanthropic investment) once summer comes. I would sincerely like to understand the source of our disagreements on this.

In the meantime, thanks for the Wellington R. Burt example, I'll check it out!

So any perpetuity's probability of surviving the full 120 centuries is 0.01^120 ≈ 10^-240.

The premises in this argument aren't strong enough to support conclusions like that. Expropriation risks have declined strikingly, particularly in advanced societies, and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels, e.g. a stable world government run by patient immortals, or with an automated legal system designed for ultra-stability.

ETA: Weitzman on uncertainty about discount/expropriation rates.

The premises in this argument aren't strong enough to support conclusions like that.

Sure. But the support for other parts of the perpetuity argument like long-term real returns aren't strong either. And a better model would take into account diseconomies of scale. Improbability needs to work both ways, or else you're just setting up Pascalian wagers...

Expropriation risks have declined strikingly, particularly in advanced societies,

They have?

and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels

Even easier to describe scenarios in which the risk spikes. How's the Middle East doing lately? Are the various nuclear powers like Russia and North Korea still on friendly terms with everyone, and nuclear war utterly unthinkable?

ETA: Weitzman on uncertainty about discount/expropriation rates.

This seems to be purely theoretical modeling which does not address my many disjunctive & empirical arguments above against the perpetuity strategy.

Expropriation risks have declined strikingly, particularly in advanced societies

I see no reason to conclude that. Au contraire, I see expropriation risks rising as government power grows and the political need to keep the feeding trough full becomes difficult to satisfy.

it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels

"Easy to describe" is not at all the same thing as "Are likely". Both utopias and dystopias are easy to describe.

I have some trouble conceiving of what could beat a consistent argument by a googol fold.
Now I don't anymore.

I stand corrected.

Thank you Gwern.

Robin used a Dirty Math Trick that works on us because we're not used to dealing with large numbers. He used a large time scale of 12000 years, and assumed exponential growth in wealth at a reasonable rate over that time period. But then for depreciating the value of the wealth due to the fact that the intended recipients might not actually receive it, he used a relatively small linear factor of 1/1000 which seems like it was pulled out of a hat.

It would make more sense to assume that there is some probability every year that the accumulated wealth will be wiped out by civil war, communist takeover, nuclear holocaust, etc etc. Even if this yearly probability were small, applied over a long period of time, it would still counteract the exponential blowup in the value of the wealth. The resulting conclusion would be totally dependent on the probability of calamity: if you use a 0.01% chance of total loss, then you have about a 30% chance of coming out with the big sum mentioned in the article. But if you use a 1% chance, then your likelihood of making it to 12000 years with the money intact is 4e-53.
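The two scenarios in this comment are a two-line calculation:

```python
# The comment's two scenarios: a constant annual chance of total loss
# (civil war, communist takeover, nuclear holocaust, ...), compounded
# over 12,000 years.
years = 12_000

for annual_loss in (0.0001, 0.01):   # 0.01%/yr vs 1%/yr
    p_intact = (1 - annual_loss) ** years
    print(f"annual loss {annual_loss:.2%}: P(still intact) = {p_intact:.3g}")
# -> about 0.30 for the 0.01% case, and about 4e-53 for the 1% case
```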

Do you think the risk per year of losing the accumulated wealth is higher in the far future than in the near future? If the risk is not higher, doesn't your objection generalize to ordinary (near-future) investments?


Yes. If you're not around to manage the money, it's far more likely to be embezzled or end up used on something no longer useful.

Also, many possible risks you can see coming before they actually happen. The Brazilian Empire isn't going to invade and pillage the USA in the next 10 years, but can you be so sure that it won't happen in the 3240s?

Oh you know nothing about the Brazilian Empire...

We look tame on the outside... but it's the atom's inside that counts...

As I said in response to Gwern's comment, there is uncertainty over rates of expropriation/loss, and the expected value disproportionately comes from the possibility of low loss rates. That is why Robin talks about 1/1000, he's raising the possibility that the legal order will be such as to sustain great growth, and the laws of physics will allow unreasonably large populations or wealth.

Now, it is still a pretty questionable comparison, because there are plenty of other possibilities for mega-influence, like changing the probability that such compounding can take place (and isn't pre-empted by expropriation, nuclear war, etc).

Nice catch!

I'm not sure what an investment in a particular far-future time would look like. Money does not, in fact, breed and multiply when left in a vault for long enough. It increases by being invested in things that give payoffs or otherwise rise in value. Even if you have a giant stockpile of cash and put it in a bank savings account, the bank will then take it and lend it out to people who will make use of it for whatever projects they're up to. If you do that, all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first while, and then when you eventually take it out you take the choice back and make it yourself. The one way I can think of to actually invest in the distant future is to find or create some project that will have a massive payoff in the distant future but low payoffs before that, and I don't think anyone knows of a project that pays off further than 100 years in the future.

Maybe you could try to create a fund that explicitly looks for far-future payoff opportunities and invests in them, but I don't think one exists right now, and the idea is non-trivial.

I dunno, maybe there's something else I'm missing, though.

Likewise, if one actually expects to collect a googol dollars from investment, then either (a) galactic economies would need to be servicing the interest payments or (b) inflation has rendered dollars nearly valueless.

I'm not sure what an investment in a particular far-future time would look like.

Maybe like this:

Franklin [left] £1000 each to Philadelphia and Boston in his will, to be invested for 200 years. He died in 1790, and by 1990 the funds had grown to \$2.3M and \$5M respectively, giving inflation-adjusted gain factors of 35x and 76x, for annual returns of 1.8% and 2.2%.

This is more like a conservative investment in various things by the managing funds for 200 years, followed by a reckless investment in the cities of Philadelphia and Boston at the end of 200 years. It probably didn't do particularly more for the people 200 years from the time than it did for people in the interim.
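The annual-return figures quoted can be verified from the growth factors alone:

```python
# Sanity-checking the quoted Franklin figures: 35x and 76x
# inflation-adjusted growth over the 200 years from 1790 to 1990.
for factor in (35, 76):
    annual = factor ** (1 / 200) - 1
    print(f"{factor}x over 200 years -> {annual:.1%}/year")
# -> 1.8%/year and 2.2%/year, matching the quoted returns
```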

Also, the most recent comment by cournot is interesting on the topic:

You may also be using the wrong deflators. If you use standard CPI or other price indices, it does seem to be a lot of money. But if you think about it in terms of relative wealth you get a different figure [and standard price adjustments aren't great for looking far back in the past]. I think a pound was about 5 dollars. So if we assume that 1000 pounds = 5000 nominal dollars and we use the Econ History's price deflators http://www.measuringworth.com/uscompare/ we find that this comes to over \$2M if we use the unskilled wage and about \$5M if we use nominal GDP. As a relative share of GDP, this figure would have been an enormous \$380M or so. The latter is not an irrelevant calculation.

Given how wealthy someone had to be (relative to the poor in the 18th century) to fork over a thousand pounds in Franklin's time, he might have done more good with it then than you could do with 2 to 5 million bucks today.

That is unreasonable, because we have more access to means of helping the poor today. If you expect the trend to go on into the future, then 2 million tomorrow is always better than a thousand today, which corresponds to at most about 3 lives saved via AMF or SCI.

all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first while,

You're letting the bank and borrowers choose uses which they expect to be worth more than the cost, under the knowledge that they may be bankrupted if they choose poorly and keep the surplus profits if they choose well. These constraints tend to lead to fewer consumable luxury purchases and more carefully selected productive investments, and having more of the latter increases the potential economic output of the future.

There are many caveats to this, though. Does our potential economic output really have no upper bound within a hundred orders of magnitude of its present state? That seems unlikely, but if not then those exponential returns are just the bottom tails of S-curves. Is this economic system going to be protected from overwhelming corruption, violence, and theft for a future period longer than all prior human history? That would be historically unprecedented, but it only takes one disaster to wipe out a fortune.

Like a lot of Robin's stuff, he makes the assumption that everyone already knows about some argument that's mostly original to him, and then proceeds to deduce why they're not acting on that (nonexistent) knowledge. Personally, after reading his argument, I did become marginally more interested in doing something like what he describes. However, I'm not convinced that it's obviously the most altruistic way to use resources.

For one thing, it seems possible that making the world a better place now would actually yield greater returns in the long run.

Let's say I donate to AMF, saving 10 African children from dying. Those children grow up and do useful work, shifting the world economy a bit forward on its exponential growth curve (say, from t=10 to t=10.01). 12,000 years later, we're at t=12,010.01 instead of t=12,010 - but since we're so far along the exponential growth curve at this point, that ends up making a much larger difference. The difference that your contribution makes will grow exponentially just like in Robin's plan:

$e^{t + 0.01} - e^{t} = e^{t}(e^{0.01} - 1)$
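Numerically, the head-start formula e^t(e^0.01 - 1) says the relative gain stays a constant ~1% while the absolute gain compounds:

```python
# Head-start arithmetic: nudging the economy from e^t to e^(t+0.01)
# yields an absolute gain of e^t * (e^0.01 - 1), which grows
# exponentially in t even though the relative gain is a constant ~1%.
import math

dt = 0.01
for t in (10, 100, 500):
    gain = math.exp(t) * (math.exp(dt) - 1)
    print(f"t={t}: absolute gain = {gain:.3g}, "
          f"relative gain = {math.expm1(dt):.2%}")
```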

There are lots of people who are trying to maximize their individual return on investment (through buying stocks and stuff), so opportunities there are thoroughly picked over. By contrast, there are a relatively small number of dollars chasing Wikipedia-style projects that accelerate economic growth without delivering financial returns to their creators, so ROI should (theoretically) be much higher.

One problem with this plan is that although you'll have a larger impact on growing the world economy, it's not clear to what degree a wealthier world economy contributes to altruistic ends. If everyone becomes extremely uncharitable between now and the very far future, then you'd be relatively better off following Robin's plan.

Another problem: if we assume a constant background probability that your investment ends up being worthless (because humanity destroys itself, or because it gets stolen, or whatever), then this is exponential decay, which has the potential to cancel out exponential growth. Let's say your investment has a constant .99 probability of continuing to exist each year. In 100 years, the probability that your investment still exists is .36. After 12,000 years, the probability that your investment still exists is about 4e-53.

Another problem with Robin's plan: although your investment grows exponentially, it stays constant as a fraction of the world economy. Let's say global GDP is \$80 trillion and you put \$1000 into Robin's idea (about 0.00000000125% of global GDP). Your investment grows at the same rate as the global economy, so a billion years later, your investment still amounts to about 0.00000000125% of global GDP. If the number of people alive with problems to solve has stayed roughly constant in that period, then it's unlikely your investment will do much good, since everyone should be ridiculously wealthy anyway (unless there's extreme global inequality).

So yeah, lots of potential problems with Robin's argument. Sorry if I'm repeating stuff that was already said in the comments here or there.

unless there's extreme global inequality

This is a bizarre caveat, given that we currently have extreme global inequality.

Not to the point that you could cure world hunger by paying a few cents.

More points, some in favor of Robin:

The probability of losing your money each year is likely not independent. If you've managed to last the past 11,000 years, it's relatively more likely you'll last another 1,000.

Also, if the expected gain year-on-year is positive (for example, if our assets grow by a factor of 1.02 every year, and there's a .99 chance that they continue to exist, that means their expected year-on-year growth is a factor of 1.0098), then the argument could still work. But you start getting into Pascal's Mugging territory there.
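The Pascalian shape of that wager is easy to exhibit: the expectation explodes, but nearly all of it rides on an astronomically unlikely branch:

```python
# Pascalian structure: 1.02 growth times 0.99 survival gives expected
# growth of 1.0098/year, so expected value explodes -- yet the chance
# the fund survives at all is astronomically small.
import math

years = 12_000
growth, p_survive = 1.02, 0.99

log10_expected = years * math.log10(growth * p_survive)
log10_alive = years * math.log10(p_survive)

print(f"expected multiplier  ~ 10^{log10_expected:.1f}")  # ~10^51
print(f"P(fund still exists) ~ 10^{log10_alive:.1f}")     # ~10^-52
```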

Another point is the implausibility of economic growth continuing for that long. Things don't grow exponentially in nature forever. Typically they reach some carrying capacity, run out of resources, etc.

Overall I think Robin's idea has a low enough probability of working that it runs into Pascal's Mugging territory, but it might be worth doing with a small fraction of available altruistic resources.

The Southern red oak tree (Q. falcata) grows at a rate something like 1.25 feet every year, so in 12,000 years, I should have a tree over 12,000 feet tall, right?

Making a 2 percent return on investment which you expect to pay off after 12 thousand years is like planting a tree and expecting it to grow to 12,000 feet, or starting with two rabbits and expecting them to cover the entire world in a dozen generations.

The oldest bank in the world has existed for a paltry few centuries.

Making a 2 percent return on investment which you expect to pay off after 12 thousand years is like planting a tree and expecting it to grow to 12,000 feet,

I don't think this is a fair analogy. Robin explicitly assumed a 1/1000 chance of success. So he clearly didn't expect that his investment would pay off after 12k years.

Is planting a thousand trees and expecting one of them to grow to 12,000 feet a better analogy?

The point isn't that there is a small random percentage chance of failure/success. Your tree doesn't have 1/1000 chance of growing to 12,000 feet, there are structural problems with the way the world works that make it functionally impossible. There are trees near ten thousand years old, none of which have grown anywhere near that tall, because the number we assign to "growth rate" is actually one of the very very many variables that affect which trees can exist and how they do so. Using a self-multiplying "Growth-rate" number to try to figure out how your investment fund is going to do in 12k years is ignoring just as many variables.

Do you expect your investment to grow as a fraction of total wealth (rather than just keeping pace with overall economic growth)? If yes: How high a proportion of total wealth do you expect it to become?

Thought experiment: Suppose that a large fraction of the wealth on Earth was held by a "charitable trust" which was started 12,000 years ago, had spent the intervening time solely managing its wealth (not doing anything charitable), and now was seeking to use its resources for altruistic purposes (following the guidelines set forth by the person who gave it instructions 12,000 years ago). Would that be better than the status quo, or worse? By how much?

Second thought experiment: Suppose that a large fraction of the wealth on Earth was held by a "charitable trust" which was started 11,000 years ago, had spent the intervening time solely managing its wealth (not doing anything charitable), and was under strict instructions to spend the next 1,000 years solely managing its wealth (not doing anything charitable) before it finally turned to altruistic purposes. Would that be better than the status quo, or worse? By how much?


Methuselah trusts are not entirely legal. Someone who tries to set one up today may be prevented from doing so.

http://www.laphamsquarterly.org/essays/trust-issues.php?page=all

You would need high status such as Benjamin Franklin to avoid having it robbed eventually anyway. And in order to have it last thousands of years you need your status to last thousands of years.

The bitcoin blockchain looks like it will almost last forever, since there are many fanatics that would keep the flame lit even if there was a severe crackdown.

So, an answer for the extreme rational altruist seems to lie in how to encode the values of their trust in something like a bitcoin blockchain, a peer to peer network that rewards participants in some manner, giving them the motive to keep the network alive.

The bitcoin blockchain looks like it will almost last forever, since there are many fanatics that would keep the flame lit even if there was a severe crackdown.

That seems highly unlikely, unless you actually meant something like 'the successors to the bitcoin blockchain'. We already know that quantum computing is going to lop off a large fraction of the security in the hashes used in Bitcoin, and no cryptographic hash has so far lasted even a century.

I agree to a certain extent. I just pointed out one thing, probably the only thing, that is fairly immune from the law, is expected to last fairly long, and rewards its participants.

I did say "something like a blockchain": a peer-to-peer network that rewards its participants. Contrarians and even reactionaries could use something like this to preserve and carry their values across time.

Doesn't that assume that the bitcoin protocol isn't altered with updated security?

It's hard to update the protocol when most of the network will ignore and shun any updated clients. Any update as invasive as changing the hash functions will probably require shifting to a new blockchain; hence:

> unless you actually meant something like 'the successors to the bitcoin blockchain'

But the devs release updated clients all the time.

There's a big difference between adding some bugfixes or nifty userland features to what is now merely one client among many - and making a fundamental backwards incompatible upgrade to the entire protocol which would affect every client, miner, and interfacing software with major security ramifications. Interoperability is fragile (witness the recent blockchain fork which led to lead dev Gavin paying out >\$70k in bitcoins to miners on the wrong side of the fork), and changing hash functions will break it.


If they need to change to a new hash function, there'll probably be plenty of warning, so a sensible rollout can be planned. If you need a new hash function, everyone's going to have to update anyway, and I think most people involved in bitcoin would prefer to keep the existing blockchain rather than start again from scratch.

The recent fork was different, in that the problem wasn't detected until it happened (and the people running old versions are going to have to upgrade in any case).

> If you need a new hash function, everyone's going to have to update anyway, and I think most people involved in bitcoin would prefer to keep the existing blockchain rather than start again from scratch.

But is this even possible? If the hashes are broken, depending on the attack any transaction on the 'old' blockchain may be a double-spend or theft, and so backwards compatibility just imports the new security problems. (Imagine there's a new attack which can double bitcoins at an old-chain address, but the new-chain with a hash forbidding it is backwards-compatible and accepts all old-address transmissions to new-addresses; then as soon as the attacks finally become practical, anyone can flood the new-chain with counterfeit coins.)

Easiest to just make a clean break with an entirely new blockchain. People can sell out their old coins and buy in, or they can use a different scheme. (For example, Bitcoins can be verifiably destroyed, so the new blockchain's protocol might use that as a way to launder 1.0 into 2.0; at least as long as there is no large-scale counterfeiting happening.)


I think it could be done, assuming there's enough time between the old hash looking vulnerable and it actually being broken:

Release a new client version X which uses the new hash after some future block N. Once block N+1000 has been found, hash every block up to N using the new hash and bake the final result into client version X+1, such that it rejects all old-hash blocks that haven't been blessed by the new hash.
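A toy sketch of that rollout scheme (my own illustration, not the real Bitcoin block format or the actual migration mechanism): old blocks stay in their original chain, but client X+1 carries a baked-in checkpoint of blocks 0..N computed with the new hash, and rejects any old-hash history that doesn't match it.

```python
import hashlib

def new_hash(data: bytes) -> bytes:
    # Stand-in for "the new hash"; SHA3-256 is just an example choice.
    return hashlib.sha3_256(data).digest()

def checkpoint(blocks: list, n: int) -> bytes:
    """Chain-hash blocks 0..n with the new function into one digest."""
    acc = b""
    for block in blocks[: n + 1]:
        acc = new_hash(acc + block)
    return acc

# Toy blocks standing in for the pre-migration chain.
blocks = [b"block-%d" % i for i in range(5)]

# Client X+1 ships with this digest baked in (here, blessing blocks 0..3):
baked_in = checkpoint(blocks, 3)

# Honest old-hash history matches the checkpoint and is accepted...
assert checkpoint(blocks, 3) == baked_in

# ...while a forged old-hash history fails the new-hash checkpoint,
# even if the forgery were valid under the broken old hash.
tampered = list(blocks)
tampered[1] = b"forged"
assert checkpoint(tampered, 3) != baked_in
```

The point of the `N+1000` delay in the scheme above would be to let the checkpoint settle well past any plausible reorganization before it gets baked in.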

Still, that is rather involved, and your destroy-to-convert scheme (which could be disabled once the old hash is looking too shaky) looks like it would work pretty well.

I'm not sure how well selling old coins and buying in would work, though - someone's going to be left holding a large bag of worthless bitcoins at the end of that.

I don't think the delineation between old and new will be quite so clear. Consider a new client that all the miners switch to: importing your wallet into this client causes a transaction to appear on the network transferring all your old coins to the new scheme. Once confirmed, all your old coins are now bitcoin2, which can't be sent to bitcoin1 wallets. Any attempt to use your old bitcoin1 coins will show up as invalid.

Immortal fanatics? Or fanatics very good at inspiring equal zeal in future generations?

I guess after a point, the network takes care of itself, with self interest guiding the activities of participants. Of course, I could be wrong.

Continued accumulation of risk of some kind of failure compounds over the years exponentially, just like the interest does, and can drive the probability of survival down to 10^-100 just as easily.

The space of correct arguments of N words and the space of invalid but superficially correct arguments of N words differ in size by a factor exponential in N, so the improbability of correctness in general can also easily reach a googol.
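The symmetry between compounding growth and compounding risk can be made concrete (the 1.9% failure rate below is my own illustrative number, not from the thread): a 2%/year return over 12,000 years does exceed a googol, but even a modest annual chance of total loss drives the survival probability below 10^-100 over the same span.

```python
# 2% compound growth over 12,000 years: about 10^103, past a googol.
growth = 1.02 ** 12_000

# An annual ~1.9% chance of total failure (confiscation, collapse, etc.),
# compounded over the same 12,000 years, leaves a survival probability
# smaller than 10^-100 -- cancelling the googol on its own.
survival = (1 - 0.019) ** 12_000

print(growth > 1e100, survival < 1e-100)
```

So the exponential that makes the argument impressive is exactly as available to the counterargument.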

That being said, there is genuinely a problem with description-length priors and unbounded utilities. Given a valid theory of everything with length L, a fairly small increase in that length can yield a world with enormous invisible, undetectable consequences. E.g. an enormous number (3^^^^3, or BB(11)) of secondary worlds where the fundamental constants depend on the temperature of the coldest 1-gram piece of frozen hydrogen in our universe (something global which we may influence, killing all the aliens in those universes). The perversity is that we don't know those worlds exist, we don't know they don't exist, the theory where they do exist is not much longer than the simplest known ToE, and it predicts exactly identical observations, so it can never be disproved.

I may not understand Robin's post. I think he said (paraphrased): "If you really cared about future bazillions of people, and if you are about to spend N dollars on X-risk reduction, then instead you should invest some of that so that some subset of future people - whoever would have preferred money/wealth to a reduced chance of extinction - can actually get the money; then everyone would be happier. We don't do that, which reveals that we care about appearing conscientious rather than helping future people."

But this seems wrong. However high the dollar value of our investment at time T, it will only buy the inheritors some amount of wealth (computing power, intellectual content, safety, etc.). This amount is determined by how much wealth humanity has produced/has access to at time T. This wealth will be there anyway, and will benefit (some) humans with or without the investment. Then increasing the chances of this wealth being there at all - i.e. reducing X-risk - dominates our present day calculation.

A flaw or trick in Robin's argument is talking about 12,000 years as if that is the future people care about. People concerned about AI and global warming make it a point that the harm comes, or at least is well on its way, within 100 years. 2% over 100 years is a fairly crappy return, not game-changing in anybody's imagination.

I wonder what the longest time is that has passed between an investment aimed at a return in the far future and an actual return similar to what was expected. The few things I can think of are cathedrals and universities. There are a few universities that are 1000 years old; do they qualify? I think not, for these reasons: 1) the universities were not founded with the intention of a payoff 500 or 1000 years later; they were founded with the intention of getting value from them soon after founding. 2) The universities that have lasted 1000 years have been re-invested in, episodically and massively, over those 1000 years, so the return now from the original investment is probably quite a small part of their total present value.

And that's 1000 years, I am aware of NOTHING with the slightest hint of long term payoff from 1000 years ago. Look forward to being shown I'm wrong in comments, though :)

12,000 years? It's a joke. 11,900 years after an AI with some real capacity is developed? Supposing I cared deeply about 12,000 years from now, what "signal" would I have telling me how to invest for that time period that wouldn't be totally swamped by the noise of uncertainty and the interference of numerous possibilities?

This of course says nothing of finding an investment that pays a consistent, or even an average, 2% over 12,000 years. The fact that I can imagine such a thing does not suggest that its probability of existence is > 1.02^(-12000).

Benjamin Franklin bequeathed a reasonable sum of money (IIRC, 1000 pounds sterling each to two cities) to be invested for two centuries. The fund is worth something like \$5 million today. I don't recall the exact details, but it's a good past example of something like this.

I suspect the impact of this fund has been pretty small compared to the other stuff that Franklin did.

Franklin donated a small amount of money which grew to a very small fraction of the economy for a time period less than 1/60th of the proposal; he got incredibly lucky that he did it in America, and not in any of the other growing powers or economies of his day or later, such as Russia/France/Germany/Mexico/Argentina/Japan, which might have wiped out his legacy; and even Americans have found it difficult to replicate his feat.

I plan on being alive in 12,000 years, please send bitcoins.

Note that Robin's post was an argument that nobody cares much about the far future. The conclusion is kind-of obvious - evolved organisms tend to care about things they can influence.

How will your investment create the future wealth? If you intend to loan it out and collect interest, how much harm will those loans do (through environmental degradation) over 12k years?

If you instead invest that money right now in something that maximizes good, rather than investing it to maximize returns for 12k years first, would you expect higher total good?

Why 12k years, and not 12k+5, or 50? It hardly seems like a likely place for a global or even local maximum.

I can counter the 10^100 trivially: if inflation averages about 2% per year, your compound interest merely keeps your value constant, providing no benefit over time. What values for inflation do you predict over the next 12,000 years, broken down into averages for each 200-year period?
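The inflation point is easy to check numerically (the 3% figure below is my own illustration): the real growth factor is the ratio of the nominal factor to the inflation factor, so 2% nominal against 2% inflation compounds to exactly 1, and against 3% inflation the fund's real value collapses instead of reaching a googol.

```python
nominal, years = 0.02, 12_000

# Real growth factor = ((1 + nominal) / (1 + inflation)) ** years.
# At matching 2% inflation, 12,000 years of compounding gains nothing:
flat = ((1 + nominal) / 1.02) ** years   # exactly 1.0

# At 3% inflation, the real value shrinks by a huge exponential factor:
eroded = ((1 + nominal) / 1.03) ** years

print(flat, eroded < 1e-40)
```

In other words, the sign of (nominal − inflation) decides whether the same exponential works for you or against you.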

The first problem is that, given that average inflation rates have exceeded 2% per year, any investment at 2% annually is going to be a terrible investment in the future - you're actually LOSING money every year on average rather than gaining it, in terms of actual value.

However, let us assume that for whatever reason you were actually getting 2% growth on top of inflation per year (an unlikely scenario for an unguided account, but bear with me a moment). The second problem is that the result is, obviously, nonsensical. There are not even a googol particles in the Universe; how could you have a googol dollars and have that be a reasonable result? There isn't anything to purchase with a googol dollars. Ergo, our assumption must be flawed.

The flaw in this assumption is, of course, a lack of understanding of exponential growth; all exponential growth is self-limiting in nature. In reality you run into real constraints on how much growth you can have, how much of the economy can be in your fund, etc. As someone else pointed out, you can't assume that you will have indefinite growth; it is confined by the size of the economy (which will never reach a googol dollars in current-day money), by the likelihood of your actually getting said returns (which is ~0), by whether anyone would recognize a currency when too much of the world economy was bound up in it, etc.

The truth is it is just a terrible argument to begin with. Anyone who promises you 2% growth for even a thousand years is a dirty liar. The rate seems reasonable but in actuality it is anything but. Do you think that the world economy is going to increase by 2% a year for the next 1000 years? I don't, at least not in real, inflation-adjusted dollars. The total amount of energy we can possibly use on the planet alone would constrain such economic growth.


I'm not sure what an investment in a particular far-future time would look like.

Maybe like this:

Franklin [left] £1000 each to Philadelphia and Boston in his will to be invested for 200 years. He died in 1790, and by 1990 the funds had grown to 2.3, 5M\$, giving factors of 35, 76 inflation-adjusted gains, for annual returns of 1.8, 2.2%.
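A quick check of the annualized figures quoted above (my own arithmetic, not from the original comment): a real growth factor F over 200 years implies an annual real return of F^(1/200) − 1, and the quoted factors of 35 and 76 do work out to roughly 1.8% and 2.2%.

```python
# Annualize each 200-year inflation-adjusted growth factor from the quote.
for factor in (35, 76):
    annual = factor ** (1 / 200) - 1
    print(f"factor {factor}: {annual:.1%} per year")
```

So Franklin's bequests sit right around the 2% rate the original post builds its googol on, sustained for only 1/60th of the 12,000 years.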

[This comment is no longer endorsed by its author]

Utilitarianism has these problems whether you look at thousands of years in the future or today. It's unlikely that any "utilitarian" reading this today lacks resources, reserved for themselves or those close to them, that would easily produce greater good for a greater number if spent elsewhere.