Also, the question was the pressures on creating the system, not the pressures during the false alarm.

What relation does the former have to the latter?

But the planet is precisely what's being taxed! Why stage a tax rebellion only to forfeit your taxable assets?

If the lands are marginal, they would be taxed very little, or not at all.
 

Well, the planet would not be paying the tax; the colonists would be paying the tax. They likely won't have to forfeit anything at all, since the mere threat is enough to prevent any attempts at taxing them.

If the tax were literally zero, and the authority of Earth only nominal, then maybe the issue could be sidestepped, but then the question of what kind of taxation to use would be moot.

But if it's above zero, I'm not really sure how you imagine the situation unfolding, or what sort of things could pay the tax or be used as tax payments. As you mentioned, there's mass, energy, space-time, plus information. Small colonists obviously can't pay anything with space-time, since that is not something they can relocate. So it will have to be mass, energy, and/or information as the unit of settlement for taxes in any plausible future.

Maybe there will be a common currency, but more likely not, since currency controls are impossible with a time lag of many years; it would be a very unstable system.

Regardless, even on 2022 Earth it's clear that some folks, not just a few but thousands upon thousands, are willing to die for abstract principles of one kind or another, including over matters of taxation. I can easily imagine a future world of millions of very independent colonists who are more than willing to fight to the death if they have to pay even a single dollar of taxes. And unlike the present day, they would be on a nearly level playing field even against a polity with 1000x the resources.

There’s also no plausible way to give representation in exchange for taxation, since the communications lag is so massive, so I really can’t see how anyone could compel even a single dollar out of distant colonists due to the previously discussed reasons.

 

Even if they left the planet, couldn’t the counter strike follow them? It doesn’t matter if you can do more economic damage if you also go extinct. It’s like refusing to pay a $100 fine by doing $1000 of damage and then ending up in prison. The taxing authority can precommit to massive retaliation in order to deter such behavior. The colony cannot symmetrically threaten the tax authority with extinction because of the size difference.

There is no way the counter strike can 'follow' them to other planets, because that would guarantee the destruction of more value than any tax on a single planet could ever collect. Plus it would be pointless if they get sufficient advance warning. It doesn't take centuries to pack up and move on.

The taxing authority cannot precommit to 'massive retaliation' because, as mentioned, there's no way for them to completely destroy the colonists. Neither can the colonists completely destroy everything in Sol, but that doesn't matter, because they can still credibly destroy many units of value with a single unit of effort.

All of this ignores the practical issues with these weapons, the fact that earth’s value is minuscule compared to the sun, 

How does the value of the Sun relate to this discussion?

the costs of forfeiting property rights, 

What property rights? For any rights to actually exist there must be some authority capable of enforcing them, which wouldn't be the case, as previously mentioned.

the relocation costs, 

The relocation cost would be there, but as I said, it would be many orders of magnitude less for the colonists than for Earth.

and the fact that citizens of marginal lands would receive net payments from the citizens dividend.

If Earth simply wants to send payments to the colonists, then that renders the choice of taxation system moot. If they want to send payments a few dozen years after taxes are collected, then they still first have to collect the taxes, which is the same problem. Promising large rewards at some future date without an enforceable guarantee doesn't work, since the colonists can't compel the payments to be sent out either.

It's hard to tell whether you're joking or being serious. Do you really believe the pressures you're experiencing are that significant compared to what Petrov was experiencing on that day?

There are two possibilities here:

  1. Nations have the technology to destroy another civilization
  2. Nations don't have the technology to destroy another civilization

In either case, taxes are still possible!

In case 1, any nation that attempts to destroy another nation will also be destroyed since their victim has the same technology. Seems better to pay the tax.

No? Your own example of detecting a dangerous launch some number of years in advance demonstrates the opposite. 

That would provide enough time for a small, low-value colony on a marginally habitable planet to evacuate nearly all of its wealth, except maybe low-value heavy things such as railroad tracks, whereas Earth would never be able to evacuate even a fraction of its total wealth, since a huge amount is locked up in things such as the biosphere, which cannot credibly be moved off-planet or replicated.

There are likely dozens or hundreds of marginal planets for every Earth-like planet, so the small colonists can just pack up and move to another place of almost equivalent value, minus relocation costs, whereas there's no such option for Earth. Once it's destroyed, there's likely no replacement within at least a hundred light years.

For example, if both sides have access to at least one 100 000 ton spacecraft capable of 0.5 c, there's an asymmetric threat: the leaders of the small colony can credibly threaten to destroy civilization on Earth, and along with it all hope of a similar replacement, whereas the leaders of Earth can't credibly do the same.

And this relationship is not linear either: even if Earth could afford 1000 such spacecraft and the small colony only 1, it doesn't balance the scales, since the leaders of Earth can't credibly threaten to destroy the small colony 1000x over; that's impossible. And they can't credibly threaten to destroy every marginally habitable planet within a certain radius, since that would certainly destroy more value than any tax on a single colony could ever feasibly recover.

i.e. the small colonists can actually punch back 1000x harder (if 1 Earth is worth 1000 small colonies on marginal planets, value-wise), whereas Earth cannot.
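
A toy calculation may make the asymmetry clearer. The 1-Earth = 1000-colonies ratio is just the illustrative figure from above; the only point is which side can credibly threaten a large multiple of its own stake:

```python
# Illustrative asymmetry in destructible value, using the assumed
# 1-Earth = 1000-colonies ratio. All numbers are in arbitrary "colony units".

earth_value = 1000   # assumed value of Earth
colony_value = 1     # assumed value of one marginal-planet colony

# One colony spacecraft can credibly threaten all of Earth's value; even 1000
# Earth spacecraft can threaten at most one colony's value (and the colony can
# relocate to one of many comparable marginal planets first).
colony_leverage = earth_value / colony_value   # value the colony can threaten, per unit of its own value
earth_leverage = colony_value / earth_value    # value Earth can threaten, per unit of its own value

print(f"Colony can threaten {colony_leverage:.0f}x its own value")   # 1000x
print(f"Earth can threaten {earth_leverage:.3f}x its own value")     # 0.001x
```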

You received financial pressure and/or top-down command pressure to rush it in 4 days? Or was it your own decision? Because the former would imply some rather significant things.

It's not clear that space war will be dominated by kinetic energy weapons or MAD:

  1. These weapons seem most useful when entire civilizations are living on a single planet, but it's possible that people will live in disconnected space habitats. These would be much harder to wipe out.

  2. Any weapon will take a long time to move over interstellar distances. A rebelling civilization would have to wait thousands of years for the weapon to reach its target. It seems like their opponent could detect the weapon and respond in this time.

  3. Even if effective civilization-destroying weapons are developed, mutual defense treaties and AI could be used to launch a second strike, making civilization-scale attacks unlikely.

For (1), they would still be useful, because Earth represents much more value than any tax that could be collected on a short timescale (< 100 years) from even another equivalent Earth-like planet (let alone from some backwater colony).

Thus threatening the destruction of value several orders of magnitude greater than the value to be collected is a viable deterrent, since no rational authority would dare test it. Who would trade a 10%, or even 1%, chance of losing $10 000 for a 90% chance of collecting $1?
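
To spell out the expected values behind that rhetorical question, here is a minimal sketch using the same illustrative numbers (the probabilities are assumptions, not estimates):

```python
# Rough expected-value comparison for the deterrence example above.
# Illustrative numbers only: a 1% chance of losing $10 000 in retaliation
# versus a 90% chance of collecting $1 in tax.

p_retaliation = 0.01            # assumed chance the colony carries out its threat
loss_if_retaliation = 10_000    # value destroyed if it does
p_collect = 0.90                # assumed chance the tax is actually collected
tax_collected = 1               # value of the tax

expected_loss = p_retaliation * loss_if_retaliation   # = 100
expected_gain = p_collect * tax_collected             # = 0.9

print(f"Expected loss from retaliation: ${expected_loss:,.2f}")
print(f"Expected tax revenue:           ${expected_gain:,.2f}")
# Even at only a 1% retaliation risk, the expected loss is over 100x the
# expected revenue, so a rational taxing authority would not test the threat.
```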

For (2), it takes under a decade for a 0.5 c spacecraft to go from Alpha Centauri to Earth, and only a few dozen years from several hundred nearby systems. It's impossible, without some as-yet-uninvented sensing technology, to reliably surveil even the few hundred closest star systems.

Of course, once it's at speed in interstellar space it's vanishingly unlikely to be detected, due to basic physics that cannot be changed, and once it's past the Oort Cloud and relatively easy to detect again, there will be almost no time left at 0.5 c.
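
For a rough sense of these timescales, here is a back-of-the-envelope sketch. The Alpha Centauri distance is the standard ~4.37 light-years; the 10 000 AU detection radius is purely an assumption standing in for "somewhere around the Oort Cloud":

```python
# Back-of-the-envelope travel and warning times for a 0.5 c spacecraft.
# Assumed distances: ~4.37 light-years to Alpha Centauri, and first detection
# at ~10 000 AU (roughly the inner Oort Cloud). Both are illustrative.

AU_PER_LY = 63_241           # astronomical units per light-year
V = 0.5                      # cruise speed as a fraction of c

def travel_years(distance_ly: float, v: float = V) -> float:
    """Years needed to cover a distance (in light-years) at a fraction of c."""
    return distance_ly / v

print(f"Alpha Centauri -> Earth: {travel_years(4.37):.1f} years")   # ~8.7 years
print(f"20 ly -> Earth:          {travel_years(20):.0f} years")     # ~40 years

detection_ly = 10_000 / AU_PER_LY                   # ~0.16 light-years
warning_days = travel_years(detection_ly) * 365.25
print(f"Warning time if first seen at ~10 000 AU: {warning_days:.0f} days")  # ~115 days
```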

 

For (3), a second strike is only a credible counter if the opponent has roughly equal amounts to lose. But assuming it's much easier to build a 0.5 c spacecraft than to colonize a planet to Earth level, the opponent in this case, a small colony of a few million or so, would have very little to lose in comparison.

Thus the second strike of some backwater colony would only represent a minuscule threat compared to the value destroyed by an equivalent strike on Earth. And it's a lot easier to spread out a few million folks on short notice, if detection were possible, than a few tens of billions.

In fact, reliable detection a few dozen years out would decrease the credibility of second strikes on smaller targets, as the leaders of the small colony would be confident they could evacuate everyone and most valuables in that timeframe, whereas the leaders of Earth would have very low confidence of the same.

Just read this about 3 years later. I found Thiel to be mostly spot on. Especially:

Peter Thiel: Right. Look, I don't know how you solve the social problem if everybody has to be a mathematician or a concert pianist. I want a society in which we have great mathematicians and great concert pianists. That seems that that would be a very healthy society. It's very unhealthy if every parent thinks their child has to be a mathematician or a concert pianist, and that's the kind of society we unfortunately have.

Which seems to point to the fact that the root cause of many societal ills is that the average human psyche simply can't accept that, in Thiel's language, the vast majority of children, most likely including their own, will never be great mathematicians or concert pianists (or attain an equally prestigious position).

Since human prestige is by definition relative to other humans, only a tiny minority could ever be near the top.

Yet there seems to be an instinctual demand for an individual to be special in some way, and furthermore for this to be recognized by some sufficiently large group; i.e. for a child to grow into an adult who is mediocre in every way is seen as a tragedy.

This necessitates a huge number of compensating distortions everywhere, across all institutions and policies, causing damage in innumerable ways.

How would a future government enforce their tax policies on a distant star system? 

Since it's vastly easier to destroy something than to build in outer space, there's no feasible way of using the threat of violence, at least not without mutually assured destruction.

For example, a single 100 000 ton spacecraft going at 0.5 c has about the same kinetic energy as the lower bound estimate for the KT comet impactor that wiped out the dinosaurs.
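
A quick sanity check of that figure, using the relativistic kinetic energy formula KE = (γ − 1)mc². The Chicxulub (K-T) impact energy used for comparison is an assumption; published estimates span more than an order of magnitude, with ~4.2e23 J (about 100 teratonnes of TNT) near the lower end, and the result lands in the same range:

```python
import math

# Relativistic kinetic energy of a 100 000 ton (1e8 kg) spacecraft at 0.5 c,
# compared to a lower-end estimate for the Chicxulub (K-T) impact.

c = 299_792_458.0        # speed of light, m/s
m = 1e8                  # 100 000 metric tons, in kg
v = 0.5 * c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # Lorentz factor, ~1.155
ke = (gamma - 1) * m * c ** 2             # relativistic kinetic energy, joules

kt_impact_low = 4.2e23   # assumed lower-end impact energy estimate, joules

print(f"Spacecraft kinetic energy: {ke:.2e} J")                   # ~1.4e24 J
print(f"Ratio to lower-end K-T estimate: {ke / kt_impact_low:.1f}x")
```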

Modern self driving vehicles can't run inference on even a chinchilla scale network locally in real time, latency and reliability requirements preclude most server-side work, and even if you could use big servers to help, it costs a lot of money to run large models for millions of customers simultaneously.

This is a good point regarding latency.

Why wouldn't it also apply to a big datacenter? If there's a few hundred meters of distance between the two farthest-apart processing units, that seems to imply an enormous latency in computing terms.
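
To put rough numbers on that intuition (the cable run, signal speed, and clock rate below are all assumptions, just to get the order of magnitude):

```python
# Rough signal-propagation delay across a large datacenter, expressed in
# "computing terms". Assumptions: ~300 m between the two farthest-apart
# processing units, signals travelling at ~2/3 of c in fibre, a 2 GHz clock.

C = 299_792_458.0            # speed of light in vacuum, m/s
distance_m = 300.0           # assumed worst-case cable run
signal_speed = (2 / 3) * C   # typical propagation speed in optical fibre

one_way_delay_s = distance_m / signal_speed
clock_hz = 2e9               # assumed processor clock rate

print(f"One-way delay: {one_way_delay_s * 1e6:.2f} microseconds")           # ~1.5 us
print(f"Clock cycles elapsed at 2 GHz: {one_way_delay_s * clock_hz:,.0f}")  # ~3,000
```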

'Paternalism' in this sense would seem more difficult to bring about, more controversial, and harder to control than AGI itself. So why worry about it?

In the unlikely case that mankind becomes capable of realizing it beforehand, it wouldn't serve a purpose by that point, as any future AGI would have become an almost trivial problem by comparison. If it were realized afterward, by presumably superintelligent entities, 2022 human opinions regarding it would just be noise.

At most, the process of getting global societal trust to the point where it's possible to realize may be useful to discuss. But that almost certainly would be made harder, rather than easier, by discussing 'paternalism' before the trust level has reached that point.
