Satoshi Nakamoto?

Obvious kinds of humans include:

Dead humans. (Who didn't manage to leave the coins to their heirs.)

Cryonically preserved humans hoping to use them later. (Including an obvious specific candidate.)

Humans optimistic enough about Bitcoin to think current prices are too low. (We know Nakamoto had resources, so it seems a safe bet that they could keep living on ordinary means for now.)

And the obvious: you don't know that all of Nakamoto's coins fit the standard assumed profile. It's entirely possible they intentionally mined some with the regular setup and are spending a few from that pool.

Inadequacy and Modesty

The advanced answer to this is to create conditional prediction markets. For example: a market for whether or not the Bank of Japan implements a policy, a market for the future GDP or inflation rate of Japan (or whatever your preferred metric is), and a conditional market for (GDP given policy) and (GDP given no policy).

Then people can make conditional bets as desired, and you can report your track record, and so on. Without a prediction market you can't, in general, solve the problem of "how good is this prediction track record really" except by looking at it in detail and making judgment calls.
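To make the mechanics concrete, here's a minimal sketch (my own illustrative numbers and function names, not anything from an existing market) of how a conditional contract settles: the bet only counts if the condition occurs; otherwise the stake is refunded and the bet is void.

```python
def settle_conditional_bet(stake, odds, condition_occurred, outcome_occurred):
    """Payout for a conditional bet, e.g. 'GDP grows, given the policy is enacted'.

    stake: amount wagered on the outcome
    odds: payout multiplier if the outcome occurs
    condition_occurred: was the policy enacted?
    outcome_occurred: did GDP grow?
    """
    if not condition_occurred:
        return stake  # condition never happened: bet is void, stake refunded
    return stake * odds if outcome_occurred else 0.0

# Policy enacted and GDP grew: bettor collects.
print(settle_conditional_bet(100, 1.8, condition_occurred=True, outcome_occurred=True))   # 180.0
# Policy never enacted: bettor is made whole regardless of what GDP did.
print(settle_conditional_bet(100, 1.8, condition_occurred=False, outcome_occurred=True))  # 100
```

The refund-if-void rule is what makes the conditional market's prices interpretable as conditional probabilities rather than joint ones.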

Scope Insensitivity

I hope you have renter's insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.

Bet or update: fixing the will-to-wager assumption

I'm not aware of any legal implications in the US. US gambling laws basically only apply when there is a "house" taking a cut or betting to their own advantage or similar. Bets between friends where someone wins the whole stake are permitted.

As for the shady implications... spend more time hanging out with aspiring rationalists and their ilk?

Bet or update: fixing the will-to-wager assumption

The richer structure you seek for those two coins is your distribution over each coin's probability of coming up heads. They're both 50% likely to come up heads, given the information you have, so you should be willing to make exactly the same bets about them, assuming the person offering you the bet has no more information than you do. However, if you flip each coin once and observe the results, your new probability estimates for the next flips will differ.

For example, for the second coin you might have a uniform distribution (ignorance prior) over the set of all possible probabilities. In that case, if you observe a single flip that comes up heads, your probability that the next flip will be heads is now 2/3.

Well, in general, I'd say achieving that reliability through redundant means is totally reasonable, whether in engineering or people-based systems.

At a component level? Lots of structural components, for example. Airplane wings stay attached at fairly high reliability, and my impression is that while there is plenty of margin in the strength of the attachment, it's not like the underlying bolts are being replaced because they failed with any regularity.

I remember an aerospace discussion about a component (a pressure switch, I think?). NASA wanted documentation for 6 9s of reliability, and expected some sort of very careful fault tree analysis and testing plan. The contractor instead used an automotive component (brake system, I think?), and produced documentation of field reliability at a level high enough to meet the requirements. Definitely an example where working to get the underlying component that reliable was probably better than building complex redundancy on top of an unreliable component.

You might also want a mechanism to handle "staples" that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I'd have no objections to other people eating them, but if they did I'd want them to take responsibility for never leaving the house in a state of "no X on hand".

Those numbers sound like reasonable estimates and goals. Having taught classes at TechShop, that first handful of hours is important. 20 hours of welding instruction ought to be enough that you know whether you like it and can build some useful things, but probably not enough to get even an intro-level job. It should give you a clue as to whether signing up for a community college class is a good idea or not.

Also I'm really confused by your inclusion of EE in that list; I'd have put it on the other one.

However, I'm skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully.

On the other hand... look at what happens when you simply demand that level of reliability, put in the effort, and get it. From my engineering perspective, that difference looks huge. And it doesn't stop at 99.99%; the next couple nines are useful too! The level of complexity and usefulness you can build from those components is breathtaking. It's what makes the 21st century work.
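The arithmetic behind "the next couple nines are useful too" is simple but striking (the component counts below are mine, picked only to illustrate the compounding):

```python
def series_reliability(component_r, n):
    """A chain of n components that all must work."""
    return component_r ** n

def parallel_reliability(component_r, k):
    """k redundant copies; the subsystem fails only if all k fail."""
    return 1 - (1 - component_r) ** k

# 1,000 components in series:
print(f"{series_reliability(0.9999, 1000):.3f}")    # ~0.905 at four nines
print(f"{series_reliability(0.999999, 1000):.3f}")  # ~0.999 at six nines
```

At four nines a thousand-part chain already fails one time in ten; at six nines the whole chain is itself a four-nines system. That's the sense in which more-reliable components buy you whole new tiers of buildable complexity.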

I'd be really curious to see what happens when that same level of uncompromising reliability is demanded of social systems. Maybe it doesn't work, maybe the analogy fails. But I want to see the answer!

Hidden universal expansion: stopping runaways

What happens when the committed scorched-earth-defender meets the committed extortionist? Surely a strong precommitment to extortion by a powerful attacker can defeat a weak commitment to scorched earth by a defender?

It seems to me this bears a resemblance to Chicken or something, and that on a large scale we might reasonably expect to see both sets of outcomes.
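The Chicken analogy can be made explicit. With an illustrative payoff matrix (my numbers, chosen only to show the structure: yielding to a committed opponent beats mutual destruction), the game has two pure-strategy Nash equilibria, one where each side prevails, which is why expecting both sets of outcomes at scale seems reasonable:

```python
import itertools

# Row player is the extortionist, column player the defender.
payoffs = {
    ("yield", "yield"):       (0, 0),
    ("yield", "escalate"):    (-1, 2),
    ("escalate", "yield"):    (2, -1),
    ("escalate", "escalate"): (-10, -10),  # scorched earth meets extortion
}

def pure_nash(payoffs, actions=("yield", "escalate")):
    """Profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for a, b in itertools.product(actions, repeat=2):
        best_a = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in actions)
        best_b = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in actions)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

print(pure_nash(payoffs))  # [('yield', 'escalate'), ('escalate', 'yield')]
```

Whichever side credibly commits first wins its preferred equilibrium, so symmetric commitment power gives no reason to expect only one outcome.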
