There are lots of things we could do but don't. Generally, the risk/cost is non-zero, even if small, and the recognizable value (that which can be captured by or otherwise benefits the decision-maker) is less than that.
I'd probably pay a little bit to see this in the skies while I'm safely on the ground, and even to be in one after the first 10,000 have gone by. But I wouldn't pay enough to make up for the lawsuits and loss of revenue from people who don't like the idea.
reasons I downvoted:
Fun exploration, though I don't believe the underlying assumptions at all. The biggest disconnect I see is the belief that current mean individual wealth can be made to retain its fraction of total wealth over any significant time period, including through massive changes in the number of wealth-holders and in what "wealth" can even be measured in.
There is no long-term passive wealth mechanism. It always requires quite a bit of attention and management, and then gets transferred to the managers rather than the nominal owners. Or, often, to the revolutionaries or vendors who are able to capture it.
This is problematic, EVEN IF the concept of "ownership" can be applied to galaxies and human-comprehensible owning entities.
I don't think markets are likely to correlate very strongly to this. Whether it's prediction markets or stock/commodity markets that have a bit of correlation to what you care about, the fundamental problem is "if the economy changes by enough, the units of measure for the market (money!) change". Which means that payoff risk overwhelms prediction risk. You can be spot-on correct about timelines, and STILL not get paid. So why participate in that prediction at all?
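To make the payoff-risk point concrete, here is a toy calculation (a minimal sketch with made-up numbers, not drawn from any actual market): even a perfectly correct prediction can lose in real terms if the predicted event also devalues the currency the market pays out in.

```python
# Toy illustration: payoff risk swamping prediction risk.
# All numbers are invented for the example.

stake = 100.0  # currency units staked today
odds = 3.0     # nominal payout multiplier if the prediction resolves "yes"

# Suppose the predicted event (a large enough economic change) cuts the
# real purchasing power of the payout currency to 20% of today's value.
real_value_per_unit = 0.20

nominal_payout = stake * odds                       # 300 units: the prediction was correct
real_payout = nominal_payout * real_value_per_unit  # 60 units of today's purchasing power

print(f"nominal payout: {nominal_payout:.0f}")
print(f"real payout:    {real_payout:.0f} vs. {stake:.0f} staked")
# Spot-on correct, and still down 40% in real terms.
```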
Yup, things that are mostly out of your control and aren't truly immediate are easy to forget about in your daily life. I'd argue that this compartmentalization is actually a really useful skill, to prevent worry and depression that don't lead to actions which improve your life satisfaction.
The ability/habit of thinking about distant/big topics in a time-boxed way, considering what, if anything, to do about them, and then going back to your normal activities is, for most people, a very effective strategy.
Interesting take, but I'm having trouble accepting it, as I don't think "reality", "mathematics", and "theorem" as used here match the common definitions. If you don't like the results of a theorem, yes, examine the axioms, and yes, identify where you're misinterpreting the results. But you still have to believe the underlying syllogism "if X and Y, then Z" that the theorem proves. You can only notice that Z is suspicious, so you need to be really sure about X and Y.
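To restate that last step a bit more formally (nothing here beyond basic propositional logic): the theorem gives you the implication, and doubting the conclusion only pushes the doubt back onto the premises.

$$(X \land Y) \Rightarrow Z \qquad\text{hence}\qquad \neg Z \Rightarrow \neg(X \land Y)$$

Finding Z suspicious tells you at most that one of X or Y fails; it doesn't tell you which, so the scrutiny has to go back to both.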
I mostly agree with your resistance steps, but recognize that this isn't resisting the math, it's resisting humans who are trying to bamboozle you by incorrectly presenting the math.
This doesn't solve the problem of motivation to lie about (or change) one's utility function to move the group equilibrium, does it?
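For illustration (a minimal sketch with made-up numbers, assuming a simple mean-of-reports aggregation rule rather than anything from the original proposal): if the group outcome is just the average of reported preferences, an agent can pull the equilibrium toward its true preference by exaggerating its report.

```python
# Minimal sketch: with a mean-based group decision rule, an agent gains by
# misreporting its preference. The rule and numbers are purely illustrative.

def group_outcome(reports):
    """Group picks the average of everyone's reported ideal point."""
    return sum(reports) / len(reports)

true_ideals = [2.0, 4.0, 9.0]  # the third agent truly wants 9.0

honest = group_outcome(true_ideals)            # 5.00
strategic = group_outcome([2.0, 4.0, 20.0])    # ~8.67, third agent exaggerates

print(f"honest outcome:    {honest:.2f}  (distance from 9.0: {abs(9.0 - honest):.2f})")
print(f"strategic outcome: {strategic:.2f}  (distance from 9.0: {abs(9.0 - strategic):.2f})")
# Misreporting moves the group equilibrium closer to the liar's true preference,
# which is exactly the incentive problem in question.
```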
My rules are:
- Unlabeled walls of AI-generated text intended for humans are never okay.
- If the text is purely formalized or logistical and not a wall, that can be unlabeled.
- If the text is not intended to be something a human reads, game on.
- If the text is clearly labeled as AI-generated, that is fine if and only if the point is to show that the information comes from a neutral third party of sorts.
I agree, but these are too specific. This is a violation of discourse norms, and doesn't need to be separated into ai-generated vs human-nerdsplaining walls of text. Also, it's always been annoying: LMGTFY.
I think most relationships and values are multi-dimensional and don't collapse very easily into this dimension.
Separately, I generally dislike and cannot use models that diverge so far from reality - this situation does not come up, and if it did, there would be none of the certainties that you posit.
No. If you replace "just" with "partially model-able as", then yes.