Thanks, the suggestion sounds interesting. However, a first quick update fwiw: I've only had the chance to read the first short section, "A Brief Proof That You Are Every Conscious Thing", and I must say it seems quite clear to me that he's essentially making the same Bayesian mistake - or sort of anthropic-reasoning mistake - that the OP contains. The way he puts it simply doesn't hold together, and I'm slightly surprised he published it like that.
I plan to read more and give my view on the rest of his argument - hopefully I'll manage despite time pressure.
You mention a few; fwiw, some additional things that occasionally increase my empathy toward those I consider to be of lower abstract intelligence:
Can empathize with a lot here, but one thing strikes me:
If you go to what is practically the epitome of the place where low IQ makes us fail - a PHILOSOPHY group - no wonder you end up appalled :-). Maybe next time go to a pub or anywhere else: even people with lower IQ may turn out more insightful or interesting there, as their discussions benefit from a broader spectrum of things than sheer core IQ.
Warning: this is more an (imho beautiful) geeky, abstract, high-level interpretation than something that resolves the case at hand with certainty.
:-)
I purposely didn't try to add any conclusive interpretation in my complaint about the bites-its-own-tail logic mistake.
But now that we're here :-):
It's great you made the 'classical' (even if not usually named as such) mistake so explicitly: even if you hadn't spelled it out, the two ideas would easily have swung along half-consciously in many of our heads without ever being fully resolved.
Much can be said about '10x as suspicious'; the funny thing is that as long as you conclude what you just reiterated, it again partly defeats the argument: you've just shown that with his 'low' bet we may - all things considered - simply let him go, while otherwise... Leaving aside all the other arguments around this particular case, I'm reminded of the following, which I think is the pertinent - even if, being probabilistic and fuzzy, a bit disappointing - way to think about it. It also makes sense of why some of us find it more intuitive that he'd surely have gone for 800k instead of 80k (let's ascribe this to your intuition so far), others the other way round (maybe we're allowed to call that the 2nd-sentence-of-Dana position), while some are more agnostic - and in a basic sense 'correct':
I think game theory would call what we end up with a "trembling hand" equilibrium (I may be misusing the terminology - I remember the term better than the theory - but either way I'd still wager the equilibrium mechanism makes sense here at a high level of abstraction): a state where, if it were clear that 800k made more sense for the insider, he could instead choose 80k and be totally safe from suspicion, and in that world we'd see many '80k-sized' frauds, as anyone could pull them off without creating any suspicion - and greedy people with the occasional opportunity will always exist. And in the world where we instead assume 80k is already perfectly suspect, he'd have zero reason not to go all out for the 800k if he tries at all...

In the end we're left with this: it's just a bit ambiguous how much each scale of bet increases suspicion, or, put more precisely, the increase in suspicion roughly offsets the increase in payoff in many cases. I.e. it all becomes somewhat probabilistic. Some insider thieves go for the high amount, some for the low one, and many of us end up fighting over what that particular choice means as a fraud indicator - while, more importantly, trembling-hand-understanders, or really just calmer natures, see how little we can learn from the amount chosen, since in equilibrium it's systematically fuzzy along that dimension. If we were facing a single player who was the insider a gazillion times over, he might adopt an explicitly probabilistic amount-strategy; in the real world we face the one-time, more or less random insider, whose incentive to play the high or low amount is better explained by nuanced subtleties than by the simple high-level view - which merely spits out: probabilistically high or low, or, in a single case, 'might roughly as well play the high amount as the low one'.
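Purely to make the offsetting intuition concrete, a tiny Python sketch with entirely made-up numbers (the detection probabilities, penalty, and 'temperature' are my assumptions, not anything calibrated to the actual case): once the extra suspicion attached to the bigger bet roughly cancels the extra payoff, a noisy best response mixes close to 50/50 over the two amounts, so the observed amount is weak evidence either way.

```python
import math

# All numbers below are invented for illustration; nothing is calibrated to the case.
payoff   = {"80k": 80_000, "800k": 800_000}   # gross gain if the trade works out
p_caught = {"80k": 0.05,  "800k": 0.67}       # assumed detection probability per bet size
penalty  = 300_000                            # assumed loss (fine, career, legal) if caught

def expected_value(size: str) -> float:
    """Expected payoff for an insider choosing this bet size."""
    return (1 - p_caught[size]) * payoff[size] - p_caught[size] * penalty

evs = {size: expected_value(size) for size in payoff}
print(evs)  # roughly {'80k': 61000, '800k': 63000}: near-indifference

# A noisy ("trembling") best response: play each size with probability
# proportional to exp(EV / temperature). Near-equal EVs give near 50/50 mixing,
# so the observed amount barely updates an outside observer's suspicion.
temperature = 25_000
weights = {size: math.exp(evs[size] / temperature) for size in evs}
total = sum(weights.values())
mix = {size: round(w / total, 2) for size, w in weights.items()}
print(mix)  # roughly {'80k': 0.48, '800k': 0.52}
```

Change the assumed numbers and the mix tilts one way or the other - which is exactly the 'systematically fuzzy along that dimension' point.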
I don't claim nothing more detailed or case-specific could be said that puts this general approach into perspective here, but from the little we have in the OP and the comments so far, I think it reasonably applies.
Disagree. If you earn a few million or so a year, a few hundred thousand dollars quick and easy is still a nice sum to get essentially for free. Plus it's not very difficult to imagine that some people who aren't extremely high up likely had hints about what they might soon be directly involved with.
FWIW, an empirical example: a few years ago the highly regarded head of the prestigious Swiss National Bank had to go over alleged dollar/franc insider trading (executed by his wife, who ran an art business), back when pegging the Swiss franc down against the weaker EUR was a daily question - with gains from the trade of, if I remember well, only a few tens of thousands of dollars.
Note the contradiction in your argumentation:
You write (I add the bracketed part, but that's pretty much exactly what's meant in your line of argument):
[I think the guy's trade is not as suspicious as others think because] why only bet 80k?
and two sentences later
And I don’t think the argument of “any more would be suspicious” really holds either here, betting $800k or $80k is about as suspicious
I don't see this defeating my point: as a premise, GD may dominate from the perspective of merely improving the lives of existing people, as we seem to agree; and unless we have a particular bias for long lives specifically of currently existing humans over humans created in the future, ASI may not be a clear reason to save more lives, since it may not only make existing lives longer and nicer but also reduce the burden of creating any aimed-for number of - however long-lived - lives; that number of happy future human lives thus hinges less on the preservation of currently existing ones.
If people share your objective, then in a positive ASI world maybe we can create many happy humans quasi 'from scratch' - unless, of course, you have yet another unstated objective of making many non-artificially created humans happy instead.
On a high level I think the answer is reasonably simple:
It all depends on the objective function we program/train into it.
And, fwiw, in maybe slightly more fanciful scenarios, there could also be some sort of evolutionary process among future ASIs, meaning only those with a strong instinct for survival/duplication (and/or for killing off competitors?) (and/or for minor or major self-improvement) would eventually be the ones still around. Although I could also see this 'many competing individuals' view becoming a bit obsolete with ASI, as the distinction between many decentralized individuals and one more unified single unit may not be so meaningful; it all becomes a bit weird.
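For the more fanciful evolutionary bit, a deliberately crude toy simulation (pure illustration with an invented one-dimensional 'self-preservation' trait; not a model of real AI systems) of how such a selection pressure would play out:

```python
import random

random.seed(0)
# Each agent has a 'self-preservation' trait in [0, 1]; start with a uniform mix.
population = [random.random() for _ in range(1000)]

for generation in range(20):
    # Each agent persists with probability equal to its trait value.
    survivors = [trait for trait in population if random.random() < trait]
    # Survivors replicate (with a little mutation) back up to the original size.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(1000)
    ]

print(sum(population) / len(population))  # drifts toward 1: strong self-preservation wins out
```

The only point is the direction of drift: whatever replicates more reliably is what ends up being around, whether or not the 'many individuals' framing even applies.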
Not sure about the exact context of what you write, but fwiw:
On the other hand, if you increase taxes, even if the above is strictly speaking true, it doesn't hold for all types of actors, and, maybe most importantly: