Mind: Brain Replacement isn't Brain Augmentation.
History, and much of the 96% of non-human work as you call it - however you define that exact number - consisted mainly of all sorts of brain augmentation, i.e. extending the brain's reach beyond our arms and mouths, using horses, ploughs, speakers, all types of machines, worksheets, what have you.
AI, advanced AI, in contrast, is more and more sidelining that hitherto essential piece and monopolist, a.k.a. the human brain.
And so, whatever the past, there is a structural break happening right now.
And so you, and the many others who ignore the one simple phrase I suggest remembering - Brain Replacement isn't Brain Augmentation - risk waking up baffled in the not-so-distant future. That, at least, would seem the natural course of things to expect, absent doom. Then again, the future is weird and who knows anyway. Maybe it's so weird that one way or another you'll still be right - I just really wouldn't bet on it the way you seem to argue for.
Interesting thought.
Still, I don't think it goes all that far in practice.
Three spontaneous complications, the first to me intuitively the most relevant, though idk how general it is. In the end, for me there's not much left of the original idea, even if it's a nice one: the mind is a freakishly complex machine, and friendship to me a hyperdimensional concoction of that machine, evading such neat trivialization despite the original appeal:
Not sure about the exact context of what you write, but fwiw:
On the other hand, if you increase taxes, even if the above is strictly speaking true, it's not true for all types of actors, and, maybe most importantly:
Thanks, the suggestion sounds interesting. However, a first quick update fwiw: I've only had the chance to read the first small section, "A Brief Proof That You Are Every Conscious Thing", and I must say it seems totally clear to me that he's essentially making the same Bayesian mistake - or sort of anthropic reasoning mistake - that the OP contains. It just doesn't make sense the way he puts it, and I'm slightly surprised he published it like that.
I plan to read more and to give my view on the rest of his argument - hopefully I'll manage despite time pressure.
You mention a few; fwiw, some additional things that occasionally increase my empathy toward those I consider of lower abstract intelligence:
Can empathize with a lot here, but one thing strikes me:
If you go to what is quasi the incarnation of the place where low IQ makes us fail - a PHILOSOPHY group - no wonder you end up appalled :-). Maybe next time go to a pub or anywhere else: despite even lower-IQ persons, they may be more insightful or interesting, as their discussions benefit from a broader spectrum of things than sheer core IQ.
Warning: this is more an imho beautiful, geeky, abstract high-level interpretation than something that resolves the case at hand with certainty.
:-)
I purposely didn't try to add any conclusive interpretation in my complaint about the bites-its-own-tail logic mistake.
But now that we're here :-):
It's great you made the 'classical' (even if not named as such) mistake so explicitly: even if you hadn't made it, the two ideas would likely have swung along half-consciously with it in many of us without being fully resolved, probably in my head too.
Much can be said about '10x as suspicious'; the funny thing being that as long as you conclude what you just reiterated, it again somewhat defeats the argument: you've just shown that with his 'low' bet we may - all things considered - simply let him go, while otherwise... Leaving open all the other arguments around this particular case, I'm reminded of the following, which I think is the pertinent - even if a bit disappointingly probabilistic and fuzzy - way to think about it. It also makes sense of why some of us find it more intuitive that he'd surely have gone for 800k instead of 80k (let's ascribe this to your intuition so far), others the other way round (maybe we're allowed to call that the second-sentence-of-Dana position), while some are more agnostic - and in a basic sense 'correct':
I think game theory calls what we end up with a "trembling hand" equilibrium (I might be misusing the terminology, as I remember the term better than the theory; either way I'd still wager the equilibrium mechanism makes sense here at a high level of abstraction): a state where, if it were clear that 800k made more sense for the insider, he could choose 80k to be totally safe from suspicion, and in that world we'd see many '80k-size' frauds, as anyone could pull them off without creating any suspicion - and greedy people with the occasion will always exist. And in the world where instead we assume 80k was already perfectly suspect, he would have zero reason not to go all out for the 800k if he tries at all...

In the end we land on: it's just a bit ambiguous which exact scale increases the suspiciousness by how much, or, put more precisely: the increase in suspiciousness vaguely offsets the increase in payoff in many cases. I.e. it all becomes somewhat probabilistic. We're left with some insider thieves sometimes going for the high amount, sometimes for the low one, and potentially with many of us fighting about what that particular choice means as a fraud indicator - while, more importantly, trembling-hand understanders, or maybe just calmer natures, see how little we can learn from the amount chosen, since in equilibrium it's systematically fuzzy along that dimension. If we were facing one single player who is the insider a gazillion times, he might adopt a probabilistic amount-strategy; in the real world we're facing the one-time-or-so random insider, whose incentive to play the high or low amount may be better explained by nuanced subtleties than by a simple high-level view - since that high-level view merely spits out: probabilistically high or low; or, in a single case, 'might roughly as well play the high amount as the low one'.
I don't claim nothing more detailed or specific could be said here that puts this general approach into perspective in this particular case, but from the little we have in the OP and the comments so far, I think it reasonably applies.
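Not from the OP or this thread - just a toy sketch with made-up numbers (bet sizes, detection probabilities, penalty are all my assumptions) to illustrate the offsetting logic above: near the indifference point, the high and the low bet yield similar expected payoffs, so the observed size carries little information about guilt.

```python
# Toy illustration of the "suspiciousness offsets payoff" point.
# All numbers are invented for illustration; nothing here is from the thread.

def expected_payoff(size: float, p_caught: float, penalty: float) -> float:
    """Expected net gain of an insider bet of a given size."""
    return size * (1.0 - p_caught) - penalty * p_caught

def indifference_p_high(size_lo: float, p_lo: float, size_hi: float,
                        penalty: float) -> float:
    """Detection probability for the high bet that equalizes both payoffs.

    Solves size_hi*(1-p) - penalty*p = size_lo*(1-p_lo) - penalty*p_lo for p.
    """
    target = expected_payoff(size_lo, p_lo, penalty)
    return (size_hi - target) / (size_hi + penalty)

if __name__ == "__main__":
    SIZE_LO, SIZE_HI = 80_000, 800_000   # the 80k vs 800k from the discussion
    PENALTY = 1_000_000                  # assumed expected cost of getting caught
    P_LO = 0.02                          # assumed detection prob. for the small bet

    p_hi_star = indifference_p_high(SIZE_LO, P_LO, SIZE_HI, PENALTY)
    print(f"Indifference detection prob. for the 800k bet: {p_hi_star:.3f}")
    # Anywhere near this value, high and low bets have similar expected payoffs,
    # so which size the insider chose is a weak fraud indicator.
```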
Disagree. If you earn a few million or so a year, a few hundred thousand dollars quick and easy is still a nice sum to get quasi for free. Plus it's not very difficult to imagine that some not-extremely-high-up people likely enough had hints as to what they might soon be directly involved with.
FWIW, an empirical example: a few years ago the super well regarded head of the prestigious Swiss National Bank had to go because of alleged dollar/franc insider trading (executed by his wife via her art dealings), at a time when down-pegging the Swiss franc to the weaker EUR was a daily question, with gains of - if I remember well - a few tens of thousands of dollars or so from the trade.
Note the contradiction in your argumentation:
You write (I add the bracketed part, but that's pretty much exactly what's meant in your line of argument):
[I think the guy's trade is not as suspicious as others think because] why only bet 80k?
and two sentences later
And I don’t think the argument of “any more would be suspicious” really holds either here, betting $800k or $80k is about as suspicious
Challenge accepted, thanks - and I think easily surmounted:
Your fakeness argument - I'll call it the "Sheer Size Argument" - makes about as much sense as it would for a house cat, seeing only the few m^3 around it, to claim the world cannot be the size of the Earth - not to speak of the galaxy.
Who knows!
Or, to make the hopefully obvious point more explicit: given we are so utterly clueless as to why ANYTHING is at all instead of NOTHING, how could you claim to know ex ante how large the THING that is has to be? It feels natural to claim what you claim, but it doesn't stand the test at all. Realize that you don't have any informed prior about the potential actual size of the universe beyond what you observe, unless your observations directly suggested a sort of 'closure' that would make simplifying sense of them in an Occam's-razor way. But the latter doesn't seem to exist; if anything, people suggesting Many Worlds argue it's simpler to make sense of observations if you presume Many Worlds - judging from ongoing discussions, that latter claim in turn seems up for debate, but what's clear is: the Sheer Size Argument is rather moot in actual thinking about what the structure of the universe may or may not be.