A mind upload without strong guarantees potentially carries huge S-risks. You're placing your own future self in the hands of whoever or whatever happens to have that data in the future. If, one thousand years from now, someone decides for whatever reason to use that data to run a billion simulations of you in atrocious pain forever, there is nothing you can do about it. And if you think your upload is "yourself" in a meaningful enough way for you to care about having one done, you must also think that is a very horrible fate.
But on another level, norms like "You should take responsibility by default for how people will interpret what you say and do, even if that interpretation is completely decoupled from your intent, and even if what you said was the objectively correct truth" are also super harmful to a slice of the population, especially neurodivergent people.
I mean, obviously this is simply a blurry thing. If I say something very deliberately ambiguous that could mean A or B, and then claim A when someone understands B, I may be bad at communicating or even be playing a malicious motte-and-bailey. If I say something that decidedly means A but someone manages to understand B anyway, that's on them. Obviously, where precisely the lines lie depends on context, since language isn't an objective thing, but there are fuzzy areas we can identify.
The funniest example of this that always comes to mind is a guy who wrote a review of the Pixar movie Inside Out in an Italian newspaper, passionately arguing that it was a horrible piece of propaganda meant to make kids accept CIA brainwashing ("little men" controlling their brains). And I'm all for interpreting art in different ways, Death of the Author and such, but I still think that reading is plainly ridiculous, and it can only come to mind if you are so obsessed with the idea that you're unable to interpret anything without that weird lens.
I'd say there are two sides to that question.
On one side, it's definitely harmful to the markets: it distorts prices and scams other investors out of their money by essentially cheating. This is a lesser point, but worth considering.
On the other, it's possibly harmful to their legislation too. If it's a case of "I would do this anyway for unrelated reasons, may as well make a few bucks off it", then no. But is that how it works? If you were in the habit of doing that, wouldn't "which of these possible legislative decisions is going to make me more money" be a factor in your choices?
Also, as kind of an aside: it's very much illegal. And while this is far from the only illegal thing that legislators engage in, the people who make the rules that can put you in jail blatantly doing things that should put them in jail, without consequence, is something that deeply undermines confidence in the entire concept of the rule of law, which is kind of an important cornerstone of civilisation.
But if your best case scenario is "Maybe we'll wind up as beloved house pets!", maybe you should think carefully before building AGI.
Also because - and I already made this case elsewhere - if other people are not completely stupid and realise that you are about to unleash this thing, which is very much against most humans' traditional values, and in fact considered so horrible and evil that death would be preferable to it, you have a non-zero likelihood of finding yourself, your data centre and your entire general neighbourhood evaporated via sufficient application of thermonuclear plasma before you can push that button. Developing an AGI that even merely disempowers everyone and enforces its own culture on the world forever is essentially not unlike launching a weapon of mass destruction.
If memory serves, the average human lives for around 500 years before opting for euthanasia, mostly citing some kind of ennui. What the hell? 500 years is nothing in the grand scheme of things.
As far as we know, no single human has ever experienced 500 years of life. I do agree that realistically it doesn't seem like enough time to run out of things to do, but we can't exclude that, as a currently unknown fact of human psychology, it would be a limit. Maybe even if we don't run out of specific things to do, we simply wear down our emotional range and the ability to feel much about any of it? It could even be framed as a kind of desensitisation: maybe your dopamine receptors just aren't good enough at rebalancing over such timescales yet.
Basically I think you could just take that as a simple part of the premise of the setting, a speculative guess about how precisely human psychology could interact with immortality, and move on.
I mean, I still think that realistically (and "realistically" is a very loosely used word here) that is as good as it could possibly get with ASIs running the show. But that's the thing: it still conceivably looks like a dystopia, because it still implies disempowerment, which is why some people will simply never be OK with any kind of AGI/ASI on principle, alignment or not. Whether simply avoiding them forever could be feasible is a different question, but I don't see how you could possibly retain true "control" when your AI companions/servants/whatever can run circles around you intellectually to that extent.
these events seem most common on Airbus A320 aircraft
That's an extremely common plane for travel within Europe, in my experience, so this is probably very relevant to a lot of people. I can count on my fingers the times I've taken a plane that wasn't one of those, and I travel multiple times a year.
These are two very different predictions. The original ‘by 2030’ prediction is Obvious Nonsense unless you expect superintelligence and a singularity, probably involving us all dying. There’s almost zero chance otherwise. Technology does not diffuse that fast.
To be fair to me, this immediately evoked the image of Amazon leveraging its specific economies of scale: not, e.g., generalist humanoid robots with AGI, but mega-depots, automated vans, delivery drones, etc., each performing one specific function within a mechanism that works because it's all Amazon stuff interfacing with other Amazon stuff. That said, it's still a bold prediction that I wouldn't buy much, but a less wild one.
Sure, I mostly meant the limits as in all sorts of constraints (what people will and won't allow you to get away with, what is knowable, what is easily computed/predicted, etc.). But ultimately it boils down to this: every step in the "better" direction requires moving towards an ever smaller state space, and thus decreasing the entropy of the state of the world, measured in bits. So in a sense it literally is a fight against the second law of thermodynamics.
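To make the entropy framing concrete (a minimal sketch of my own, not something the original comment spells out): if "better" means confining the world to a shrinking set of acceptable states $\Omega$, and we take the entropy of a uniform distribution over that set, then each tightening of the target costs bits:

$$H(\Omega) = \log_2 |\Omega| \ \text{bits}, \qquad \Omega' \subset \Omega \;\Rightarrow\; H(\Omega') - H(\Omega) = \log_2 \frac{|\Omega'|}{|\Omega|} < 0.$$

Halving the set of acceptable world states costs exactly one bit, which is one way to read optimisation pressure as a local fight against the second law.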
I also reckon it might get you in trouble, given the look of "person on a plane purposefully concealing their face".