There's a difference between 'working hard' and actually inhumane conditions. While I didn't experience the latter in high school, they seem to arise by default in a lot of situations, so I wouldn't really be surprised if they showed up in some high schools, because there isn't much defending against them there.
So yeah, labor unions having the goal of 'not having to work hard' is a protection against a very serious and insidious problem.
Have any stats on that?
(note I'm not trying to be that annoying guy who asks for statistics to try and win an argument if the other party fails to produce them; I really want to see info on people's expected vs actual employment outcomes)
I think you're right that the top 1/2 of 1% are much more varied and idiosyncratic than the norm, because they are all going to be gifted in unique and divergent ways.
However, honestly I think the best way to utilize them (and to remove tremendous frustration both on their part and on the part of the people who would manage them) is to treat them like a black box. Tell them: "OK, go off and act as you would by default. We'll make sure no one bothers you. Sink or swim on your own, though. Try to find something interesting. Good luck."
Some of them may not produce all that much of use, but that's no big loss, since they're only a fraction of a fraction of a percent of the population. And some of them will find and create genuinely unique and interesting things, things that only they could find and create. That more than offsets the losses from the ones that by chance don't work out.
Maybe this will help
This may not be strictly statistical, but I would choose the idea that, in order to make any meaningful statement with data, you always need something to compare it against.
Like, someone will always come into some political thread and say, "X will increase/decrease Y by Z%." And my first thought in response is always, "Is that a lot?"
For a recent example I saw, someone showed a graph of Japanese student suicides as a function of day of the year. There were pretty high spikes (about double the baseline value) on the days corresponding to the first day of each school semester. The poster was attributing this to Japanese school bullying and other problems with Japan's school system.
My first thought was, "Wait. Show me that graph for other countries. For the world, if such data has been reliably gathered." If it looks the same everywhere, it's not a uniquely Japanese problem. For all we know, it could even be worse in other countries.
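To make the comparison point concrete, here's a toy sketch in Python. Every number here is invented purely for illustration; the point is only that a spike is meaningless until you express it relative to a baseline, and a baseline is meaningless until you line it up against other baselines.

```python
# Toy illustration (all numbers invented): a spike only means something
# relative to a baseline, and a country's spike only means something
# relative to other countries' spikes.
daily_rate = {
    # country: (baseline per day, first-day-of-semester per day)
    "A": (1.0, 2.0),   # spike is 2x baseline
    "B": (1.1, 2.1),   # roughly the same ~2x spike
}

for country, (baseline, spike) in daily_rate.items():
    ratio = spike / baseline
    print(f"{country}: spike is {ratio:.2f}x the baseline")

# If every country shows roughly the same spike, the spike by itself
# tells you nothing uniquely about country A's school system.
```

Nothing deep is going on here; it's just the "Is that a lot?" question written out as arithmetic.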
Yeah, I'd really like to see people stop presenting information in isolation, where it doesn't mean anything. A lot of people think controls in science exist only to make sure the effects you see aren't spurious or adventitious. That's not wrong, but the reason is deeper and even more fundamental than that.
I'm a scientist, so let me give you an example from my research (grossly simplified and generalized for brevity).
Substance A was designed so that it manifests an as-yet-unexplored type of structural situation. We then carried out a reaction on substance A to see what some of the effects of that situation are. Something happened.
So, if we were to leave it at that, what would we have learned? Nothing. We need substance B, which does not have that situation going on but is otherwise as similar to A as we can make it, to see what IT does, and whether it does anything different from A. See, we need to run the experiments on both A and B not to check whether the results of A are 'real'. We need them to see what the results even ARE in the first place.
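The control point above can be written out as trivially small code. The numbers are hypothetical, but they show why the measurement on A alone isn't "an effect" at all: it only becomes one after subtracting the matched control B.

```python
# Hypothetical numbers, purely illustrative.
measurement_A = 7.3   # substance with the structural feature
measurement_B = 5.1   # matched control: same everything, minus the feature

# The "result" of the experiment is not measurement_A; it's the difference.
effect = measurement_A - measurement_B
print(f"effect attributable to the feature: {effect:.1f}")

# Without measurement_B, the number 7.3 has no interpretation: we can't
# tell which part of it comes from the feature and which part comes from
# everything else A and B share.
```

It looks almost too simple to state, but that subtraction is the whole argument: the control doesn't validate the result, it constitutes it.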
Why do people believe that AI is dangerous? What direct evidence is there that this is likely to be the case?
I don't really buy it. The world is changing too fast. Things are way different now than they were in the 50s, so I don't think the statistics from then really mean much anymore.
In another 50 years, what will the landscape look like? Who knows? Maybe the diseases won't really be such a huge problem, because our antivirals will become as good as our antibiotics.
The one thing that can be said with pretty high certainty is that for the most part it will be a completely different world in the second half of the 21st century.
Looking at the second half of the 20th century to predict the 21st isn't going to cut it, in the same way that studying the politics and wars of the 1860s wouldn't tell you anything useful about the 1960s.
I'm not sure this is bad. In my research (and in everyday life), the best solution is often to try something, anything, just to perturb the system in some way and see what happens, because you often need a vector before you can start optimizing and correcting. Often I only discover what a desirable outcome even is by putting things in motion, or by imagining them in motion.
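That "perturb and see" strategy looks a lot like random hill-climbing. Here's a toy sketch (the objective function and step size are invented for illustration): perturb, keep whatever improved, repeat, and a direction to optimize along emerges even though you never computed one up front.

```python
import random

random.seed(0)  # deterministic for the sake of the example

def objective(x):
    # Toy objective, invented for illustration: peak at x = 3.
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.5, 0.5)  # perturb the system
    if objective(candidate) > objective(x):    # did anything improve?
        x = candidate                          # keep the change that worked

print(f"ended up near the optimum: x = {x:.2f}")
```

The point isn't the algorithm itself; it's that the improving direction only becomes visible after you've poked the system, which is the "you need a vector to start correcting" idea in miniature.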
Hmm... I'd say that simulations and representations aren't the same thing. A representation only presents the appearance of something in some way, whereas a simulation tries to produce the appearance of something through the same kinds of causal mechanisms the real thing has. So no, I wouldn't say that a video of Mars is a simulation of Mars.
I don't think I'm in a simulation, and only just now, reading this, have I become able to verbalize why.
I reject as a premise any argument that relies on some kind of 'probability that I find myself as me'. The reason is that I don't think such probabilities can be said to exist. You might say I could have been born a hunter-gatherer thousands of years ago, or someone living in the future, or someone living in a simulation in the future, but I don't think these work as potentialities. The hunter-gatherer's experiences are different from mine, as are those of the future people. I am myself: a unique structure that has unique experiences. My 'consciousness' is what it's like to occupy this particular region of spacetime. The hunter-gatherer, the future man, and the simulated man each have their own consciousness, but it is different from mine. In some ways it works to talk about these similarly structured entities as a class (people), but I don't think it works in the way some people think it does. People aren't electrons. Each individual is different, and so the label is only a classification for generalizing about a pattern that keeps coming up.
Basically, they all either exist or don't, so it's not as if I should be surprised to find myself as myself. Everyone finds themselves as themselves.
And I also tend not to take argumentation as strong evidence for anything, because above all it has serious problems of interpretation. Sure, it may sound convincing, but how do we know there isn't some flaw in the reasoning that we don't see? For everyday things this isn't a big issue, but once we start positing the existence of entire universes, I think we've gone far beyond the domain where argument can be relied upon. The fundamental assumptions we don't know we're making start to pile up and matter a lot. Imagine trying to argue about time before relativity had been conceived, or about Zeno's paradoxes before calculus. You're just not playing with the right deck, and a lot of the time you won't even realize it.
This problem is still pretty huge with empirical information, but there it seems a lot more manageable (read: sometimes it's actually POSSIBLE to manage it).