This is also true for many people not in that age range. “Many people in a group will try to make life harder for those around them” isn’t much of an argument for incarceration. If it were, who would you permit to be free?
That might work. Maybe have the adversarial network try to distinguish GPT-3 text from human text? That said, GPT-3 is already trying to predict humanlike text continuations, so there's a decent chance that a separate GAN layer wouldn't help. It's probably worth running the experiment, though; traditional GANs work by improving the discriminator alongside the generator, so there's a chance it could work here too.
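For anyone curious what the discriminator half of that experiment might look like, here is a toy sketch in pure Python. Everything in it is an illustrative assumption: the tiny vocabulary, the bag-of-words features, and the logistic-regression "discriminator" are stand-ins, nothing GPT-3-specific.

```python
import math
import random

random.seed(0)  # reproducibility for this toy demo

VOCAB = ["the", "a", "is", "very", "indeed", "perhaps"]  # arbitrary toy vocabulary

def features(text):
    """Crude bag-of-words frequencies over the toy vocabulary."""
    words = text.lower().split()
    n = max(len(words), 1)
    return [words.count(w) / n for w in VOCAB]

def train_discriminator(human_texts, model_texts, epochs=200, lr=0.5):
    """Logistic-regression 'discriminator': label 1 = human, 0 = model."""
    w = [0.0] * len(VOCAB)
    b = 0.0
    data = [(features(t), 1.0) for t in human_texts] + \
           [(features(t), 0.0) for t in model_texts]
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(human)
            g = p - y                        # logistic-loss gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(w, b, text):
    """Discriminator's estimate of P(text is human)."""
    z = sum(wi * xi for wi, xi in zip(w, features(text))) + b
    return 1.0 / (1.0 + math.exp(-z))
```

In a full GAN-style setup the generator would then be updated to fool this discriminator; that backward step is exactly what's hard with a discrete text model like GPT-3, which is why the experiment might not pan out.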
You say vulnerable, low-income people "must put themselves at risk to stay alive", then propose not letting them do so? A lockdown, by itself, does not give the poor any money. If you wish to prevent them from working risky jobs to support themselves, you must either offer them some other form of support or assert that they have other, better options ("homelessness, malnourishment, etc."?), but are making the wrong decision by working and thus ought to be prevented from doing so. Being denied options is only protection if one is making the wrong decision.
Do you think these people ought to be homeless and malnourished? If so, that's a hard case to make morally or practically. If not, you should offer an alternative, rather than simply banning what you yourself state is their only path to avoiding this.
"We hold all Earth to plunder, all time and space as well. Too wonder-stale to wonder at each new miracle."-Rudyard Kipling
This is a genuine concern, and this may be particularly high-variance advice. However, a focus on avoiding mistakes over trying new "superstrategies" might also help some people with akrasia. It's easier to do what you know than to seek out some special trick. Personally, at least, I find akrasia is worst when it comes from not knowing what to do next. And while taking fewer actions in general is usually a bad idea, trying to avoid mistakes could also be used for "the next time I'm about to sit around and do nothing, instead I'll clean/program/reach out to a friend." It doesn't necessarily have to be about doing less.
Consider a charity providing malaria nets. Somebody has to make the nets. Somebody has to distribute them. These people need to eat, and would prefer to have shelter, goods, services and the like. That means that you need to convince people to give food, shelter, etc. to the net makers. If you give them money, they can simply buy their food.
This of course raises the question of why you can't simply ask other people to support the charity directly. But consider someone providing a service to the charity workers: even if they care passionately about fighting malaria, they do not want to run out of resources themselves! If you make food, and give it all to the netweavers, how can you get your own needs met? What happens when you need medical care, and the doctor in turn would love to treat a supporter of the anti-malaria fight, but wants to make sure he can get his car fixed?
In a nutshell, we all want to make sure there will be resources available to us when we need them. Money allows us to keep track of those resources: if everyone treats money as valuable, we can be confident of having access to as many resources as our savings will buy at market rates. If we decide instead to have everyone be "generous" and give in the hopes that others will give to them in turn, it becomes impossible to keep track of who needs to do how much work or who can take how many resources without creating a shortage. You can't even solve that problem by having everyone decide to work hard and consume little; doing too much can be as harmful as doing too little, as resources get forgone. And of course, that's with everyone cooperating. If someone decides to defect in such a system, they can take and take while providing nothing in return. Thus, it is much easier to manage resources with money, despite it being "not real", even in the case of charity. Giving money to a charity is a commitment to consume less (or to give up the right to consume as much as you possibly could, whether or not your actual current spending changes), freeing up resources that are then directed to the charity.
By that definition nothing is zero sum. "Zero sum" doesn't mean that literally all possible outcomes have equal total utility; it means that one person's gain is invariably another person's loss.
But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union.
This is obviously true in terms of Soviet policy, but it sounds like you're making a moral claim. That the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. This is extremely questionable, to put it mildly.
We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn't know for certain that it was one, Petrov was defecting against the system.
Indeed. But we do not cooperate in prisoners' dilemmas "just because"; we cooperate because doing so leads to higher utility. Petrov's defection led to a better outcome for every single person on the planet; assuming this was wrong because it was defection is an example of the non-central fallacy.
Is that the sort of behavior we really want to lionize?
If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating, he weakened deterrence (but deterrence would have helped no one if he had launched), or that the Soviets might have preferred destroying the world to dying alone, and thus might be upset with a missileer unwilling to strike. But it's hard to condemn him for a decision that predictably saved the West, and had a significant chance (which did in fact occur) of saving the Soviet Union.
This seems wrong.
The second law of thermodynamics isn't magic; it's simply the fact that when you have categories with many possible states that fit in them, and categories with only a few states that count, jumping randomly from state to state will tend to put you in the larger categories. Hence melting: arrange atoms randomly and it's more likely that you'll end up in a jumble than in one of the few arrangements that permit solidity. Hence heat equalizing: the kinetic energy of thermal motion can spread out in many ways, but remain concentrated in only a few; thus it tends to spread out. You can call that the universe hating order if you like, but it's a well-understood process that operates purely through small targets being harder to hit; not through a force actively pushing us towards chaos, making particles zig when they otherwise would have zagged so as to create more disorder.
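A few lines of simulation make the point concrete. The state counts below are arbitrary assumptions chosen only for illustration: if just a handful of microstates count as "ordered", uniformly random jumping rarely lands on them, with no chaos-seeking force required.

```python
import random

random.seed(0)  # reproducibility for this toy demo

N_STATES = 1000           # total microstates (arbitrary toy number)
ORDERED = set(range(10))  # the few microstates we label "ordered"

steps = 10_000
hits = sum(1 for _ in range(steps)
           if random.randrange(N_STATES) in ORDERED)

# The system spends roughly 1% of its time "ordered": exactly the
# fraction of microstates that qualify. Small targets are hard to hit;
# nothing is pushing toward disorder.
print(hits / steps)
```

The same counting logic scales up: with ~10^23 particles the "ordered" categories are so vastly outnumbered that the drift toward the big categories looks like an inexorable law.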
This being the case, claiming that life exists for the purpose of wasting energy seems absurd. Evolution appears to explain the existence of life, and it is not an entropic process. Positing anything else behind it requires evidence: some feature of life that evolution doesn't explain but that entropy-driven life would. Also, remember, entropy doesn't think ahead. It is purely the difficulty of hitting small targets; a bullet isn't going to 'decide' to swerve into a bull's eye as part of a plan to miss more later! It would be very strange if this could somehow mold us into fearing both death and immortality as part of a plan to gather as much energy as we could, then waste it through our deaths.
This seems like academics seeking to be edgy much more than a coherent explanation of biology.
As for transhumanism being overly interested in good or evil, what would you suggest we do instead? It's rather self-defeating to suggest that losing interest in goodness would be a good idea.
So enlightenment is defragmentation, just like we do with hard drives?