Small nitpick with the vocabulary here. There is a difference between 'strategic' and 'tactical', which is particularly pertinent in chess. Tactics is basically your ability to calculate and figure out puzzles. Finding a mate in 5 would be tactical. Strategy relates to things too big to calculate. For instance, creating certain pawn structures that you suspect will give you an advantage in a wide variety of likely scenarios, or placing a bishop in such a way that an opponent must play more defensively.
I wasn't really sure which you were referring to here; it seems that you simply mean that GPT isn't very good at playing strategy games in general, i.e. it's bad at strategy AND tactics. My guess is that GPT is actually far better at strategy: it might have an okay understanding of what board states look good and bad, but no consistent ability to run any sort of minimax to find a good move, even one turn ahead.
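For anyone unfamiliar with what "running a minimax" means here: it's exhaustively trying each legal move and assuming the opponent replies optimally. A minimal sketch on a toy subtraction game rather than chess (all names are illustrative, not from any chess engine):

```python
from functools import lru_cache

# Toy game: a pile of stones, each player removes 1-3 per turn,
# and whoever takes the last stone wins. Minimax on a win/loss
# game reduces to: "does some move leave the opponent losing?"

@lru_cache(maxsize=None)
def player_to_move_wins(pile):
    """True if the player about to move can force a win."""
    # Try each legal move; if any leaves the opponent in a
    # losing position, the current player wins. An empty pile
    # means the previous player just won, so any() is False.
    return any(not player_to_move_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    """Return a winning move if one exists, else None."""
    for take in (1, 2, 3):
        if take <= pile and not player_to_move_wins(pile - take):
            return take
    return None
```

With a pile of 5, taking 1 leaves the opponent with 4, a lost position; with a pile of 4, every move loses. Chess is the same recursion with a vastly larger move tree and a heuristic evaluation instead of exact win/loss, which is exactly the consistent lookahead GPT seems to lack.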
I have a general principle of not contributing to harm. For instance, I do not eat meat, and tend to disregard arguments about impact. For animal rights issues, it is important to have people who refuse to participate, regardless of whether my decades of abstinence have impacted the supply chain.
For this issue however, I am less worried about the principle of it, because after all, a moral stance means nothing in a world where we lose. Reducing the probability of X-risk is a cold calculation, while vegetarianism is an Aristotelian one.
With that in mind, a boycott is one reason not to pay. The other is a simple calculation: is my extra $60 a quarter going to make any minuscule increase in X-risk? Could my $60 push the quarterly numbers just high enough that they round up to the next 10s place, and then some member of the team works slightly harder on capabilities because they are motivated by that number? If that risk is 0.00000001%, well, when you multiply by all the people who might ever exist... ya know?
I agree that we are unlikely to pose any serious threat to an ASI. My disagreement with you comes when one asks why we don't pose any serious threat. We pose no threat, not because we are easy to control, but because we are easy to eliminate. Imagine you are sitting next to a small campfire, sparking profusely in a very dry forest. You have a firehose in your lap. Is the fire a threat? Not really. You can douse it at any time. Does that mean it couldn't in theory burn down the forest? No. After all, it is still fire. But you're not worried because you control all the variables. An AI in this situation might very well decide to douse the fire instead of tending it.
To bring it back to your original metaphor: For a sloth to pose a threat to the US military at all, it would have to understand that the military exists, and what it would mean to 'defeat' the US military. The sloth does not have that baseline understanding. The sloth is not a campfire. It is a pile of wood. Humans have that understanding. Humans are a campfire.
Now maybe the ASI ascends to some ethereal realm in which humans couldn't harm it, even if given completely free rein for a million years. This would be like a campfire in a steel forest, where even if the flames leave the stone ring, they can spread no further. Maybe the ASI will construct a steel forest, or maybe not. We have no way of knowing.
An ASI could use 1% of its resources to manage the nuisance humans and 'tend the fire', or it could use 0.1% of its resources to manage the nuisance humans by 'dousing' them. Or it could incidentally replace all the trees with steel, and somehow value s'mores enough that it doesn't replace the campfire with a steel furnace. This is... not impossible? But I'm not counting on it.
Sorry for the ten thousand edits. I wanted the metaphor to be as strong as I could make it.
I understand that perspective, but I think it's a small cost to Sam to change the way he's framing his goals. Small nudge now, to build good habits for when specifying goals becomes, not just important, but the most important thing in all of human history.
I'm very glad that this was written. It exceeded my expectations of OpenAI. One small problem that I have not seen anyone else bring up:
"We want AGI to empower humanity to maximally flourish in the universe."
If this type of language ends up informing the goals of an AGI, we could see some problems here. In general, we probably won't want our agentic AIs to be maximizers for anything, even if it sounds good. Even in the best-case scenario where this really does cause humanity to flourish in a way that we would recognize as such, what about when human flourishing necessitates the genocide of less advanced alien life in the universe?
I did not know about HPPD, although I've experienced it. After a bad trip (second time I'd ever experimented), I experienced minor hallucinogenic experiences for years. They were very minor (usually visuals when my eyes were closed) and would not have been unpleasant, except that I had the association with the bad trip.
I remember having so much regret on that trip. Almost everything in life, you have some level of control over. You can almost always change your perspective on things, or directly change your situation. On this trip though, I realized I messed with the ONE thing that I am always stuck with: my own point of view. I couldn't BELIEVE I had messed with that so flippantly.
That said, the first time I tried hallucinogens, it was a very pleasant and eye-opening experience. The point is not to take it lightly, and not to assume there are no risks.
As another anecdote, I had a friend when I was 17 who sounds very much like you, John. He knew more about drugs then than I ever have during my life. His knowledge of what was 'safe' and what wasn't didn't stop his drug usage from turning into a huge problem for him. I am certain that he was better off than someone thoughtlessly snorting coke, but he was also certainly worse off than he would have been had he never been near any sort of substance. If nothing else, it damaged some of his relationships, and removed support beams that he needed when other things inevitably went wrong. It turns out, damaging your reputation actually can be bad for you.
If you decide to experiment with drugs (and I am not recommending that, just saying if), my advice is two-fold:
1) Don't be in a hurry. You can absolutely afford to wait a few years (or decades), and it won't negatively impact you or your drug experience. Make sure you are in the right headspace.
2) Don't let it become a major aspect of your life. Having a couple trips to see what it's like is completely different from having a bi-monthly journey and making it your personality to try as many different mind-benders as possible. I've seen that go very badly.
Well for my own sanity, I am going to give money anyway. If there's really no differentiation between options, I'll just keep giving to Miri.
I am not an AI researcher, but it seems analogous to the acceptance of mortality for most people. Throughout history, almost everyone has had to live with the knowledge that they will inevitably die, perhaps suddenly. Many methods of coping have been utilized, but at the end of the day it seems like something that human psychology is just... equipped to handle. X-risk is much worse than personal mortality, but you know, failure to multiply and all that.
Ha, no kidding. Honestly, it can't even play chess. I just tried to play it, and asked it to draw the board state after each move. It started breaking on move 3, and deleted its own king. I guess I win? Here was its last output.
For my move, I'll play Kxf8:
8  r n b q . b . .
7  p p p p . p p p
6  . . . . . n . .
5  . . . . p . . .
4  . . . . . . . .
3  . P . . . . . .
2  P . P P P P P P
1  R N . Q K B N R
   a b c d e f g h