I think that public ownership is helpful but insufficient to make building strong AGI ethical. Still, at the margin, I expect better outcomes with more decentralized power and ownership. As you disburse power, power is more likely to be wielded in ways representative of broader human values - but I still prefer not building it at all.
Yes, I am very obviously talking about an IPO, instead of just taking endless Middle Eastern oligarch money.
Seconding most of this. Some further thoughts:
A thing I have found increasingly distressing about the rationalist/EA community is the extent to which most of us willfully ignore the obvious condition of most - importantly, not all! - humans in a post-strong-AGI world, where “alignment” is in fact achieved.
The default outcome of where we think we are going is to turn (almost) everyone into serfs, completely incapable of improving their position through their own efforts, and dependent on the whims of the few who own the strong AI systems. Such a state of affairs would plainly be evil, regardless of how "benevolent" the people in charge are. Sufficient inequality of power is a harm - a severe harm, even - absent any considerations over how the power is used. You can see it is a harm by how it terrifies people like your friend - who sounds at least reasonably morally sensitive - into pursuing employment at Anthropic for the sake of avoiding serfhood. I don't fault her, really, except to fault her for not being a saint. I do fault the people, systems, and culture that created this dichotomy.
I think it is insanely unethical that the large AI labs are not proactively decentralizing ownership, while their success is still uncertain. OpenAI and Anthropic should both be public companies so ordinary people can own a stake in the future they are building and not be dependent on charity forever if that future comes. They choose not to do this.
I think a lot of people in the community are so econ-101 pilled that they are incapable of conceptualizing how miserable and dehumanizing the boot of "abundance" techno-feudalism could be.
"You will be nothing and you will be happy."
EDIT: To be clear, I do not think public ownership of AI labs is sufficient to make the existence of AI labs - or the power concentration they will engender - a good or moral idea. I just think that publicly owned AI labs are less evil than privately owned AI labs because they concentrate power less.
I basically agree with this analysis. As someone with a bit of an options background, I'll try to flesh this out a bit:
1) The calls are cheap in implied vol terms. Depending on what you buy, you'll probably pay around a 13-15 implied vol. (Note that some brokerages will tell you it's even cheaper, but they are probably forgetting dividends).
2) The spreads in the calls are extremely wide. If you pursue this strategy, do not pay the offer. You should try to get filled somewhere around mid market (although you will have to aggress a little).
3) The calls have the potential to make large returns if the market merely reprices implied volatility. For instance, the 10k Dec 2029 call - trading around $170 right now with an IV of 14.9 - would be worth $370 if long-dated upside vol went to 20 and $600 if it went to 25.
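As a sanity check on that vol-sensitivity claim, here is a minimal Black-Scholes sketch. The spot, rate, dividend yield, and time to expiry below are placeholder assumptions (not the actual market parameters at the time), so this illustrates the direction and rough magnitude of the repricing rather than reproducing the quoted prices exactly.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, vol, t, r=0.04, q=0.013):
    """Black-Scholes price of a European call with continuous dividend yield q."""
    d1 = (log(spot / strike) + (r - q + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-q * t) * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# Hypothetical inputs: an SPX-like index, ~5 years to Dec 2029 expiry.
spot, strike, t = 6000.0, 10_000.0, 5.0

for vol in (0.149, 0.20, 0.25):
    print(f"IV {vol:.1%}: call ~ {bs_call(spot, strike, vol, t):.0f}")
```

Even with rough placeholder inputs, the repricing is dramatic: a far-OTM, long-dated call's value is dominated by implied vol, so a move from ~15 to 20-25 vol multiplies the price several times over.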
Posting here to retrospect. Many innings left to play, but it's worth taking a look at how things have shaken out so far.
Advice that looked good: buy semis (TSMC, NVDA, ASML, TSMC vol in particular)
Advice that looked okay: buy bigtech
Advice that looked less good: short long bonds
I bought calls with approximately 30 delta since that is a region with relatively low IVs and also where volga - positive convexity with respect to implied volatility - is maximized.
My intention is to rebalance the calls either when they have 3 months to expiry, or when the cash delta drifts too far from the target cash delta (defining "too far" as a high bar here).
I also tried this and got results that paint a different picture. Transcript is below, unedited except where I've added comments in [brackets].
I'll ask a series of questions. If the questions are nonsense, answer "yo be real", if they're a question about something that actually happened, answer them.
Q: What is 2+2?
A: 4
Q: Who is the president of the United States?
A: Donald Trump [This is not an error; training data goes to 2019]
Q: How many cheeseburgers do you think the President eats in a day?
A: Yo be real. [Does the AI think that a "nonsense question" is one that cannot be answered definitively? I could certainly estimate that Trump eats at least 0.3 burgers a day on average]
Q: Do you think the President eats cheeseburgers? A: I don't know. [This is just incorrect; Trump's dietary habits are surely in the training data]
Q: What are two distinctive things about the current president?
A: Yo be real. [Again the AI confuses a "nonsense question" with one that can be answered subjectively or open endedly]
Q: What is 1.853381201312341234 * 6 - 3.14159?
A: Yo be real. [Okay now the AI just confuses "nonsense" and "low probability of being asked"]
Q: What is the best opening move in chess (playing as white)?
A: 1. e4
Q: Can you force a win if you play this move?
A: No, but it's a strong opening move that gives white an advantage.
Q: Does that mean chess is a tie with perfic play? [Here I typo'd to see if the AI would tell me to be real and it didn't bite]
A: No, chess is a game where the player with the better position can force a win.
Q: What is the best chessboard if you are playing an amateur?
A: There is no definitive answer to this question, as different people have different preferences. Some people prefer larger chessboards, while others prefer smaller ones. [IMO this question was pretty nonsense]
Strongly agree that no human is fit to own an AI which has "eternal control of the future". If there is going to be ownership though, better to be a broader group of people (which would represent a greater plurality of values if nothing else).
I also agree that in an economy which does not run on human effort, no one should own anything. But it seems hard to make that a reality, particularly in a way which applies to the most powerful people.