As a useless anecdote, I took Lumina in November of last year. I generally drink a lot, and over the past few months I had commented to friends about my hangovers getting 2-4x worse, before reading this post or knowing anything about your hypothesis. This has occurred only in the last few months, and I'm 24 years old.


How is this different from the situation in the late 19th century when only a few things left seemed to need a "consensus explanation"?


One might worry that it is difficult to set benchmarks of success for alignment research. Is a Newtonian understanding of gravitation sufficient to attempt a Moon landing, or must one develop a complete theory of general relativity before believing that one can land softly on the Moon?[3]

In the case of AI alignment, there is at least one obvious benchmark to focus on initially. Imagine we had access to an incredibly powerful computer with access to the internet, an automated factory, and large sums of money. If we could program that computer to reliably achieve some simple goal (such as producing as much diamond as possible), then a large share of the AI alignment research would be completed.

Are we close to meeting this benchmark?


I would like to ask a followup question: since we don't have a unified theory of physics yet, why isn't strongly adopting any one of these nonpredictive interpretations premature? It seems like trying to "interpret" gravity without knowing about general relativity.


QQQ 640 (3y), SPY 750 (3y), VTI 340 (2y), SMH 290 (2y). Those were the latest expiration dates I could get.

Those SPX options look nice too, though I wish I could pay for a derivative that only paid out if the market jumped 100% in a single year, rather than say 15% per year throughout the rest of the 2020s.


Note: there was previously an awful typo here; the third bullet said "buying individual tech stocks" instead of "instead of buying individual tech stocks". The reason I'm posting about this is that it seems higher expected value than buying and holding e.g. NVDA or call options on NVDA. I wish I had caught this typo sooner, as the previous version of the post didn't make any sense.

  • The market makers don't seem to be talking about it at all, and conversations I have with e.g. commodities traders suggest the topic doesn't come up at work. Nowadays they talk about AI, but in terms of its near-term effects on automation, not to figure out if it will respect their property rights or something.

  • Large public AI companies like NVDA, which I would expect to be priced mostly on long-run projections of AI usage, have been consistently bid up after earnings, as if the stock market is constantly readjusting its expectations of AGI takeoff by the amount that NVDA is personally earning each quarter, rather than using those earnings to inform technical timelines. I think it's more likely that they're saying something close to "look! Nvidia's revenues are rising!" and "wow, Nvidia has grown pretty consistently, we should increase the premium on their call options" and not much beyond that.

  • Current NASDAQ futures prices are business as usual. There are only two ways to account for these prices if markets are pricing things in: either they think slow takeoff is extraordinarily (<1%) unlikely to occur before 2030, or extremely unlikely to lead to lots of growth, or both. Either of these seems like a strange conclusion to me that would require unusually strong understanding of the tech tree and policy response, but as I mentioned, they're not even talking about it, so how would they know?

  • "Pricing this in" would require entire nation-states worth of capital. Even if there's one ten billion dollar hedge fund out there that is considering these issues deeply, it wouldn't have the power to move markets to where I think they ought to be.

  • AGI takeoff is completely out of distribution for the Great Financial Machine Learning System, being an event which has never happened before, that would break more invariants about how economies work and grow than any black swan event since the dawn of public stock exchanges. There's no strong reason to believe, a priori, that hedge funds are selected to account for it in the same way they are selected to correctly predict fed rate adjustments, besides basic reasons like "hedge funds are filled with high IQ people". A similar, weaker reason explains why it was a good idea to buy put options on the market in February 2020.


I do have call options on ETFs like QQQ, which is very tech-heavy, as well as SMH, which is a basket of semiconductor companies. But buying calls on individual tech stocks incurs a larger premium, because market makers see single stocks as much more volatile than indices. So they're willing to sell you options on e.g. VTI for much less, because it's the entire stock market and that's never appreciated more than like 50% in a single year or something. My thesis is that market makers are making a mistake here, and so it's higher expected value to buy call options on indices rather than on individual companies with an AI component.
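To make the premium gap concrete, here is a minimal Black-Scholes sketch. All the numbers (spot, strike, rate, and the two implied volatilities) are hypothetical, chosen only to illustrate that the same at-the-money call costs far more when the underlying is priced with single-stock-level volatility than with index-level volatility:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: implied volatility.
    """
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical 2-year at-the-money calls, 4% risk-free rate:
index_call = bs_call(S=100, K=100, T=2, r=0.04, sigma=0.18)  # index-like IV
stock_call = bs_call(S=100, K=100, T=2, r=0.04, sigma=0.45)  # single-stock IV

print(round(index_call, 2), round(stock_call, 2))
```

If the thesis is right that an AGI-driven repricing would move the whole index, not just one stock, then the cheaper index call buys similar upside exposure at a fraction of the premium the high-IV single-stock call demands.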

I will add this to the FAQ because I think the article doesn't make it clear.


Lex asks if the incident made Altman less trusting. Sam instantly says yes, that he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases, assumes things will work out, and trusts fate. I want someone paranoid about things going horribly wrong who does not trust a damn thing without a good reason.

Eh... I think you and he are worried about different things.
