One year later, do you still agree with this analysis?
A post going over how much compute each frontier AI lab has would likely be very helpful.
I believe the Scramblers from Blindsight weren't self-aware, which means they couldn't think about their own interactions with the world.
As I recall, the crew was giving one of the Scramblers a series of cognitive tests. It aced all the tests involving numbers and spatial reasoning, but failed the one that required the subject to be self-aware.
I agree with you that the "structure of suffering" is likely to be represented in the neurons of shrimp. I think it's clear that shrimp may "suffer" in the sense that they react to pain, move away from sources of pain, would prefer to be in a painless state rather than a painful one, etc.
But where I diverge from the conclusions drawn by Rethink Priorities is that I believe shrimp are less "conscious" (for lack of a better word) than humans, and thus their suffering matters less. Though shrimp show outward signs of pain, I sincerely doubt that with just...
Is it an FDA issue, or more of a drug-discovery issue?
Adderall helps to combat akrasia to an extent, though results may vary between people (possibly modafinil as well, though I haven't tried it). Still, it is far from a "magic pill": the effects wear off, there are side effects, and the utility of long-term use is uncertain.
How easy would it be to develop a drug that's more effective than Adderall or the other ADHD stimulants? It was developed in the 1970s, nearly 50 years ago, and the fact that we don't have a better alternative right now tells me the low-hanging fruit has been picked. But are there active efforts to develop a better drug? I'm not sure.
Thanks for hosting this competition!
Fermi Estimate: How many lives would be saved if every person in the west donated 10% of their income to EA related, highly effective charities?
Model
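A minimal back-of-the-envelope sketch of one way the model could go. Every input here is my own rough assumption (Western population, median income, and a GiveWell-style cost per life saved), not a figure from the post:

```python
# Rough Fermi sketch; all inputs are assumptions, not data from the post.
west_population = 1e9      # people in "the West", roughly
median_income = 40_000     # USD per year, rough
donation_rate = 0.10       # 10% of income
cost_per_life = 5_000      # USD per life saved, GiveWell-style marginal estimate

total_donated = west_population * median_income * donation_rate  # ~$4T/year
naive_lives = total_donated / cost_per_life                      # ~800M/year

# The naive answer far exceeds annual preventable deaths, so marginal cost
# per life would rise sharply at scale; cap at a rough ceiling instead.
preventable_deaths_per_year = 10e6
lives_saved = min(naive_lives, preventable_deaths_per_year)
print(f"naive: {naive_lives:,.0f}, capped: {lives_saved:,.0f}")
```

The interesting part of the estimate is the cap: at this scale of funding, the answer is dominated by how many deaths are actually preventable, not by cost-effectiveness of the marginal charity.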
The thing with NVIDIA, though, is that the implied volatility is so high, and so are the premiums. I spent a few hours looking for a better trade than that, though I think it's pretty solid.
I think SPY calls can possibly be much better than NVIDIA calls. The market doesn't expect the stock market to go up significantly in the next few years, but I think there's a chance it will, assuming timelines are short. Here's the SPY YoY growth during the internet boom in the 90s:
2000: -9.7% ($86.54)
1999: +20.4% ($95.88)
1998: +28.7% ($79.65)
1997: +33.5% ($61.89)
1996: +22...
Assuming short timelines, I wonder how much NVIDIA's stock will increase and if anywhere near a 100x return is possible.
The furthest-out, highest-strike NVIDIA call I could find is a $290 strike, dated Jan 15 2027, priced at $13.25. If NVIDIA goes to a $10T market cap I get an 8x return on investment; if the company goes to a $15T market cap I get a ~20x return on investment.
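A quick sanity check of those multiples. The share count below is my own rough assumption (NVIDIA's post-split float is on the order of 25B shares); the strike and premium are the ones quoted above, and the exact multiple shifts with the share-count assumption:

```python
# Assumed inputs: share count is a rough guess, strike/premium from the quote above.
shares_outstanding = 24.6e9  # rough post-split share count (assumption)
strike = 290.0               # Jan 15 2027 call
premium = 13.25              # price paid per share of the contract

def payoff_multiple(market_cap: float) -> float:
    """Return on premium if held to expiry at the given market cap.

    Ignores fees, taxes, and any remaining time value at exit.
    """
    price = market_cap / shares_outstanding
    intrinsic = max(price - strike, 0.0)
    return intrinsic / premium

print(payoff_multiple(10e12))  # roughly 8-9x at a $10T market cap
print(payoff_multiple(15e12))  # roughly 20-25x at $15T, depending on share count
```

The payoff is convex in market cap, which is the whole appeal of far-out-of-the-money calls under short timelines: most scenarios expire worthless, but the tail scenarios pay many multiples.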
I'm not sure how realistic it is for NVIDIA to increase past a $15 trillion market cap. Plus, increased government intervention seems like it would negatively impact profits.
Polymarket has gotten lots of attention in recent months, but I was shocked to find out how much inefficiency there really is.
There was a market titled "What will Trump say during his RNC speech?" that was up a few days ago. At 7 pm, the transcript for the speech was leaked, and you could easily find it with a Google search or by looking at the Polymarket Discord.
Trump started his speech at 9:30, and it was immediately clear that he was using the script. A full hour into the speech I stumbled onto the transcript on Polymarket's Discord. Despite the word "prisons" b...
You said you've been buying calls on the general stock market. Instead, why not buy calls on 20-30 tech companies that'll likely benefit from slow takeoff?
This is very speculative, but if Anthropic/OpenAI/Google/Meta do achieve TAI and we head toward a slow takeoff, geopolitical risk from China may be a concern. To the best of my knowledge China is a few years behind us on AI and doesn't have the compute capability to catch up. I doubt China will just sit back and let the US achieve such a strategic advantage, and it may invade Taiwan to cut off our supply of GPUs.
Two naive questions.
What bottlenecks in model capability are removed with the use of GB300s for pretraining? Should we expect pretraining progress in 2026 to be significantly faster just based on the fact that GB300s are coming online?
Why has it taken 4ish years since the launch of GPT-4 to train a model with 2 OOMs more FLOPs?
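To put the 2-OOM gap in perspective, here's a rough sketch of the GPU-years involved. All the inputs are my own assumptions (a commonly cited ~2e25 FLOP estimate for GPT-4, an H100-class chip at ~1e15 FLOP/s peak, and 40% utilization), not figures from this thread:

```python
# Back-of-the-envelope: GPU-years needed for a 2-OOM jump over GPT-4.
gpt4_flop = 2e25                 # rough public estimate of GPT-4 training compute
target_flop = gpt4_flop * 100    # 2 OOMs more

peak_flops = 1e15                # ~H100-class dense BF16 peak, rough assumption
utilization = 0.4                # assumed model FLOP utilization
seconds_per_year = 3.15e7

gpu_years = target_flop / (peak_flops * utilization * seconds_per_year)
print(f"{gpu_years:,.0f} GPU-years")  # on the order of 150k-160k GPU-years
# e.g. ~100k GPUs running continuously for over a year -- which suggests the
# bottleneck is cluster buildout (chips, power, datacenters), not algorithms.
```

Newer chips shrink this somewhat, but the arithmetic makes it plausible that a 2-OOM scale-up simply had to wait for hardware supply and datacenter capacity.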