This line leaves me wondering about human isolation on our little planet and what maladaptations humanity is stuck with because we lack neighbors to learn from.
Failing to adopt cheap and plentiful nuclear power comes to mind as a potential example.
I largely agree with the sentiment of your post. However, one nitpick:
The world's largest protest-riot ever, when measured by estimated damage to property.
This claim is questionable. The consensus is that the economic cost of the George Floyd protests was between one and several billion dollars. Perhaps it was the most expensive riot in US history (though, when inflation-adjusted, the LA riots may give it a run for its money) and the most expensive to be cleanly accounted for economically, but intuitively I would imagine many of the most violent riots in history, such as the Partition riots in India and Pakistan, caused more economic damage.
Sam's comments a few months ago would also make sense given this context:
https://www.lesswrong.com/posts/ndzqjR8z8X99TEa4E/?commentId=XNucY4a3wuynPPywb
further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.” [...] Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them. [...] At MIT last week, Altman confirmed that his company is not currently developing GPT-5. “An earlier version of the letter claimed OpenAI is training GPT-5 right now,” he said. “We are not, and won't for some time.”
This new rumor about GPT-4's architecture is just that and should be taken with a massive grain of salt...
That said, it would explain OpenAI's recent comments about the difficulty of training a model better than GPT-3. IIRC, OA spent a full year unable to substantially improve on GPT-3. Perhaps the scaling laws do not hold? Or they ran out of usable data? And thus this new architecture was deployed as a workaround. If this is true, it supports my suspicion that AI progress is slowing and that a lot of the low-hanging fruit has already been picked.
Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
This seems to suggest a slowdown in compute scaling, and Sam has previously acknowledged that the data bottleneck is a real roadblock.
Yep, just as developing countries don't bother with landlines, so too will companies, as they overcome inertia and embrace AI, choose to skip older, outdated models and jump to the frontier, wherever that may lie. No company embracing LLMs in 2024 is gonna start by first integrating GPT-2, then GPT-3, then GPT-4 in an orderly and gradual manner.
Pretty sure that's just an inside joke about Lex being a robot that stems from his somewhat stiff personality and unwillingness to take a strong stance on most topics.
You're likely correct, but I'm not sure that's relevant. For one, Chinchilla wasn't announced until 2022, nearly two years after the release of GPT-3. So the slowdown is still apparent even if we assume OpenAI was nearly done training an undertrained GPT-4 (which I have seen no evidence of).
Moreover, the focus on efficiency itself is evidence of an approaching wall. Taking an example from the 20th century, machines got much more energy efficient after the '70s, which is also when energy stopped getting cheaper. Why didn't OpenAI pivot their attention to fine-tuning and efficiency after the release of GPT-2? Because GPT-2 was cheap to train and used only a tiny fraction of the available data, so those concerns simply weren't pressing yet. Efficiency is typically a reaction to scarcity.
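To make "undertrained" concrete, here is a quick back-of-the-envelope sketch in Python (my own numbers, not part of the original comment), using the Chinchilla rule of thumb of roughly 20 training tokens per parameter and the standard ~6·N·D compute estimate; the GPT-3 figures are the publicly reported ones (175B parameters, ~300B training tokens):

```python
# Rough back-of-the-envelope sketch: compare GPT-3's reported training data
# to the Chinchilla rule of thumb of ~20 training tokens per parameter,
# with training compute approximated as ~6 * N * D FLOPs.

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal token count for a dense model."""
    return n_params * tokens_per_param

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard ~6*N*D estimate of training compute in FLOPs."""
    return 6 * n_params * n_tokens

gpt3_params = 175e9   # publicly reported GPT-3 parameter count
gpt3_tokens = 300e9   # publicly reported GPT-3 training tokens

optimal = chinchilla_optimal_tokens(gpt3_params)
print(f"Chinchilla-optimal tokens for 175B params: ~{optimal / 1e12:.1f}T")
print(f"Tokens GPT-3 reportedly trained on:        ~{gpt3_tokens / 1e9:.0f}B "
      f"({gpt3_tokens / optimal:.0%} of optimal)")
print(f"Implied GPT-3 training compute:            ~{training_flops(gpt3_params, gpt3_tokens):.1e} FLOPs")
```

By that heuristic, a 175B-parameter dense model "wants" a few trillion tokens, roughly an order of magnitude more than GPT-3 reportedly saw, which is why data availability rather than raw compute starts to look like the binding constraint at larger scales.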
AFAIK, no information regarding this has been publicly released. If my assumption that Bing's AI is somehow worse than GPT-4 holds, then I suspect some combination of three possible explanations must be true:
Interested in any of the roles. I haven't played chess competitively in close to a decade, and my USCF Elo was in the 1500s when I stopped. So long as I'm given a heads-up in advance, I'm free almost all day on Wednesdays, Fridays, and Sundays.