In the past couple of years it’s been popular to read Richard Rhodes’ The Making of the Atomic Bomb, especially after Situational Awareness’s prediction / promotion of a new Manhattan Project in AI. However, I think you’ll find more that applies to the current moment in Dark Sun. Consider: physicists...
Who's done high-quality work, or can tell a convincing story, about managing the economic transition to a world where machines can do every job better than humans? Some common tropes, and why I don't think they're good enough:

* "We've always managed in the past. Take the industrial revolution...
> Today, Retired U.S. Army General Paul M. Nakasone has joined our Board of Directors. A leading expert in cybersecurity, Nakasone’s appointment reflects OpenAI’s commitment to safety and security, and underscores the growing significance of cybersecurity as the impact of AI technology continues to grow.
>
> As a first...
Jack Clark's retrospective on GPT-2 is full of interesting policy thoughts; I recommend reading the whole thing. One excerpt:

> I've come to believe that in policy "a little goes a long way" - it's far better to have a couple of ideas you think are robustly good in all...
1. How Many Features are Active at Once?

Previously I’ve seen the rule of thumb “20-100 for most models”. Anthropic says:

> For all three SAEs, the average number of features active (i.e. with nonzero activations) on a given token was fewer than 300

2. Splitting SAEs

Having multiple different-sized...
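To make "features active on a given token" concrete, here is a minimal sketch of the counting itself: an SAE encoder computes ReLU(W_enc·x + b_enc), and the active-feature count is just the number of nonzero entries. All dimensions and weights below are invented for illustration (real SAEs are trained, and their dictionaries are far larger).

```python
# Toy sketch: count how many SAE features "fire" (are nonzero) on one
# activation vector. Weights are random stand-ins, not a trained SAE.
import random

random.seed(0)

D_MODEL = 16      # residual-stream width (hypothetical)
N_FEATURES = 128  # SAE dictionary size (hypothetical)

# Random encoder weights plus a negative bias; the bias is what pushes
# most pre-activations below zero, so the ReLU output is sparse.
W_enc = [[random.gauss(0, 0.1) for _ in range(D_MODEL)]
         for _ in range(N_FEATURES)]
b_enc = [-0.2] * N_FEATURES

def active_features(x):
    """Number of nonzero features after features = ReLU(W_enc @ x + b_enc)."""
    count = 0
    for row, b in zip(W_enc, b_enc):
        pre = sum(w * xi for w, xi in zip(row, x)) + b
        if pre > 0:  # ReLU: only positive pre-activations survive
            count += 1
    return count

x = [random.gauss(0, 1) for _ in range(D_MODEL)]  # a fake token activation
print(active_features(x), "of", N_FEATURES, "features active")
```

The rule-of-thumb numbers above are this quantity averaged over many tokens; a more negative bias (or an explicit sparsity penalty in training) drives the count down.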
GPT-4o both uses a new tokenizer and was trained directly on audio (whereas my understanding is that GPT-4 was trained only on text and images). Is there precedent for upgrading a model to a new tokenizer? It seems like it's probably better to think of it as an entirely new...
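One way to see why a tokenizer swap is so disruptive: the same text maps to different token IDs under the two vocabularies, so the rows of the old embedding table no longer line up with anything. A toy sketch, with two invented vocabularies and greedy longest-match tokenization standing in for real BPE:

```python
# Two made-up vocabularies; real tokenizers have ~100k-200k entries.
old_vocab = {"hel": 0, "lo": 1, " wor": 2, "ld": 3}
new_vocab = {"hello": 0, " world": 1}

def tokenize(text, vocab):
    """Greedy longest-match tokenization (a simple stand-in for BPE)."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                ids.append(vocab[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i:]!r}")
    return ids

print(tokenize("hello world", old_vocab))  # [0, 1, 2, 3]
print(tokenize("hello world", new_vocab))  # [0, 1]
```

ID 0 means "hel" to the old model and "hello" to the new one, so the old embedding and unembedding matrices are useless under the new vocabulary: everything touching token IDs has to be relearned, which is part of why it may be cleaner to call it a new model.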
I previously expected open-source LLMs to lag far behind the frontier because they're very expensive to train, and naively it doesn't make business sense to spend on the order of $10M to (soon?) $1B to train a model only to give it away for free. But this has been repeatedly...