For an introduction to young audiences, I think it's better to get the point across in less technical terms before trying to formalize it. The OP jumps to epsilon pretty quickly. I would try to get to a description like "A sequence converges to a limit L if its terms are 'eventually' arbitrarily close to L. That is, no matter how small a (nonzero) tolerance you pick, there is a point in the sequence where all of the remaining terms are within that tolerance." Then you can formalize the tolerance, epsilon, and the point in the sequence, k, that depends on epsilon.
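For reference, the formalization the description points toward (epsilon as the tolerance, k as the point in the sequence that depends on it) would be:

```latex
\lim_{n \to \infty} a_n = L
\iff
\forall \varepsilon > 0,\ \exists k \in \mathbb{N},\ \forall n \ge k:\ |a_n - L| < \varepsilon
```

The quantifier order is the whole game: epsilon is chosen first, and k is allowed to depend on it.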
Note that this doesn't depend on the sequence being indexed by integers or the limit being a real number. More generally, given a directed set (S, ≤), a topological space X, and a function f: S -> X, a point x in X is a limit of f if for any neighborhood U of x, there exists t in S such that s ≥ t implies f(s) in U. That is, for every neighborhood U of x, f is "eventually" in U. (In a general topological space limits need not be unique; uniqueness requires the space to be Hausdorff.)
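In symbols, with the directed set (S, ≤) and f: S → X as above, this reads:

```latex
x \in \lim f
\iff
\forall U \ni x \text{ open},\ \exists t \in S,\ \forall s \in S:\ s \ge t \implies f(s) \in U
```

The standard epsilon-k definition is the special case S = ℕ with the usual order, X = ℝ, and U the interval (L − ε, L + ε).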
I have a hard time imagining a strong intelligence wanting to be perfectly goal-guarding. Values and goals don't seem like safe things to lock in unless you have very little epistemic uncertainty in your world model. I certainly don't wish to lock in my own values and thereby eliminate possible revisions that come from increased experience and maturity.
The size of the "we" is critically important. Communism can occasionally work in a group small enough that everyone knows everyone, but scaling it up to a country requires different coordination mechanisms to succeed.
This may help with the second one:
https://www.lesswrong.com/posts/k5JEA4yFyDzgffqaL/guess-i-was-wrong-about-aixbio-risks
How about this one?
A couple more (recent) results that may be relevant pieces of evidence for this update:
A multimodal robotic platform for multi-element electrocatalyst discovery
"Here we present Copilot for Real-world Experimental Scientists (CRESt), a platform that integrates large multimodal models (LMMs, incorporating chemical compositions, text embeddings, and microstructural images) with Knowledge-Assisted Bayesian Optimization (KABO) and robotic automation. [...] CRESt explored over 900 catalyst chemistries and 3500 electrochemical tests within 3 months, identifying a state-of-the-art catalyst in the octonary chemical space (Pd–Pt–Cu–Au–Ir–Ce–Nb–Cr) which exhibits a 9.3-fold improvement in cost-specific performance."
Generative design of novel bacteriophages with genome language models
"We leveraged frontier genome language models, Evo 1 and Evo 2, to generate whole-genome sequences with realistic genetic architectures and desirable host tropism [...] Experimental testing of AI-generated genomes yielded 16 viable phages with substantial evolutionary novelty. [...] This work provides a blueprint for the design of diverse synthetic bacteriophages and, more broadly, lays a foundation for the generative design of useful living systems at the genome scale."
Would you like a zesty vinaigrette or just a sprinkling of more jargon on that word salad?
I had to reread part 7 of your review to fully understand what you were trying to say. It’s not easy to parse on a quick read, so I suspect Zvi, like me on my first pass, misread its context and content. On first skim it comes across as a technical argument for why you disagree with the overall thesis, which makes things pretty confusing.
Which of these is brilliant or funny? They all look nonsensical to me.
This had a decent start, and the Timothée Chalamet line was genuinely funny to me, but it ended rather weakly. It doesn’t seem like Claude can plan the story arc as well as it can operate on the local scale.