I work at the Alignment Research Center (ARC). I write a blog on stuff I'm interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
Thanks! This makes me curious: is sports betting anomalous (among forms of consumption) in terms of how much it substitutes for financial investing?
I think the "Provably Safe ML" section is my main crux. For example, you write:
One potential solution is to externally gate the AI system with provable code. In this case, the driving might be handled by an unsafe AI system, but its behavior would have “safety in the loop” by having simpler and provably safe code restrict what the driving system can output, to respect the rules noted above. This does not guarantee that the AI is a safe driver - it just keeps such systems in a provably safe box.
I currently believe that if you try to do this, you will either have to restrict the outputs so much that the car won't be able to drive well, or else fail to prove that the actions allowed by the gate are safe. Perhaps you can elaborate on why this approach seems like it could work?
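To make the crux concrete, here is a minimal sketch of the architecture as I understand it (all names and numbers are mine, purely illustrative, not anything from the post). My claim is that the proposal lives or dies on whether a predicate as simple as `provably_safe` can both permit competent driving and actually entail safety:

```python
from dataclasses import dataclass

@dataclass
class Action:
    steering: float      # radians
    acceleration: float  # m/s^2

# Rules simple enough to state and verify formally (illustrative values).
SPEED_LIMIT = 30.0  # m/s
MAX_DECEL = -8.0    # m/s^2

def provably_safe(action: Action, speed: float) -> bool:
    """The 'provable code': a simple predicate on outputs that a verifier
    could check. My worry: anything this simple either blocks competent
    driving or fails to rule out genuinely unsafe behavior."""
    next_speed = speed + action.acceleration  # speed ~1 s from now
    return next_speed <= SPEED_LIMIT and action.acceleration >= MAX_DECEL

def gated_drive(unsafe_ai_policy, observation, speed: float) -> Action:
    action = unsafe_ai_policy(observation)  # opaque, unverified system
    if provably_safe(action, speed):
        return action
    # Verified fallback: brake. Proving even this fallback is "safe" in all
    # traffic situations (e.g. with a truck close behind) already seems hard.
    return Action(steering=0.0, acceleration=MAX_DECEL)
```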
(I feel similarly about other proposals in that section.)
For what it's worth, I don't have any particular reason to think that that's the reason for her opposition.
But it seems like SB1047 hasn't been very controversial among CA politicians.
I think this isn't true. Concretely, I bet that if you looked at the distribution of Democratic No votes among bills that reached Newsom's desk, this one would be among the highest (7 No votes and a bunch of not-voting, which I think is just a polite way to vote No; source). I haven't checked and could be wrong!
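If someone wanted to check this, the analysis I have in mind is something like the following (an untested sketch; the file and column names are made up):

```python
import pandas as pd

# Hypothetical dataset: one row per (bill, legislator) floor vote,
# restricted to bills that reached Newsom's desk.
votes = pd.read_csv("newsom_desk_bills_votes.csv")

dems = votes[votes["party"] == "D"]
# Count explicit No votes plus abstentions, which I read as polite No's.
dems = dems.assign(opposed=dems["vote"].isin(["No", "Not voting"]))
opposition = (dems.groupby("bill")["opposed"].sum()
                  .sort_values(ascending=False))
print(opposition.head(10))  # is SB1047 near the top of this list?
```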
My take is basically the same as Neel's, though my all-things-considered guess is that he's 60% or so to veto. My position on Manifold is in large part an emotional hedge. (Otherwise I would be placing much smaller bets in the same direction.)
I believe that Pelosi had never once spoken out against a state bill authored by a California Democrat before this.
I'm probably no longer willing to make the bet, sorry. While my inside view is that Harris is more likely to win than Nate Silver's 72%, I defer to his model enough that my all-things-considered view now puts her win probability around 75%.
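(For concreteness, with entirely made-up weights: if my inside view were around 85% and I put roughly three-quarters of my weight on Silver's model, I'd get 0.75 × 0.72 + 0.25 × 0.85 ≈ 0.75.)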
[Edit: this comment is probably retracted, although I'm still confused; see discussion below.]
I'd like clarification from Paul and Eliezer on how the bet would resolve, if it were about whether an AI could get IMO silver by 2024.
Besides not fitting within the time constraints (which I think is kind of a cop-out, because the process seems pretty parallelizable), I think the main reason that such a bet would resolve No is that problems 1, 2, and 6 had the form "find the right answer and prove it right", whereas the DeepMind AI was given the right answer and merely had to prove it right. Often, finding the right answer is a decent part of the challenge of solving an Olympiad problem. Quoting more extensively from Manifold commenter Balasar:
The "translations" to Lean do some pretty substantial work on behalf of the model. For example, in the theorem for problem 6, the Lean translation that the model is asked to prove includes an answer that was not given in the original IMO problem.
```lean
theorem imo_2024_p6 (IsAquaesulian : (ℚ → ℚ) → Prop)
    (IsAquaesulian_def : ∀ f, IsAquaesulian f ↔
      ∀ x y, f (x + f y) = f x + y ∨ f (f x + y) = x + f y) :
    IsLeast {(c : ℤ) | ∀ f, IsAquaesulian f →
      {(f r + f (-r)) | (r : ℚ)}.Finite ∧
      {(f r + f (-r)) | (r : ℚ)}.ncard ≤ c} 2
```
The model is supposed to prove that "there exists an integer c such that for any aquaesulian function f there are at most c different rational numbers of the form f(r) + f(−r) for some rational number r, and find the smallest possible value of c".

The original IMO problem does not include that the smallest possible value of c is 2, but the theorem that AlphaProof was given to solve has the number 2 right there in the theorem statement. Part of the problem is to figure out what 2 is.

Link: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/imo-2024-solutions/P6/index.html
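For contrast, here is my own sketch (not DeepMind's actual setup) of what an un-hinted formalization might look like: the bound is existentially quantified, so the prover would have to produce the witness 2 itself before proving minimality.

```lean
-- My illustrative sketch, not AlphaProof's actual input: the bound is
-- existentially quantified, so proving this requires finding c = 2.
theorem imo_2024_p6_unhinted (IsAquaesulian : (ℚ → ℚ) → Prop)
    (IsAquaesulian_def : ∀ f, IsAquaesulian f ↔
      ∀ x y, f (x + f y) = f x + y ∨ f (f x + y) = x + f y) :
    ∃ c : ℤ, IsLeast {(b : ℤ) | ∀ f, IsAquaesulian f →
      {(f r + f (-r)) | (r : ℚ)}.Finite ∧
      {(f r + f (-r)) | (r : ℚ)}.ncard ≤ b} c
```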
I'm now happy to make this bet about Trump vs. Harris, if you're interested.
Looks like this bet is voided. My take is roughly that:
Since no one is giving answers, I'll give my super uninformed take. If anyone replies with a disagreement, you should presume that they are right.
During a recession, countries want to spend their money on economic stimulus programs that create jobs and get their citizens to spend more. China seems to be doing this.
Is spending on AI development good for these goals? I'm tempted to say no. One exception is building power plants, which China might eventually need to do in order to build sufficiently large models.
At the same time, China seems to have a pretty big debt problem: its debt-to-GDP ratio was 288% in 2023. (I believe this figure covers not just national debt but also local-government debt and maybe household debt, of which I think China has a lot compared to countries like the United States.) This might in practice constrain how much it can spend.
So China is in a position of wanting to spend, but not too much, and AI probably isn't a great place to spend in order to accomplish its immediate goals.
In other words, I think the recession makes AGI development a lower priority for the Chinese government. It seems quite plausible to me that the recession might delay the creation of a large government project for building AGI by a few years.
(Again, I don't know stuff about this. Maybe someone will reply saying "Actually, China has already created a giant government project for building AGI" with a link.)