Jonathan Paulson


Anyone have a good intuition for why Combinatorics is harder than Algebra, and/or why Algebra is harder than Geometry, for AIs? Why is the difficulty ordering different than it is for humans?

It’s funny to me that the one part of the problem the AI cannot solve is translating the problem statements to Lean. I guess it’s the only part that the computer has no way to check.

Does anyone know if “translating the problem statements” includes providing the solution (e.g. “an even integer” for P1), so the AI just needs to prove the solution correct? It’s not clear to me what’s human-written and what’s AI-written, and the solution is part of the “theorem” part, which I’d guess is human-written.
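For illustration, here is a toy Lean 4 example of what “the answer baked into the statement” looks like. This is not the actual formalization used for the competition problems, just a sketch assuming a recent Lean 4 toolchain where the `omega` tactic is available:

```lean
-- Toy illustration only (not the actual IMO formalization): the claimed
-- answer ("the result is even") is written into the theorem statement,
-- so a prover only has to verify the answer, not discover it.
theorem double_is_even (n : Nat) : ∃ k, n + n = 2 * k :=
  ⟨n, by omega⟩
```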

For row V, why is SS (Spectre Slayers) highlighted when DD (Demon Destroyers) is lower?

I think there's a typo; the text refers to "Poltergeist Pummelers" but the input data says "Phantom Pummelers".

My first pass was just to build a linear model for each exorcist based on the cases where they were hired, and assign each ghost the minimum-cost exorcist according to the model. This happens to obey all the constraints, so no further adjustment is needed.

My main concern with this is that the linear model is terrible (R² of 0.12) for the "Mundanifying Mystics". It's somewhat surprising (but convenient!) that we never choose the Entity Eliminators. (A code sketch of the approach follows the cost estimate below.)

A: Spectre Slayers (1926)
B: Wraith Wranglers (1930)
C: Mundanifying Mystics (2862)
D: Demon Destroyers (1807)
E: Wraith Wranglers (2154)
F: Mundanifying Mystics (2843)
G: Demon Destroyers (1353)
H: Phantom Pummelers (1923)
I: Wraith Wranglers (2126)
J: Demon Destroyers (1915)
K: Mundanifying Mystics (2842)
L: Mundanifying Mystics (2784)
M: Spectre Slayers (1850)
N: Phantom Pummelers (1785)
O: Wraith Wranglers (2269)
P: Mundanifying Mystics (2776)
Q: Wraith Wranglers (1749)
R: Mundanifying Mystics (2941)
S: Spectre Slayers (1667)
T: Mundanifying Mystics (2822)
U: Phantom Pummelers (1792)
V: Demon Destroyers (1472)
W: Demon Destroyers (1834)

Estimated total cost: 49822
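As a minimal sketch of the per-exorcist linear-model approach described above: the file names and column names below are placeholders, not the puzzle's actual schema, and the model choice is just plain least squares.

```python
# Sketch of the approach: fit one linear cost model per exorcist on the cases
# where that exorcist was hired, then assign each new ghost to whichever
# exorcist the models predict to be cheapest. Column/file names are assumed.
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.read_csv("past_cases.csv")      # hypothetical: one row per past case
new_ghosts = pd.read_csv("new_ghosts.csv")   # hypothetical: ghosts A..W to assign

# Everything except who was hired and what it cost is treated as a feature.
feature_cols = [c for c in history.columns if c not in ("exorcist", "cost")]

# Fit one linear model per exorcist, using only the cases they were hired for.
models = {}
for name, group in history.groupby("exorcist"):
    model = LinearRegression()
    model.fit(group[feature_cols], group["cost"])
    models[name] = model

# Predict every exorcist's cost for every new ghost; pick the cheapest.
# (Constraints aren't enforced here; they happened to be satisfied already.)
predictions = pd.DataFrame(
    {name: m.predict(new_ghosts[feature_cols]) for name, m in models.items()},
    index=new_ghosts.index,
)
assignment = predictions.idxmin(axis=1)
estimated_total = predictions.min(axis=1).sum()

print(assignment)
print("Estimated total cost:", round(estimated_total))
```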

I think you are failing to distinguish between "being able to pursue goals" and "having a goal".

Optimization is a useful subroutine, but that doesn't mean it's useful as the top-level loop. I can decide to pursue arbitrary goals for arbitrary amounts of time, but that doesn't mean that my entire life is in service of some single objective.

Similarly, it seems useful for an AI assistant to try and do the things I ask it to, but that doesn't imply it has some kind of larger master plan.

Professors are selected for being good at research, not for being good at teaching. They are also evaluated on their research, not their teaching. You are assuming universities primarily care about undergraduate teaching, but that is very wrong.

(I’m not sure why this is the case, but I’m confident that it is)

I think you are underrating the number of high-stakes decisions in the world. A few examples: whether or not to hire someone, the design of some mass-produced item, which job to take, who to marry. There are many more.

These are all cases where making the decision 100x faster is of little value, because it will take a long time to see whether the decision was good after it is made, and where making a better decision is of high value. (Many of these will also be the hardest tasks for AI to do well on, because there is very little training data about them.)

Why do you think so?

Presumably the people playing correspondence chess think that they are adding something, or they would just let the computer play alone. And it’s not a hard thing to check; they can just play against a computer and see. So it would surprise me if they were all wrong about this.
