Anecdotal example of trade with ants (from a house in Bali, as described by David Abrams):
The daily gifts of rice kept the ant colonies occupied–and, presumably, satisfied. Placed in regular, repeated locations at the corners of various structures around the compound, the offerings seemed to establish certain boundaries between the human and ant communities; by honoring this boundary with gifts, the humans apparently hoped to persuade the insects to respect the boundary and not enter the buildings.
if you are smarter at solving math tests where you have to give the right answer, then that will make you worse at e.g. solving math "tests" where you have to give the wrong answer.
Is that true though? If you're good at identifying right answers, then by process of elimination you can also identify wrong answers.
I mean sure, if you think you're supposed to give the right answer then yes you will score poorly on a test where you're actually supposed to give the wrong answer. Assuming you get feedback, though, you'll soon learn to give wrong answers and then the previous point applies.
There’s a trap here where the more you think about how to prevent bad outcomes from AGI, the more you realize you need to understand current AI capabilities and limitations, and to do that there is no substitute for developing and trying to improve current AI!
A secondary trap is that preventing unaligned AGI probably will require lots of limited aligned helper AIs which you have to figure out how to build, again pushing you in the direction of improving current AI.
The strategy of “getting top AGI researchers to stop” is a tragedy of the commons: they can be replaced by other researchers with fewer scruples. In principle a tragedy of the commons can be solved, but it’s hard. Assuming that effort succeeds, how feasible would it be to set up a monitoring regime to prevent covert AGI development?
“no free lunch in intelligence” is an interesting thought, can you make it more precise?
Intelligence is more effective in combination with other skills, which suggests “free lunch” as opposed to tradeoffs.
Young kids don’t make a clear distinction between fantasy and reality. The process of coming to reject the Santa myth helps them clarify the distinction.
It’s interesting to me that young kids function as well as they do without the notions of true/false, real/pretend! What does “belief” even mean in that context? They change their beliefs from minute to minute to suit the situation.
Even for most adults, most beliefs are instrumental: We only separate true from false to the extent that it’s useful to do so!
Thanks for the comment!
I know you are saying it predicts *uncertainly*, but we still have to have some framework to map uncertainty to a state; we have to round one way or the other. If uncertainty avoids loss, the predictor will be preferentially inconclusive all the time.
There's a standard trick for scoring an uncertain prediction: It outputs its probability estimate $p$ that the diamond is in the room, and we score it with loss $-\log p$ if the diamond is really there and $-\log(1-p)$ otherwise. Truthfully reporting $p$ minimizes its expected loss.
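(For concreteness, a sketch of that calculation, written with log loss; any strictly proper scoring rule, e.g. the Brier score, gives the same conclusion. If the true probability that the diamond is in the room is $q$ and the reporter outputs $p$, its expected loss is
$$L(p) = -q\log p - (1-q)\log(1-p),$$
and setting $L'(p) = -\tfrac{q}{p} + \tfrac{1-q}{1-p} = 0$ gives $p = q$: truthful reporting is the unique minimizer.)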
So we could sharpen case two and say that sometimes the AI's camera intentionally lies to it on some random subset of scenarios.
You're saying that giving it less information (by replacing its camera feed with a lower quality feed) is equivalent to sometimes lying to it? I don't see the equivalence!
if you overfit on preventing human simulation, you let direct translation slip away.
That's an interesting thought, can you elaborate?
"Train the predictor on lots of cases until it becomes incredibly good; then train the reporter only on the data points with missing information, so that it learns to do direct translation from the predictor to human concepts; then hope that reporter continues to do direct translation on other data points.”
That's different from what I had in mind, but better! My proposal had two separate predictors, and what it did was reduce the human ↔ strong-predictor OI problem (OI = “ontology identification”, defined in the ELK paper) to the weak-predictor ↔ strong-predictor OI problem. The latter problem might be easier, but I certainly don’t see how to solve it!
Your version is better because it bypasses the OI problem entirely (the two predictors are the same!).
Now for the problem you point out:
The problem as I see it is that once the predictor is good enough that it can get data points right despite missing crucial information,
Here’s how I propose to block this. Let $v$ be a high-quality video and $a$ an action sequence. Given this pair, the predictor outputs a high-quality video $v'$ of its predicted outcome. Then we downsample $v$ and $v'$ to low-quality $w$ and $w'$, and train the reporter on the tuple $(w, a, w', \ell)$, where $\ell$ is the human label informed by the high-quality $v$ and $v'$. (A minimal code sketch of this pipeline appears after the list below.)
We choose training data such that
1. The human can label perfectly given the high-quality data $v$ and $v'$; and
2. The predictor doesn't know for sure what is happening from the low-quality data $w$ and $w'$ alone.
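Here is a minimal sketch of that data-construction step. Every name in it is a hypothetical stand-in I'm introducing for illustration (none of it comes from the ELK paper): `predict` maps a high-quality video and action sequence to the predicted high-quality outcome, `downsample` degrades a video, `human_label` is the label a human assigns from the high-quality pair, and `human_certain` / `predictor_uncertain` implement selection criteria 1 and 2 above.

```python
# A minimal sketch (not the ELK authors' code) of the proposed data-construction step.
# All helper callables are hypothetical stand-ins for the components described above.
from typing import Any, Callable, List, Tuple

Video = Any        # stand-in type for a video
Actions = Any      # stand-in type for an action sequence
Example = Tuple[Video, Actions, Video, int]   # (w, a, w', label)


def build_reporter_dataset(
    raw_data: List[Tuple[Video, Actions]],
    predict: Callable[[Video, Actions], Video],
    downsample: Callable[[Video], Video],
    human_label: Callable[[Video, Video], int],
    human_certain: Callable[[Video, Video], bool],
    predictor_uncertain: Callable[[Video, Actions, Video], bool],
) -> List[Example]:
    """Build (w, a, w', label) tuples for training the reporter."""
    dataset: List[Example] = []
    for v, a in raw_data:
        v_pred = predict(v, a)                         # high-quality predicted outcome v'
        w, w_pred = downsample(v), downsample(v_pred)  # low-quality w and w'
        # Criterion 1: the human can label the episode perfectly from (v, v').
        # Criterion 2: the predictor is genuinely uncertain from (w, a, w') alone.
        if human_certain(v, v_pred) and predictor_uncertain(w, a, w_pred):
            dataset.append((w, a, w_pred, human_label(v, v_pred)))
    return dataset
```

Passing the helpers in as callables is only to keep the sketch self-contained; the substantive content is the shape of the training tuples and the two filters.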
Let’s compare the direct reporter (which truthfully reports the probability that the diamond is in the room, as estimated by the predictor, which only has the low-quality data) with the human simulator.
The direct reporter will not get perfect reward, since the predictor is genuinely uncertain. Sometimes the predictor’s probability is strictly between 0 and 1, so it gets some loss.
But the human simulator will do worse than the direct reporter, because it has no access to the high-quality data. It can simulate what the human would predict from the low-quality data, but that is strictly worse than what the predictor predicts from the low-quality data.
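(To spell out why, a sketch using the scoring rule above: write $q^*$ for the Bayes posterior probability that the diamond is in the room given only the low-quality data. The expected loss of reporting any probability $r$ is
$$\mathbb{E}[\text{loss}] = -q^*\log r - (1-q^*)\log(1-r) = H(q^*) + D_{\mathrm{KL}}\!\big(\mathrm{Ber}(q^*)\,\big\|\,\mathrm{Ber}(r)\big),$$
which is minimized exactly at $r = q^*$. Assuming the well-trained predictor's estimate is close to $q^*$, the direct reporter pays only the irreducible $H(q^*)$, while the human simulator reports the human's estimate $r_H$ from the same low-quality data and pays an extra $D_{\mathrm{KL}}(\mathrm{Ber}(q^*)\,\|\,\mathrm{Ber}(r_H)) > 0$ whenever $r_H \neq q^*$.)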
I agree that we still have to “hope that reporter continues to do direct translation on other data points”, and maybe there’s a counterexample that shows it won’t? But at the very least the human simulator is no longer a failure mode!
I agree this is a problem. We need to keep it guessing about the simulation target. Some possible strategies:
None of these randomization strategies is foolproof in the worst case. But I can imagine proving something like "the model is exponentially unlikely to learn an $H$ simulator" where $H$ is now the full committee of all 100 experts. Hence my question about large deviations.
You're saying AI will be much better than us at long-term planning?
It's hard to train for tasks where the reward is only known after a long time (e.g. how would you train for climate prediction?)
Yep, it's a funny example of trade, in that neither party is cognizant of the fact that they are trading!
I agree that Abrams could be wrong, but I don't take the story about "spirits" as much evidence: A ritual often has a stated purpose that sounds like nonsense, and yet the ritual persists because it confers some incidental benefit on the enactor.