I think you are failing to distinguish between "being able to pursue goals" and "having a goal". Optimization is a useful subroutine, but that doesn't mean it is useful for it to be the top-level loop. I can decide to pursue arbitrary goals for arbitrary amounts of time, but that doesn't mean that my entire life is in service of some single objective. Similarly, it seems useful for an AI assistant to try to do the things I ask it to, but that doesn't imply it has some kind of larger master plan.
Professors are selected for being good at research, not for being good at teaching. They are also evaluated on their research, not their teaching. You are assuming universities primarily care about undergraduate teaching, but that is very wrong.
(I’m not sure why this is the case, but I’m confident that it is)
I think you are underrating the number of high-stakes decisions in the world. A few examples: whether or not to hire someone, the design of some mass-produced item, which job to take, who to marry. There are many more.
These are all cases where making the decision 100x faster is of little value, because it takes a long time after the decision is made to see whether it was a good one, and where making a better decision is of high value. (Many of these will also be the hardest tasks for AI to do well on, because there is very little training data about them.)
Why do you think so?
Presumably the people playing correspondence chess think that they are adding something, or they would just let the computer play alone. And it's not a hard thing to check; they can just play against a computer and see. So it would surprise me if they were all wrong about this.
https://www.iccf.com/ allows computer assistance
The idea that all cognitive labor will be automated in the near-future is a very controversial premise, not at all implied by the idea that AI will be useful for tutoring. I think that’s the disconnect here between Altman’s words and your interpretation.
Nate's view here seems similar to "To do cutting-edge alignment research, you need to do enough self-reflection that you might go crazy". This seems really wrong to me. (I'm not sure whether he means that all scientific breakthroughs require this kind of reflection, or that alignment research is special.)
I don’t think many top scientists are crazy, especially not in a POUDA way. I don’t think top scientists have done a huge amount of self-reflection/philosophy.
On the other hand, my understanding is that some rationalists have driven themselves crazy via too much self-reflection in an effort to become more productive. Perhaps Nate is overfitting to this experience?
“Just do normal incremental science; don’t try to do something crazy” still seems like a good default strategy to me (especially for an AI).
Thanks for this write-up; it was unusually clear/productive IMO.
(I'm worried this comment comes off as mean or reductive. I'm not trying to be. Sorry.)
Tim Cook could not do all the cognitive labor needed to design an iPhone (indeed, no individual human could). The CEO of Boeing could not fully design a modern plane. Elon Musk could not make a Tesla from scratch. All of these cases violate all three of your bullet points. Practically everything in the modern world is too complicated for any single person to fully understand, and yet it all works fairly well, because outsourcing cognitive labor routinely succeeds.
It is true that a random layperson would have a hard time verifying an AI's (or anyone else's) ideas about how to solve alignment. But the people who will actually need to incorporate alignment ideas into their work - AI researchers and engineers - will be in a good position to do that, just as they routinely incorporate many other ideas they did not come up with themselves. Using ideas from an AI sounds similar to me to reading a paper from another lab: it could be irrelevant or wrong or even malicious, but it could also contain valuable insights you'd have had a hard time coming up with yourself.
"This is what it looks like in practice, by default, when someone tries to outsource some cognitive labor which they could not themselves perform." This proves way too much. People successfully outsource cognitive labor all the time (this describes most white-collar jobs). This is possible because it is very frequently easier to verify that work has been done correctly than to do the work yourself. You shouldn't blindly trust an AI that claims to have solved alignment (just as you wouldn't blindly trust a human), but that doesn't mean AIs (or other humans) can't do any useful work.
The link at the top points to the wrong previous scenario.