Thanks for writing this. I’m not sure I’d call your beliefs moderate, since they involve extracting useful labor from misaligned AIs by making deals with them, sometimes paying in pieces of the observable universe or relying on future technology for verification.
On the point of “talking to AI companies”: I think this would be a healthy part of any attempted change, although I notice that PauseAI and other orgs tend to talk to AI companies in a way that seems designed to make them feel bad, by directly stating that what they are doing is wrong. Maybe the line here is “make sure that what you say will still get you invited to conferences,” which is reasonable, but I don’t think “talking to AI companies” captures the difference between you and other forms of activism.
I think you’re pretty severely mistaken about bullshit jobs. You said:
At the start of this post we mentioned “bullshit jobs” as a major piece of evidence that standard “theory of the firm” models of organization size don’t really seem to capture reality. What does the dominance-status model have to say about bullshit jobs?
But there are many counterexamples suggesting it isn’t a real concept. See here for a number of them: https://www.thediff.co/archive/bullshit-jobs-is-a-terrible-curiosity-killing-concept/
How would a military that is increasingly run by AI factor into these scenarios? It seems most similar to the organizational-safety case, à la Google building software with SWEs, but the disanalogy might be that a military AI is explicitly supposed to take over some part of the world, and it may simply have interpreted a command incorrectly. Or does this article only consider the AI taking over because it wanted to take over?
Huh, did you experience any side effects?
I think discernment is not essential to entertainment. If people really want to learn what a slightly out-of-tune piano sounds like, and also pay for expert piano tuning, that’s fine, but I don’t think people should be looked down upon for not having that level of discernment.
How would the agent represent non-coherent others? Humans don’t have entirely coherent goals, and in cases where the agent learns it can satisfy one goal or another, how would it select which one to pursue? Take a human attempting to lose weight, who has both the goal of eating to satisfaction and the goal of not eating. Would the agent give the human food or withhold it?
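To make the question concrete, here’s a minimal toy sketch (in Python, entirely my own framing rather than anything from the post), assuming an agent that just maximizes a weighted sum of the human’s goal-utilities; the goal names, utilities, and weights are all made up:

```python
# Toy illustration: a human with two conflicting goals, and an agent that
# must pick an action. The "resolution" depends entirely on how the agent
# weights the goals, which is the part that seems underspecified.

# Hypothetical utilities for the dieting human, per agent action.
utilities = {
    "give_food":     {"eat_to_satisfaction": 1.0, "stick_to_diet": 0.0},
    "withhold_food": {"eat_to_satisfaction": 0.0, "stick_to_diet": 1.0},
}

# How much the agent cares about each of the human's goals.
# Any choice here is an assumption the agent has to make somehow.
goal_weights = {"eat_to_satisfaction": 0.5, "stick_to_diet": 0.5}

def best_action(utilities, goal_weights):
    """Pick the action with the highest weighted sum of goal utilities."""
    def score(action):
        return sum(goal_weights[g] * u for g, u in utilities[action].items())
    return max(utilities, key=score)

print(best_action(utilities, goal_weights))  # tie broken arbitrarily here
```

With equal weights the two actions tie and the pick is arbitrary, and any unequal weighting just bakes in an answer, which is the part I don’t see how the agent would resolve in a principled way.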
One thing I find weird is that most of these objects of payment are correlated: the best-paying jobs also have the best peers, the most autonomy, and the most fun. Low-paid jobs were mostly drudgery along all axes, in my experience.
Thanks for the summary. Why should this be true?
The fact that sympathy for hedonic utilitarianism is strongly correlated with intelligence is a somewhat worrying datapoint in favor of the plausibility of squiggle-maximizers.
Embracing positive sensory experience as a consequence of higher human intelligence implies a linearity that I don’t think holds among other animals. Are chimps more hedonic-utilitarian than ants, and ants more than bacteria? Human intelligence spans too narrow a range for this to be evidence of what something much smarter would do.
Thank you for writing this. My girlfriend and I would like kids, but I generally try not to bring AI up around her. She got very anxious while listening to an 80,000 Hours podcast on AI, and it seemed generally bad for her. I don't think any of my work will end up making an impact on AI, so I think the C.S. Lewis quote basically applies. Even if you know the game you're playing is likely to end, there isn't much else to do, since there are no valid moves once the new game actually starts.
I did want to ask, how did you think about putting your children in school? Did you send them to a public school?
Isn’t the theory that consultants add value by saying true, obvious things? If you realize you’re surrounded by sycophants, you might need someone you’re sure won’t just tell you that you’re amazing (unless the consultant is also a yes-man and dooms you even harder).