(I'll admit, there's another reason programmers might want to use AI even if it did make them worse at their jobs: it outsources some of the most unpleasant programming labor. Even if it's slower, it's worth it in their eyes because the experience of programming feels better when they aren't spending time on tasks they don't enjoy, like typing out code changes they had already figured out in their head.)
Basically, that's proposing to take the programmer job description and shift it from hands-on ("write the code yourself") to hands-off ("review and adjust the code the AI agents are writing").
Many people currently employed as programmers actually do enjoy the hands-on part, and I suspect that even those doing it mostly for the money tend to like it more than code reviews. Code reviews probably sit just below writing documentation and writing tests on the list of things most programmers don't like doing.
Now, personally I don't mind writing tests or doing code reviews, probably more so than most people I've worked with, and yet if that's what the job morphs into, I'll probably change careers, circumstances permitting.
What I could see is that, as the job description changes, the kind of people who get into the job also changes with it. And there are certainly people who do think like you describe. Just not many of them in my bubble.
Do you have an example of that? It seems to me you're describing a circular process, in which you'd naturally look for stable equilibria. Basically prediction will influence action, action will influence prediction, something like that. But I don't quite get how the circle works.
Say I'm the agent faced with a decision. I have some options, I think through the possible consequences of each, and I choose the option that leads to the best outcome according to some metric. I feel it would be fair to say that the predictions I'm making about the future determine which choice I'll make.
What I don't see is how the choice I end up making influences my prediction about the future. From my perspective the first step is predicting all possible futures and the second step is executing the action that leads to the best future. Whatever option I end up selecting, it was already reasoned through beforehand, as were all the other options I ended up not selecting. Where's the feedback loop?
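To be concrete, here's a rough sketch of how I picture the process; the options and scores are made up, but the shape is the point: prediction happens first, then the action is just picked from those predictions, and nothing feeds back.

```python
# Sketch of a one-shot decision: predict every option's future, then act.
# Nothing here feeds the chosen action back into the predictions.

def predict_outcome(option):
    # Placeholder world model that scores the future each option leads to.
    scores = {"walk": 3, "drive": 5, "stay_home": 1}
    return scores[option]

def decide(options):
    # Step 1: predict the future for every option.
    predictions = {opt: predict_outcome(opt) for opt in options}
    # Step 2: execute the option whose predicted future scores best.
    return max(predictions, key=predictions.get)

print(decide(["walk", "drive", "stay_home"]))  # -> "drive"
```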