Hi, I've been learning about alignment and am new to LessWrong. Here's my question.
There seems to be a consensus here that AI couldn't be used to solve the problem of AI control per se. That said, is there any discussion or literature on whether a future AI might be able to generate a highly impactful political strategy which, if enacted, would engineer a sociopolitical situation in which humans have better prospects for solving the problems around AGI?
This question came to mind while discussing how, in principle, there should be some way to string together words (and tone, body language, etc.) that would convince anyone of anything. Likewise, there are, in principle, sequences of actions that would shift society/culture to any arbitrary state. Most of these strategies are far outside the range of what a human could come up with, but a smarter AI might be able to find them, or more generally have very intelligent ideas that humans can't come up with, as Robert Miles helped illustrate to me in this video (https://youtu.be/L5pUA3LsEaw?t=359).
Thanks for the response. I did consider this objection, but wouldn't it be obvious if the AI were trying to engineer a situation different from the one requested? E.g., wouldn't such a strategy seem unrelated and unconventional?
It also seems that a hypothetical AI with just enough capability to generate a strategy for the desired situation would not be able to engineer a strategy for a different situation that would both work and deceive the human actors; the latter seems harder and would require an AI with greater capability.