By mutual benefits, I mean activities, endeavours, and designs that both future AGI(s) and future human groups, or society at large, could invest in and that would yield net returns for both.

The most commonly mentioned benefits are those where AGI would enhance existing human activities — statistical analysis, stock markets, forecasting, optimization, and so on — activities that would be carried on with or without AGI involvement. In these cases, AGI would merely be a supplement.

However, what could collaboration bring about that nothing else can?

It would be nice to hear some thoughts on what novel developments might be possible. (Also, in what ways could AGI(s) benefit, beyond gaining more compute? There would likely be synergistic effects.)

4 comments

I'm not sure I understand how you want your answers shaped. Why does it need to be AGI? Are these new activities happening in a radically transformed world run by AGIs, or are we just imagining cool things to do with an AGI in today's world?

I intended to ask what we cannot do at present that might become possible with the help of AGIs.

I second Charlie Steiner's questions, and add my own: why collaboration? A nice property of an (aligned) AGI would be that we could defer activities to it... I would even say that the full extent of "do what we want" at superhuman level would encompass pretty much everything we care about (assuming, again, alignment).

Because human deference is usually conditioned on motives beyond deferring for its own sake. Even in that case, then, some collaboration would still be needed.