I agree: taking risks and generally being a 'yes man' is much more likely to result in positive outcomes than taking no action.
But I do wonder: on average, how much of the incentive to seek connection comes from people's actual personal needs and circumstances, and how much from a culture that prescribes an 'Instagram' lifestyle and a huge friendship network as a goal to work towards?
For me, shared interests are the automatic icebreaker that sidesteps the awkwardness, social conventions, and risk; finding a group that already does or discusses what I am interested in makes the whole thing feel effortless, natural, and fulfilling.
Reputation stops being verifiable beyond 2 degrees of separation.
At 1 degree you observed their behavior directly.
At 2 degrees someone you trust observed it.
At 3+ degrees it's pure performance: reviews can be bought, testimonials cherry-picked, social proof manufactured.
Humans broke away from the constraints of Dunbar's limit: while communities stayed small enough, reputation tracking sufficed, but the internet and global connectivity have expanded our effective 'communities' to billions.
This is why every digital reputation system (LinkedIn endorsements, Trustpilot scores, follower counts) fails at scale: each one collapses the verification radius to zero.
We're trying to run reputation-based trust in networks where nobody can actually verify anyone's claims. We rationally offload agency and accountability (for verification of trust) onto institutions that are themselves perpetrators of, and participants in, this incentive-driven, optics-obsessed dysfunction.
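To make that verification radius concrete, here's a quick toy sketch (the graph, names, and labels are all made up, purely illustrative): classify what an observer's trust in someone can actually rest on by hop distance in a social graph.

```python
from collections import deque

def degrees_of_separation(graph, observer, target):
    """BFS hop count from observer to target; None if unreachable."""
    seen = {observer}
    queue = deque([(observer, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for contact in graph.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, dist + 1))
    return None

def verification_basis(graph, observer, target):
    """What can the observer's trust in the target actually rest on?"""
    d = degrees_of_separation(graph, observer, target)
    if d == 1:
        return "direct observation"       # you watched their behaviour yourself
    if d == 2:
        return "trusted witness"          # someone you observed vouches for them
    return "unverifiable performance"     # reviews, scores, follower counts

# Toy network: me -> alice -> bob -> carol
graph = {
    "me": ["alice"],
    "alice": ["me", "bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob"],
}

for person in ["alice", "bob", "carol"]:
    print(person, "->", verification_basis(graph, "me", person))
# alice -> direct observation
# bob -> trusted witness
# carol -> unverifiable performance
```

The point of the sketch is just that the cutoff is structural: no matter how big the network gets, anything past two hops is a claim the observer cannot check.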
Fascinating post, and beautifully written!
What I am struggling to understand without the full context (I am being lazy) is why the introduction of this super AI requires the removal of humans at all. If it can find the cure for cancer, why can it not just go ahead and do that alongside the existing cancer research efforts? It would be madness in any system to immediately remove the incumbent/legacy solution before the replacement is proven; do we really want to pin our hopes for a cancer cure on a single approach, with no hedge?
I appreciate that was not the point of your post at all, but I felt compelled to say it.
More on topic, I would say that I actually sympathise with the views of Dr. Togelius, primarily because I believe that 'purpose' is a fundamental human need for contentment. One might argue that these young researchers could find other fulfilling work, but realistically, the fields those minds would migrate to would be next on the 'chopping block' anyway. AI and robots doing everything for us does not sound appealing whatsoever; in fact, the plot of 'WALL-E' comes to mind. The human race does not thrive in utopia. Much like with certain neurotransmitters such as dopamine, it is the pursuit itself that equals 'happiness'.
The other issue is the speculation/implication that keeping things as they are is somehow akin to directly causing harm. For that to ever be a reasonable stance, one would have to demonstrate with evidence that (1) super AI is guaranteed to find the cure for cancer and (2) continued human research is unnecessary or harmful; unless I have missed something here, neither of those is true.
Back off topic again, but the whole thing seems an unnecessary conversation anyway, given that the 'cure', or at least the means of preventing cancer, is likely already discovered/known but remains unsurfaced due to our current incentive structures around oncology and cancer research. If we spent half the money currently spent on cancer research on resolving issues in research generally, such as the 'replication crisis' or publication bias, it might unironically lead to better outcomes for cancer patients.