Reducing the risk of catastrophically misaligned AI by avoiding the Singleton scenario: the Manyton Variant
This post does not try to add to the discussion about aligning an AGI/ASI with all human values; instead, it focuses on a smaller, arguably fundamental, subset of human values: prosocial behaviour. (Note: with many agreeing that AGI and ASI will occur so close together, the differentiation between the...