All of MMMaas's Comments + Replies

Nice, thanks for collating these!

Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological 

and somewhat older: 
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-...

Cleo Nardo · 10mo
Thanks! I've included Erik Hoel's and lc's essays. As far as I can tell, your article doesn't actually call for AI slowdown/pause/restraint, and it explicitly guards against that interpretation. But if you've written anything that explicitly endorses AI restraint, I'll include it in the list.
Answer by MMMaas · Nov 30, 2022

Sara Hooker's concept of a 'Hardware Lottery' in early AI research might suit some of your criteria, though it wasn't really a permanent lock-in: http://arxiv.org/abs/2009.06489 

I enjoyed this investigation a lot; it's fascinating to think of the uses to which this could have been put.

You may be interested in a related (ongoing) project I've been working on: a survey of 'paths untaken', i.e. cases of historical technological delay, restraint, or post-development abandonment, which tries to assess their rationales and contributing factors. So far it includes about 160 candidate cases. Many of these need much further analysis and investigation, but you can find the preliminary longlist of cases at https://airtable.com/shrVHVYqGnmAyEGsz/tbl...

Thanks for sharing this! It fits in quite well with an ongoing research project of mine on the history of technological restraint (with lessons for advanced AI). See the primer at https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological and the in-progress list of cases at https://airtable.com/shrVHVYqGnmAyEGsz/tbl7LczhShIesRi0j -- I'll be curious to return to these cases soon.

In case of interest: I've been conducting AI strategy research with CSER's AI-FAR group, including a project to survey historical cases of technological restraint or delay (whether unilaterally decided, coordinated, or externally imposed) and their lessons for AGI strategy, in terms of differential technological development, or 'containment'.

(see longlist of candidate case studies, including a [subjective] assessment of the strength of restraint, and the transferability to the AGI case)
https://airtable.com/shrVHVYqGnmAyEGsz 
This is still in-progress work,...

tamgent · 2y
These are really interesting, thanks for sharing!

Thanks for this in-depth review, I enjoyed it a lot!

As a further distinction within agrarian societies, you might also be interested in this review by Sarah Constantin (https://srconstantin.wordpress.com/2017/09/13/hoe-cultures-a-type-of-non-patriarchal-society/), which discusses how pre-modern cultures that farmed by plow (more productive per unit of land, but requiring intense upper-body strength) ended up with very distinct, and more unequal, gender roles compared to cultures that farmed by hoe (more productive per hour of labour, but requires v...

L Rudolf L · 2y
Thanks for the link to Sarah Constantin's post! I remember reading it a long time ago but couldn't have found it again now if I had tried. It was another thing (along with Morris's book) that made me update towards thinking that historical gender norms are heavily influenced by technology level and type. Evidence that variation in technology type even within farming societies had major impacts on gender norms also seems like fairly strong support for Morris's idea that the even larger variation between farming societies and foragers/industrialists can explain their different gender norms.

John Danaher's work looks relevant to this topic, but I'm not convinced that his idea of collective/individual/artificial intelligence as the ideal types of future axiology space carves it up in the right way. In particular, I have a hard time seeing how you'd summarize historical value changes as movement in the space spanned by these types.