"This study identifies 61 occasions across Australia, Canada, Europe, New Zealand and the United States when ADS [automated decision systems] projects were cancelled or paused."

The study gives a rundown of what sorts of systems were cancelled (or banned), how the cancellations came about (e.g. via external legal action or an internal decision), and the role of "civil society critique" in the decision-making.

"In combination, these findings suggest there are competing understandings of acceptable data practices. Given the civil society mobilisation, legal challenges and the number of interventions by oversight bodies, our research suggests that there are competing understandings about effectiveness, impact, accountability and how data systems can infringe people’s rights across areas of application. A key issue is that it takes a lot of work and resources to challenge a data system once in place."

1 comment:

Thanks for sharing this! This fits quite well with an ongoing research project I've been doing on the history of technological restraint (with lessons for advanced AI). See the primer at https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological and the in-progress list of cases at https://airtable.com/shrVHVYqGnmAyEGsz/tbl7LczhShIesRi0j -- I'll be curious to return to these cases soon.