If an AI system (or group of systems) goes wrong in the near term and harms humans in a way that is consistent with, or supportive of, alignment being a big deal, what might that look like?
I'm asking because I'm curious about potential fire-alarm scenarios (including events that simply make AI risks salient to the wider public), and because I'm looking to operationalise a forecasting question currently drafted as:
"By 2032, will we see an event precipitated by AI that causes at least 100 deaths and/or at least $1B (2021 USD) in economic damage?"
so that it has a clear and sensible resolution.
Great point - I'm not sure whether that incident involved aspects similar enough to AI to resolve such a question. This source doesn't think it counts as AI (though it doesn't offer much of an argument for that view), and I can't find any reference to machine learning or AI on the MCAS page. That said, one could clearly use AI tools to develop an automated control system like this, and I don't feel well positioned to judge whether it should count.