Within AI safety policy research, I am focused on helping to sharpen the definitions of "transparency" and "explainability" so that genuinely useful and actionable policy standards can be created in these vital areas. This has been an interest of mine for some time, but it has been renewed by my recent discovery of Alondra Nelson's work (see https://www.ias.edu/sss/faculty/nelson), including her recent presentation at the IASEAI 2026 conference, "Algorithmic Agnotology: On AI, Ignorance, and Power." In it, she argues that current AI industry public discourse intentionally blurs the line between what is truly unknowable or stochastic within AI technology and what companies actually DO know but choose to withhold from the public (e.g., unpublished research and red-team findings, internal monitoring logs, and crucial system card information that only becomes publicly available the day a model is released, preventing pre-release public scrutiny or feedback).
Nelson posits that by keeping these conceptual lines vague in public dialogue, doing little to distinguish uncertainties that are truly stochastic (fundamentally unknowable) from uncertainties that are epistemic (resolvable given sufficient resources and attention), AI companies have succeeded in shaping the public narrative about the nature and extent of AI risks, as well as who, if anyone, should be addressing them. Essentially, by invoking "the spirit of the AI black box problem" regardless of the challenge at hand, unknowability becomes operationalized as a communication strategy for deflecting any risks and public questions that AI companies prefer not to answer with actual evidence.
I highly recommend her presentation: https://www.youtube.com/watch?v=5CRJiLSlywA . Her co-authored book, Auditing AI (MIT Press), will be released on 21 April and is available for preorder: https://amzn.to/4ssGjks