Nora_Ammann

Current: 

  • Director and Co-Founder of "Principles of Intelligent Behaviour in Biological and Social Systems" (pibbss.ai)
  • Research Affiliate and PhD student with the Alignment of Complex Systems group, Charles University (acsresearch.org)

Main research interests: 

  • How can a naturalized understanding of intelligent behavior (across systems, scales and substrates) be translated into concrete progress towards making AI systems safe and beneficial? 
  • What are scientific and epistemological challenges specific to making progress on questions in AI risk, governance and safety? And how can we overcome them?

Other interests:

  • Alternative AI paradigms and their respective capabilities-, safety-, and governability-profiles
  • The dual (descriptive-prescriptive) nature of the study of agency and the sciences of the artificial
  • Pluralist epistemic perspective on the landscape of AI risks
  • The "think" interface between technical and governance aspects of AI alignment
  • ...and more general ideas from philosophy & history of science, political philosophy, complex systems studies, and (broadly speaking) enactivist theories of cognition, insofar as they are relevant to questions in AI risk/governance/safety

Going back further, I have also spent a bunch of time thinking about how (bounded) minds make sense of and navigate a (complex) world (i.e. rationality, critical thinking, etc.). I have several years of experience in research organization, including work at FHI, CHERI, and Epistea. I have a background in International Relations, and spent large parts of 2017-2019 doing complex-systems-inspired research on group decision making and political processes, with the aim of building towards an appropriate framework for "Longterm Governance".

Sequences

The Value Change Problem (sequence)
Thoughts in Philosophy of Science of AI Alignment

Wiki Contributions

Comments

Yes, we upload them to our YouTube account, modulo the speaker agreeing to it. The first few recordings from this series should be uploaded very shortly.

While I don't think it's so much about selfishness as such, I think this points at something important, also discussed e.g. here: The self-unalignment problem


Does it seem like I'm missing something important if I say "Thing = Nexus" gives a "functional" explanation of what a thing is, i.e. it serves the function of being an "inductive nexus of reference"? This is not a foundational/physicalist/mechanistic explanation, but it is very much a sort of explanation that I can imagine being useful in some cases/for some purposes.

I'm suggesting this as a possibly different angle on "what sort of explanation is Thing=Nexus, and why is it plausibly not fraught despite its somewhat-circularity?" It seems like it maps onto / doesn't contradict anything you say (note: I only skimmed the post so might have missed some relevant detail, sorry!), but I wanted to check whether, even if not conflicting, it misses something you think is or might be important somehow.

Yeah, would be pretty keen to see more work trying to do this for AI risk/safety questions specifically: contrasting what different lenses "see" and emphasize, and what productive critiques they have to offer each other.

Over the last couple of years, valuable progress has been made towards stating the (more classical) AI risk/safety arguments more clearly, and I think that's very productive for leading to better discourse (including critiques of those ideas). I think we're a bit behind on developing clear articulations of the complex systems/emergent risk/multi-multi/"messy transitions" angle on AI risk/safety, and also that progress on this would be productive on many fronts.

If I'm not mistaken there is some work on this in progress from CAIF (?), but I think more is needed. 

To follow up on this, we'll be hosting John's talk on Dec 12th, 9:30AM Pacific / 6:30PM CET.

Join through this Zoom Link.

Title: AI would be a lot less alarming if we understood agents

Description:  In this talk, John will discuss why and how fundamental questions about agency - as they are asked, among others, by scholars in biology, artificial life, systems theory, etc. - are important to making progress in AI alignment. John gave a similar talk at the annual ALIFE conference in 2023, as an attempt to nerd-snipe researchers studying agency in a biological context.

--

To be informed about future Speaker Series events, subscribe to our SS Mailing List here. You can also add the PIBBSS Speaker Events to your calendar through this link.
 

I have no doubt Alexander would shine!

Happy to run a PIBBSS speaker event for this, record it and make it publicly available. Let me know if you're keen and we'll reach out to find a time.

FWIW I also think the "Key Phenomena of AI risk" reading curriculum (h/t TJ) does some of this, at least indirectly (it doesn't set out to directly answer this question, but I think a lot of the answers to the question are contained in the curriculum).

(Edit: fixed link)

How confident are you about it not having been recorded? If not very, it seems probably worth checking again.


Re whether messy goal-seekers can be schemers: you may address this in a different place (if so, forgive me, and I'd appreciate you pointing me to where), but I keep wondering what notion of scheming (or deception, etc.) we should be adopting, in particular:

  • an "internalist" notion, where 'scheming' is defined via the "system's internals", i.e. roughly: the system has goal A, acts as if it has goal B, until the moment is suitable to reveal it's true goal A.
  • an "externalist" notion, where 'scheming' is defined, either, from the perspective of an observer (e.g. I though the system has goal B, maybe I even did a bunch of more or less careful behavioral tests to raise my confidence in this assumption, but in some new salutation, it gets revealed that the system pursues B instead)
  • or an externalist notion defined via the effects on the world that manifest (e.g. from a more 'bird's-eye' perspective, we can observe that the system had a number of concrete (harmful) effects on one or several agents via the mechanism that those agents misjudged what goal the system is pursuing, and therefore e.g. mispredicted its future behaviour and based their own actions on this wrong assumption)

It seems to me like all of these notions have different upsides and downsides. For example:

  • the internalist notion seems (?) to assume/bake into its definition of scheming a high degree of non-sphexishness/consequentialist cognition
  • the observer-dependent notion comes down to being a measure of the observer's knowledge about the system 
  • the effects-on-the-world based notion seems plausibly too weak/non-mechanistic to be helpful in the context of crafting concrete alignment proposals/safety tooling

Yeah neat, I haven't yet gotten to reading it but it's definitely on my list. It seems (and some folks have suggested to me) that it's quite related to the sort of thing I'm discussing in the value change problem sequence too.
