I am a student in Philosophy and AI, with a particular interest in philosophy of science, political philosophy, complex systems studies, and enactivist frameworks as they pertain to AI risk, governance, and alignment. A lot of my thinking is shaped by exploring what the study of intelligent behaviour in currently existing natural systems (across scales and substrates) can teach us about the nature, behaviour, and risks of future AI systems. Among other things, I am interested in a generative theory of value and in the "thin" interface between governance and the technical aspects of AI alignment, and I spend a decent chunk of time thinking about the scientific and epistemological challenges specific to AI alignment research, and how to address them.
Going back further, I have also spent a good deal of time thinking about how (bounded) minds make sense of and navigate a (complex) world (rationality, critical thinking, etc.). I have several years of experience in research organization, including work at FHI, CHERI, and Epistea. I have a background in International Relations, and I spent large parts of 2017-2019 doing complex-systems-inspired research on group decision making and political processes, with the aim of building toward an appropriate framework for "Longterm Governance".