We are seeking a highly motivated and skilled research assistant to join our Research Team at IOL Lab at Zuse Institute Berlin. The successful candidate will work closely with our team on cutting-edge research projects in the field of Numerical Optimisation and Explainable AI.
A background in mathematics, theoretical physics, or computer science is preferred, but we are willing to consider anyone who can credibly demonstrate strong reasoning and programming skills.
Apply by 14 May; the position remains open until a suitable candidate is found.
We want to build a theory that guarantees Interpretability (e.g. in the form of information bounds) for modern AI systems. Can we play AI agents against each other, so they are forced to cooperate with humans? Is it possible that AI systems prove the soundness of their reasoning to us? Can we give interpretations in text form? How robust are these approaches? Do the explanations degenerate when the AIs are incentivised to obscure their reasoning? Can we use a form of error-correction to prevent this?
Check out this paper to see what this might look like.
The position is part of the MATH+ project “Expanding Merlin-Arthur Classifiers - Interpretable Neural Networks through Interactive Proof Systems”. This research project is part of the Emerging Fields Area “Extracting dynamical Laws from Complex Data”. MATH+, the Berlin Mathematics Research Center, is a cross-institutional and interdisciplinary Cluster of Excellence. It sets out to explore and further develop new approaches in application-oriented mathematics. For more information, see: https://mathplus.de.
Write to me (waeldchen@zib.de) for further information. If you know a candidate who might be a good fit for this position, please forward them this announcement.