We are organizing a symposium on the intersection of algorithmic information theory (AIT) and machine learning, July 27-29 at Oxford!
See the announcement here for details: https://sites.google.com/site/boumedienehamzi/third-symposium-on-machine-learning-and-algorithmic-information-theory
The third iteration of the symposium is particularly focused on applications of AIT to the theory and practice of AI safety. AIXI has long been applied to model the risks of artificial superintelligence (ASI), particularly by MIRI and adjacent agent foundations researchers. It has also been used to suggest mitigations, notably by Michael K. Cohen (https://www.michael-k-cohen.com/publications).
Who should attend. This conference series has so far attracted mostly academics working on AIT or its applications to understanding ML. This iteration adds a focus on AI safety, so a wider range of topics is of interest, such as mathematical approaches to understanding goal generalization, along with any work on AIXI or other rigorous models of ASI. Research on robust RL/ML, imprecise probability, and Infra-Bayesianism is particularly relevant to recent AIXI safety directions.
If any of this sounds interesting and you would like to attend, please complete the interest form (also available through the announcement link above). If your research might be relevant, you can also apply here to give a talk.