Announcing ILIAD — Theoretical AI Alignment Conference

by Nora_Ammann, Alexander Gietelink Oldenziel
5th Jun 2024
AI Alignment Forum
2 min read
18 comments, sorted by top scoring
Chris_Leong (1y)

How are applications processed? Sometimes applications are processed on a rolling basis, so it's important to submit as soon as possible. Other times, you just need to apply by the date, so if you're about to post something big, it makes sense to hold off on your application.

Alexander Gietelink Oldenziel (1y)

We intend to review applications at the end of the submission window, June 30th, but I wouldn't hold off on your application.

Bird Concept (1y)

Sidenote: I'm a bit confused by the name. The all-caps makes it seem like an acronym. But it seems not to be?

gw (1y)

I
Love
Interesting
Alignment
Donferences

Bird Concept (1y)

Ah, that makes sense, thanks.

TsviBT (1y)

Honestly, I prefer undonferences.

Alex_Altair (1y)

How about deconferences?

TsviBT (1y)

idk, sounds dangerously close to deferences

TsviBT (1y)

Insightful
Learning
Implore
Agreed
Delta

Mateusz Bagiński (1y)

Intentional
Lure for
Improvised
Acronym
Derivation

Vanessa Kosoy (1y)

International League of Intelligent Agent Deconfusion

Lorxus (1y)

It's the Independently-Led Interactive Alignment Discussion, surely.

Alex_Altair (1y)

Interactively Learning the Ideal Agent Design

Lorxus (11mo)

> https://www.lesswrong.com/posts/r7nBaKy5Ry3JWhnJT/announcing-iliad-theoretical-ai-alignment-conference#whqf4oJoYbz5szxWc

You didn't invite me, so you don't get to have all the nice things, but I did leave several good artifacts and books I recommend lying around. I invite you to make good use of them!

Alexander Gietelink Oldenziel (11mo)

Thank you Lorxus, that's appreciated. I'm sure we can make good use of them.

Unfortunately, we get many more applications than we have spots, so we have to make some tough choices. Better luck next time!

Lorxus (1y)

https://manifold.markets/Lorxus/will-lorxus-attend-the-iliad-unconf?r=TG9yeHVz

Lorxus (1y)

Also: if I get accepted to come to ILIAD, I am going to make delicious citrus sodas.[1] Maybe I could even run a pair of panels about that?[2] That seemed extremely out of scope, though, so I didn't put it in the application.

[1] Better than you've had before. Like, ever. Yes, I am serious; I've got lost lore. Also, no limit on the flavor, as long as it's a citrus fruit we can go and physically acquire on-site. Also, no need at all for a stove or heating element.

[2] There is a crucially important time-dependent step on the scale of hours, so a matched pair of panels would be the best format.


We are pleased to announce ILIAD — a 5-day conference bringing together 100+ researchers to build strong scientific foundations for AI alignment.

***Apply to attend by June 30!***

  • When: Aug 28 - Sep 3, 2024
  • Where: Lighthaven (Berkeley, US)
  • What: A mix of topic-specific tracks and unconference-style programming, with 100+ attendees. Topics will include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and more to be announced.
  • Who: Currently confirmed speakers include Daniel Murfet, Jesse Hoogland, Adam Shai, Lucius Bushnaq, Tom Everitt, Paul Riechers, Scott Garrabrant, John Wentworth, Vanessa Kosoy, Fernando Rosas and James Crutchfield.
  • Costs: Tickets are free. Financial support is available on a needs basis. 

See our website for details. For any questions, email iliadconference@gmail.com.

About ILIAD

ILIAD is a 100+ person conference about alignment with a mathematical focus. The theme is ecumenical, yet the goal is nothing less than finding the True Names of AI alignment.

Participants may be interested in all tracks, only one or two, or none at all. The unconference format means participants have maximum freedom to direct their own time and energy.

Program and Unconference Format

ILIAD will feature an unconference format, meaning that participants can propose and lead their own sessions. We believe that this is the best way to release the latent creative energies in everyone attending.

That said, freedom can be scary! If taking charge of your own learning sounds terrifying, rest assured there will be plenty of organized sessions as well. We will also run topic-specific workshop tracks, such as:

  • Computational Mechanics is a framework for understanding complex systems by focusing on their intrinsic computation and information processing capabilities. Pioneered by J. Crutchfield, it has recently found its way into AI safety. This workshop is led by Paul Riechers.
  • Singular Learning Theory (SLT), developed by S. Watanabe, is the modern theory of Bayesian learning. SLT studies the loss landscape of neural networks, using ideas from statistical mechanics, Bayesian statistics, and algebraic geometry. The track lead is Jesse Hoogland.
  • Agent Foundations uses tools from theoretical economics, decision theory, Bayesian epistemology, logic, game theory and more to deeply understand agents: how they reason, cooperate, believe and desire. The track lead is Daniel Hermann.
  • Causal Incentives is a collection of researchers interested in using causal models to understand agents and their incentives.  The track lead is Tom Everitt.
  • “How It All Fits Together” turns its attention to the bigger picture — where are we coming from, and where are we going? — under the direction of John Wentworth.  

Financial Support

Financial support for accommodation & travel is available on a needs basis. Lighthaven has capacity to accommodate 60% of participants; note that these rooms are shared.

Mentioned in
  • The Field of AI Alignment: A Postmortem, and What To Do About It
  • Announcing ILIAD2: ODYSSEY
  • Proceedings of ILIAD: Lessons and Progress