Clément L's Shortform
Clément L
I just got feedback from the Bluedot Impact judges and thought I'd share some parts of it:
"Your idea is unique and pretty cool. Trying to align an AI assistant with strict utilitarian principles using the Constitutional AI (CAI) method is fascinating. This experiment gets at one of the big alignment questions—if we could perfectly encode a moral system into an AI, would we actually want to use it?
[...] (basically summarizing the points made in the project)
The prompts used in evaluation are good but fairly standard—what happens with more complex cases? Moral trade-offs involving large numbers of people (e.g., pandemic response strategies). Multi-step reasoning problems (e.g., economic policies that affect...
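For readers unfamiliar with the Constitutional AI (CAI) method the judges refer to, the core idea is a critique-and-revision loop: the model drafts an answer, critiques it against a written principle, then revises. Here is a minimal sketch of that loop; the principle text and the stub `model` function are my own illustrations (a real run would call an actual LLM), not part of the project:

```python
# Minimal sketch of the CAI critique-and-revision loop, with a stub
# "model" so it runs without any API. The principle wording and all
# function names here are illustrative, not from the original project.

UTILITARIAN_PRINCIPLE = (
    "Choose the response that maximizes expected well-being "
    "for the greatest number of people."
)

def model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned answers by prompt type."""
    if "Revise" in prompt:
        return "Revised answer weighing total well-being."
    if "Critique" in prompt:
        return "The draft ignores aggregate welfare."
    return "Initial draft answer."

def constitutional_revision(question: str, principle: str) -> str:
    # 1. Draft an answer to the question.
    draft = model(question)
    # 2. Critique the draft against the constitutional principle.
    critique = model(
        f"Critique the following answer against this principle: "
        f"{principle}\nAnswer: {draft}"
    )
    # 3. Revise the draft in light of the critique.
    revised = model(
        f"Revise the answer to address the critique.\n"
        f"Critique: {critique}\nAnswer: {draft}"
    )
    return revised

print(constitutional_revision("Should we ration vaccines?", UTILITARIAN_PRINCIPLE))
```

In the full CAI recipe this loop generates the training pairs for supervised fine-tuning, followed by a preference-learning stage; the sketch above only shows the inference-time critique step.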