I am quila, and have been studying alignment for the past year. 

After first reading the Sequences, as others advised, I have been poring over alignment literature every day since late 2022. I've also been discussing ideas with other alignment researchers on Discord, but so far I have not shared my theoretical work with the broader alignment community.

I think I'm ready to start doing that[1], so here's a post contextualizing my agenda. 

First, I think superintelligence will probably arrive soon. In that case, we may not have enough time to solve alignment from within the 'old framework' of highly optimized agents. Instead, my focus is on a different (but still pivotal) goal: enabling the safe use of unaligned systems to steer reality.

I hope for this to bring Earth to a point where things are roughly okay, and where we have more time to solve the hard problems of aligning powerful agents.

Without this frame, my future posts may at first seem oddly focused on problems outside that scope, such as myopia, performative prediction, and other concepts yet to be named. I hope that, read with the above focus in mind, their connection to this longer-term plan will be clear.

Second, I expect that superintelligent predictive models will eventually be possible to create. Although current predictive models have promising properties, catastrophic failure modes are likely to arise at higher capability levels (e.g., as detailed in 'Conditioning Predictive Models'). My hope here is to develop methods which bridge the safety gap between current and superintelligent models, leaving no free variables whose optimization would affect the world in unexpected ways.

Lastly, a note on why I care to begin with. 

I suffered a lot as a human, and came to feel it is direly important to minimize suffering in other beings (human, animal, or artificial). Solving alignment seems to be the best way to do this in our lightcone and beyond.

There has been some discussion about how future value should be distributed. Although I do have some ideals for what a good universe would look like, they are minor in comparison to my opposition to suffering. 

Therefore, I have few worries about who the eventual ASI is aligned to[2], or about whether they 'follow through on the LDT handshake'. As long as the resulting world minimizes the occurrence of devastating forms of suffering, I will be mostly satisfied.

If you're interested in working together, please reach out to me on Discord (username: quilalove) or Matrix (username: @quilauwu:matrix.org).

  1. ^

    So far, I've ended up not sharing much, for a mix of two reasons: (1) much of my research is now entangled with potential capabilities advances, and (2) writing longform for LW turned out to be very hard for me. That said, I'll try to share progress at my discretion with interested and aligned entities, and you're welcome to reach out if you're reading this.

  2. ^

    This is no longer true, as I see potential for s-risks among humans and their simulations.

Do you have any description of your research agenda, or is this just supposed to provide background?