Robert Kralisch

Hey, I am Robert Kralisch, an independent conceptual/theoretical Alignment Researcher. I have a background in Cognitive Science and I am interested in collaborating on an end-to-end strategy for AGI alignment.

The three main branches that I aim to contribute to are conceptual clarity (what we should mean by agency, intelligence, embodiment, etc.), the exploration of more inherently interpretable cognitive architectures, and Simulator theory.

One of my concrete goals is to figure out how to design a cognitively powerful agent such that it does not become a Superoptimiser in the limit. 

Sequences

An Investigation of the Frameworks of “Positive Attractors” and “Inherently Interpretable Architectures” During the AISC 2023

Comments

the maximum plan length is only  steps

You mean the maximum length for an efficient/minimal plan, right? Maybe good to clarify (even if obvious in this case). Just a thought.

I believe that it is very sensible to bring this sort of structure into our approach to AGI safety research, but at the same time it seems very clear that we should update that structure to the best of our ability as we make progress in understanding the challenges and potentials of different approaches. 

It is a feedback loop where we make each step according to our best theory of where to make it, and use the understanding gleaned from that step to update the theory (when necessary), which could well mean that we retrace some steps and recalibrate (this can be the case within and across questions). I think this connects to what both Charlie and Tekhne have said, though I believe Tekhne could have been more charitable.

In this light, it makes sense to emphasize the openness of the theory to being updated in this way, which also qualifies the ways in which the theory is allowed to remain incomplete for now. Putting more effort into clarifying what this update process should look like seems like a promising addition to the framework that you propose.

On a more specific note, I felt that Q5 could simply be moved to position 2, and perhaps a sixth question could be "What is the predicted timeline for stable safety/control implementations?" or something of the sort.

I also think that phrasing our research in terms of "avoiding bad outcomes" and "controlling the AGI" biases the way in which we pay attention to these problems. I am sure that you will also touch on this in the more detailed presentation of these questions, but at the resolution presented here, I would prefer the phrasing to be more open. 
"Aiming at good outcomes while/and avoiding bad outcomes" captures more conceptual territory, while still allowing for the investigation to turn out that avoiding bad outcomes is more difficult and should be prioritised. This extends to the meta-question of whether existential risk can be best adressed by focusing on avoiding bad outcomes, rather than developing a strategy to get to good outcomes (which are often characterised by a better abilitiy to deal with future risks) and avoid bad outcomes on the way there. It might rightfully appear that this is a more ambitious aim, but it is the less predisposed outlook! Many strategy games are based on the idea that you have to accumulate resources and avoid losses while at the same time improving your ability to accumulate resources and avoid losses in the future. Only focusing on the first aspect is a specific strategy in the space of possible ones, and often employed when one is close to losing. This isn't a perfect analogy in a number of ways, but serves to point out the more general outlook.
Similarly, we expect a superintelligent AGI to be beyond our ability to control at some point, which invokes notions of "self-control" on the part of the AGI or "justified trust" on our part - therefore, perhaps "influencing the development of the AGI" would be better, as, again, "influence" covers more conceptual ground but can still be hardened into the more specific notion of "control" when appropriate.