OISs
Idea:
There is a paradigm missing from the discussion of ASI misalignment existential risk. The threat of ASI generalizes to the concept of "Outcome Influencing Systems" (OISs). My hope is that developing terminology and formalism around this model may mitigate the issues associated with existing terminology and aid in more productive discourse and interdisciplinary research applicable to ASI risk and social coordination issues.
Links:
- WIP document to become main LW post introducing OISs (comments welcome)
My involvement:
I am currently the only contributor. I think the idea has merit, but I am still at the point where I am seeking either to find collaborators and spread the idea, or to find people who can point out enough flaws in the idea for it to be worth abandoning.
Question about the OIS research direction: What sort of principles might you hope to learn about outcome influencing systems which could help with the problems of ASI?
It seems to me that the problem isn't OISs in general. We have plenty of safe/aligned OISs, including some examples from your OIS doc such as thermostats. Even a lot of non-AI software systems seem like safe/aligned OISs. The problem seems to be with this specific type of OIS (powerful deep-learning-based models), which is both much more capable and much harder to ensure safe behavior from than software with legible code. (I read your doc quickly, so I may have missed an answer to this which you already provide in there.)
--
Also, thanks for putting forth a novel research direction for addressing AI risk. I think we should have a high bar for which research directions we put scarce resources into, but at the same time it's super valuable to propose new research directions. What we are looking for could just be low-hanging fruit in the unexplored idea space, overlooked by all the people who are working on more established stuff.
Thank you for engaging : )
As per your note about directions and scarce resources, I agree. I hope the OIS agenda is not a waste of time, and if it is, I hope you can help me identify that quickly! Scout mindset.
Sorry, in trying to respond to your question I wrote quite a bit. Feel free to skim it and see if any of it hits the mark of what you were trying to get at. Sorry if it feels like a repeat of things I already said in my doc.
First, not a principle, but an example of the use of the OIS lens: you identified "powerful deep-learning based models" as the dangerous type of OIS, and I agree they are dangerous, but in the same way C4 is dangerous. It isn't dangerous on its own, but becomes so once it is a component or tool in the creation of a new OIS. Is this an obvious or trivial thing to point out? It might be, but I think it is good to have language that makes clear what we are talking about. More on this below.
I'll describe 3 ways that proceeding with OIS theory might be helpful:
Characterizing and Classifying OISs:
As you noted, many OISs are safe. (As an aside, I might disagree that they are aligned; rather, their alignment is "sufficient" WRT their capabilities.) But the safety of other OISs is much less clear, such as market dynamics and cult-like social movements, and there are examples of OISs that are strictly harmful, such as disease and addiction.
I think the fact that there are many kinds of OISs, which we understand at very different levels of detail, is a great opportunity for learning from examples. If we can create a formalism of OISs and classify the similarities and differences between the kinds of OISs that appear in different fields of study, we may be able to characterize the properties of different OISs and where they do and do not apply. That would provide a key for translating between the formalisms of the many fields that study particular kinds of OISs, which could hopefully be leveraged into understanding progressively more complicated and intractable kinds of OISs, with the goal of eventually understanding ASI well enough to build it safely.
I don't believe I have made much progress on this characterization work yet. I'm more in the process of identifying important classes of properties, such as the substrate an OIS is composed of, or its interconnection with other OISs. I hope this work will lead to classifications of OISs about which we can make special statements that generalize to all OISs in that class.
Exploring and Explaining the Strategic Situation:
I think spreading and using the OIS lens might help people consider and communicate about AI risks with more clarity. Here are 3 aspects to the OIS lens that might make it helpful for the strategic situation surrounding AI risk:
i. Dense Interconnection:
In exploring OISs it becomes clear that OISs intersect one another and are composed of one another in complicated ways. I won't get into too much detail here, but note that a person is an OIS and may act (among other roles) as an employee on a team, which is an OIS, within a company, which is yet another OIS.
This quickly becomes quite complicated in a system-dynamics kind of way, but I think it is important to understand, even if we cannot identify all OISs or model their dynamic interplay.
ii. Preference Independence:
Even though two OISs may be interconnected, say, an employee and their organization, that does not imply any relationship between the preferences of the two. The organization may be working to put the employee out of a job even as the employee contributes to that effort. In this situation the employee may be acting to obtain money as an instrumental goal and find themselves forced to act against their own long-term interests.
Another example I like is the "dysfunctional romance". Each of the lovers' preferences would be better served if they ended the relationship, but the relationship itself is an OIS that seems to prefer that the two people suffer.
The bottom line is that people building or contributing to an OIS does not immediately imply that the OIS is capable, or sufficiently aligned WRT its capabilities. That needs to be proven by identifying the preference encoding and how it is acted upon by the system, or, more weakly, it can be inferred empirically.
iii. Simple, Formal, Well Defined:
I think there is some degree to which the disorganization of AI terminology harms our ability to coordinate and to inform stakeholders about the situation. I feel that if a sufficiently simple core model of an OIS could be defined, with examples, this would allow better communication. Confusion from people having different, fuzzy ideas of what terms like "intelligence" or "AGI" mean could be avoided. My hope is for OIS terminology to be mostly unrelated to current terminology, avoiding the ambiguity of further overloading existing terms, and to carve up the space of relevant concepts in a way that is more useful than the way existing words do.
Formalizing Preference Encodings and their Related Semantic Mappings:
In a thermostat, the preferences are encoded in the position of the temperature control knob. It was designed that way, with the preferences and capabilities clearly isolated from one another, but with neural nets, humans, and human organizations, the situation is much more complicated. Is it hopeless to try to divine where and what the preferences are?
I don't think so. First, the question can be approached empirically, approximately, and statistically. Additionally, I think Mechanistic Interpretability (MI) and unsupervised ML may help us make sense of this previously unanswerable question. I am currently contextualizing neural networks based on the way I approach them in my MI work: thinking of them as mappings between semantic spaces.
( I expanded on the idea of semantic spaces in "Zoom Out: Distributions in Semantic Spaces". )
In the example of a cat-dog classifier, the input is in image space, specifically two distributions in image space: the distribution of images of cats and the distribution of images of dogs. The semantics of this space is the amount of red, green, and blue in each pixel. This is a very useful semantics for how cameras and computer screens work, but it is very bad for determining whether a picture is of a cat or a dog. For this reason the network is trained to act as a semantic map from image space to a cat-dog likelihood space.
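To make that framing concrete, here is a minimal, purely illustrative PyTorch sketch of a toy classifier viewed as a semantic map from image space to a cat-dog likelihood space. The model, its name, and its sizes are placeholders of mine, not anything taken from the OIS doc:

```python
import torch
import torch.nn as nn

class CatDogMap(nn.Module):
    """Toy network viewed as a semantic map: image space -> cat/dog likelihood space."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(        # extracts features from RGB pixel values
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 2)           # maps features to [cat, dog] logits

    def forward(self, rgb_image: torch.Tensor) -> torch.Tensor:
        # rgb_image lives in "image space": semantics = red/green/blue per pixel
        z = self.features(rgb_image).flatten(1)
        # the output lives in "cat-dog likelihood space": semantics = class plausibility
        return self.head(z)

model = CatDogMap()
batch = torch.rand(4, 3, 64, 64)              # four random stand-in "images"
likelihoods = model(batch).softmax(dim=-1)    # each row is roughly [P(cat), P(dog)]
print(likelihoods.shape)                      # torch.Size([4, 2])
```

The point of the sketch is only that the same tensor of numbers carries very different semantics at the input and at the output; training is what produces the map between the two spaces.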
I feel that applying this concept to the OIS lens, especially in combination with unsupervised methods which create mappings to spaces with rich semantics, may let us get clearer on the separation between preferences and capabilities even in places where doing so seems impossible, such as in policy networks trained by RL, or even in searching for human preferences encoded in brain scans.
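For contrast, here is a minimal sketch of the thermostat case from above, where the preference encoding (the setpoint) is separated by design from the capability (the control logic). The code and its names are hypothetical and exist only to make that separation concrete; nothing like it can be read directly off a trained policy network:

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    """A toy OIS: the preference is one legible number, separate from the capability."""
    setpoint_c: float          # preference encoding: the position of the knob
    hysteresis_c: float = 0.5  # tolerance band before switching the heater

    def act(self, room_temp_c: float, heater_on: bool) -> bool:
        """Capability: decide whether the heater should run to influence the outcome."""
        if room_temp_c < self.setpoint_c - self.hysteresis_c:
            return True
        if room_temp_c > self.setpoint_c + self.hysteresis_c:
            return False
        return heater_on  # inside the tolerance band, keep the current state

# Reading off the preference here is trivial: it is the setpoint field.
t = Thermostat(setpoint_c=21.0)
print(t.act(room_temp_c=18.0, heater_on=False))  # True: push the world toward 21 °C
```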
It might be a good idea to share drafts with friends for feedback prior to publishing. Funders tend to have limited time to evaluate projects, and I personally found it difficult to judge the merit of these proposals based purely on the descriptions you've provided. It's alright to share early pieces of work, but you might have too many dangling TODOs, which does not inspire confidence and trust in your ability to reliably produce high quality outputs. If there were a prototype you could share of the interface you describe, or other concrete evidence backing your technical skill or prior visible impact, that would add to the credibility of your stated plans.

That said, I took a look at your comment history and came away with a better impression than if I'd stopped with this post, and I'd encourage you to slow down and keep updating it, with a focus on adding structure as you flesh out your thoughts and put more work into grounding them in data.

It's not clear to me how much funding you're asking for, over what period, and which options you've already explored (Manifund? OpenPhil? other support from existing institutions?). If you provided your contact information when asking people to contact you, that would make it easier for them to get in touch.

I'm sure all of these issues of presentation can be worked out as you acquire more experience and get tangible results to point at, and it's quite understandable to not have everything sorted when you're trying to move quickly. Best of luck!
I agree with almost everything you've said, and I'm grateful for your feedback!
I am hoping to replace those TODOs tomorrow. I wasn't really expecting any funders to stumble across this post before then, but I suppose you're right: if I don't want people seeing it, I shouldn't publish it. I guess it's because I'm thinking of this as more of a living document, so I'm leaving this post up, but in the future I'll be more hesitant to publish before reaching a reasonable level of completion.
I think these are really good points I should address:
Thanks again for your feedback : )
This page is an index of the projects I am working on or contributing to. I plan to keep it up to date as I continue working on various things.
I am actively looking for funding to support my work on these projects, or for roles working on similar concepts. Ideally I would like funding as an independent researcher and software developer, publishing my research on LessWrong and contributing to software under open-source licenses. I feel this is the best incentive structure given my focus on AI alignment and other public-benefit projects. If you know of funding or roles that seem suitable, please contact me by LessWrong message, or email at T r i s t a n T r i m at g m a i l dot c o m .
This page is directed towards people approving grants and those who would like to donate to support my work. I am also looking for mentors and collaborators, and am potentially accepting mentees. Whoever you are, feel free to use this page to browse my projects!
Idea:
This is my project to stay motivated while improving my knowledge and skills and applying them to AI alignment and related projects. I hope it can serve as a public "progress report" justifying any funding I may receive, as well as inspiring others and serving as a point of contact for peer and mentor feedback and collaboration.
Links:
My involvement:
I am the sole contributor to my journal entries, but I welcome any feedback on the contents or format.
Idea:
The "n-dimensional interactive scatter plot" (NDISP) is a working title for my project to create interactive visualization tools and applying them to mechanistic interpretability work. Analysis of data distributions in high dimensional space has many applications, so I believe the general core of the tools may benefit many areas.
Links:
My involvement:
This project was inspired by the work of Mingwei Li, particularly Grand Tour and UMAP Tour, as well as my own thinking. I first extended the Grand Tour application as a student project for a data visualization class, then continued working on it as a directed study with George Tzanetakis, and then as an honours project with Teseo Schneider.
I plan to continue the project by developing and releasing standalone modules and a user-friendly web app, and by publishing papers describing the tool and mechanistic interpretability results found using it.
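As a rough illustration of the kind of projection step such a tool is built around, here is a minimal NumPy sketch that projects high-dimensional points onto a random 2-D plane for plotting. This is not the actual NDISP code, and an actual grand tour smoothly rotates between viewing planes rather than picking a single random one; the function names and sizes here are my own placeholders:

```python
import numpy as np

def random_2d_frame(dim: int, rng: np.random.Generator) -> np.ndarray:
    """Return a dim x 2 matrix with orthonormal columns: one 2-D 'view' of the data."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
    return q

def project(points: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Project n x dim points onto the 2-D viewing plane defined by `frame`."""
    return points @ frame

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 32))   # 500 points in a 32-dimensional space
frame = random_2d_frame(32, rng)
xy = project(data, frame)           # 500 x 2 coordinates ready to scatter-plot
print(xy.shape)                     # (500, 2)
```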
Idea:
There is a paradigm missing from the discussion of ASI misalignment existential risk. The threat of ASI generalizes to the concept of "Outcome Influencing Systems" (OISs). My hope is that developing terminology and formalism around this model may mitigate the issues associated with existing terminology and aid in more productive discourse and interdisciplinary research applicable to ASI risk and social coordination issues.
Links:
My involvement:
I am currently the only contributor. I think the idea has merit, but I am still at the point where I am seeking either to find collaborators and spread the idea, or to find people who can point out enough flaws in the idea for it to be worth abandoning.
Idea:
"Map Articulating All Talking" (MAAT) is a concept for a social media like app which could make public discourse and the state of academic fields clearer and easier to understand by compressing multiple versions of the same discussions into sets of idea nodes, avoiding confusion from differences in terminology and reducing wasted time spent finding progress among redundant discussion.
Links:
My involvement:
This is my original idea and I am interested in developing it; however, I would also be happy if a competent team with the right motivations wanted to poach the idea. If anyone is interested, please contact me.