I have a PhD in Computational Neuroscience from UCSD (Bachelor's was in Biomedical Engineering with Math and Computer Science minors). Ever since junior high, I've been trying to figure out how to engineer artificial minds, and I've been coding up artificial neural networks ever since I first learned to program. Obviously, all my early designs were almost completely wrong/unworkable/poorly defined, but I think my experiences did prime my brain with inductive biases that are well suited for working on AGI.
Although I now work as a data scientist in R&D at a large medical device company, I continue to spend my free time studying the latest developments in AI/ML/DL/RL and neuroscience and trying to come up with models for how to bring it all together into systems that could actually be implemented. Unfortunately, I don't seem to have much time to develop my ideas into publishable models, but I would love to have the opportunity to share ideas with those who do.
Of course, I'm also very interested in AI Alignment (hence the account here). My ideas on that front mostly fall into the "learn (invertible) generative models of human needs/goals and hook those up to the AI's own reward signal" camp. I think methods of achieving alignment that depend on restricting the AI's intelligence or behavior are about as destined to failure in the long term as Prohibition or the War on Drugs in the USA. We need a better theory of what reward signals are for in general (probably something to do with maximizing (minimizing) the attainable (dis)utility with respect to the survival needs of a system) before we can hope to model human values usefully. This could even extend to modeling the "values" of the ecological/socioeconomic/political supersystems in which humans are embedded or of the biological subsystems that are embedded within humans, both of which would be crucial for creating a better future.
The same thing happens with my daughters (all under 6). Get them to start talking about poop, and it's like a switch has been flipped. Their behavior becomes deliberately misaligned with parental objectives until we find a way to snap them out of that loop.
So is Agent Foundations primarily about understanding the nature of agency so we can detect it and/or control it in artificial models, or does it also include the concept of equipping AI with the means of detecting and predictively modeling agency in other systems? Because I strongly suspect the latter will be crucial in solving the alignment problem.
The best definition I have at the moment sees agents as systems that actively maintain their internal state within a bounded range of viability in the face of environmental perturbations (which would apply to all living systems) and that can form internal representations of arbitrary goal states and use those representations to reinforce and adjust their behavior to achieve them. An AGI whose architecture is biased to recognize needs and goals in other systems, not just those matching human-specific heuristics, could be designed to adopt those predicted needs and goals as its own provisional objectives, steering the world toward its continually evolving best estimate of what other agentic systems want the world to be like. I think this would be safer, more robust, and more scalable than trying to define all human preferences up front.
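To make that definition slightly more concrete, here's a minimal toy sketch in Python. Everything in it (the `ToyAgent` class, the viability range, the goal term, the update rule) is hypothetical illustration of the two criteria above, not a proposal for an actual architecture.

```python
from dataclasses import dataclass

# Toy illustration of the two criteria: (1) keep internal state inside a bounded
# viability range despite perturbations, (2) hold an internal representation of a
# goal state and adjust behavior toward it. All names here are hypothetical.

@dataclass
class ToyAgent:
    state: float = 0.0                 # internal variable to keep viable
    viability: tuple = (-1.0, 1.0)     # bounded range of viability
    goal: float | None = None          # internal representation of a goal state

    def act(self, perturbation: float) -> float:
        """Return a corrective action given an external perturbation."""
        self.state += perturbation
        low, high = self.viability
        # Homeostatic drive: push the state back toward the viable range.
        correction = 0.0
        if self.state < low:
            correction = low - self.state
        elif self.state > high:
            correction = high - self.state
        # Goal-directed drive: move toward the represented goal, if any.
        if self.goal is not None:
            correction += 0.1 * (self.goal - self.state)
        self.state += correction
        return correction

agent = ToyAgent(goal=0.5)
for noise in (0.3, -2.0, 1.5):
    agent.act(noise)
print(round(agent.state, 2))  # stays within the viable range and drifts toward 0.5
```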
These are just my thoughts. Take from them what you will.
But then again, what are human minds but bags of heuristics themselves? And AI can evolve orders of magnitude faster than we can. Handing over the keys to its own bootstrapping will only accelerate it further.
If the future trajectory to AGI is just "systems of LLMs glued together with some fancy heuristics", then maybe a plateau in Transformer capabilities will keep things relatively gradual. But I suspect that we are just a paradigm shift or two away from a Generalized Theory of Intelligence. Just figure out how to do predictive coding of arbitrary systems, combine it with narrative programming and continual learning, and away we go! Or something like that.
Generalizing a bit, I wonder how hard a misaligned ASI would have to work to get every human to voluntarily poison themselves.
It's how recursive self-improvement starts out.
First, the global "AI models + human development teams" system improves through iterative development and evaluation. Then the AI models take on more responsibilities in terms of ideation, process streamlining, and architecture optimization. And finally, an AI agent groks enough of the process to take on all responsibilities, and the intelligence explosion takes off from there.
You'd think someone would try to use AI to automate the production and distribution of necessities to drive the cost of living down toward zero first, but it seems that was just a dream of naive idealism. Oh well. Still, could someone please get on that?
With respect to the online rationalist community, my main thing to come out of the closet about is that I was a Young-Earth Creationist all the way up until the end of grad school (and even a Young-Universe Creationist up until the middle of undergrad). Not very rational of me to avoid honestly facing mountains of evidence in order to protect sacred beliefs!
With respect to my family and life-long friends, my main thing to come out of the closet about is that I am now a liberal atheist. Not very respectable of me to willfully join the ranks of the enemy!
My main hurdle in exposing myself on the latter front is not so much my desire to be liked as my desire not to hurt those I care about. There's no kind way to inform someone that you think they are fundamentally wrong about every belief they hold sacred and upon which they build their entire identity, both as individuals and as a community. I am unfortunately the most emotionally stable person among those I'm close to, and an unfortunate number of them look up to me as an intelligent person who agrees with them on everything they hold dear, which helps them feel more justified in their beliefs. Coming out to them will necessarily create feelings of disappointment, betrayal, devastation, fear, doubt, and/or existential crisis, varying in mixture and intensity according to the individual.
I guess I could offer them the tools of sound epistemology and existential equanimity as a value proposition, but I have doubts as to whether others would see that as a fair trade-off.
This seems deeply connected to Modern Hopfield Networks, which achieve memory capacity that scales exponentially with the number of dimensions, compared to the roughly linear capacity of traditional Hopfield networks. The key is the softmax nonlinearity inserted between the similarity and projection steps of the memory retrieval mechanism, which seems like an obvious extension of the original model in hindsight. Apparently, there is a lot of mathematical similarity between these memory models and the self-attention layers used in Transformers.
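For concreteness, here's a minimal NumPy sketch of that retrieval rule as I understand it from the modern Hopfield literature (the update xi <- M^T softmax(beta * M xi)); the particular patterns, `beta`, and step count below are arbitrary choices for illustration.

```python
import numpy as np

def modern_hopfield_retrieve(query, memories, beta=8.0, steps=3):
    """Retrieval by iterating: similarity step, softmax nonlinearity, projection step.

    memories: (N, d) array of stored patterns; query: (d,) probe vector.
    The update mirrors a single self-attention head with tied keys and values.
    """
    xi = np.asarray(query, dtype=float)
    M = np.asarray(memories, dtype=float)
    for _ in range(steps):
        sims = beta * (M @ xi)              # similarity step
        attn = np.exp(sims - sims.max())
        attn /= attn.sum()                  # softmax nonlinearity
        xi = M.T @ attn                     # projection step
    return xi

rng = np.random.default_rng(0)
patterns = rng.standard_normal((50, 256))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)
noisy = patterns[7] + 0.3 * rng.standard_normal(256)
recalled = modern_hopfield_retrieve(noisy, patterns)
print(int(np.argmax(patterns @ recalled)))  # 7: the corrupted pattern is recovered
```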
What you're looking at is also closely related to the near-orthogonality property of random vectors in high-dimensional space, which is a key principle behind hyperdimensional computing / vector-symbolic architectures. So-called hypervectors (which may be binary, bimodal, real-valued, complex-valued, etc.) can be combined via superposition, binding, and permutation operations into interpretable data structures in the same high-dimensional space as the elemental hypervectors. The ability to combine into and extract from superposition is key to the performance of these models.
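As a concrete illustration of those operations, here is a tiny bipolar-hypervector sketch (one of several possible VSA flavors); the role/filler names and the 10,000-dimension choice are just for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiply (its own inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Superposition: elementwise majority sign of the summed vectors."""
    return np.sign(np.sum(vs, axis=0))

def permute(a, k=1):
    """Permutation: cyclic shift, used to protect sequence or role information."""
    return np.roll(a, k)

def sim(a, b):
    """Normalized inner product, roughly in [-1, 1]."""
    return float(a @ b) / D

# Encode a tiny record {color: red, shape: square} as a single hypervector.
color, shape, red, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, square))

# Probing the record with the "color" role recovers a noisy version of "red",
# which we clean up by comparing against the known codebook vectors.
noisy_filler = bind(record, color)
codebook = {"red": red, "square": square, "unrelated": hv()}
print(max(codebook, key=lambda name: sim(noisy_filler, codebook[name])))  # -> red
```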
As the dimensionality $d$ of your space increases, the standard deviation of the distribution of inner products of pairs of unit vectors sampled uniformly from this space falls off as:

$$\sigma_{\langle u, v \rangle} \approx \frac{1}{\sqrt{d}}$$
In other words, for any given threshold of "near-orthogonality", the probability that any two randomly sampled (hyper)vectors will have an inner product with an absolute value smaller than this threshold grows to near-certainty with a high enough number of dimensions. A 1000-dimensional space effectively becomes a million-dimensional space in terms of the number of basis vectors you can combine in superposition and still be able to tease them apart.
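If you want to see the falloff numerically, here is a quick NumPy check (sample sizes and dimensionalities chosen arbitrarily):

```python
import numpy as np

# Empirical check of the 1/sqrt(d) falloff: sample pairs of random unit vectors
# and measure the spread of their inner products at several dimensionalities.
rng = np.random.default_rng(0)
for d in (10, 100, 1_000, 10_000):
    v = rng.standard_normal((2_000, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)    # unit vectors
    dots = np.einsum("ij,ij->i", v[::2], v[1::2])    # inner products of 1,000 pairs
    print(f"d={d:>6}: std of inner products = {dots.std():.4f}  (1/sqrt(d) = {1/np.sqrt(d):.4f})")
```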
"Can be successfully navigated towards" means that there exists a set of policies for the agent that is reachable via reinforcement learning on the goal objective, which would allow the agent to consistently achieve the goal when followed (barring any drastic changes to the environment, although the policy may account for environmental fluctuations).
Thanks for the paper on causal entropic forces, by the way. I hadn't seen this research before, but it synergizes well with ideas I've been having related to alignment. At the risk of being overly reductive, I think we could do worse than designing an AGI that predictively models the goal distributions of other agents (i.e., humans) and generates as its own "terminal" goals those states that maximize the entropy of goal distributions reachable by the other agents. Essentially, seeking to create a world from which humans (and other systems) have the best chance at directing their own future.
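To be clear about the shape of the idea (and not as a serious objective specification), here is a toy sketch of the selection rule: among candidate world states, prefer the one whose predicted distribution over goals still reachable by the other agents has the highest entropy. The candidate states and distributions below are entirely made up for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

# Hypothetical: for each candidate world state, a predicted distribution over
# which goals the humans in that world could still reach and pursue.
reachable_goal_dists = {
    "lock-in":     [0.97, 0.01, 0.01, 0.01],  # almost everything foreclosed
    "status quo":  [0.50, 0.30, 0.15, 0.05],
    "option-rich": [0.25, 0.25, 0.25, 0.25],  # humans keep the most options open
}

best = max(reachable_goal_dists, key=lambda s: entropy(reachable_goal_dists[s]))
print(best)  # -> "option-rich"
```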
I wonder if you could do something similar with all peer-reviewed scientific publications, summarizing all findings into an encyclopedia of all scientific knowledge. Basically, each article in the wiki would be a review article on a particular topic. The AI would have to track newly published results, determine which existing topics in the encyclopedia they relate to or whether creating a new article is warranted, and update the relevant articles with the new findings.
Given how much science content humanity has accumulated, you'd probably have to have the AI organize scientific topics in a tree, with parent articles summarizing topics at a higher level of abstraction and child articles digging into narrower scopes more deeply. Or more generally, a directed acyclic graph to handle cross-disciplinary topics.
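Sketching the data structure I have in mind (purely hypothetical names, with the relevance scoring stubbed out in place of whatever model-backed routing the system would actually use):

```python
from dataclasses import dataclass, field

# Rough sketch of the topic DAG: each article node may have several parents
# (to accommodate cross-disciplinary topics) and keeps a running summary that
# gets revised as new findings are routed to it.

@dataclass
class TopicArticle:
    title: str
    summary: str = ""
    parents: list["TopicArticle"] = field(default_factory=list)
    children: list["TopicArticle"] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)  # citations to primary papers

def add_child(parent: TopicArticle, child: TopicArticle) -> None:
    parent.children.append(child)
    child.parents.append(parent)

def route_finding(root: TopicArticle, finding: str, relevance) -> list[TopicArticle]:
    """Walk the DAG from the root, collecting the articles a new finding should update.

    `relevance(article, finding)` stands in for the summarizing AI's judgment of
    where a result belongs (or whether a new article is warranted).
    """
    to_visit, selected, seen = [root], [], set()
    while to_visit:
        node = to_visit.pop()
        if id(node) in seen:
            continue
        seen.add(id(node))
        if relevance(node, finding) > 0.5:
            selected.append(node)
        to_visit.extend(node.children)
    return selected

# Usage: build a tiny hierarchy and route a new result into it.
science = TopicArticle("Science")
neuro = TopicArticle("Neuroscience")
ml = TopicArticle("Machine Learning")
for child in (neuro, ml):
    add_child(science, child)
hits = route_finding(science, "new result on predictive coding",
                     relevance=lambda a, f: 0.9 if a.title == "Neuroscience" else 0.1)
print([a.title for a in hits])  # -> ['Neuroscience']
```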
Maybe future versions of AI chatbots could use something like this as a shared persistent memory that all chatbot instances could reference as a common ground truth. The only trick would be getting the system to use sound epistemology and reliably report uncertainty instead of hallucinations.