RogerDearnaley

I'm a staff artificial intelligence engineer in Silicon Valley, currently working with LLMs, and have been interested in AI alignment, safety, and interpretability for the last 15 years. I'm now actively looking for employment working in this area.

Sequences

AI, Alignment, and Ethics

Comments

That's not necessarily required. The Scientific Method works even if the true "Unified Field Theory" isn't yet under consideration, merely some theories that are closer to it and others further away from it: it's possible to make iterative progress.

In practice, considered as search processes, the Scientific Method, Bayesianism, and stochastic gradient descent all tend to find similar answers: yet unlike Bayesianism, gradient descent doesn't explicitly consider every point in the space including the true optimum; it just searches for nearby better points. It can of course get trapped in local minima: Singular Learning Theory highlights why that's less of a problem in practice than it sounds in theory.


The important question here is how good an approximation the search algorithm in use is to Bayesianism. As long as the AI understands that what it's doing is (like the scientific method and stochastic gradient descent) a computationally efficient approximation to the computationally intractable ideal of Bayesianism, then it won't resist the process of coming up with new possibly-better hypotheses, it will instead regard that as a necessary part of the process (like hypothesis creation in the scientific method, the mutational/crossing steps in an evolutionary algorithm, or the stochastic batch noise in stochastic gradient descent).

Cool! That makes a lot of sense. So does it in fact split into three before it splits into 7, as I predicted based on dimensionality? I see a green dot, three red dots, and seven blue ones… On the other hand, the triangle formed by the three red dots is a lot smaller than the heptagram, which I wasn't expecting…

I notice it's also an oddly shaped heptagram.

This seems like it would be helpful: the adversary can still export data, for example encoded steganographically in otherwise-low-perplexity text, but this limits the information density they can transmit, making the process less efficient for them and making constraints like upload limits tighter.

One other thing that would make this even harder is if we change the model weights regularly, in ways where combining parts exfiltrated from separate models is hard. We know from Singular Learning Theory that the optima found by stochastic gradient descent tend to have high degrees of symmetry. Some of these (like, say, permuting all the neurons in a layer along with their weights) are discrete, obvious, and both easy to generate and fairly easy for an attacker to compensate for. Others, such as adjusting various weights in various layers in ways that compensate for each other, are continuous symmetries, which are harder to generate and would be harder for an attacker to reason about (see the sketch below). If we had an efficient way to explore these continuous symmetries (ideally one that's easy to implement given a full set of the model weights, but hard to reconstruct from multiple partial pieces of multiple equivalent models), then we could explore this high-dimensional symmetry space of the model optimum and create multiple equivalent-but-not-easy-to-piece-together sets of model weights, and rotate between them over time (and/or deploy different ones to different instances) in order to make the task of exfiltrating the weights even harder.
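To make the two kinds of symmetry concrete, here's a toy sketch using an illustrative two-layer ReLU MLP (not any real model; all names are made up): permuting hidden neurons along with their weights is a discrete symmetry, while rescaling one ReLU unit's incoming weights by c > 0 and its outgoing weights by 1/c is a simple example of a continuous one.

```python
import numpy as np

# Toy two-layer ReLU MLP (illustrative only).
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 8, 16, 4
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

x = rng.normal(size=d_in)

# Discrete symmetry: permute the hidden units together with their
# incoming and outgoing weights; the computed function is unchanged.
perm = rng.permutation(d_hidden)
assert np.allclose(mlp(x, W1, b1, W2, b2),
                   mlp(x, W1[perm], b1[perm], W2[:, perm], b2))

# Continuous symmetry: rescale one unit's incoming weights and bias by c > 0
# and its outgoing weights by 1/c; again the function is unchanged.
c = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= c; b1s[0] *= c; W2s[:, 0] /= c
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1s, b1s, W2s, b2))
```

The continuous symmetries SLT cares about are much richer than this rescaling example, but even the toy version helps show why naively splicing together pieces of two differently-transformed copies of the weights wouldn't work.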

So, to anyone who knows more about SLT than I do, a computationally efficient way to explore the continuous symmetries (directions in which both the slope and the Hessian are flat) of the optimum of a trained model could be very useful.

In practice, most current AIs are not constructed entirely by RL, partly because it has instabilities like this. For example, instruction-training LLMs with RLHF uses a KL-divergence loss term to limit how dramatically the RL can alter the base model behavior trained by SGD. So the result deliberately isn't pure RL.
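As a rough illustration of the kind of KL penalty involved (a minimal sketch, not the exact objective used by any particular lab; the function and parameter names here are made up), the reward model's score on a sampled response is offset by a per-token estimate of the KL divergence between the policy being RL-trained and the frozen base model:

```python
import torch.nn.functional as F

def kl_penalized_reward(reward_score, policy_logits, ref_logits, tokens, beta=0.1):
    """Sketch of an RLHF-style reward with a KL penalty toward the base model.

    reward_score:  (batch,) scalar scores from a reward model
    policy_logits: (batch, seq, vocab) logits from the policy being RL-trained
    ref_logits:    (batch, seq, vocab) logits from the frozen base model
    tokens:        (batch, seq) sampled response tokens
    beta:          KL coefficient controlling how far RL may move the policy
    """
    policy_logp = F.log_softmax(policy_logits, dim=-1).gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1).gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    # Sampled estimate of KL(policy || base), summed over the response tokens.
    kl_term = (policy_logp - ref_logp).sum(dim=-1)
    return reward_score - beta * kl_term
```

The larger beta is, the more tightly the RL-trained policy stays anchored to the SGD-trained base model's behavior.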

Yes, if you take a not-yet intelligent agent, train it using RL, and give it unrestricted access to a simple positive reinforcement avenue unrelated to the behavior you actually want, it is very likely to "wire-head" by following that simple maximization path instead. So people do their best not to do that when working with RL.

What I would be interested to understand about feature splitting is whether the fine-grained features are alternatives, describing an ontology, or are defining a subspace (corners of a simplex, like R, G, and B defining color space). Suppose a feature X in a small VAE is split into three features X1, X2, and X3 in a larger VAE for the same model. If occurrences of X1, X2, and X3 are correlated, so activations containing any of them commonly have some mix of them, then they span a 2-d subspace (in this case the simplex is a triangle). If, on the other hand, X1, X2 and X3 co-occur in an activation only rarely (just as two randomly-selected features rarely co-occur), then they describe three similar-but-distinct variations on a concept, and X is the result of coarse-graining these together as a single concept at a higher level in an ontology tree (so by comparing VAEs of different sizes we can generate a natural ontology).

This seems like it would be a fairly simple, objective experiment to carry out. (Maybe someone already has, and can tell me the result!) It is of course quite possible that some split features describe subspaces and others ontologies, or indeed something between the two where the features co-occur rarely but less rarely than two random features. Or X1 could be distinct but X2 and X3 might blend to span a 1-d subspace. Nevertheless, understanding the relative frequency of these different behaviors would be enlightening.
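To sketch what I mean (purely illustrative code: it assumes you already have a samples-by-features matrix of dictionary feature activations and the indices of the split features, and all the names are made up), one could compare how often the split features co-occur with how often random feature pairs do:

```python
import numpy as np

def cooccurrence_rate(acts, i, j, threshold=0.0):
    """Fraction of samples where features i and j are both active,
    out of the samples where at least one of them is active."""
    a, b = acts[:, i] > threshold, acts[:, j] > threshold
    either = (a | b).sum()
    return (a & b).sum() / either if either else 0.0

def split_feature_stats(acts, split_idxs, n_random_pairs=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Co-occurrence among the split features X1, X2, X3...
    # (high values suggest they span a subspace, like R/G/B).
    split = [cooccurrence_rate(acts, i, j)
             for k, i in enumerate(split_idxs) for j in split_idxs[k + 1:]]
    # Baseline: randomly chosen feature pairs (if the split features look like
    # this baseline, they behave as mutually-exclusive ontology branches).
    n_feats = acts.shape[1]
    rand = [cooccurrence_rate(acts, *rng.choice(n_feats, size=2, replace=False))
            for _ in range(n_random_pairs)]
    return float(np.mean(split)), float(np.mean(rand))
```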

It would be interesting to validate this using a case like the days of the week, where we believe we already understand the answer: they are 7 alternatives that are laid out in a heptagon in a 2-dimensional subspace that enables doing modular addition/subtraction modulo 7. So if we had a VAE small enough that it represented all day-of-the-week names by a single feature, then if we increased the VAE size somewhat we'd expect to see this split into three features spanning a 2-d subspace, and if we increased it more we'd expect to see it resolve into 7 mutually-exclusive alternatives, and hopefully then stay at 7 in larger VAEs (at least until other concepts started to get mixed in, if that ever happened).
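For what it's worth, here's a toy check of the geometric picture that claim assumes (nothing model-specific, just the arithmetic): seven directions laid out as a heptagon in a 2-d plane are exactly what's needed to do addition mod 7 by composing rotations.

```python
import numpy as np

def embed(day, n=7):
    # Day d maps to the point at angle 2*pi*d/n on a circle: the heptagon.
    theta = 2 * np.pi * day / n
    return np.array([np.cos(theta), np.sin(theta)])

def add_days(a, b, n=7):
    # Recover (a + b) mod n from the two embeddings by adding their angles.
    theta = np.arctan2(*embed(a, n)[::-1]) + np.arctan2(*embed(b, n)[::-1])
    return int(round(theta / (2 * np.pi / n))) % n

assert all(add_days(a, b) == (a + b) % 7 for a in range(7) for b in range(7))
```

On that picture, the three-feature stage should look like the "subspace" case in the co-occurrence test sketched above, and the seven-feature stage more like the "alternatives" case.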

AI that is trained by human teachers giving it rewards will eventually wirehead, as it becomes smarter and more powerful, and its influence over its master increases. It will, in effect, develop the ability to push its own “reward” button. Thus, its behavior will become misaligned with whatever its developers intended.

This seems like an unproven statement. Most humans are aware of the possibility of wireheading, both the actual wire version and the more practical versions involving psychotropic drugs. The great majority of humans don't choose to do that to themselves. Assuming that AI will act differently seems like an unproven assumption, one which might, for example, be justified for some AI capability levels but not others.

If you're not already familiar with the literature on Value Learning, I suggest reading some of it. The basic idea is that goal modification is natural if what the agent has is not a detailed specification of a goal (such as a utility function mapping descriptions of world states to their utility), but instead a simple definition of a goal (such as "want whatever outcomes the humans want") which makes it clear that the agent does not yet know the true detailed utility function, and thus requires it to go and find out what the detailed specification of the utility function pointed to by the goal is (for example, by researching what outcomes humans want).

Then a human shutdown instruction becomes the useful information "you have made a large error in your research into the utility function, and as a result are doing harm, please shut down and let us help you correct it". Obeying that is then natural (to the extent that the human(s) are plausibly more correct than the AI).

Answer by RogerDearnaley

There has been a lot of useful discussion in the answers and comments, which has caused me to revise and expand parts of my list. So that readers looking for practical career advice don't have to read the entire comments section to find the actual resulting advice, it seemed useful to me to give a revised list. Doing this as an answer in the context of this question seems better than either making it a whole new post, or editing the list in the original post in a way that would remove the original context of the answers and comments discussion.

This is my personal attempt to summarize the answers and comments discussion: other commenters may not agree (and are of course welcome to add comments saying so). As the discussion continues and changes my opinion, I will keep this version of the list up to date (even if that requires destroying the context of any comments on it).

List of Job Categories Safe from AI/Robots (Revised)

  1. Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).

    Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.

    Epistemic status: already proven for some of these; the first two are things that machines have already been able to do better than a human for a while, but people are still interested in paying to watch a human do them very well (for a human). Also seems very plausible for the others, which current robotics is not yet up to doing better than a human anyway.

    Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people in the world are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.

  2. Doing some intellectual and/or physical work that AI/robots can now do better, but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans better. This could include jobs where people's willingness to pay comes in the form of a legal requirement that certain work be done or supervised by a (suitably skilled/accredited) human (and these requirements have not yet been repealed).

    Examples: Doctor, veterinarian, lawyer, priest, babysitter, nanny, nurse, primary school teacher.

    Epistemic status: Many people tell me "I'd never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion, but it's unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.

    Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly (though perhaps slower for some such jobs than others). Requires being reliably good at the job, and at appearing reassuringly competent while doing so.

  3. Giving human feedback/input/supervision to/of AI/robotic work/models/training data, in order to improve, check, or confirm its quality.

    Examples: current AI training crowd-workers, Wikipedian (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus group participant, filling out endless surveys on the fine points of Human Values.

    Epistemic status: seems inevitable, at least at first.

    Economic limits: I imagine there will be a lot of demand for this at first. I'm rather unsure whether that demand will gradually decline, as the AIs get better at doing things/self-training without needing human input, or will increase over time because the overall economy is growing so fast and/or more capable models need more training data and/or society keeps moving out-of-previous-distribution so new data is needed. [A lot of training data is needed, more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns on this as more data is accumulated, and we're already getting increasingly good at generating synthetic training data.] Another question is whether a lot of very smart AIs can extract a lot of this sort of data from humans without needing their explicit paid cooperation; indeed, perhaps granting permission for them to do so and not intentionally sabotaging this might even become a condition for part of UBI (at which point deciding whether to call allowing this a career or not is a bit unclear).

  4. Skilled participant in an activity that heavily involves interactions between people, where humans prefer to do this with other real humans, are willing to pay a significant premium to do so, and you are sufficiently more skilled/talented/capable/willing to cater to others' demands than the average participant that you can make a net profit off this exchange.

    Examples: director/producer/lead performer for amateur/hobby theater, skilled comedy-improv partner, human sex-worker.

    Epistemic status: seems extremely plausible.

    Economic limits: Net earning potential may be limited, depending on just how much better/more desirable you are as a fellow participant than typical people into this activity, and on the extent to which this can be leveraged in a one-producer-to-many-customers way; however, making the latter factor high is challenging, because it conflicts with the human-to-real-human interaction requirement that allows you to out-compete an AI/robot in the first place. Often a case of turning a hobby into a career.
  5. Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!")

    This can either be full-time employment as a status symbol for a specific status-signaler, or you can be making high-status "luxury" goods where owning one is a status signal, or at least has cachet. For the latter, like any luxury good, they need to be rare: this could be that they are individually hand-made and/or were specifically commissioned by a specific owner, or that they are reproduced only in a "limited edition".

    Examples: (status symbol) receptionist, maid, personal assistant; (status-symbol maker) "High Art" artist, Etsy craftsperson, portrait or commissions artist.

    Epistemic status: human nature (for the full-time version, assuming there are still people this unusually rich).

    Economic limits: For the full-time-employment version, there are likely to be relatively few positions of this type, at most a few for each person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.

    For the "maker of limited edition human-made goods" version: pretty widely applicable, and can provide a wide range of income levels depending on how skilled you are and how prestigious your personal brand is. Can be a case of turning a hobby into a career.

  6. Providing human-species-specific reproductive or medical services.

    Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.

    Epistemic status: still needed.

    Economic limits: Significant medical consequences; demand is low, and improvements in medicine may reduce it further.

Certain jobs could manage to combine two (or more) of these categories. Arguably categories 1. and 5. are subsets of category 2.

I intended to capture that under category 2. "…but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans better…" — the regulatory capture you describe (and those regulations not yet having been repealed) would be a category of reason why (and an expression of the fact that) people are willing to pay more. Evidently that section wasn't clear enough and I should have phrased this better or given it as an example.

As I said above under category 2., I expect this to be common at first but to decrease over time, perhaps even quite rapidly, given the value differentials involved.
