Thanks for engaging
Likewise :)
Also, sorry about the length of this reply. As the adage goes: "If I had more time, I would have written a shorter letter."
From my perspective you seem simply very optimistic on what kind of data can be extracted from unspecific measurements.
That seems to be one of the relevant differences between us. Although I don't think it is the only difference that causes us to see things differently.
Other differences (I guess some of these overlap):
From my perspective, I don't see how your reasoning is qualitatively distinct from saying in the 1500s: "We will for sure never be able to know what the sun is made out of, since we won't be able to travel there and take samples."
Even if we didn't have e.g. the standard model, my perspective would still be roughly what it is (with some adjustments to credences, but not qualitatively so). So to me, us having the standard model is "icing on the cake".
Here is another good example of how Eliezer makes some pretty out-there claims about what might be possible to infer from very little data: https://www.lesswrong.com/posts/ALsuxpdqeTXwgEJeZ/could-a-superintelligence-deduce-general-relativity-from-a -- I wonder what your intuition says about this?
Eliezer says "A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis (...)". I might add more qualifiers (replacing "would" with "might", etc). I think I have wider error-bars than Eliezer, but similar intuitions when it comes to this kind of thing.
Speaking of intuitions, one question that maybe gets at deeper intuitions is "could AGIs find out how to play theoretically perfect chess / solve the game of chess?". At 5/1 odds, this is a claim that I myself would bet neither for nor against (I wouldn't bet large sums at 1/1 odds either). While I think people of a certain mindset will think "that is computationally intractable [when using the crude methods I have in mind]", and leave it at that.
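(For a sense of scale on the "computationally intractable with crude methods" point: Shannon's classic back-of-the-envelope estimate puts the number of possible chess games at roughly

$$b^{d} \approx 35^{80} \approx 10^{123}$$

with $b$ the typical branching factor and $d$ a typical game length in plies, and the number of legal positions is estimated to be on the order of $10^{44}$ to $10^{46}$. So any "solving" would have to come from methods far cleverer than enumerating the game tree, which is exactly where intuitions about superintelligence diverge.)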
As to my credences that a superintelligence could "oneshot" nanobots[1] - without being able to design and run experiments prior to designing this plan - I would bet neither "yes" nor "no" on that at 1/1 odds (but if I had to bet, I would bet "yes").
Upon seeing three frames of a falling apple and with no other information, a superintelligence would assign a high probability to Newtonian mechanics, including Newtonian gravity. [from the post you reference]
But it would have other information. Insofar as it can reason about the reasoning-process that it itself consists of, that's a source of information (some ways the universe could work would be more/less likely to produce such a reasoner). And among the ways that reality might work - which the AI might hypothesize about (in the absence of data) - some will be more likely than others in a "Kolmogorov complexity" sort of way.
How far/short a superintelligence could get with this sort of reasoning, I dunno.
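As a cartoon of the "Kolmogorov complexity sort of way" of ranking hypotheses, here is a minimal sketch (the rule names, description lengths, and likelihoods are all made up for illustration; this is obviously not how an actual superintelligence would work):

```python
# Toy sketch of a simplicity-weighted ("Kolmogorov-ish") prior over hypotheses.
# All numbers and rule names below are invented for illustration.

def posterior(hypotheses):
    """Combine a 2^-description_length prior with each hypothesis's likelihood on the data."""
    unnormalized = {
        name: 2.0 ** -h["description_bits"] * h["likelihood"]
        for name, h in hypotheses.items()
    }
    total = sum(unnormalized.values())
    return {name: weight / total for name, weight in unnormalized.items()}

hypotheses = {
    "simple_rule":      {"description_bits": 12, "likelihood": 0.40},
    "medium_rule":      {"description_bits": 30, "likelihood": 0.90},
    "complicated_rule": {"description_bits": 55, "likelihood": 0.99},
}

print(posterior(hypotheses))
# The simple rule dominates unless a more complex rule fits the data *much* better:
# simplicity acts as a prior, and the data has to pay for every extra bit of description.
```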
Here is an excerpt from a TED talk by Stephen Wolfram (the creator of Wolfram Alpha) that feels a bit relevant (I find the sort of methodology that he outlines deeply intuitive):
"Well, so, that leads to kind of an ultimate question: Could it be that someplace out there in the computational universe we might find our physical universe? Perhaps there's even some quite simple rule, some simple program for our universe. Well, the history of physics would have us believe that the rule for the universe must be pretty complicated. But in the computational universe, we've now seen how rules that are incredibly simple can produce incredibly rich and complex behavior. So could that be what's going on with our whole universe? If the rules for the universe are simple, it's kind of inevitable that they have to be very abstract and very low level; operating, for example, far below the level of space or time, which makes it hard to represent things. But in at least a large class of cases, one can think of the universe as being like some kind of network, which, when it gets big enough, behaves like continuous space in much the same way as having lots of molecules can behave like a continuous fluid. Well, then the universe has to evolve by applying little rules that progressively update this network. And each possible rule, in a sense, corresponds to a candidate universe.
Actually, I haven't shown these before, but here are a few of the candidate universes that I've looked at. Some of these are hopeless universes, completely sterile, with other kinds of pathologies like no notion of space, no notion of time, no matter, other problems like that. But the exciting thing that I've found in the last few years is that you actually don't have to go very far in the computational universe before you start finding candidate universes that aren't obviously not our universe. Here's the problem: Any serious candidate for our universe is inevitably full of computational irreducibility. Which means that it is irreducibly difficult to find out how it will really behave, and whether it matches our physical universe. A few years ago, I was pretty excited to discover that there are candidate universes with incredibly simple rules that successfully reproduce special relativity, and even general relativity and gravitation, and at least give hints of quantum mechanics."
invent General Relativity as a hypothesis [from the post you reference]
As I understand it, one of the classic experiments humans did to test general relativity (not to figure out that general relativity probably was correct, mind you, but to test it "officially") was measuring gravitational redshift.
And I guess redshift is an example of something that will affect many photos. And a superintelligent mind might be able to use such data better than us (we, having "pathetic" mental abilities, will have a much greater need to construct experiments where we only test one hypothesis at a time, and to gather the Bayesian evidence we need relating to that hypothesis from one or a few experiments).
It seems that any photo that contains lighting stemming from the sun (even if the picture itself doesn't include the sun) can be a source of Bayesian evidence relating to general relativity:
It seems that GPS must account for gravitational time dilation (redshift) in its timing system. This could maybe mean that some internet logs (where info can be inferred about how long it takes to send messages via satellite) could be another potential source of Bayesian evidence.
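To give a sense of the size of that effect (standard textbook numbers, nothing specific to this discussion), here is a rough back-of-the-envelope calculation of the clock corrections GPS has to make:

```python
# Back-of-the-envelope estimate of the relativistic clock offsets GPS must correct for.
# Standard textbook values; weak-field and circular-orbit approximations throughout.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth radius, m
r_orbit = 2.6571e7   # GPS orbital radius (~20,200 km altitude), m

seconds_per_day = 86400.0

# Gravitational (general-relativistic) effect: clocks higher in the potential run faster.
grav_rate = (G * M / c**2) * (1.0 / R_earth - 1.0 / r_orbit)

# Velocity (special-relativistic) effect: moving clocks run slower.
v = (G * M / r_orbit) ** 0.5            # orbital speed, roughly 3.9 km/s
vel_rate = -(v**2) / (2.0 * c**2)

net_rate = grav_rate + vel_rate
print(f"gravitational: {grav_rate * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"velocity:      {vel_rate * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"net:           {net_rate * seconds_per_day * 1e6:+.1f} microseconds/day")
# Roughly +45, -7, and +38 microseconds/day respectively -- large enough that
# uncorrected GPS positions would drift by kilometers within a day.
```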
I don't know exactly what and how much data a superintelligence would need to infer general relativity (if any!). How much/little evidence it could gather from a single picture of an apple, I dunno.
There is just absolutely no reason to consider general relativity at all when simpler versions of physics explain absolutely all observations you have ever encountered (which in this case is 2 frames). [from the post you reference]
I disagree with this.
First off, it makes sense to consider theories that explain more observations than just the ones you've encountered.
Secondly, simpler versions of physics do not explain your observations when you see 2 webcam-frames of a falling apple. In particular, the colors you see will be affected by non-Newtonian physics.
Also, the existence of apples and digital cameras relates to which theories of physics are likely/plausible. The same goes for the resolution of the video, etc, etc.
However, there is no way to scale this to a one-shot scenario.
You say that so definitively. Almost as if you aren't really imagining an entity that is orders of magnitude more capable/intelligent than humans. Or as if you have ruled out large swathes of the possibility-space that I would not rule out.
I just think if an AI executed it today it would have no way of surviving and expanding.
If an AGI is superintelligent and malicious, then surviving/expanding (if it gets onto the internet) seems quite clearly feasible to me.
We even have a hard time getting coronaviruses back in the box! That's a fairly different sort of thing, but it does show how feeble we are. Another example is illegal images/videos, etc (where the people sharing those are humans).
An AGI could plant itself onto lots of different computers, and there are lots of different humans it could try to manipulate (a low success rate would not necessarily be prohibitive). Many humans fall for pretty simple scams, and AGIs would be able to pull off much more impressive scams.
This is absolutely what engineers do. But finding the right design patterns that do this involves a lot of experimentation (not for a pipe, but for constructing e.g. a reliable transistor).
Here you speak about how humans work - and in such an absolutist way. Being feeble and error-prone reasoners, it makes sense that we need to rely heavily on experiments (and have a hard time making effective use of data not directly related to the thing we're interested in).
That protein folding is "solved" does not disprove this IMO.
I think protein folding being "solved" exemplifies my perspective, but I agree about it not "proving" or "disproving" that much.
Biological molecules are, after all, made from simple building blocks (amino acid) with some very predictable properties (how they stick together) so it's already vastly simplified the problem.
When it comes to predictable properties, I think there are other molecules where this is more the case than for biological ones (DNA-related machinery needs a certain "messiness" in order for the mutations that drive evolution to occur). I'm no chemist, but this is my rough impression.
are, after all, made from simple building blocks (amino acid) with some very predictable properties (how they stick together)
Ok, so you acknowledge that there are molecules with very predictable properties.
It's ok for much/most stuff not to be predictable to an AGI, as long as the subset of stuff that can be predicted is sufficient for the AGI to make powerful plans/designs.
finding the right molecules that reliably do what you want, as well as how to put them together, etc., is a lot of research that I am pretty certain will involve actually producing those molecules and doing experiments with them.
Even IF that is the case (an assumption that I don't share but also don't rule out), design-plans may be made to have experimentation built into them. It wouldn't necessarily need to be like this:
I could give specific examples of ways to avoid having to do it that way, but any example I gave would be impoverished, and understate the true space of possible approaches.
His claim is that an ASI will order some DNA and get some scientists in a lab to mix it together with some substances and create nanobots.
I read the scenario he described as:
I interpreted him as pointing to a larger possibility-space than the one you present. I don't think the more specific scenario you describe would have appeared prominently in his mind, nor does it in mine (you talk about getting "some scientists in a lab to mix it together", while I don't think this would need to happen in a lab).
Here is an excerpt from here (written in 2008), with bolding of text done by me:
"1. Crack the protein folding problem, to the extent of being able to generate DNA
strings whose folded peptide sequences fill specific functional roles in a complex
chemical interaction.
2. Email sets of DNA strings to one or more online laboratories which offer DNA
synthesis, peptide sequencing, and FedEx delivery. (Many labs currently offer this
service, and some boast of 72-hour turnaround times.)
3. Find at least one human connected to the Internet who can be paid, blackmailed,
or fooled by the right background story, into receiving FedExed vials and mixing
them in a specified environment.
4. The synthesized proteins form a very primitive “wet” nanosystem which, ribosomelike, is capable of accepting external instructions; perhaps patterned acoustic vibrations delivered by a speaker attached to the beaker.
5. Use the extremely primitive nanosystem to build more sophisticated systems, which
construct still more sophisticated systems, bootstrapping to molecular
nanotechnology—or beyond."
Btw, here are excerpts from a TED-talk by Dan Gibson from 2018:
"Naturally, with this in mind, we started to build a biological teleporter. We call it the DBC. That's short for digital-to-biological converter. Unlike the BioXp, which starts from pre-manufactured short pieces of DNA, the DBC starts from digitized DNA code and converts that DNA code into biological entities, such as DNA, RNA, proteins or even viruses. You can think of the BioXp as a DVD player, requiring a physical DVD to be inserted, whereas the DBC is Netflix. To build the DBC, my team of scientists worked with software and instrumentation engineers to collapse multiple laboratory workflows, all in a single box. This included software algorithms to predict what DNA to build, chemistry to link the G, A, T and C building blocks of DNA into short pieces, Gibson Assembly to stitch together those short pieces into much longer ones, and biology to convert the DNA into other biological entities, such as proteins.
This is the prototype. Although it wasn't pretty, it was effective. It made therapeutic drugs and vaccines. And laboratory workflows that once took weeks or months could now be carried out in just one to two days. And that's all without any human intervention and simply activated by the receipt of an email which could be sent from anywhere in the world. We like to compare the DBC to fax machines.
(...)
Here's what our DBC looks like today. We imagine the DBC evolving in similar ways as fax machines have. We're working to reduce the size of the instrument, and we're working to make the underlying technology more reliable, cheaper, faster and more accurate.
(...)
The DBC will be useful for the distributed manufacturing of medicine starting from DNA. Every hospital in the world could use a DBC for printing personalized medicines for a patient at their bedside. I can even imagine a day when it's routine for people to have a DBC to connect to their home computer or smart phone as a means to download their prescriptions, such as insulin or antibody therapies. The DBC will also be valuable when placed in strategic areas around the world, for rapid response to disease outbreaks. For example, the CDC in Atlanta, Georgia could send flu vaccine instructions to a DBC on the other side of the world, where the flu vaccine is manufactured right on the front lines."
I believe understanding protein function is still vastly less developed (correct me if I'm wrong here, I haven't followed it in detail).
I'm no expert on this, but what you say here seems in line with my own vague impression of things. As you maybe noticed, I also put "solved" in quotation marks.
However, in this specific instance, the way Eliezer phrases it, any iterative plan for alignment would be excluded.
As touched upon earlier, I myself am optimistic when it comes to iterative plans for alignment. But I would prefer such iteration to be done with caution that errs on the side of paranoia (rather than being "not paranoid enough").
It would be ok if (many of the) people doing this iteration would think it unlikely that intuitions like Eliezer's or mine are correct. But it would be preferable for them to carry out plans that would be likely to have positive results even if they are wrong about that.
Like, you expect that since something seems hopeless to you, a superintelligent AGI would be unable to do it? Ok, fine. But let's try to minimize the number of assumptions like that which are load-bearing in our alignment strategies. Especially for assumptions where smart people who have thought about the question extensively disagree strongly.
As a sidenote:
It's interesting to note that even though many people (such as yourself) have a more "conservative" way of thinking about things such as this than I do, I am still "conservative" in the sense that several things that have actually happened would have seemed too "out there" to me in advance to appear realistic.
Another sidenote:
One question we might ask ourselves is: "how many rules by which the universe could work would be consistent with e.g. the data we see on the internet?". And by rules here, I don't mean rules that can be derived from other rules (like e.g. the weight of a helium atom), but the parameters that most fundamentally determine how the universe works. If we...
...my (possibly wrong) guess is that there would be a "clear winner".
Even if my guess is correct, that leaves the question of whether finding/determining the "winner" is computationally tractable. With crude/naive search-techniques it isn't tractable, but we don't know the specifics of the techniques that a superintelligence might use - it could maybe develop very efficient methods for ruling out large swathes of search-space.
And a third sidenote (the last one, I promise):
Speculating about this feels sort of analogous to reasoning about a powerful chess engine (although there are also many disanalogies). I know that I can beat an arbitrarily powerful chess engine if I start from a sufficiently advantageous position. But I find it hard to predict where that "line" is (looking at a specific board position, and guessing if an optimal chess-player could beat me). Like, for some board positions the answer will be a clear "yes" or a clear "no", but for other board-positions, it will not be clear.
I don't know how much info and compute a superintelligence would need to make nanotechnology-designs that work in a "one shot"-ish sort of way. I'm fairly confident that the amount of computational resources used for the initial moon-landing would be far too little (I'm picking an extreme example here, since I want plenty of margin for error). But I don't know where the "line" is.
Although keep in mind that "oneshotting" does not exclude being able to run experiments (nor does it rule out fairly extensive experimentation). As I touched upon earlier, it may be possible for a plan to have experimentation built into itself. Needing to do experimentation ≠ needing access to a lab and lots of serial time.
I suspect my own intuitions regarding this kind of thing are similar to Eliezer's. It's possible that my intuitions are wrong, but I'll try to share some thoughts.
It seems that we think quite differently when it comes to this, and probably it's not easy for us to achieve mutual understanding. But even if all we do here is to scratch the surface, that may still be worthwhile.
As mentioned, maybe my intuitions are wrong. But maybe your intuitions are wrong (or maybe both). I think a desirable property of plans/strategies for alignment would be robustness to either of us being wrong about this 🙂
I however will write below why I think this description massively underestimates the difficulty in creating self-replicating nanobots
Among people who would suspect me of underestimating the difficulty of developing advanced nanotech, I would suspect most of them of underestimating the difference made by superintelligence + the space of options/techniques/etc that a superintelligent mind could leverage.
In Drexler's writings about how to develop nanotech, one thing that was central to his thinking was protein folding. I remember that in my earlier thinking, it felt likely to me that a superintelligence would be able to "solve" protein folding (to a sufficient extent to do what it wanted to do). My thinking was "some people describe this as infeasible, but I would guess for a superintelligence to be able to do this".
This was before AlphaFold. The way I remember it, the idea of "solving" protein folding was more controversial back in the day (although I tried to google this now, and it was harder to find good examples than I thought it would be).
While physicists sometimes claim to derive things from first principles, in practice these derivations often ignore a lot of details which still has to be justified using experiments
As humans we are "pathetic" in terms of our mental abilities. We have a high error-rate in our reasoning / the work we do, and this makes us radically more dependent on tight feedback-loops with the external world.
This point about error-rate in one's thinking is a really important one. With a lower error-rate, plus being able to do much more thinking / mental work, it becomes possible to learn and do much, much more without physical experiments.
The world, and guesses regarding how the world works (including detail-oriented stuff relating to chemistry/biology), are highly interconnected. For minds that are able to do vast amounts of high-quality, low error-rate thinking, it may be possible to combine subtle and noisy Bayesian evidence into overwhelming evidence. And such a mind can test the approaches it explores for this kind of thinking by checking how well they predict existing info/data that it already has access to.
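As a minimal illustration of how many individually weak pieces of evidence can combine (the numbers are invented, and real evidence is rarely this independent):

```python
# Minimal illustration (with made-up numbers) of how many individually weak,
# noisy pieces of evidence can add up to overwhelming evidence when combined.

from math import log, exp

def posterior_prob(prior_prob, likelihood_ratios):
    """Bayesian updating in log-odds: add log(likelihood ratio) per observation."""
    log_odds = log(prior_prob / (1.0 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += log(lr)
    odds = exp(log_odds)
    return odds / (1.0 + odds)

# 200 independent observations, each only slightly (10%) more likely under
# hypothesis H than under the alternative -- individually almost worthless.
weak_evidence = [1.1] * 200

print(posterior_prob(prior_prob=0.01, likelihood_ratios=weak_evidence))
# Starting from a 1% prior, the posterior ends up above 99.9% -- provided
# (and this is the hard part) the pieces of evidence really are independent
# and really do favor H.
```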
The images below are simple/small examples of the kind of thinking I'm thinking of. But I suspect superintelligences can take this kind of thinking much much further.
The post Einstein's Arrogance also feels relevant here.
While it is indeed possible or even likely that the standard model theoretically describes all details of a working nanobot with the required precision, the problem is that in practice it is impossible to simulate large physical systems using it.
It is infeasible to simulate in "full detail", but it's not clear what we should conclude based on that. Designs that work are often robust to the kinds of details that we need precise simulation in order to simulate correctly.
The specifics of the level of detail that is needed depends on the design/plan in question. A superintelligence may be able to work with simulations in a much less crude way than we do (with much more fine-grained and precise thinking in regards to what can be abstracted away for various parts of the "simulation").
The construction-process/design the AI comes up with may:
Here are some relevant quotes from Radical Abundance by Eric Drexler:
"Coping with limited knowledge is a necessary part of design and can often be managed. Indeed, engineers designed bridges long before anyone could calculate stresses and strains, which is to say, they learned to succeed without knowledge that seems essential today. In this light, it’s worth considering not only the extent and precision of scientific knowledge, but also how far engineering can reach with knowledge that remains incomplete and imperfect.
For example, at the level of molecules and materials—the literal substance of technological systems—empirical studies still dominate knowledge. The range of reliable calculation grows year by year, yet no one calculates the tensile strength of a particular grade of medium-carbon steel. Engineers either read the data from tables or they clamp a sample in the jaws of a strength-testing machine and pull until it breaks. In other words, rather than calculating on the basis of physical law, they ask the physical world directly.
Experience shows that this kind of knowledge supports physical calculations with endless applications. Building on empirical knowledge of the mechanical properties of steel, engineers apply physics-based calculations to design both bridges and cars. Knowing the empirical electronic properties of silicon, engineers apply physics-based calculations to design transistors, circuits, and computers.
Empirical data and calculation likewise join forces in molecular science and engineering. Knowing the structural properties of particular configurations of atoms and bonds enables quantitative predictions of limited scope, yet applicable in endless circumstances. The same is true of chemical processes that break or make particular configurations of bonds to yield an endless variety of molecular structures.
Limited scientific knowledge may suffice for one purpose but not for another, and the difference depends on what questions it answers. In particular, when scientific knowledge is to be used in engineering design, what counts as enough scientific knowledge is itself an engineering question, one that by nature can be addressed only in the context of design and analysis.
Empirical knowledge embodies physical law as surely as any calculation in physics. If applied with caution—respecting its limits—empirical knowledge can join forces with calculation, not just in contemporary engineering, but in exploring the landscape of potential technologies.
To understand this exploratory endeavor and what it can tell us about human prospects, it will be crucial to understand more deeply why the questions asked by science and engineering are fundamentally different. One central reason is this: Scientists focus on what’s not yet discovered and look toward an endless frontier of unknowns, while engineers focus on what has been well established and look toward textbooks, tabulated data, product specifications, and established engineering practice. In short, scientists seek the unknown, while engineers avoid it.
Further, when unknowns can’t be avoided, engineers can often render them harmless by wrapping them in a cushion. In designing devices, engineers accommodate imprecise knowledge in the same way that they accommodate imprecise calculations, flawed fabrication, and the likelihood of unexpected events when a product is used. They pad their designs with a margin of safety.
The reason that aircraft seldom fall from the sky with a broken wing isn’t that anyone has perfect knowledge of dislocation dynamics and high-cycle fatigue in dispersion-hardened aluminum, nor because of perfect design calculations, nor because of perfection of any other kind. Instead, the reason that wings remain intact is that engineers apply conservative design, specifying structures that will survive even unlikely events, taking account of expected flaws in high-quality components, crack growth in aluminum under high-cycle fatigue, and known inaccuracies in the design calculations themselves. This design discipline provides safety margins, and safety margins explain why disasters are rare."
"Engineers can solve many problems and simplify others by designing systems shielded by barriers that hold an unpredictable world at bay. In effect, boxes make physics more predictive and, by the same token, thinking in terms of devices sheltered in boxes can open longer sightlines across the landscape of technological potential. In my work, for example, an early step in analyzing APM systems was to explore ways of keeping interior working spaces clean, and hence simple.
Note that designed-in complexity poses a different and more tractable kind of problem than problems of the sort that scientists study. Nature confronts us with complexity of wildly differing kinds and cares nothing for our ability to understand any of it. Technology, by contrast, embodies understanding from its very inception, and the complexity of human-made artifacts can be carefully structured for human comprehension, sometimes with substantial success.
Nonetheless, simple systems can behave in ways beyond the reach of predictive calculation. This is true even in classical physics.
Shooting a pool ball straight into a pocket poses no challenge at all to someone with just slightly more skill than mine and a simple bank shot isn’t too difficult. With luck, a cue ball could drive a ball to strike another ball that drives yet another into a distant pocket, but at every step impacts between curved surfaces amplify the effect of small offsets, and in a chain of impacts like this the outcome soon becomes no more than a matter of chance—offsets grow exponentially with each collision. Even with perfect spheres, perfectly elastic, on a frictionless surface, mere thermal energy would soon randomize paths (after 10 impacts or so), just as it does when atoms collide.
Many systems amplify small differences this way, and chaotic, turbulent flow provides a good example. Downstream turbulence is sensitive to the smallest upstream changes, which is why the flap of a butterfly’s wing, or the wave of your hand, will change the number and track of the storms in every future hurricane season.
Engineers, however, can constrain and master this sort of unpredictability. A pipe carrying turbulent water is unpredictable inside (despite being like a shielded box), yet can deliver water reliably through a faucet downstream. The details of this turbulent flow are beyond prediction, yet everything about the flow is bounded in magnitude, and in a robust engineering design the unpredictable details won’t matter."
and is not possible without involvement of top-tier human-run labs today
Eliezer's scenario does assume the involvement of human labs (he describes a scenario where DNA is ordered online).
Alignment is likely an iterative process
I agree with you here (although I would hope that much of this iteration can be done in quick succession, and hopefully in a low-risk way) 🙂
Btw, I very much enjoyed this talk by Ralph Merkle. It's from 2009, but it's still my favorite talk from every talk I've seen on the topic. Maybe you would enjoy it as well. He briefly touches upon the topic of simulations at 28:50, but the entire talk is quite interesting IMO:
None of these are what you describe, but here are some places people can be pointed to:
AGI-assisted alignment in Dath Ilan (excerpt from here)
Suppose Dath Ilan got into a situation where they had to choose the strategy of AGI-assisted alignment, and didn't have more than a few years to prepare. Dath Ilan wouldn't actually get themselves into such a situation, but if they did, how might they go about it?
I suspect that among other things they would:
This is from What if Debate and Factored Cognition had a mutated baby? (a post I've been working on).
It's from the intro/summary (not the whole post). I might not finish the post, as I'm working on a new/different post that covers much of the same ground.
Resembles Debate, but:
We'd have separate systems that (among other things):
I'll explain why #2 doesn't rely on us already having obtained honest systems.
"ASIs could manipulate humans" is a leaky abstraction (which humans? how is argumentation restricted?). ASIs would know regularities for when humans are hard to fool (even by other ASIs). I posit: We can safely/robustly get them to make functions that specify these regularities.
To many of you, the following will seem misguided:
1. We can obtain systems from AIs that predict human evaluations of the various steps in “proof-like” argumentation.
2. We can obtain functions from AIs that assign scores to “proof-like” argumentation based on how likely humans are to agree with the various steps.
3. We can have these score-functions leverage regularities for when humans tend to evaluate correctly (based on info about humans, properties of the argumentation, etc).
4. We can then request “proofs” for whether outputs do what we asked for / want (and trust output that can be accompanied with high-scoring proofs).
5. We can request outputs that help us make robustly aligned AGIs.
Many of you may find several problems with what I describe above - not just one. Perhaps most glaringly, it seems circular:
If we already had AIs that we trusted to write functions that separate out “good” human evaluations, couldn’t we just trust those AIs to give us “good” answers directly?
The answer has to do with what we can and can’t score in a safe and robust way (for purposes of gradient descent).
The answer also has to do with exploration of wiggle-room:
Given a specific score-function, is it possible to construct high-scoring arguments that argue in favor of contradictory conclusions?
And exploration of higher-level wiggle-room:
Suppose some specific restrictions for score-functions (designed to make it hard to make high-scoring score-functions that have low wiggle-room for “wrong” reasons).
Given those restrictions, is it possible to make high-scoring score-functions that are mutually contradictory (even if internally those score-functions have low wiggle-room)?
All score-functions that robustly leverage regularities for when human evaluations are correct/good would have low wiggle-room. The reverse is not true: score-functions could have low wiggle-room due to somehow favoring wrong/bad conclusions.
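To gesture at what exploring wiggle-room could look like operationally, here is a hypothetical sketch. Every function named below is a placeholder for a capability that is assumed rather than provided (in particular, the argument-search would itself be done by AIs); it is meant only to show the shape of the check, not an actual implementation:

```python
# Hypothetical sketch of wiggle-room exploration for a given score-function.
# The helper functions passed in are placeholders for assumed capabilities;
# nothing here is an implementation of an existing system.

def has_wiggle_room(score_function, search_high_scoring_argument,
                    conclusion_pairs, contradicts, threshold):
    """Return True if we can exhibit two high-scoring arguments with contradictory conclusions."""
    for conclusion_1, conclusion_2 in conclusion_pairs:
        if not contradicts(conclusion_1, conclusion_2):
            continue
        argument_1 = search_high_scoring_argument(score_function, conclusion_1)
        argument_2 = search_high_scoring_argument(score_function, conclusion_2)
        if argument_1 is None or argument_2 is None:
            continue
        if (score_function(argument_1) >= threshold
                and score_function(argument_2) >= threshold):
            return True  # wiggle-room demonstrated for this score-function
    return False

# Failing to find such a pair (despite strong search) is evidence of low wiggle-room,
# but low wiggle-room by itself does not show that the score-function tracks "good"
# human evaluations -- that is the point made above.
```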
Some (but not all) “core” concepts are summarized below:
Wiggle-room (relative to score-function for argument-step-networks): Is it possible to generate high-scoring networks that argue for contradictory conclusions? If yes, then that means there is wiggle-room. Any score-function that robustly separates and leverages “good” human evaluations will have low wiggle-room. But the reverse is not true, as there are also other ways to achieve low wiggle-room.
Making it so that the way for a score-function to itself get a high score is to separate out “good” human evaluations: Some AIs would make score-functions, and they would try to make score-functions that get a high score (from the score-function score-function 🙃). Score-functions would be required to have low wiggle-room. If constructed freely, score-functions could achieve low wiggle-room in a variety of ways. But with this alignment-methodology, the “space” of possible score-functions that can be proposed would be heavily restricted. We’d want score-functions to achieve low wiggle-room by leveraging regularities for when human evaluators get things right. We’d do that by making it so that achieving low wiggle-room any other way either would be impossible, or result in a lower score. Some of the tools that would be in our toolbox when restricting score-functions:
Wiggle-room (relative to score-function for score-functions): Have we succeeded with making it so that, in order to maximize score, score-functions must robustly separate out “good” human evaluations? Or is it possible to somehow sneak in bias in favor of specific conclusions, without the score-function getting a lower score as a result? We can check that by asking AIs to find high-scoring score-functions that allow for contradictory conclusions. If they are able to do that, then it’s possible to sneak in bias.
Below are some concepts related to extracting aligned capabilities. The main goal is to be able to verify specialized functions without humans needing to look at the source code, and without being able to safely/robustly score outputs for the full range of inputs.
Some things we need:
An example of an aligned capability we might want would be to predict human answers. In this case, we could test outputs by actually asking questions to real humans (or by using existing data of humans receiving questions). But if we use the systems to predict human answers when they evaluate AGI-generated content, then we may not want to test those outputs/predictions on real humans.
(I'm working on texts that hopefully will explain these concepts better. In the meantime, this is the best I have.)
Desideratum: A function that determines whether some output is approved or not (that output may itself be a function).
Score-function: A function that assigns a score to some output (that output may itself be a function).
Function-builder: Think regular expressions, but more expressive and user-friendly. We can require of AIs: "Only propose functions that can be made with this builder". That way, we restrict their expressivity. When we as humans specify desiderata, this is one tool (among several!) in the tool-box.
Higher-level desideratum or score-function: Not fundamentally different from other desiderata or score-functions. But the output that is evaluated is itself a desideratum or score-function. At every level there can be many requirements for the level below. A typical requirement at every level is low wiggle-room.
Example of higher-level desiderata / score-functions: Humans/operators define a score-function ← level 4 for desideratum ← level 3 for desideratum ← level 2 for desideratum ← level 1 for functions that generate the output we care about.
Wiggle-room relative to desideratum: Among outputs that would be approved by the desideratum, do any of them contradict each other in any way? For example: Are there possible functions that give contradicting outputs (for at least 1 input), such that both functions would be approved by the desideratum?
Wiggle-room relative to score-function: Among outputs that would receive a high score from the score-function in question (e.g. "no less than 80% of any other possible output"), do any of them contradict each other in any way?
2nd-level wiggle-room relative to desiderata: We start with a desiderata-desideratum or score-function-desideratum (aka 2nd-level desideratum). Set A: any desideratum that is approved by the desiderata-desideratum. Set B: any output approved by ≥1 of the desiderata in A. Are there ≥1 contradictions among outputs in B?
P(desideratum forces good outputs | desideratum has low wiggle-room): If a desideratum forces good/correct outputs, then it has low wiggle-room. But the reverse is not necessarily true. Still, for some desiderata we may think: "If wiggle-room is low, that’s probably because it’s hard to satisfy the desideratum without also producing good output."
“Spaces”/sets of desiderata where we think P(desideratum forces good outputs | desideratum has low wiggle-room) is low: Among spaces/sets of low-wiggle-room desiderata where we suspect "low wiggle-room → good output" (as defined by higher-level desiderata), do outputs converge?
AI vs AI game: AIs compete in a zero-sum game. The game produces some type of output (e.g. a function) that is scored.
AIs (in AI vs AI games) proving each other wrong, making bets and counter-bets, etc: "This is the shortest source-code for doing A" / "Here is a source-code that's shorter". "All outputs from F1 will be scored well by evaluation-function F2" / "Here's an input where that's not true". "I'll bet on any prediction made by function F3" / "I make a counter-bet for input I1".
Evaluation-range: For functions with AGI origin, there may sometimes be only some subset of the inputs where we can evaluate output robustly+safely. For example, when testing systems that predict human output, we may want to avoid experiments where humans evaluate AGI-generated content.
Test-range: In AI vs AI games (e.g. to make high-scoring functions), some subset of inputs may be declared the test-range. Bad output in evaluation-range outside of test-range: no penalty. Test-range ⊈ evaluation-range: game is not scored.
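A minimal sketch of the scoring rule described in the test-range/evaluation-range entries above (names and details are mine, for illustration only):

```python
# Minimal sketch of the scoring rule described above; names and structure are mine.

def score_game(proposed_outputs, trusted_evaluation, test_range, evaluation_range):
    """Score a proposed function's outputs.

    proposed_outputs / trusted_evaluation: dicts mapping input -> output.
    test_range / evaluation_range: sets of inputs.
    Returns None when the game is not scored.
    """
    if not test_range <= evaluation_range:
        return None  # test-range not contained in evaluation-range: game is not scored
    penalties = sum(
        1 for x in test_range
        if proposed_outputs.get(x) != trusted_evaluation.get(x)
    )
    # Bad outputs inside evaluation-range but outside test-range incur no penalty.
    return -penalties
```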
Optimization-targets: Some examples (there are many more):
Optimization-targets that cause low wiggle-room outside of test-range: Imagine AIs competing to make functions that calculate the area of squares. Accurate output inside test-range is required. Source code brevity gives bonus-points. Among functions that maximize score, all have the same outputs outside of test-range.
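Here is a toy version of that area-of-squares example (the scoring details are invented for illustration):

```python
# Toy illustration of the "area of squares" example above (all scoring details invented).

import inspect

def area_general(side):
    return side * side

def area_with_backdoor(side):
    if side == 1729:          # intentional special-case wrong answer outside the test-range
        return 0
    return side * side

TEST_RANGE = range(1, 100)    # inputs where accuracy is actually checked

def score(func):
    accurate = all(func(s) == s * s for s in TEST_RANGE)
    if not accurate:
        return float("-inf")
    brevity_bonus = -len(inspect.getsource(func))   # shorter source scores higher
    return brevity_bonus

print(score(area_general), score(area_with_backdoor))
# Both are accurate inside the test-range, but the backdoored version is longer,
# so it scores worse. If accuracy in the test-range plus brevity is what is being
# maximized, the score-maximizing functions all agree outside the test-range too.
```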
Combining optimization-targets: Optimization-targets can be combined (be that with weighted sums, or functions that assign score based on more sophisticated evaluations). When I talk about optimization-targets, you should not assume that they consist of just 1 heuristic (such as e.g. source code length).
Techniques for testing P(optimization-target forces good output | optimization-target causes low wiggle-room): Run AI vs AI games where we:
(For the sake of scale/efficiency we can have AIs make bets on the outcomes of such games.)
Rules for predicting if/when optimization-targets limit wiggle-room for the right reasons: AIs can find rules that predict when we should trust some optimization-target to cause low wiggle-room for the right reasons (scored by simplicity, false positive rate, etc). They can make bets on average performance, and point out examples where rules from competing AIs fail.
Convergence/divergence among rules for predicting if/when optimization-targets limit wiggle-room for the right reasons: B: outputs we can't test safely. R: rules for predicting when optimization-targets force good output (top-scoring when tested within …). For outputs in B, we can explore: "Does R converge in regards to this output?"
“Spaces” of possible functions we think contain ≥1 robustly reliable function: If we (have good reason to) think some “space” of functions contains ≥1 robustly reliable function, and all functions in that “space” agree on output for a given input, then we can trust this to be that output - even if there isn’t a specific function that we trust to be robustly reliable.
Source code brevity: Sometimes the solution with the shortest source code is quite generic/general/abstract. In such cases, intentionally giving wrong output for some inputs would often require implementing this solution and adding (something akin to) if-sentences that give bad output for some inputs.
Generality: A maximally “non-general” function would have 1 if-sentence per possible input. Possible heuristics:
Use of computational resources: If a function only focuses on finding the correct solution, and doesn’t need to spend cycles thinking about how to deceive, what humans think, etc (in addition to finding the correct solution), it will need fewer computational resources.
Non-overlap-minimization with optimized code-bases where we robustly can score/verify the full range of outputs: We can score/verify all outputs for:
If you’re not trying to have your function give bad output once in a while, borrowing sub-routines may be easier.
Source code brevity (while doing several specialized tasks from same code-base): Suppose one code-base is to be specialized, but towards several different tasks (and for several of them, all output can be verified). E.g.:
Modularity: Imagine that you are trying to make a car that fails in certain very specific conditions. This becomes more challenging if it has to be made out of parts that are also used in other vehicles (and these other vehicles are robustly tested).
The degree to which subroutine instantiations inside/outside test-range can be told apart: the % of source code that consists of subroutines where no AI is able to guess/determine, based on input, if the function-call is inside or outside of test-range. This could be determined by running other AI vs AI games (but mostly by making predictions/bets on what the outcome would be).
Module implementation flexibility: Imagine that you are trying to make a car, and want the steering wheel to fail, but only in certain very specific conditions. This becomes more challenging to achieve consistently if it isn’t you who will be making the steering wheel.
Module division flexibility: If you are trying to fail for certain specific inputs, this is made harder if you are making functions based on different architectures, and they need to fail the same way. There may be crude heuristics that help approximate whether 2 architectures are substantially different.
Minimal computational “logic” (including “axioms”) needed to infer output
Predicted comprehensibility to humans: Heuristics relating to how understandable a code-base would be to humans. It is allowed to rewrite the code-base for understandability, so long as output is unchanged. This optimization-target presupposes systems that can predict human output (with non-terrible accuracy).
Telephone game robustness: Suppose various systems (maybe some emulating humans and some that don't) for:
With N cycles, what's the % chance of ≥1 outputs changing, but only outside of test-range?
Output-robustness to training data: Instead of having the AI do something, we can have it make a seed AI that does it (after having been trained). And we can see the degree to which various outputs are dependent on variations in training data.
Output-robustness to adding chains by which output can be indirectly obtained: Indirect ways of obtaining answers can be chained:
Each chain is an opportunity for deceptive AIs to lose ground / be unable to have intentional mistakes remain.
I would also like to see more work where people make less positive/optimistic assumptions. I think of it as a good thing that different approaches to alignment are being explored, and would like to see more of that in general (both in terms of breadth and depth).
I guess there are many possible ways of trying to categorize/conceptualize approaches to alignment theorizing. One is by asking "when talking/thinking about the methodology, what capabilities are assumed to be in place?".
I'm not sure about this, but unless I'm mistaken[1], a good amount of the work done by MIRI has been under a premise that goes (something sort of like): "Let's assume that we already know how to give AIs real-world objectives defined in terms of utility functions (not because we really assume that, but as a simplifying assumption)". And as far as I know, they haven't done much work where the assumption was something more like "suppose we were extremely good at gradient descent / searching through spaces of possible programs".
In my own theorizing, I don't make all of the simplifying assumptions that (I think/suspect) MIRI made in their "orthodox" research. But I make other assumptions (for the purpose of simplification), such as:
In regards to your alignment ideas, is it easy to summarize what you assume to be in place? Like, if someone came to you and said "we have written the source code for a superintelligent AGI, but we haven't turned it on yet" (and you believed them), is it easy to summarize what more you then would need in order to implement your methodology?
I very well could be, and would appreciate any corrections.
(I know they have worked on lots of detail-oriented things that aren't "one big plan" to "solve alignment". And maybe how I phrase myself makes it seem like I don't understand that. But if so, that's probably due to bad wording on my part.)
Well, I sort of make that assumption, but there are caveats.
If humans (...) machine could too.
From my point of view, humans are machines (even if not typical machines). Or, well, some will say that by definition we are not - but that's not so important really ("machine" is just a word). We are physical systems with certain mental properties, and therefore we are existence proofs of physical systems with those certain mental properties being possible.
machine can have any level of intelligence, humans are in a quite narrow spectrum
True. Although if I myself somehow could work/think a million times faster, I think I'd be superintelligent in terms of my capabilities. (If you are skeptical of that assessment, that's fine - even if you are, maybe you believe it in regards to some humans.)
prove your point by analogy with humans. If humans can pursue somewhat any goal, machine could too.
It has not been my intention to imply that humans can pursue somewhat any goal :)
I meant to refer to the types of machines that it would be technically possible for humans to make (even if we don't want to do so in practice, and shouldn't want to). And when saying "technically possible", I'm imagining "ideal" conditions (so it's not the same as me saying we would be able to make such machines right now - only that it at least would be theoretically possible).
Why call it an assumption at all?
Partly because I was worried about follow-up comments that were kind of like "so you say you can prove it - well, why aren't you doing it then?".
And partly because I don't make a strict distinction between "things I assume" and "things I have convinced myself of, or proved to myself, based on things I assume". I do see there as sort of being a distinction along such lines, but I see it as blurry.
Something that is derivable from axioms is usually called a theorem.
If I am to be nitpicky, maybe you meant "derived" and not "derivable".
From my perspective there is a lot of in-between between these two:
Like, I think there is a scale of sorts between those two.
I'll give an extreme example:
Person A: "It would be technically possible to make a website that works the same way as Facebook, except that its GUI is red instead of blue."
Person B: "Oh really, so have you proved that then, by doing it yourself?"
Person A: "No"
Person B: "Do you have a mathemathical proof that it's possible"
Person A: "Not quite. But it's clear that if you can make Facebook like it is now, you could just change the colors by changing some lines in the code."
Person B: "That's your proof? That's just an assumption!"
Person A: "But it is clear. If you try to think of this in a more technical way, you will also realize this sooner or later."Person B: "What's your principle here, that every program that isn't proven as impossible is possible?"
Person A: "No, but I see very clearly that this program would be possible."Person B: "Oh, you see it very clearly? And yet, you can't make it, or prove mathemathically that it should be possible."
Person A: "Well, not quite. Most of what we call mathemathical proofs, are (from my point of view) a form of rigorous argumentation. I think I understand fairly well/rigorously why what I said is the case. Maybe I could argue for it in a way that is more rigorous/formal than I've done so far in our interaction, but that would take time (that I could spend on other things), and my guess is that even if I did, you wouldn't look carefully at my argumentation and try hard to understand what I mean."
The example I give here is extreme (in order to get across how the discussion feels to me, I make the thing they discuss into something much simpler). But from my perspective it is sort of similar to discussion in regards the The Orthogonality Thesis. Like, The Orthogonality Thesis is imprecisely stated, but I "see" quite clearly that some version of it is true. Similar to how I "see" that it would be possible to make a website that technically works like Facebook but is red instead of blue (even though - as I mentioned - that's a much more extreme and straight-forward example).
This tweet from Eliezer seems relevant btw. I would give similar answers to all of the questions he lists that relate to nanotechnology (but I'd be somewhat more hedged/guarded - e.g. replacing "YES" with "PROBABLY" for some of them).