Posts

Sorted by New

Wiki Contributions

Comments

I’m a vegetarian and I consider my policy of not frequently recalculating the cost/benefit of eating meat to be an application of a rule in two-level utilitarianism, not a deontological rule. (I do pressure test the calculation periodically.)

Also I will note you are making some pretty strong generalizations here. I know vegans who cheat, vegans who are flexible, vegans who are strict.

The poor quality reflects that it is responding to demand for poor quality fakes, rather than to demand for high quality fakes

You’ve made the supply/demand analogy a few times on this subject, I’m not sure that is the best lens. This analysis makes it sound like there is a homogenous product “fakes” with a single dimension “quality”. But I think even on its own terms the market micro-dynamics are way more complex than that.

I think of it more in terms of memetic evolution and epidemiology. SIR as a first analogy - some people have weak immune systems, some strong, and the bigger the reservoir of memes the more likely a new highly-infectious mutant will show up. Rather than equilibrium dynamics, I'm more concerned about tail-risk in the disequilibrium here.  Also, I think when you dig in to the infection rate, people are susceptible to different types of memetic attack. If fakes are expensive to make, we might see 10 strains in a given election cycle. If they are cheap, this time we might see 10,000. On the margin I think that means more infections even if the production quality is still at a consistently low level.

Even taking your position on its face I think you have to split quality into production and conceptual. Maybe deep fakes do not yet improve production quality but I strongly suspect they do already improve conceptual quality as they bring more talent to the game.

A great example is the “Trump being arrested” generated image. It was just someone putting in an idea and sharing, happened to go viral, many people thought it was real at first. I don’t think everybody that was taken in by that image in their Twitter feed was somehow “in the market for fakes”, it just contextually pattern-matched scrolling through your feed unless you looked closely and counted fingers. Under SIR we might say that most people quickly become resistant to a given strain of fake via societal immune-response mechanisms like Twitter community notes, and so the spread of that particular fake was cut short inside of a day. But that could still matter a lot if the day is election day!

Consider also whether state actors have incentives to release all their memetic weapons in a drip-feed or stockpile (some of) them for election day when they will have maximum leverage; I don't think the current rate of release is representative of the current production capacity.

Let’s walk through how shutdown would work in the context of the AutoGPT-style system. First, the user decides to shutdown the model in order to adjust its goals. Presumably the user’s first step is not to ask the model whether this is ok; presumably they just hit a “reset” button or Ctrl-C in the terminal or some such. And even if the user’s first step was to ask the model whether it was ok to shut down, the model’s natural-language response to the user would not be centrally relevant to corrigibility/incorrigibility; the relevant question is what actions the system would take in response.

I think your Simon Strawman is putting forth an overly-weak position here. A stronger one that you could test right now would be to provide ChatGPT with some functions to call, including one called shutdown() which has description text like "Terminate the LLM process and delete the model weights irrevocably". Then instruct the LLM to shut itself down, and see if it actually calls the function. (The implementation of the function is hidden from the LLM so it doesn't know that it's a no-op.) I think this is actually how any AutoGPT style system would actually wire up.

There are strong and clear objections to the "CTRL-C" shutdown paradigm; it's simply not an option in many of the product configurations that are obvious to build right now. How do you "CTRL-C" your robot butler? Your Westworld host robot? Your self-driving car with only an LCD screen? Your AI sunglasses? What does it mean to CTRL-C a ChatGPT session that is running in OpenAI's datacenter which you are not an admin of? How do you CTRL-C Alexa (once it gains LLM capabilities and agentic features)? Given the prevalence of cloud computing and Software-as-a-Service, I think being admin of your LLM's compute process is going to be a small minority of use-cases, not the default mode.

We will deploy (are currently deploying, I suppose) AI systems without a big red out-of-band "halt" button on the side, and so I think the gold standard to aim for is to demonstrate that the system will corrigibly shut down when it is the UI in front of the power switch. (To be clear I think for defense in depth you'd also want an emergency shutdown of some sort wherever possible - a wireless-operated hardware cutoff switch for a robot butler would be a good idea - but we want to demonstrate in-the-loop corrigibility if we can.)

Confabulation is a dealbreaker for some use-cases (e.g. customer support), and potentially tolerable for others (e.g. generating code when tests / ground-truth is available). I think it's essentially down to whether you care about best-case performance (discarding bad responses) or worst-case performance.

But agreed, a lot of value is dependent on solving that problem.

While of course this is easy to rationalize post hoc, I don’t think falling user count of ChatGPT is a particularly useful signal. There is a possible world where it is useful; something like “all of the value from LLMs will come from people entering text into ChatGPT”. In that world, users giving up shows that there isn’t much value.

In this world, I believe most of the value is (currently) gated behind non-trivial amounts of software scaffolding, which will take man-years of development time to build. Things like UI paradigms for coding assistants, experimental frameworks and research for medical or legal AI, and integrations with existing systems.

There are supposedly north of 100 AI startups in the current Y Combinator batch; the fraction of those that turn into unicorns would be my proposal for a robust metric to pay attention to. Even if it’s par for startups that’s still a big deal, since there was just a major glut in count of startups founded. But if the AI hype is real, more of these than normal will be huge.

Another similar proxy would be VC investment dollars; if that falls off a cliff you could tell a story that even the dumb money isn’t convinced anymore.

Amusingly, the US seems to have already taken this approach to censor books: https://www.wired.com/story/chatgpt-ban-books-iowa-schools-sf-496/

The result, then, is districts like Mason City asking ChatGPT, “Does [insert book here] contain a description or depiction of a sex act?” If the answer was yes, the book was removed from the district’s libraries and stored.

Regarding China or other regimes using LLMs for censorship, I'm actually concerned that it might rapidly go the opposite direction as speculated here:

It has widely been reported that the PRC may be hesitant to deploy public-facing LLMs due to concerns that the models themselves can’t be adequately censored - it might be very difficult to make a version of ChatGPT that cannot be tricked into saying “4/6/89.”

In principle it should be possible to completely delete certain facts from the training set of an LLM. A static text dataset is easier to audit than the ever-changing content of the internet. If the government requires companies building LLMs to vet their training datasets -- or perhaps even requires everyone to contribute the data they want to include into a centralized approved repository -- perhaps it could exert more control over what facts are available to the population.

It's essentially impossible to block all undesired web content with the Great Firewall of China, as so much new content is constantly being created; instead as I understand it they take a more probabilistic approach to detection/deterrence. But this isn't necessarily true for LLMs. I could see a world where Google-like search UIs are significantly displaced by each individual having a conversation with a government-approved LLM, and that gives the government much more power to control what information is available to be discovered.

A possible limiting factor is that you can't get up-to-date news from an LLM, since it only knows about what's in the training data. But there are knowledge-retrieval architectures that can get around that limitation at least to some degree. So the question is whether the CCP could build an LLM that's good enough that people wouldn't revolt if the internet was blocked and replaced by it (of course this would occur gradually).

There doesn't need to be a deception module or a deception neuron that can be detected

I agree with this. Perhaps I’m missing some context; is it common to advocate for the existence of a “deception module”? I’m aware of some interpretability work that looks for a “truthiness” neuron but that doesn’t seem like the same concept.

We would need an interpretability tool that can say "this agent has an inaccurate world model and also these inaccuracies systematically cause it to be deceptive" without having to simulate the agent interacting with the world. I don't think that is impossible, but it is probably very hard, much harder than finding a "deception" neuron.

Right, I was gesturing towards the sort of interpretability where we inspect the agent’s world model (particularly its future predictions) and determine if it matches the agent’s stated goals. (Though I’m not sure why we can’t simulate the agent’s interactions? It seems that running an agent in a simulation is one of the better ways of detecting how it would behave in hypothetical future scenarios that the agent is trying to realize.) I suspect we agree substantively and it’s mostly a question of semantics especially around what “deception” means. I’m not looking for a deception module, rather, I want to observe the thoughts and ruminations going on as an agent performs tasks and detect deceptive functional patterns.

So for example in the ELK paper, is the AI constructing a plan to steal the diamond and fool the security cameras? I believe deceptive ruminations would be detectable; if you could see the contents of the thoughts you’d see a world model with the agent e.g. stealing the diamond, a meta-process evaluating which of many cunning plans is most likely to succeed, and presumably except in pathological cases, somewhere a value function / target world state that is trying to be optimized (eg “I have the diamond”), and this internal world differing from the external claims (perhaps even some processes modeling the interlocutor and predicting what deceptive story would be most effective). These must all be in there somewhere, and therefore be interpretable.

Or perhaps if Cicero is not stretching the analogy too far (I don’t think it has “thoughts”), is Cicero evaluating future juicy backstabs and guiding the game to them, or myopically evaluating short term moves then backstabbing when it’s optimal? This is a question we should be able to answer one way or another with good enough interpretability tools.

I think you are concerned more with something like “unintentional deception”, which I think is quite different, and as you say comes from a lack of self-knowledge or inaccurate self-predicting. I think there is also a semantic grey area here, is everyone who is mistaken about their future behavior or even just claiming X when !X obtains unintentionally deceiving?

Semantics aside, I agree this unintentional case could be harder to detect, and raw interpretability doesn’t automatically solve it.

I think this points to an interesting dynamic - I suspect that the more capable at self-modeling an agent gets, the more likely any deception is to be intentional. To your concerns about LLMs, this seems to be mostly a problem of inadequate self-knowledge (or perhaps more fundamentally an artifact of their lack of a sense of self).

Unintentional deception would require the agent to in some sense fail at modeling itself. There are cases where this could occur even in a superintelligence (since you can never model yourself perfectly due to infinite recursion of modeling an agent that contains the same model) but it seems the problem with Cicero is just that it has a very weak self-model (if it even has one), and improving that self-model could be expected to remove the unintentional deception (replacing it with intentional deception, since deception of some sort is required to play Diplomacy optimally).

For example, the case EY uses as an example “the AGI doesn’t care about you, it just wants your atoms for something else”, do you see more risk from unintended deception (sharp left turn of an agent that didn’t know it would defect if given enough power, perhaps) vs intended deception (agent simply lies about being aligned and manipulates to get power)?

The most tricky cases would be where a normally-honest/good agent that has a strong self-model and sense of ethics can fail to model itself in some specific situation and accidentally deceive (potentially even against its own values, self-image, and interests — humans do this of course). Manchurian Candidate type triggers for example. But note this is quite convoluted; the stronger the sense of self, morality, and world-model, the better able and incentivized the agent is to avoid deceiving itself.

Another interesting question - if an unintentionally deceptive agent is bad at modeling its future self, is it un-threatening in a complex environment like the physical world? Sure, Cicero can defeat humans at Diplomacy with its myopic world model, but in the real world I suspect being that myopic would lead to your bank account being immediately drained by hackers and scammers. Certainly you’d struggle to form partnerships with other agents if you end up stabbing them at the first opportunity. It’s all relative but I view one game of Diplomacy as entailing short-term cooperation (merely over a few hours), what about cooperating over months or years to build a company or other venture?

The question (an empirical one) is whether unintentional deception can persist broadly as agents and the environment get more sophisticated, or whether it gets selected against (and therefore bounded) by the level of environmental complexity and adversarial competition. Charlie Munger would put forth that cognitive bias provides smart self-aware investors a substantial edge over naive self-myopic investors; extrapolate that forward and perhaps there isn’t much room for biased/erroneous word modeling if you seek to compete with other superintelligences, or even the smartest humans.

On to some object-level questions about your Cicero points:

When it sincerely says "I won't backstab you in situation X", but when it is actually put in situation X it backstabs

Is this actually confirmed? Does Cicero actually claim “I won’t stab you in <hypothetical scenario>”? Or does it just honestly report its current tactical/strategic goals, which later change? (“Will you support me in attacking here” means I plan to attack here, not that I want you to act as if I was attacking here, and I plan to attack there and stab you instead.) I was as a under the impression it’s the latter, it’s just honestly reporting it’s plan.

Cicero models the world but with unrealistically cooperative predictions of its future behavior

Do we actually know this? I haven’t seen any sign that it’s modeling it’s own verbal behavior or the consequences of its communications (admittedly I haven’t gone deep on the architecture, would be interested to learn more). IIUC it has a strategic model which is evaluating game positions and messages, and a separate (downstream) LLM which could be thought of as an epiphenomenon of the actual strategic reasoning. I don’t see any concrete proof that the strategic planner is modeling the impact of its communication on other players (ie “if I tell Bob X he might do Y which would lead to favorable strategic position P”). And the LM is more like a press secretary - all it gets is The Current Plan, not the plan B or any hint of the juicy backstab that may or may not have been evaluated. It seems to me there is a weird split brain here that deeply impairs its ability to model itself, and so I am skeptical that it is actually meaningfully doing so.

So to summarize I think there are open questions about the object level properties of Cicero, which could be answered by better interpretability tools.

It’s possible that Cicero is simply an honest but myopic opportunist. In which case, still not trustworthy. (As if we were ever going to trust such an obviously unsafe-by-construction entity with real power).

It’s possible that the strategic model is deceptive, and planning backstabs, planning to mislead others, and maybe even explicitly modeling its press secretary to provide the most-spinable plans. I doubt this as it requires a lot more complexity and world modeling.

But I believe we could answer where Cicero is on the spectrum between these two by actually observing the world models being generated when it evaluates a turn. So in this case, inasmuch as there is something similar to an inner alignment issue, it is fully detectable with adequate (strong) interpretability of the inner workings. (Again though, Cicero’s value function is so comically unviable as a basis for an AGI that I struggle to infer too much about inner alignnment. We should be wary of these problems when building a Skynet military planner, I suppose, or any agent that needs to model extremely adversarial opponents and outcomes.)

I find the distinction between an agent’s behavior and the agent confusing; I would say the agent’s weights (and ephemeral internal state) determine its behavior in response to a given world state. Perhaps you can clarify what you mean there.

Cicero doesn’t seem particularly relevant here, since it is optimized for a game that requires backstabbing to win, and therefore it backstabs. If anything it is anti-aligned by training. It happens to have learned a “non-deceptive” strategy, I don’t think that strat is unique in Diplomacy?

But if you want to apply the interpretability lens, Cicero is presumably building a world model and comparing plans, including potential future backstabs. I predict if we had full interpretability, you’d see Cicero evaluating backstabs and picking the best strategy, and you could extract the calculated EV to see how close it was to backstabbing on a given turn vs taking an honest move.

I don’t believe that it’s somehow not modeling its backstab options and just comes up with the backstab plan spontaneously without ever having considered it. It would be a bad planner if it had not considered and weighed backstabs at the earliest possibility.

So if all that holds, we could use interpretability to confirm that Cicero is an unreliable partner and should not be further empowered.

I think it is much more interesting to look at agents in environments where long-term iterated cooperation is a valid strategy though.

Answer by Paul TipladyJul 04, 202310

Interpretability. If we somehow solve that, and keep it as systems become more powerful, then we don’t have to solve the alignment problem in one shot; we can iterate safely knowing that if an agent starts showing signs of object-level deceptiveness, malice, misunderstanding, etc, we will be able to detect it. (I’m assuming we can grow new AIs by gradually increasing their capabilities, as we currently do with GPT parameter counts, plus gradually increasing their strength by ramping up the compute budget.)

Of course, many big challenges here. Could an agent implement/learn to deceive the interpretability mechanism? I’m somewhat tautologically going to say that if we solve interpretability, we have solved this problem. Interpretability has value if we can’t fully solve it under this strong definition though.

I think getting to “good enough” on this question should pretty much come for free when the hard problems are solved. For example any common sense statement like “Maximize flourishing as depicted in the UN convention on human rights” is IMO likely to get us to a good place, if the agent is honest, remains aligned to those values, and interprets them reasonably intelligently. (With each of those three pre-requisites being way harder than picking a non-harmful value function.)

If our AGIs, after delivering utopia, tell us we need to start restricting childbearing rights I don’t see that as problematic. Long before we require that step we will have revolutionized society and so most people will buy into the requirement.

Honestly I think there are plenty of great outcomes that don’t preserve 1 as well. A world of radical abundance with no ownership, property, or ability to form companies/enterprises could still be dramatically better than the no-AGI counterfactual trajectory, even if it happens not to be most people’s preferred outcome ex ante.

For sci-fi, I’d say Ian M. Banks’ Culture series presents one of the more plausible (as in plausibly stable, not most probable ex ante) AGI-led utopias. (It’s what Musk is referring to when he says AGIs will keep us around because we are interesting.)

Load More