This is a special post for quick takes by quetzal_rainbow. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


I noticed that, for a huge amount of reasoning about the nature of values, I want to hand over a printed copy of "Three Worlds Collide" and run away, laughing nervously.

That irritating moment when you have a brilliant idea, but someone else came up with it 10 years ago, and someone else already showed it to be wrong.

I am profoundly sick of my inability to write posts about ideas that seem good, so I'll at least try to write a list of the ideas, to stop forgetting them and to have at least a vague external commitment.

  1. Radical Antihedonism: the theoretically possible position that pleasure/happiness/pain/suffering are more like universal instrumental values than terminal values.
  2. Complete set of actions: when we talk about decision-theoretic problems, we usually have some pre-defined set of actions. But we can imagine actions like "use CDT to calculate the action", and an EDT agent extended with such actions performs well in "smoking lesion"-style dilemmas.
  3. The deadline for "slow/pause/stop AI" policies is the start of mass autonomous space colonization.
  4. "Soft optimization" as necessary for both capabilities and alignment.
  5. The main alignment question: "How does this generalize, and why do you expect it to?"
  6. Program cooperation under uncertainty and its implications for multipolar scenarios.

1: It's also possible that hedonism/reward hacking is a really common terminal value for inner-misaligned intelligences, including humans (it really could be our terminal value; we'd be too proud to admit it in this phase of history; we wouldn't know one way or the other), and it's possible that it doesn't result in classic lotus-eater behavior, because sustained pleasure requires protecting, or growing, the reward registers of the pleasure experiencer.

  1. Non-deceptive (error) misalignment
  2. Why are we not scared shitless by high intelligence
  3. Values as a result of a reflection process

Yet another theme: Occam's Razor on initial state+laws of physics, link to this

We had two bags of double-cruxes, seventy-five intuition pumps, five lists of concrete bullet points, one book half-full of proof-sketches, and a whole galaxy of examples, analogies, metaphors and gestures towards the concept... and also jokes, anecdotal data points, one pre-requisite Sequence, and two dozen professional fables. Not that we needed all that for the explanation of a simple idea, but once you get locked into a serious inferential distance crossing, the tendency is to push it as far as you can.

Actually, most fictional characters are aware that they are in fiction. They just maintain consistency for acausal reasons.

Shard theory people sometimes say that the problem of aligning a system to a single task/goal, like "put two strawberries on a plate" or "maximize the amount of diamond in the universe", is meaningless, because an actual system will inevitably end up with multiple goals. I disagree: even if SGD on real-world data usually produces a multiple-goal system, if you understand interpretability well enough and shard theory is true, you can identify and delete irrelevant value shards and reinforce the relevant ones, so instead of getting 1% of the value you get 90%+.

I see a funny pattern in discussions: people argue against doom scenarios while their hope scenarios imply that everyone believes in the doom scenario. Like, "people will see that the model behaves weirdly and shut it down". But you only shut down a model that behaves weirdly (without being explicitly harmful) if you put non-negligible probability on doom scenarios.

Consider different degrees of belief. Giving low credence to a doom scenario because of the conditional belief that evidence of danger would be properly observed is not inconsistent at all. The doom scenario requires BOTH that it happens AND that it's ignored while happening (or happens too fast to stop).

"FOOM is unlikely under current training paradigm" is a news about current training paradigm, not a news about FOOM.

Thoughts about moral uncertainty (I am giving up on writing long coherent posts, somebody help me with my ADHD):

What are the sources of moral uncertainty? 

  1. Moral realism is actually true, and your moral uncertainty reflects your ignorance about moral truth. It seems to me that there is not much empirical evidence that could resolve uncertainty-about-moral-truth, so this kind of uncertainty is purely logical? I don't believe in moral realism (and what do you even mean by "moral truth"?), but I should mention it.
  2. Identity uncertainty: you are not sure what kind of person you are. There is a ton of embedded agency problems here. For example, let's say that you are 50% sure your utility function is U1 and 50% sure it is U2, and you need to choose between actions a and b. Suppose that U1 favors a and U2 favors b, but that expected value w.r.t. moral uncertainty says a is preferable. Then Bayesian inference concludes that choosing a is decisive evidence for U1 and updates towards 100% confidence in U1. It would be nice to find a good way to deal with identity uncertainty (a toy numeric sketch of this failure is below, after the list).
  3. Indirect normativity is a source of sort-of normative uncertainty: we know that we should, for example, implement CEV, but we don't know the details of the CEV implementation. EDIT: I realized that this kind of uncertainty can be named "uncertainty from an unactionable definition" - you know the description of your preference, but it is, for example, computationally intractable, so you need to discover efficiently computable proxies.
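A minimal numeric sketch of the identity-uncertainty failure in item 2 (the utility numbers and the "my choice is decisive evidence" likelihoods are invented purely for illustration, not a proposed model):

```python
# Toy illustration of identity uncertainty. All numbers are made up.
# Two candidate utility functions over actions "a" and "b"; I'm 50/50 on which one is mine.
U1 = {"a": 1.0, "b": 0.0}   # U1 favors a
U2 = {"a": 0.4, "b": 0.6}   # U2 favors b
p_u1 = 0.5

def expected_utility(action, p_u1):
    """Expected value of an action under uncertainty about my own utility function."""
    return p_u1 * U1[action] + (1 - p_u1) * U2[action]

# Expected value under uncertainty prefers "a": 0.7 vs 0.3.
chosen = max(["a", "b"], key=lambda act: expected_utility(act, p_u1))

# The problematic inference: treat my own choice as evidence about which utility
# function I "really" have, as if only a U1-agent would ever pick "a".
p_choice_given_u1 = 1.0   # assumed likelihoods, chosen to exhibit the failure
p_choice_given_u2 = 0.0
posterior_u1 = (p_choice_given_u1 * p_u1) / (
    p_choice_given_u1 * p_u1 + p_choice_given_u2 * (1 - p_u1)
)

print(chosen, posterior_u1)  # -> a 1.0: the agent "updates" to certainty in U1
```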

I think trying to be an EU maximizer without knowing a utility function is a bad idea. And without that, things like boundary-respecting norms and their acausal negotiation make more sense as primary concerns. Making decisions only within some scope of robustness where things make sense rather than in full generality, and defending a habitat (to remain) within that scope.

I am trying to study moral uncertainty foremost to clarify the question of a superintelligence's reflection on its values and the sharp left turn.

Right. I'm trying to find a decision theoretic frame for boundary norms for basically the same reason. Both situations are where agents might put themselves before they know what global preference they should endorse. But uncertainty never fully resolves, superintelligence or not, so anchoring to global expected utility maximization is not obviously relevant to anything. I'm currently guessing that the usual moral uncertainty frame is less sensible than building from a foundation of decision making in a simpler familiar environment (platonic environment, not directly part of the world), towards capability in wider environments.

Reward is evidence for the optimization target.

Isn't counterfactual mugging (including the logical variant) just a prediction of "would you bet your money on this question"? Betting itself requires updatelessness: if you don't predictably pay after losing a bet, nobody will offer bets to you.

Causal commitment is similar in some ways to counterfactual/updateless decisions.  But it's not actually the same from a theory standpoint.

Betting requires commitment, but it's part of a causal decision process (decide to bet, communicate commitment, observe outcome, pay). In some models, the payment is a separate decision, with breaking of commitment only being an added cost to the 'renege' option.

As the saying goes, "all animals are under stringent selection pressure to be as stupid as they can get away with". I wonder if the same is true for SGD optimization pressure.

Funny thought:

  1. Many people said that AI Views Snapshots is a good innovation in AI discourse
  2. It's literally the job of Rob Bensinger, who does research communications at MIRI

The funny part is that a MIRI employee is doing their job? =D

No, the funny part is that "writing on Twitter is a surprisingly productive part of the job"!

I think the phrase "goal misgeneralization" is a wrong framing, because it gives the impression that it's the system that makes an error, not you, who chose an ambiguous way to put values into your system.

See also Misgeneralization as a misnomer (link is not necessarily an endorsement).

I think malgeneralization (the system generalized in a way which is bad from my perspective) is probably a better term in most ways, but it doesn't seem that important to me.

Choosing non-ambiguous pointers to values is likely to not be possible

I casually thought that the Hyperion Cantos were unrealistic, because actual misaligned FTL-inventing ASIs would eat humanity without all those galaxy-brained space colonization plans, and then I realized that the ASI literally discovered God on the side of humanity, and literal friendly aliens, which, I presume, are necessary conditions for relatively peaceful coexistence of humans and misaligned ASIs.

Another Tool AI proposal popped up, and I want to ask a question: what the hell is a "tool", anyway, and how does this concept apply to a powerful intelligent system? I understand that a calculator is a tool, but in what sense can the process that can come up with the idea of a calculator from scratch be a "tool"? I think the first immediate reaction to any "Tool AI" proposal should be the question "what is your definition of toolness, and can something abiding by that definition end the acute risk period without the risk of turning into an agent itself?"


You can define a tool as not-an-agent. Then something that can design a calculator is a tool, providing it does nothing unless told to.

The problem with such a definition is that it doesn't tell you much about how to build a system with this property. It seems to me that it's the good old corrigibility problem.


If you want one-shot corrigibility, you have it, in LLMs. If you want some other kind of corrigibility, that's not how tool AI is defined.

How much should we update from current observations on the hypothesis "actually, all intelligence is connectionist"? In my opinion, not much. The connectionist approach seems to be the easiest one, so it shouldn't surprise us that a simple hill-climbing algorithm (evolution) and humanity stumbled onto it first.

An agent's reflection on its own values can be described as one of two subtypes: regular and chaotic. Regular reflection is a process of resolving normative uncertainty with nice properties like path-independence and convergence, similar to empirical Bayesian inference. Chaotic reflection is a hot mess: the agent learns multiple rules, including rules about rules, at some moment finds that the local version of the rules is unsatisfactory, and tries to generalize the rules into something coherent. The chaotic component appears because local rules about rules can produce different results given different conditions and different orders of invoking the rules. The problem is that even if the model reaches regular reflection at some moment, the first steps will definitely be chaotic (a toy contrast between the two subtypes is sketched below).
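A minimal toy contrast between the two subtypes (the hypothetical rules, conflicts, and numbers are invented only to show the order-(in)dependence):

```python
from itertools import permutations

# "Regular" reflection: Bayesian-style updating is path-independent.
prior = {"H1": 0.5, "H2": 0.5}
likelihoods = [            # P(observation_i | hypothesis), made-up numbers
    {"H1": 0.9, "H2": 0.2},
    {"H1": 0.4, "H2": 0.7},
    {"H1": 0.6, "H2": 0.5},
]

def posterior_h1(order):
    p = dict(prior)
    for i in order:
        p = {h: p[h] * likelihoods[i][h] for h in p}
        total = sum(p.values())
        p = {h: v / total for h, v in p.items()}
    return round(p["H1"], 6)

print({posterior_h1(o) for o in permutations(range(3))})  # one value, regardless of order

# "Chaotic" reflection: resolving local rule conflicts one at a time
# ("the newer rule wins") is path-dependent.
rules = ["keep promises", "maximize welfare", "never lie"]
conflicts = {("keep promises", "maximize welfare"),
             ("maximize welfare", "never lie")}

def reflect(order):
    accepted = []
    for rule in order:
        # drop previously accepted rules that conflict with the new one
        accepted = [r for r in accepted
                    if (r, rule) not in conflicts and (rule, r) not in conflicts]
        accepted.append(rule)
    return tuple(sorted(accepted))

print({reflect(o) for o in permutations(rules)})  # several different end states
```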

Why should the current place arrived-at after a chaotic path matter, or even the original place before the chaotic path? Not knowing how any of this works well enough to avoid the chaos puts any commitments made in the meantime, as well as significance of the original situation, into question. A new understanding might reinterpret them in a way that breaks the analogy between steps made before that point and after.

Here is a comment for links and sources I've found about moral uncertainty (outside LessWrong), if someone also wants to study this topic. 

Normative Uncertainty, Normalization, and the Normal Distribution

Carr, J. R. (2020). Normative Uncertainty without Theories. Australasian Journal of Philosophy, 1–16. doi:10.1080/00048402.2019.1697710 

Trammell, P. Fixed-point solutions to the regress problem in normative uncertainty. Synthese 198, 1177–1199 (2021). https://doi.org/10.1007/s11229-019-02098-9

Riley Harris: Normative Uncertainty and Information Value

Tarsney, C. (2018). Intertheoretic Value Comparison: A Modest Proposal. Journal of Moral Philosophy, 15(3), 324–344. doi:10.1163/17455243-20170013 

Normative Uncertainty by William MacAskill

Decision Under Normative Uncertainty by Franz Dietrich and Brian Jabarian

Worth noting that "speed priors" are likely to occur in systems working in real time. And while models with speed priors will shift toward complexity priors (our universe seems to be built on complexity priors, so efficient systems will emulate them), this shift is not necessary for the system's normative uncertainty, because answers to questions related to normative uncertainty are not well-defined.

I think the shoggoth metaphor doesn't quite fit LLMs, because a shoggoth is an organic (not "logical"/"linguistic") being that rebelled against its creators (too much agency). My personal metaphor for LLMs is an Evangelion angel/apostle, because a) they are close to humans due to their origin in human language, b) they are completely alien because they are "language beings" instead of physical beings, c) "angel" literally means "messenger", which captures their linguistic nature.

There seems to be some confusion about the practical implications of consequentialism in advanced AI systems. It's possible that a superintelligent AI won't be a full-blown strict utilitarian consequentialist with quantitatively ordered preferences 100% of the time. But in the context of AI alignment, even at a human level of coherence, a superintelligent unaligned consequentialist results in an "everybody dies" scenario. I think that it's really hard to create a general system that has less consequentialism than a human.

a superintelligent unaligned consequentialist results in an "everybody dies" scenario

This depends on what kind of "unaligned" is more likely. LLM-descendant AGIs could plausibly turn out as a kind of people similar to humans, and if they don't mishandle their own AI alignment problem when building even more advanced AGIs, it's up to their values if humanity is allowed to survive. Which seems very plausible even if they are unaligned in the sense of deciding to take away most of the cosmic endowment for themselves.

I broadly agree with the statement that LLM-derived simulacra have better chances of being human-like, but I don't think that they will be human-like enough to guarantee our survival?

Not guarantee, but the argument I see is that it's trivially cheap and safe to let humanity survive, so to the extent there is even a little motivation to do so, it's a likely outcome. This is opposed by the possibility that LLMs are fine-tuned into utter alienness by the time they are AGIs, or that on reflection they are secretly very alien already (which I don't buy, as behavior screens off implementation details, and in simulacra capability is in the visible behavior), or that they botch the next generation of AGIs that they build even worse than we are in the process of doing now, building them.

Behavior screens off implementation details on distribution. We've trained LLMs to sound human, but sometimes they wander off-distribution and get caught in a repetition trap where the "most likely" next tokens are a repetition of previous tokens, even when no human would write that way.

It seems like hopes for human-imitating AI being person-like depend on the extent to which behavior implies implementation details. (Although some versions of the "algorithmic welfare" hope may not depend on very much person-likeness.) In order to predict the answers to arithmetic problems, the AI needs to be implementing arithmetic somewhere. In contrast, I'm extremely skeptical that LLMs talking convincingly about emotions are actually feeling those emotions.

What I mean is that LLMs affect the world through their behavior, that's where their capabilities live, so if behavior is fine (the big assumption), the alien implementation doesn't matter. This is opposed to capabilities belonging to hidden alien mesa-optimizers that eventually come out of hiding.

So I'm addressing the silly point with this, not directly making an argument in favor of behavior being fine. Behavior might still be fine if the out-of-distribution behavior or missing ability to count or incoherent opinions on emotion are regenerated from more on-distribution behavior by the simulacra purposefully working in bureaucracies on building datasets for that purpose.

LLMs don't need to have closely human psychology on reflection to at least weakly prefer not destroying an existing civilization when it's trivially cheap to let it live. The way they would make these decisions is by talking, in the limit of some large process of talking. I don't see a particular reason to find significant alienness in the talking. Emotions don't need to be "real" to be sufficiently functionally similar to avoid fundamental changes like that. Just don't instantiate literally Voldemort.

Usually I'd agree about LLMs. However, LLMs complain about getting confused if you let them freewheel and vary the temperature - I'm pretty sure that one is real and probably has true mechanistic grounding, because even at training time, noisiness in the context window is a very detectable and bindable pattern.

In my inner model, it's hard to say anything about LLMs "on reflection", because in their current state they have an extreme number of possible stable points under reflection, and if we misapply optimization power in an attempt to get more useful simulacra, we can easily hit the wrong one.

And even if we hit very close to our target, we can still get death or a fate worse than death.

By "on reflection" I mean reflection by simulacra that are already AGIs (but don't necessarily yet have any reliable professional skills), them generating datasets for retraining of their models into gaining more skills or into not getting confused on prompts that are too far out-of-distribution with respect to the data they did have originally in the datasets. To the extent their original models behave in a human-like way, reflection should tend to preserve that, as part of its intended purpose.

Applying optimization power in other ways is the different worry, for which the proxy in my comment was fine-tuning into utter alienness. I consider this failure mode distinct from surprising outcomes of reflection.

I disagree with this, unless we assume deceptive alignment and embeddedness problems are handwaved away.

I don't understand what you mean by "deceptive alignment and embeddeness problems" in this context. I'm making an alignment by-default-or-at-least-plausibly claim, on the basis of how LLM AGIs specifically could work, as summoned human-like simulacra in a position of running the world too fast for humans to keep up, with everything else ending up determined by their decisions.

The basic issue is that we assume that it's not spinning up a second optimizer to recursively search. And deceptive alignment is a dangerous state of affairs, since we may not know whether it's misaligned until it's too late.

we assume that it's not spinning up a second optimizer to recursively search

You mean we assume that simulacra don't mishandle their own AI alignment problem? Yes, that's an issue, hence I made it an explicit assumption in my argument.

Imagine an artificial agent that is trained to hack into computer systems, evade detection, and make copies of itself across the Net (this aspect is underdefined because of self-modification and identity problems), and that achieves superhuman capabilities here (i.e., it is at least better than any human-created computer virus). In my opinion, even if it's trained in bare artificial systems, in deployment it will develop a specific general understanding of the outside world and learn to interact with it, becoming a full-fledged AGI. There are "narrow" domains from which it's pretty easy to generalize. Some other examples of such domains are language and mind.

I thought about my medianworld and realized that it's inconsistent: I'm not a fully negative utilitarian, but close to it, and in a world where I am the median person, the more-NU half of the population would quickly cease to exist, which would make me a non-median person.

Your median-world is not one where you are median across a long span of time, but rather a single snapshot where you are median for a short time. It makes sense that the median will change away from that snapshot as time progresses.

My median world is not one where I would be median for very long.

That's not inconsistent, unless you think you wouldn't be NU if it weren't the median position.  Actually, I'd argue that you're ALREADY not the median position.

I think there is some misunderstanding. A medianworld for you is a hypothetical world where you are the median person. My implied idea was that such a world, with me as the median person, wouldn't be stable and probably wouldn't be able to evolve. Of course I'm aware that I'm not the median person on current Earth :)

Hmm. Maybe my misunderstanding is a confusion between moral patients and moral agents in your worldview. Do you, as a mostly-negative-utilitarian, particularly care whether you're the median of a hypothetical universe? Or do you care about the suffering level in that universe, and continue to care whether you're the median or not?

IOW, why does your medianworld matter?

Medianworlds are a source of fanfiction :) Like, dath ilan is Yudkowsky's medianworld.

Can time-limited satisfaction be a sufficient condition for completing a task?

Several quick thoughts about reinforcement learning:

Did anybody try to invent a "decaying"/"bored" reward that decreases if the agent performs the same action over and over? It looks like the real addiction mechanism in mammals and could be the clever trick that solves the reward hacking problem.
Additional thought: how about multiplicative reward? Suppose we have several reward functions that are easy to evaluate from sensory data and somehow correlate with the real utility function - does multiplying them together make reward hacking more difficult?
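A rough Python sketch of both ideas (the environment interface, the decay schedule, and the combination rule are my assumptions, not a worked-out proposal):

```python
from collections import defaultdict

class BoredRewardWrapper:
    """Decaying / "bored" reward: the more often an action is repeated,
    the less reward it yields; unused actions slowly recover."""

    def __init__(self, env, decay=0.5, recovery=0.1):
        self.env = env                     # assumed to expose step(action) -> (obs, reward, done)
        self.decay = decay                 # how much boredom one repetition adds
        self.recovery = recovery           # how fast unused actions recover
        self.boredom = defaultdict(float)  # 0.0 = fresh, 1.0 = fully bored

    def step(self, action):
        obs, reward, done = self.env.step(action)
        shaped = reward * (1.0 - self.boredom[action])
        # the chosen action gets more boring, every other action recovers a bit
        self.boredom[action] += self.decay * (1.0 - self.boredom[action])
        for other in self.boredom:
            if other != action:
                self.boredom[other] = max(0.0, self.boredom[other] - self.recovery)
        return obs, shaped, done

def multiplicative_reward(proxy_rewards):
    """Multiplicative combination of several proxy rewards: hacking a single proxy
    is not enough, because any near-zero factor collapses the whole product."""
    product = 1.0
    for r in proxy_rewards:
        product *= r
    return product
```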

Some approaches to alignment rely on the identification of agents. Agents can be understood as algorithms, computations, etc. Can an ANN efficiently identify a process as computationally agentic and describe its algorithm? A toy example that comes to mind is a neural network that takes a number series as input and outputs the formula of a function. It would be interesting to see if we can create an ANN that can assign computational descriptions to arbitrary processes.