drocta · 3mo · 70

For example, it is not clear to me if once I consider a program that outputs 0101 I will simply ignore other programs that output that same thing plus one bit (e.g. 01010).

No, the thing about prefixes is about what strings encode a program, not about their outputs.
The purpose of this is mostly just to define a prior over possible programs, in a way that conveniently ensures that the total probability assigned over all programs is at most 1. Seeing as it still works for different choices of language, it probably doesn't need to use exactly this way of defining the probabilities, and I think any reasonable distribution over programs will do (at least, after enough observations).
But, while I think another distribution over programs should work, this thing with the prefix-free language is the standard way of doing it, and there are reasons it is nice.
The analogy for a normal programming language would be if no python script was a prefix of any other python script (which isn't true of python scripts, but could be if they were required to end with some "end of program" string)
There will be many different programs which produce the exact same output when run, and will all be considered when doing Solomonoff induction.
 

The programs in A have 5 bits of Kolmogorov complexity each. The programs in B have 6 bits. The program C has 4

This may be pedantic of me, but I wouldn't call the length of a program the Kolmogorov complexity of that program. The lengths of the programs are (upper bounds on) the Kolmogorov complexity of the outputs of the programs. The Kolmogorov complexity of a program g would be the length of the shortest program which outputs the program g, not the length of g.

When you say that program C has 4 bits, is that just a value you picked, or are you obtaining that from somewhere?

Also, for a prefix-free programming language, you can't have 2^5 valid programs of length 5, and 2^6 programs of length 6, because if all possible binary strings of length 5 were valid programs, then no string of length 6 would be a valid program.
This is probably getting away from the core points though
(You could have the programming language be such that, e.g., 00XXXXX outputs the bits XXXXX, and 01XXXXXX outputs the bits XXXXXX, and other programs start with a 1, and any other program one might want to encode is somehow encoded using some scheme)
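(To make the prefix-free idea concrete, here is a minimal sketch of that toy scheme in Python; the scheme itself is just the made-up example from the parenthetical above, not anything standard:)

```python
def parse_program(bits):
    """Toy prefix-free scheme from the parenthetical above:
    '00' followed by 5 bits is a program that outputs those 5 bits (length 7),
    '01' followed by 6 bits is a program that outputs those 6 bits (length 8).
    Returns (output, program_length), or None if no complete program is a prefix."""
    if bits.startswith("00") and len(bits) >= 7:
        return bits[2:7], 7
    if bits.startswith("01") and len(bits) >= 8:
        return bits[2:8], 8
    return None  # programs starting with '1' (everything else) are left out of this sketch

# No program in this scheme is a prefix of another: the first two bits fix the
# total length, so two distinct complete programs can never be prefixes of each other.
print(parse_program("0010110"))   # ('10110', 7)
print(parse_program("01101101"))  # ('101101', 8)
```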

the priors for each models will be 2^-5 for model A, 2^-6 for model B and 2^-4 for model C, according to their Kolmogorov complexity? 

yeah, the (non-normalized) prior for each will be  2^(-(length of a program which directly encodes a 5 bit string to output)) for the programs which directly encode some 5 bit string and output it,  2^(-(length of a program which directly encodes a 6 bit string to output)) for the programs which directly encode some 6 bit string and output it, and (say) 2^(-4) for program C.
And those likelihoods you gave are all correct for those models.

So, then, the posteriors (prior to normalization)
would be 2^(-(length of a program which directly encodes a 5 bit string to output)) (let's say this is 2^(-7)) for the program that essentially is print("HTHHT"),
2^(-(length of a program which directly encodes a 6 bit string to output)) (let's say this is 2^(-8)) for the programs that essentially are print("HTHHTH") and print("HTHHTT") respectively,
2^(-4) * 2^(-5) = 2^(-9) for model C.

If we want to restrict to these 4 programs, then, adding these up, we get 2^(-7) + 2^(-8) + 2^(-8) + 2^(-9) = 2^(-6) + 2^(-9) = 9 * 2^(-9), and dividing that, we get
(4/9) chance for the program that hardcodes HTHHT (say, 0010110)
(2/9) chance for the program that hardcodes HTHHTH (say, 01101101)
(2/9) chance for the program that hardcodes HTHHTT (say, 01101100)
(1/9) chance for the program that produces a random 5 bit string. (say, 1000)
So, in this situation, where we've restricted to these programs, the posterior probability distribution for "what is the next bit" would be
(4/9)+(1/9)=(5/9) chance that "there is no next bit" (this case might usually be disregarded/discarded, idk.)
(2/9) chance that the next bit is H
(2/9) chance that the next bit is T
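(A quick sanity check of that arithmetic, as a Python sketch; the program lengths 7, 8, 8, and 4 are the illustrative lengths assumed above:)

```python
from fractions import Fraction

# (description, prior 2^-length, likelihood of the observation "HTHHT", next-bit distribution)
models = [
    ("hardcodes HTHHT",     Fraction(1, 2**7), Fraction(1),       {"no next bit": 1}),
    ("hardcodes HTHHTH",    Fraction(1, 2**8), Fraction(1),       {"H": 1}),
    ("hardcodes HTHHTT",    Fraction(1, 2**8), Fraction(1),       {"T": 1}),
    ("random 5-bit string", Fraction(1, 2**4), Fraction(1, 2**5), {"no next bit": 1}),
]

weights = [prior * lik for _, prior, lik, _ in models]
total = sum(weights)                       # 9 * 2^(-9)
posteriors = [w / total for w in weights]  # 4/9, 2/9, 2/9, 1/9

next_bit = {}
for post, (_, _, _, dist) in zip(posteriors, models):
    for outcome, p in dist.items():
        next_bit[outcome] = next_bit.get(outcome, Fraction(0)) + post * p

print(posteriors)  # posteriors 4/9, 2/9, 2/9, 1/9
print(next_bit)    # no next bit: 5/9, H: 2/9, T: 2/9
```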

drocta · 3mo · 10

Thanks! The specific thing I was thinking about most recently was indeed specifically about context length, and I appreciate the answer tailored to that, as it basically fully addresses my concerns in this specific case.

However, I also did mean to ask the question more generally. I kinda hoped that the answers might also be helpful to others who had similar questions (as well as if I had another idea meeting the same criteria in the future), but maybe thinking other people with the same question would find the question+answers here, was not super realistic, idk.

Answer by drocta · Jan 10, 2024 · 41

Here is my understanding:
we assume a programming language where a program is a finite sequence of bits, and such that no program is a prefix of another program. So, for example, if 01010010 is a program, then 0101 is not a program.
Then, the (not-normalized) prior probability for a program is 2^(-(length of the program)).
Why that probability?
If you take any infinite sequence of bits, then, because no program is a prefix of any other program, at most one program will be a prefix of that sequence of bits.
If you randomly (with uniform distribution) select an infinite sequence of bits, the probability that the sequence of bits has a particular program as a prefix is then 2^(-(length of the program)) (because there's a factor of (1/2) for each of the bits of the program, and if the first (length of the program) bits match the program, then it doesn't matter what the rest of the bits that come after are.)
(Ah, I suppose you don't strictly need to talk about infinite sequences of bits, and you could just talk about randomly picking the value of the next bit, stopping if it ever results in a valid program in the programming language..., not sure which makes it easier to think about.)

If you want this to be an actual prior, you can normalize this by dividing by (the sum over all programs, of 2^(-(length of that program))).

The usual way of defining Solomonoff induction, I believe has the programs being deterministic, but I've read that allowing them to use randomness has equivalent results, and may make some things conceptually easier.
So, I'll make some educated guesses about how to incorporate the programs having random behavior.

Let G be the random variable for "which program gets selected", and g be used to refer to any potential particular value for that variable. (I avoided using p for program because I want to use it for probability. The choice of the letter g was arbitrary.)
Let O be the random variable for the output observed, and let o be used for any particular value for that variable.



By Bayes' theorem, P(G=g|O=o) = P(O=o|G=g) * P(G=g) / P(O=o).
And, given a program g, the idea of P(O=o|G=g) makes sense, and P(G=g) is proportional to 2^(-(length of g)) (missing a normalization constant, but this will be the same across all g),
And, P(O=o) will also be a normalization constant that is the same across all g.

And so, if you can compute the values P(O=o|G=g) (the program g may take too long to run in practice), you can compute values proportional to P(G=g|O=o).
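(For concreteness, a small sketch of that computation, restricted to a finite set of candidate programs; `likelihood` here stands in for P(O=o|G=g) and is assumed to be computable, which for real programs it may not be:)

```python
def posterior_over_programs(program_lengths, observation, likelihood):
    """program_lengths: dict mapping a program (or a name for it) to its length in bits.
    likelihood(g, o) stands in for P(O=o | G=g).
    Returns values proportional to P(G=g | O=o), normalized over these candidates."""
    weights = {
        g: 2.0 ** (-length) * likelihood(g, observation)
        for g, length in program_lengths.items()
    }
    total = sum(weights.values())  # plays the role of the normalization constants
    return {g: w / total for g, w in weights.items()}
```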

Does that help any?
(apologies if this should be a "comment" rather than an "answer". Hoping it suffices.)

drocta · 10mo · 20

Well, I was kinda thinking of the reference distribution as being, say, a distribution of human behaviors in a certain context (as filtered through a particular user interface), though, I guess that way of doing it would only make sense within limited contexts, not general contexts where whether the agent is physically a human or something else would matter. And in this sort of situation, well, the action of "modify yourself to no longer be a quantilizer" would not be in the human distribution, because the actions to do that are not applicable to humans (as humans are, presumably, not quantilizers, and the types of self-modification actions that would be available are not the same). Though, "create a successor agent" could still be in the human distribution.

Of course, one doesn't have practical access to "the true probability distribution of human behaviors in context M", so I guess I was imagining a trained approximation to this distribution.

Hm, well, suppose that the distribution over human-like behaviors includes both making an agent which is a quantilizer and making one which isn't, both of equal probability. Hm. I don't see why a general quantilizer in this case would pick the quantilizer over the plain optimizer, as the utility...

Hm...
I get the idea that the "quantilizers correspond to optimizing an infra-function of form [...]" thing is maybe dealing with a distribution over a single act?

Or... if we have a utility function over histories until the end of the episode, then, if one has a model of how the environment will be and how one is likely to act in all future steps, given each of one's potential actions in the current step, one gets an expected utility conditioned on each of the potential actions in the current step, and this works as a utility function over actions for the current step,
and if one acts as a quantilizer over that, each step... does that give the same behavior as an agent optimizing an infra-function defined using the condition with the norm described in the post, in terms of the utility function over histories for an entire episode, and reference distributions for the whole episode?

argh, seems difficult...
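(For reference, a minimal sketch of the kind of per-step quantilizer being discussed, using a simple sample-then-take-the-top-q-fraction implementation; the names and details are my own illustrative assumptions, not anything from the post:)

```python
import random

def quantilize(sample_reference_action, step_utility, q=0.1, n_samples=1000, rng=random):
    """One quantilizer step: draw actions from the reference distribution,
    keep the top q fraction by expected utility for the current step
    (the per-step utility constructed as described above), and pick
    uniformly among those."""
    samples = [sample_reference_action() for _ in range(n_samples)]
    samples.sort(key=step_utility, reverse=True)
    top = samples[: max(1, int(q * n_samples))]
    return rng.choice(top)
```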

drocta · 10mo · 10

For the "Crappy Optimizer Theorem", I don't understand why condition 4, that if  , then  , isn't just a tautology[1]. Surely if  , then no-matter what  is being used,
as  , then letting  , then  , and so  .

I guess if the 4 conditions are seen as conditions on a function  (where they are written for  ), then it no longer is automatic, and it is just when specifying that  for some , that condition 4 becomes automatic?

______________

[start of section spitballing stuff based on the crappy optimizer theorem]
Spitball 1:
What if instead of saying  , we had  ? would we still get the results of the crappy optimizer theorem?
If s(f) is now a distribution over X, then I suppose instead of writing Q(s)(f)=f(s(f)) we should write Q(s)(f) = s(f)(f), and, in this case, the first 2 and 4th conditions seem just as reasonable. The third condition... seems like it should also be satisfied?

Spitball 2:
While I would expect that the 4 conditions might not be exactly satisfied by, e.g. gradient descent, I would kind of expect basically any reasonable deterministic optimization process to at least "almost" satisfy them?  (like, maybe gradient-descent-in-practice would fail condition 1 due to floating point errors, but not too badly in reasonable cases).
Do you think that a modification of this theorem for functions Q(s) which only approximately satisfy conditions 1-3, would be reasonably achievable?

______________

  1. ^

    I might be stretching the meaning of "tautology" here. I mean something provable in our usual background mathematics, and which therefore adding it as an additional hypothesis to a theorem, doesn't let us show anything that we couldn't show without it being an explicit hypothesis.

drocta · 10mo · 20

I thought CDT was considered not reflectively-consistent because it fails Newcomb's problem?
(Well, not if you define reflective stability as meaning preservation of anti-Goodhart features, but, CDT doesn't have an anti-Goodhart feature (compared to some base thing) to preserve, so I assume you meant something a little broader?) 
Like, isn't it true that a CDT agent who anticipates being in Newcomb-like scenarios would, given the opportunity to do so, modify itself to be not a CDT agent? (Well, assuming that the Newcomb-like scenarios are of the form "at some point in the future, you will be measured, and based on this measurement, your future response will be predicted, and based on this the boxes will be filled")

My understanding of reflective stability was "the agent would not want to modify its method of reasoning". (E.g., a person with an addiction is not reflectively stable, because they want the thing (and pursue the thing), but would rather not want (or pursue) the thing.)
The idea being that, any ideal way of reasoning, should be reflectively stable.

And, I thought that what was being described in the part of this article about recovering quantilizers, was not saying "here's how you can use this framework to make quantalizers better", so much as "quantilizers fit within this framework, and can be described within it, where the infrafunction that produces quantilizer-behavior is this one: [the (convex) set of utility functions which differ (in absolute value) from the given one, by, in expectation under the reference policy, at most epsilon]"

So, I think the idea is that a quantilizer for a given utility function and reference distribution is, in effect, optimizing for an infrafunction that is/corresponds to the set of utility functions satisfying the bound in question,
and, therefore, any quantilizer, in a sense, is as if it "has this bound" (or, "believes this bound")
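(A small sketch of that bound as I understand it from the description above: a candidate utility function is in the set when it differs from the given one by at most epsilon, in expectation under the reference distribution. The sample-based check and the names are illustrative:)

```python
def in_bound_set(u, u_candidate, reference_samples, epsilon):
    """Check whether u_candidate lies within the set described above:
    E_{a ~ reference}[ |u(a) - u_candidate(a)| ] <= epsilon,
    with the expectation approximated by samples from the reference distribution."""
    diffs = [abs(u(a) - u_candidate(a)) for a in reference_samples]
    return sum(diffs) / len(diffs) <= epsilon
```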

And that therefore, any quantilizer should -

- wait.. that doesn't seem right..? I was going to say that any quantilizer should therefore be reflectively stable, but that seems like it must be wrong? What if the reference distribution includes always taking actions to modify oneself in a way that would result in not being a quantilizer? uhhhhhh

 

Ah, hm, it seems to me like the way I was imagining the distribution, and the context in which you were considering it, are rather different. I was thinking of it as being an accurate distribution of behaviors of some known-to-be-acceptably-safe agent, whereas it seems like you were considering it as having a much larger support, being much more spread out in what behaviors it has as comparably likely to other behaviors, with things being more ruled-out rather than ruled-in?

drocta · 10mo · 10

Whoops, yes, that should have said  , thanks for the catch! I'll edit to make that fix.

Also, yes, what things between  and   should be sent to, is a difficulty..
A thought I had which, on inspection doesn't work, is that (things between   and )  could be sent to  , but that doesn't work, because  might be terminal, but (thing between  and ) isn't terminal. It seems like the only thing that would always work would be for them to be sent to something that has an arrow (in B) to  (such as f(a), as you say, but, again as you mention, it might not be viable to determine f(a) from the intermediary state).

I suppose if f were a partial function, and one such that all states not in its domain have a path to a state which is in its domain, then that could resolve that?

I think, equivalently to that, if you modified the abstraction to get a new system B', defined
so that B' has a state for each state of B, along with a second copy of it with no in-going arrows and a single arrow going to the normal version,
uh, I think that would also handle it ok? But this would involve modifying the abstraction, which isn't nice. At least the abstraction embeds in the modified version of it though.
Though, if this is equivalent to allowing f to be partial, with the condition that anything not in its domain have arrows leading to things that are in its domain, then I guess it might help to justify a definition allowing f to be partial, provided it satisfies that condition.

Are these actually equivalent?

Suppose f is partial and satisfies that condition.
Then, define f' to agree with f on the domain of f, and for other a, pick an a'' in the domain of f such that a →* a'', and define f'(a) = f(a'').
In the deterministic case, we should pick a'' to be the first state in the path starting with a to be in the domain of f. (In the non-deterministic case, this feels like something that doesn't work very well...)
Then, for any non-terminal , either it is in the domain of f, in which case we have the existence of  such that a and where , or it isn't, in which case we have that there exists  such that a and where  , and so f' satisfies the required condition.
On the other hand, if we have an  satisfying the required condition, we can co-restrict it to B, giving a partial function  , where, if  isn't in the domain of f, then, assuming a is non-terminal, we get some  s.t.  a and where  , and, as the only things in B' which have any arrows going in are elements of B, we get that f'(a'') is in B, and therefore that a'' is in the domain of f.
But what if a is terminal? Well, if we require that  non-terminal implies  non-terminal, then this won't be an issue because pre(b) is always non-terminal, and so anything terminal is in the domain.
Therefore the co-restriction f of f', does yield a partial function satisfying the proposed condition to require for partial maps.

So, this gives a pair of maps, one sending (total) functions satisfying appropriate conditions to partial functions satisfying other conditions, and one sending the other way around, and where, if the computations are deterministic, these two are inverses of each other. (if not assuming deterministic, then I guess potentially only a one-sided inverse?)

So, these two things are equivalent.
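(A small sketch of the deterministic direction of this construction, with illustrative names: given the partial map and the step function of A, extend it by walking forward to the first state in the domain:)

```python
def extend_partial_map(partial_f, step_a, max_steps=10_000):
    """Deterministic case of the construction above: f'(a) = partial_f[a] if a is
    in the domain; otherwise f'(a) = partial_f[a''], where a'' is the first state
    on the path a -> step_a(a) -> ... that lies in the domain of partial_f.
    Assumes every state outside the domain eventually reaches the domain."""
    def f_prime(a):
        current = a
        for _ in range(max_steps):
            if current in partial_f:
                return partial_f[current]
            current = step_a(current)
        raise ValueError("no state in the domain was reached (assumption violated)")
    return f_prime
```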

uh... this seems a bit like an adjunction, but, not quite. hm.

drocta · 11mo · 10

A thought on the "but what if multiple steps in the actual-algorithm correspond to a single step in an abstracted form of the algorithm?" thing :
This reminds me a bit of, in the topic of "Abstract Rewriting Systems", the thing that the → vs →* distinction handles (the asterisk just indicating taking the transitive reflexive closure).

Suppose we have two abstract rewriting systems A and B.
(To make it match more closely what you are describing, we can suppose that every node has at most one outgoing arrow, to make it fit with how you have timesteps as functions, rather than being non-deterministic. This probably makes them less interesting as ARSs, but that's not a problem)
Then, we could say that f : A → B is a homomorphism (I guess) if,

for all a in A such that a has an outgoing arrow, there exists a' such that a → a' (in A) and f(a) →* f(a') (in B).

These should compose appropriately [err, see bit at the end for caveat], and form the morphisms of a category (where the objects are ARSs).
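(A small sketch of checking this condition for finite ARSs, represented as dictionaries from a state to its set of successors; the representation and names are my own illustrative choices:)

```python
def reachable(ars, start):
    """States reachable from `start` under the reflexive transitive closure (the
    starred arrow); `ars` maps each state to the set of its successors."""
    seen, frontier = {start}, [start]
    while frontier:
        x = frontier.pop()
        for y in ars.get(x, set()):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

def is_homomorphism(ars_a, ars_b, f):
    """For every state a of A with an outgoing arrow, some successor a' of a
    must satisfy: f(a') is reachable from f(a) in B."""
    return all(
        any(f[a2] in reachable(ars_b, f[a]) for a2 in successors)
        for a, successors in ars_a.items()
        if successors
    )
```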

I would think that this should handle any straightforward simulation of one Turing machine by another.

As for whether it  can handle complicated disguise-y ones, uh, idk? Well, if it like, actually simulates the other Turing machine, in a way where states of the simulated machine have corresponding states in the simulating machine, which are traversed in order, then I guess yeah. If the universal Turing machine instead does something pathological like, "search for another Turing machine along with a proof that the two have the same output behavior, and then simulate that other one", then I wouldn't think it would be captured by this, but also, I don't think it should be?

This setup should also handle the circuits example fine, and, as a bonus(?), can even handle like, different evaluation orders of the circuit nodes, if you allow multiple outgoing arrows.

And, this setup should, I think, handle anything that the "one step corresponds to one step" version handles?

It seems to me like this set-up, should be able to apply to basically anything of the form "this program implements this rough algorithm (and possibly does other stuff at the same time)"? Though to handle probabilities and such I guess it would have to be amended.

I feel like I'm being overconfident/presumptuous about this, so like,
sorry if I'm going off on something that's clearly not the right type of thing for what you're looking for?

__________

Checking that it composes:

suppose A, B, C are ARSs, with a homomorphism f from A to B and another homomorphism from B to C,
then for any  which has an outgoing arrow and where f(a) has an outgoing arrow,   where 
so either b' = f(a) or there is some sequence  where  , and so therefore,
Ah, hm, maybe we need an additional requirement that the maps be surjective?
or, hm, as we have the assumption that  has an outward arrow,
then we get that there is an  s.t.  and s.t. 

ok, so, I guess the extra assumption that we need isn't quite "the maps are surjective", so much as, "the maps f are s.t. if a is a non-terminal state, then f(a) is a non-terminal state", where "non-terminal state" means one with an outgoing arrow. This seems like a reasonable condition to assume.

drocta · 11mo · 20

In the line that ends with "even if God would not allow complete extinction.", my impulse is to include " (or other forms of permanent doom)" before the period, but I suspect that this is due to my tendency to include excessive details/notes/etc. and probably best not to actually include in that sentence.

(Like, for example, if there were no more adult humans, only billions of babies grown in artificial wombs (in a way staggered in time) and then kept in a state of chemically induced euphoria until the age of 1, and then killed, that technically wouldn't be human extinction, but, that scenario would still count as doom.)

Regarding the part about "it is secular scientific-materialists who are doing the research which is a threat to my values" part: I think it is good that it discusses this! (and I hadn't thought about including it)
But, I'm personally somewhat skeptical that CEV really works as a solution to this problem? Or at least, in the simpler ways of it being described.
Like, I imagine there being a lot of path-dependence in how a culture's values would "progress" over time, and I see little reason why a sequence of changes of the form "opinion/values changing in response to an argument that seems to make sense" would be that unlikely to produce values that the initial values would deem horrifying? (or, which would seem horrifying to those in an alternate possible future that just happened to take a difference branch in how their values evolved)

[EDIT: at this point, I start going off on a tangent which is a fair bit less relevant to the question of improving Stampy's response, so, you might want to skip reading it, idk]

My preferred solution is closer to, "we avoid applying large amounts of optimization pressure to most topics, instead applying it only to topics where there is near-unanimous agreement on what kinds of outcomes are better (such as, "humanity doesn't get wiped out by a big space rock", "it is better for people to not have terrible diseases", etc.), while avoiding these optimizations having much effect on other areas where there is much disagreement as to what-is-good."
 

Though, it does seem plausible to me, as a somewhat scary idea, that the thing I just described is perhaps not exactly coherent?

 

(that being said, even though I have my doubts about CEV, at least in the simpler ways it is described, I do think it would of course be better than doom.
Also, it is quite possible that I'm just misunderstanding the idea of CEV in a way that causes my concerns, and maybe it was always meant to exclude the kinds of things I describe being concerned about?)

drocta · 1y · 90

I want to personally confirm a lot of what you've said here. As a Christian, I'm not entirely freaked out about AI risk because I don't believe that God will allow it to be completely the end of the world (unless it is part of the planned end before the world is remade? But that seems unlikely to me.), but that's no reason that it can't still go very very badly (seeing as, well, the Holocaust happened).

In addition, the thing that seems to me most likely to be the way that God doesn't allow AI doom, is for people working on AI safety to succeed. One shouldn't rely on miracles and all that (unless [...]), so, basically I think we should plan/work as if it is up to humanity to prevent AI doom, only that I'm a bit less scared of the possibility of failure, but I would hope only in a way that results in better action (compared to panic) rather than it promoting inaction.

(And, a likely alternative, if we don't succeed, I think of as likely being something like,
really-bad-stuff happens, but then maybe an EMP (or many EMPs worldwide?) gets activated, solving that problem, but also causing large-scale damage to power-grids, frying lots of equipment, and causing many shortages of many things necessary for the economy, which also causes many people to die. idk.)
