drocta

Yes. I believe that is consistent with what I said.

"not((necessarily, for each thing) : has [x] -> those [x] are such that P_1([x]))"
is equivalent to, " (it is possible that something) has [x], but those [x] are not such that P_1([x])"

not((necessarily, for each thing) : has [x] such that P_2([x]) -> those [x] are such that P_1([x]))
is equivalent to "(it is possible that something) has [x], such that P_2([x]), but those [x] are not such that P_1([x])".

The latter implies the former, as (A and B and C) implies (A and C), and so the latter is stronger, not weaker, than the former.

Right?

drocta

Doesn't "(has preferences, and those preferences are transitive) does not imply (completeness)" imply "(has preferences) does not imply (completeness)"? Surely if "having preferences" implied completeness, then "having transitive preferences" would also imply completeness?

drocta

"Political category" seems, a bit strong? Like, sure, the literal meaning of "processed" is not what people are trying to get at. But, clearly, "those processing steps that are done today in the food production process which were not done N years ago" is a thing we can talk about. (by "processing step" I do not include things like "cleaning the equipment", just steps which are intended to modify the ingredients in some particular way. So, things like, hydrogenation. This also shall not be construed as indicating that I think all steps that were done N years ago were better than steps done today.)

drocta

For example, it is not clear to me if once I consider a program that outputs 0101 I will simply ignore other programs that output that same thing plus one bit (e.g. 01010).

No, the thing about prefixes is about which strings encode a program, not about the programs' outputs.
The purpose of this is mostly just to define a prior over possible programs, in a way that conveniently ensures that the total probability assigned over all programs is at most 1. Seeing as it still works for different choices of language, it probably doesn't need to use exactly this way of defining the probabilities, and I think any reasonable distribution over programs will do (at least, after enough observations).
But, while I think another distribution over programs should work, this thing with the prefix-free language is the standard way of doing it, and there are reasons it is nice.
The analogy for a normal programming language would be if no python script were a prefix of any other python script (which isn't true of python scripts, but could be made true by requiring every script to end with some "end of program" string).
There will be many different programs which produce the exact same output when run, and all of them will be considered when doing Solomonoff induction.
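To make the prefix-free thing concrete, here's a tiny sketch (the particular set of bitstrings is made up for illustration, and isn't any real programming language):

```python
# Toy illustration: a prefix-free set of bitstring "programs" and the
# 2^(-length) prior over them. The specific strings are made up.
programs = ["00", "010", "011", "10", "110", "111"]

def is_prefix_free(progs):
    # No program may be a proper prefix of another program.
    return not any(p != q and q.startswith(p) for p in progs for q in progs)

assert is_prefix_free(programs)

# The (unnormalized) prior of each program is 2^(-length).
prior = {p: 2 ** -len(p) for p in programs}

# Prefix-freeness guarantees this total is at most 1 (Kraft's inequality);
# here it happens to be exactly 1.0, since this little set is "complete".
print(sum(prior.values()))
```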
 

The programs in A have 5 bits of Kolmogorov complexity each. The programs in B have 6 bits. The program C has 4

This may be pedantic of me, but I wouldn't call the lengths of the programs the Kolmogorov complexity of the programs. The lengths of the programs are (upper bounds on) the Kolmogorov complexity of the outputs of the programs. The Kolmogorov complexity of a program g would be the length of the shortest program which outputs the program g, not the length of g.

When you say that program C has 4 bits, is that just a value you picked, or are you obtaining that from somewhere?

Also, for a prefix-free programming language, you can't have 2^5 valid programs of length 5 and 2^6 valid programs of length 6, because if all possible binary strings of length 5 were valid programs, then no string of length 6 would be a valid program.
This is probably getting away from the core points, though.
(You could have the programming language be such that, e.g., 00XXXXX outputs the bits XXXXX, and 01XXXXXX outputs the bits XXXXXX, and other programs start with a 1, with any other program one might want to express encoded using some scheme.)

the priors for each models will be 2^-5 for model A, 2^-6 for model B and 2^-4 for model C, according to their Kolmogorov complexity? 

Yeah, the (non-normalized) prior for each will be 2^(-(length of a program which directly encodes a 5 bit string to output)) for the programs which directly encode some 5 bit string and output it, 2^(-(length of a program which directly encodes a 6 bit string to output)) for the programs which directly encode some 6 bit string and output it, and (say) 2^(-4) for program C.
And those likelihoods you gave are all correct for those models.

So, then, the posteriors (prior to normalization) would be:
2^(-(length of a program which directly encodes a 5 bit string to output)) (let's say this is 2^(-7)) for the program that essentially is print("HTHHT"),
2^(-(length of a program which directly encodes a 6 bit string to output)) (let's say this is 2^(-8)) for each of the programs that essentially are print("HTHHTH") and print("HTHHTT"), and
2^(-4) * 2^(-5) = 2^(-9) for model C.

If we want to restrict to these 4 programs, then, adding these up, we get 2^(-7) + 2^(-8) + 2^(-8) + 2^(-9) = 2^(-6) + 2^(-9) = 9 * 2^(-9), and dividing by that, we get
(4/9) chance for the program that hardcodes HTHHT (say, 0010110)
(2/9) chance for the program that hardcodes HTHHTH (say, 01101101)
(2/9) chance for the program that hardcodes HTHHTT (say, 01101100)
(1/9) chance for the program that produces a random 5 bit string. (say, 1000)
So, in this situation, where we've restricted to these programs, the posterior probability distribution for "what is the next bit" would be
(4/9)+(1/9)=(5/9) chance that "there is no next bit" (this case might usually be disregarded/discarded, idk.)
(2/9) chance that the next bit is H
(2/9) chance that the next bit is T
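If it helps, here's that arithmetic spelled out in code (using the made-up encodings and illustrative lengths from above):

```python
from fractions import Fraction

# The four programs we restricted to, with their (illustrative) priors.
# 'outputs' maps each possible output to its probability under that program;
# None marks program C, which outputs a uniformly random 5-bit string.
models = {
    "0010110":  {"prior": Fraction(1, 2**7), "outputs": {"HTHHT": Fraction(1)}},
    "01101101": {"prior": Fraction(1, 2**8), "outputs": {"HTHHTH": Fraction(1)}},
    "01101100": {"prior": Fraction(1, 2**8), "outputs": {"HTHHTT": Fraction(1)}},
    "1000":     {"prior": Fraction(1, 2**4), "outputs": None},
}

def likelihood(prog, observed):
    # Probability that the program's output starts with the observed bits.
    if models[prog]["outputs"] is None:  # program C: each bit is a coin flip
        return Fraction(1, 2**len(observed)) if len(observed) <= 5 else Fraction(0)
    return sum(p for out, p in models[prog]["outputs"].items()
               if out.startswith(observed))

observed = "HTHHT"
unnorm = {g: m["prior"] * likelihood(g, observed) for g, m in models.items()}
total = sum(unnorm.values())  # 9 * 2^(-9), as above
posterior = {g: w / total for g, w in unnorm.items()}
print(posterior)  # 4/9, 2/9, 2/9, 1/9 -- matching the numbers above
```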

drocta

Thanks! The specific thing I was thinking about most recently was indeed specifically about context length, and I appreciate the answer tailored to that, as it basically fully addresses my concerns in this specific case.

However, I also did mean to ask the question more generally. I kinda hoped that the answers might also be helpful to others who had similar questions (as well as to me, if I had another idea meeting the same criteria in the future), but maybe expecting other people with the same question to find the question+answers here was not super realistic, idk.

Answer by drocta

Here is my understanding:
we assume a programming language where a program is a finite sequence of bits, and such that no program is a prefix of another program. So, for example, if 01010010 is a program, then 0101 is not a program.
Then, the (not-normalized) prior probability for a program is 2^(-(length of the program)).
Why that probability?
If you take any infinite sequence of bits, then, because no program is a prefix of any other program, at most one program will be a prefix of that sequence of bits.
If you randomly (with uniform distribution) select an infinite sequence of bits, the probability that the sequence of bits has a particular program as a prefix is then 2^(-(length of the program)) (because there's a factor of (1/2) for each of the bits of the program, and if the first (length of the program) bits match the program, then it doesn't matter what the rest of the bits that come after are).
(Ah, I suppose you don't strictly need to talk about infinite sequences of bits, and you could just talk about randomly picking the value of the next bit, stopping if the bits so far ever form a valid program in the programming language... not sure which makes it easier to think about.)

If you want this to be an actual prior, you can normalize this by dividing by (the sum over all programs, of 2^(-(length of the program))).
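(If a sanity check helps: here's a quick Monte Carlo version of that argument, with an arbitrary made-up program string standing in for a real program:)

```python
import random

program = "010"  # an arbitrary stand-in; any fixed bitstring works
trials = 100_000

def random_bits(n):
    return "".join(random.choice("01") for _ in range(n))

# A random infinite bit sequence has the program as a prefix iff its first
# len(program) bits equal the program; the bits after that don't matter,
# so we only ever need to generate the first len(program) bits.
hits = sum(random_bits(len(program)) == program for _ in range(trials))
print(hits / trials, "vs", 2 ** -len(program))  # both close to 0.125
```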

The usual way of defining Solomonoff induction, I believe has the programs being deterministic, but I've read that allowing them to use randomness has equivalent results, and may make some things conceptually easier.
So, I'll make some educated guesses about how to incorporate the programs having random behavior.

Let G be the random variable for "which program gets selected", and g be used to refer to any potential particular value for that variable. (I avoided using p for program because I want to use it for probability. The choice of the letter g was arbitrary.)
Let O be the random variable for the output observed, and let o be used for any particular value for that variable.



By Bayes' theorem, P(G=g|O=o) = P(O=o|G=g) * P(G=g) / P(O=o).
And, given a program g, the idea of P(O=o|G=g) makes sense, and P(G=g) is proportional to 2^(-(length of g)) (missing a normalization constant, but this will be the same across all g),
And, P(O=o) will also be a normalization constant that is the same across all g.

And so, if you can compute values P(O=o|G=g) (though the program g may take too long to run in practice), we can compute values proportional to P(G=g|O=o).
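Schematically, something like this (p_output_given here is a hypothetical stand-in for evaluating P(O=o|G=g), which in general may be expensive or impossible to actually compute):

```python
# Values proportional to P(G=g|O=o), for a program g given as a bitstring.
# Both P(O=o) and the prior's normalizer cancel when comparing programs,
# so ratios of these values are ratios of posteriors.
def unnormalized_posterior(g: str, o: str, p_output_given) -> float:
    prior = 2 ** -len(g)                  # proportional to P(G=g)
    return prior * p_output_given(g, o)   # times P(O=o|G=g)
```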

Does that help any?
(apologies if this should be a "comment" rather than an "answer". Hoping it suffices.)

drocta

Well, I was kinda thinking of the reference distribution as being, say, a distribution of human behaviors in a certain context (as filtered through a particular user interface), though I guess that way of doing it would only make sense within limited contexts, not general contexts where whether the agent is physically a human or something else would matter. And in this sort of situation, the action of "modify yourself to no-longer be a quantilizer" would not be in the human distribution, because the actions to do that are not applicable to humans (as humans are, presumably, not quantilizers, and the types of self-modification actions that would be available are not the same). Though "create a successor agent" could still be in the human distribution.

Of course, one doesn't have practical access to "the true probability distribution of human behaviors in context M", so I guess I was imagining a trained approximation to this distribution.
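(For concreteness, the sort of thing I'm picturing is roughly the following sketch of a q-quantilizer acting on samples from such a trained approximation; human_like_actions and U below are hypothetical stand-ins:)

```python
import random

def quantilize(base_samples, utility, q):
    # Pick an action from (an empirical stand-in for) the top q fraction of
    # the base/reference distribution, ranked by the utility function.
    # q = 1 just reproduces the base distribution; q -> 0 approaches argmax,
    # i.e. a plain optimizer.
    ranked = sorted(base_samples, key=utility, reverse=True)
    top = ranked[: max(1, int(q * len(ranked)))]
    return random.choice(top)  # uniform over the retained top fraction

# e.g. quantilize(human_like_actions, U, q=0.1), where human_like_actions are
# samples from the learned approximation of human behavior and U is the
# utility function -- both hypothetical names for this sketch.
```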

Hm, well, suppose that the distribution over human-like behaviors includes both making an agent which is a quantilizer and making one which isn't, both of equal probability. Hm. I don't see why a general quantilizer in this case would pick the quantilizer over the plain optimizer, as the utility...

Hm...
I get the idea that the "quantilizers correspond to optimizing an infra-function of form [...]" thing is maybe dealing with a distribution over a single act?

Or... if we have a utility function over histories until the end of the episode, then, if one has a model of how the environment will be and of how one is likely to act in all future steps, given each of one's potential actions in the current step, one gets an expected utility conditioned on each of the potential actions in the current step, and this works as a utility function over actions for the current step.
And if one acts as a quantilizer over that, each step... does that give the same behavior as an agent optimizing an infrafunction defined using the condition with the norm described in the post, in terms of the utility function over histories for an entire episode, and reference distributions for the whole episode?

argh, seems difficult...

drocta

For the "Crappy Optimizer Theorem", I don't understand why condition 4, that if f >= 0, then Q(s)(f) >= 0, isn't just a tautology[1]. Surely if f >= 0, then no-matter what s is being used,
as Q(s)(f) = f(s(f)), then letting x = s(f), then f(x) >= 0, and so Q(s)(f) >= 0.

I guess if the 4 conditions are seen as conditions on a function Q : (X -> R) -> R (where they are written for Q = Q(s)), then it no-longer is automatic, and it is just when specifying that Q = Q(s) for some s : (X -> R) -> X, that condition 4 becomes automatic?

______________

[start of section spitballing stuff based on the crappy optimizer theorem]
Spitball 1:
What if instead of saying s : (X -> R) -> X, we had s : (X -> R) -> (distributions over X)? Would we still get the results of the crappy optimizer theorem?
If s(f) is now a distribution over X, then, I suppose, instead of writing Q(s)(f) = f(s(f)), one should write Q(s)(f) = E_{x ~ s(f)}[f(x)], and, in this case, the first 2 and the 4th conditions seem just as reasonable. The third condition... seems like it should also be satisfied?

Spitball 2:
While I would expect that the 4 conditions might not be exactly satisfied by, e.g. gradient descent, I would kind of expect basically any reasonable deterministic optimization process to at least "almost" satisfy them?  (like, maybe gradient-descent-in-practice would fail condition 1 due to floating point errors, but not too badly in reasonable cases).
Do you think that a modification of this theorem for functions Q(s) which only approximately satisfy conditions 1-3, would be reasonably achievable?
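For what it's worth, here's the toy setup I have in mind when I say this, with a crude hill-climber standing in for gradient descent, and with condition 4 filled in as I read it above (so this is a sketch under my reading, not anything from the post):

```python
import random

X = [i / 100 for i in range(-300, 301)]  # crude finite stand-in for the domain

def hill_climb(f, steps=200):
    # A "crappy optimizer" s : (X -> R) -> X: local search standing in for
    # gradient descent; nothing guarantees it finds the true argmax.
    i = X.index(random.Random(0).choice(X))
    for _ in range(steps):
        for j in (i - 1, i + 1):
            if 0 <= j < len(X) and f(X[j]) > f(X[i]):
                i = j
    return X[i]

def Q(s):
    # Q(s)(f) = f(s(f)): the value the selector actually attains on f.
    return lambda f: f(s(f))

def Q_stochastic(s_dist, n=1000):
    # Spitball 1's variant: s_dist(f) samples from a distribution over X,
    # and we average, giving a Monte Carlo estimate of E_{x ~ s(f)}[f(x)].
    return lambda f: sum(f(s_dist(f)) for _ in range(n)) / n

# Condition 4 as I read it: f >= 0 everywhere implies Q(s)(f) >= 0.
# It's immediate here, since Q(s)(f) = f(s(f)) is just one of f's values.
f = lambda x: (x - 1) ** 2    # >= 0 everywhere
print(Q(hill_climb)(f) >= 0)  # True, tautologically
```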

______________

  1. ^

    I might be stretching the meaning of "tautology" here. I mean something provable in our usual background mathematics, and which therefore adding it as an additional hypothesis to a theorem, doesn't let us show anything that we couldn't show without it being an explicit hypothesis.

drocta

I thought CDT was considered not reflectively-consistent because it fails Newcomb's problem?
(Well, not if you define reflective stability as meaning preservation of anti-Goodhart features, but, CDT doesn't have an anti-Goodhart feature (compared to some base thing) to preserve, so I assume you meant something a little broader?) 
Like, isn't it true that a CDT agent who anticipates being in Newcomb-like scenarios would, given the opportunity to do so, modify itself to be not a CDT agent? (Well, assuming that the Newcomb-like scenarios are of the form "at some point in the future, you will be measured, and based on this measurement, your future response will be predicted, and based on this the boxes will be filled")

My understanding of reflective stability was "the agent would not want to modify its method of reasoning". (E.g., a person with an addiction is not reflectively stable, because they want the thing (and pursue the thing), but would rather not want (or pursue) the thing.)
The idea being that, any ideal way of reasoning, should be reflectively stable.

And, I thought that what was being described in the part of this article about recovering quantilizers, was not saying "here's how you can use this framework to make quantilizers better", so much as "quantilizers fit within this framework, and can be described within it, where the infrafunction that produces quantilizer-behavior is this one: [the (convex) set of utility functions which differ (in absolute value) from the given one, by, in expectation under the reference policy, at most epsilon]"

So, I think the idea is that a quantilizer for a given utility function and reference distribution is, in effect, optimizing for an infrafunction that is/corresponds-to the set of utility functions satisfying the bound in question,
and, therefore, any quantilizer, in a sense, is as if it "has this bound" (or, "believes this bound")
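In symbols, as I understand it (writing U for the given utility function, gamma for the reference distribution, and epsilon for the bound; the notation here is mine, not necessarily the post's):

```latex
% The set of utility functions the quantilizer is in effect robust over:
% those U' within epsilon of U, in expectation under the reference policy.
\left\{\, U' \;\middle|\; \mathbb{E}_{x \sim \gamma}\big[\, |U'(x) - U(x)| \,\big] \le \epsilon \,\right\}
```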

And that therefore, any quantilizer should -

- wait.. that doesn't seem right..? I was going to say that any quantilizer should therefore be reflectively stable, but that seems like it must be wrong? What if the reference distribution includes always taking actions to modify oneself in a way that would result in not being a quantilizer? uhhhhhh

 

Ah, hm, it seems to me like the way I was imagining the reference distribution, and the context in which you were considering it, are rather different. I was thinking of it as being an accurate distribution of behaviors of some known-to-be-acceptably-safe agent, whereas it seems like you were considering it as having a much larger support, being much more spread out in which behaviors it treats as comparably likely to other behaviors, with things being more ruled-out rather than ruled-in?

drocta

Whoops, yes, you're right about what that should have said, thanks for the catch! I'll edit to make that fix.

Also, yes, what the things between a and a' should be sent to, is a difficulty..
A thought I had which, on inspection doesn't work, is that (things between a and a') could be sent to f(a'), but that doesn't work, because f(a') might be terminal, but (a thing between a and a') isn't terminal. It seems like the only thing that would always work would be for them to be sent to something that has an arrow (in B) to f(a') (such as f(a), as you say, but, again as you mention, it might not be viable to determine f(a) from the intermediary state).

I suppose if f were a partial function, and one such that all states not in its domain have a path to a state which is in its domain, then that could resolve that?

I think equivalently to that, if you modified the abstraction to get B', defined as: states(B') = states(B) ∪ { pre(b) : b in states(B) }, and arrows(B') = arrows(B) ∪ { pre(b) -> b : b in states(B) },
so that B' has a state for each state of B, along with a second copy of it (pre(b), for each state b) with no in-going arrows and a single arrow going to the normal version,
uh, I think that would also handle it ok? But this would involve modifying the abstraction, which isn't nice. At least the abstraction embeds in the modified version of it, though.
Though, if this is equivalent to allowing f to be partial, with the condition that anything not in its domain have arrows leading to things that are in its domain, then I guess it might help to justify a definition allowing f to be partial, provided it satisfies that condition.

Are these actually equivalent?

Suppose f is partial and satisfies that condition.
Then, define f' : A -> B' to agree with f on the domain of f, and for other a, pick an a' in the domain of f such that there is a path from a to a', and define f'(a) = pre(f(a')).
In a deterministic case, we should pick a' to be the first state in the path starting with a to be in the domain of f. (In the non-deterministic case, this feels like something that doesn't work very well...)
Then, for any non-terminal a, either it is in the domain of f, in which case we have the existence of an a'' such that a has a path to a'' and where f'(a) has an arrow to f'(a''), or it isn't, in which case we have that there exists an a' (as above) such that a has a path to a' and where f'(a) = pre(f(a')) has an arrow to f(a') = f'(a'), and so f' satisfies the required condition.
On the other hand, if we have an f' : A -> B' satisfying the required condition, we can co-restrict it to B, giving a partial function f : A -> B, where, if a isn't in the domain of f, then, assuming a is non-terminal, we get some a'' s.t. a has a path to a'' and where f'(a) has an arrow to f'(a''), and, as the only things in B' which have any arrows going in are elements of B, we get that f'(a'') is in B, and therefore that a'' is in the domain of f.
But what if a is terminal? Well, if we require that f'(a) non-terminal implies a non-terminal, then this won't be an issue, because pre(b) is always non-terminal, and so anything terminal is in the domain.
Therefore the co-restriction f of f' does yield a partial function satisfying the proposed condition to require for partial maps.

So, this gives a pair of maps, one sending functions f' : A -> B' satisfying appropriate conditions to partial functions f : A -> B satisfying other conditions, and one sending the other way around, and where, if the computations are deterministic, these two are inverses of each-other. (If not assuming determinism, then I guess potentially only a one-sided inverse?)

So, these two things are equivalent.
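(In case it's useful, here's a concrete sketch of the two translations under my reconstruction above, with computations as adjacency dicts; everything assumes the deterministic case, and all names are made up:)

```python
def make_B_prime(B):
    # B is an adjacency dict {state: set of successor states}. Returns B':
    # a copy of B plus, for each state b, a fresh state ('pre', b) with no
    # in-going arrows and a single arrow to b.
    Bp = {b: set(succs) for b, succs in B.items()}
    for b in B:
        Bp[("pre", b)] = {b}
    return Bp

def totalize(f_partial, A, path_successor):
    # Partial f : A -> B (a dict defined only on its domain) to total
    # f' : A -> B'. For a outside the domain, walk forward along the
    # (deterministic) path to the first state in the domain, and send
    # a to ('pre', f(that state)).
    f_prime = {}
    for a in A:
        if a in f_partial:
            f_prime[a] = f_partial[a]
        else:
            cur = a
            while cur not in f_partial:    # relies on the assumed condition:
                cur = path_successor(cur)  # every state reaches the domain
            f_prime[a] = ("pre", f_partial[cur])
    return f_prime

def corestrict(f_prime, B):
    # Total f' : A -> B' back to partial f : A -> B, keeping only the states
    # whose image lands in B itself rather than in a 'pre' state.
    return {a: b for a, b in f_prime.items() if b in B}
```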

uh... this seems a bit like an adjunction, but, not quite. hm.

Load More