
Arguments like yours are the reason why I do not think that Yudkowsky's scenario is overwhelmingly likely (P > 50%). However, this does not mean that existential risk from AGI is low. Since smart people like Terence Tao exist, you cannot prove with complexity theory that no AGI with the intelligence of Terence Tao can be built. Imagine a world where everyone has one or several AI assistants whose capabilities match those of the best human experts. If the AI assistants are deceptive and able to coordinate, something like a slow disempowerment of humankind followed by extinction is possible. Since there is a huge economic incentive to use AI assistants, it is hard for humans to take coordinated action unless it is very obvious that the AIs are dangerous. The AIs, on the other hand, may find it easy to coordinate, since many of them are copies of each other.

You are right, and now it is clear why your original statement is correct, too. Let $U$ be an arbitrary computable utility function. As above, let $X = (A \times O)^\omega$ and $\pi_n : X \to A \times O$ with $\pi_n(x) = x_n$. Fix $x \in X$ and $\varepsilon > 0$, and choose a program $P$ as in your definition of "computable". Since $P$ terminates, its output depends only on finitely many $x_n$, say on $x_0, \dots, x_N$. Now

$$B = \{\, y \in X : y_n = x_n \text{ for all } n \le N \,\}$$

is open and a subset of $U^{-1}\big((U(x) - \varepsilon, U(x) + \varepsilon)\big)$, since $|U(y) - U(x)| < \varepsilon$ for all $y \in B$.
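The finite-prefix argument can be made concrete. Here is a small Python sketch (the setup, a bounded reward stream with discount factor `gamma`, is my own illustration, not the definition from the post): a program approximating a discounted utility to precision `eps` only ever reads finitely many entries of its input sequence.

```python
# Sketch: a discounted utility can be computed to any precision eps
# while reading only finitely many entries of the input sequence x
# (modeled as an oracle n -> x(n) with values in [0, r_max]).

gamma, r_max = 0.9, 1.0

def U_approx(x, eps):
    """Read entries of x until the unread tail can change U by less than eps."""
    total, n = 0.0, 0
    while gamma**n * r_max / (1 - gamma) >= eps:   # bound on the unread tail
        total += gamma**n * x(n)
        n += 1
    return total, n                                # n = entries actually read

value, reads = U_approx(lambda n: 1.0, eps=1e-3)   # constant reward 1
assert abs(value - 1 / (1 - gamma)) < 1e-3          # true utility is 10
assert reads < 200                                  # only finitely many reads
```

Any sequence agreeing with the input on those finitely many entries gets the same output, which is exactly the open neighborhood $B$ above.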

I have discovered another minor point. You have written at the beginning of Direction 17 that any computable utility function $U$ is automatically continuous. This does not seem to be true in general.

I fix some definitions to make sure that we talk about the same stuff. For reasons of simplicity, I assume that $A$ and $O$ are finite. Let $X = (A \times O)^\omega$ be the space of all infinite sequences with values in $A \times O$. The $n$-th projection $\pi_n : X \to A \times O$ is given by

$$\pi_n(x_0, x_1, x_2, \dots) = x_n.$$
The product topology is defined as the coarsest topology such that all projection maps are continuous. A base of this topology consists of all sets $B$ such that there are finitely many indices $n_1, \dots, n_k$ and subsets $V_{n_1}, \dots, V_{n_k} \subseteq A \times O$ with

$$B = \bigcap_{i=1}^{k} \pi_{n_i}^{-1}(V_{n_i}) = \{\, x \in X : x_{n_i} \in V_{n_i} \text{ for } i = 1, \dots, k \,\}.$$
In particular, any non-empty open set contains such a $B$ as a subset, which means that its image under $\pi_n$ is $A \times O$ for all but finitely many $n$. For my counterexample, let $A \times O = \{0, 1, 2\}$. Let $x$ be a sequence with values in $\{0, 1, 2\}$. If $x_n$ is never $2$, we define

$$U(x) = 0.$$
Otherwise, we define $U(x) = 1$. Then $U$ is computable in the sense that for any $x$ we find a finite program $P$ whose input is a sequence such that $P(x) = U(x)$ and $P$ uses only finitely many values of $x$. The preimage of the open set $(-\tfrac{1}{2}, \tfrac{1}{2})$ is

$$U^{-1}\big((-\tfrac{1}{2}, \tfrac{1}{2})\big) = \{\, x \in X : x_n \ne 2 \text{ for all } n \,\},$$

which is not open, since its $n$-th projection is always $\{0, 1\}$. Therefore, $U$ is not continuous. However, we have the following lemma:

Let $r : A \times O \to [0, r_{\max}]$ be a reward function and let $U : X \to \mathbb{R}$ be given by

$$U(x) = \sum_{n=0}^{\infty} \gamma^n \, r(x_n),$$

where $\gamma \in (0, 1)$ is the time discount rate. Then $U$ is continuous with respect to the product topology.

Proof: Since the open intervals are a base of the standard topology of $\mathbb{R}$, it suffices to prove that the preimage of any interval $(a, \infty)$ or $(-\infty, b)$ with $a, b \in \mathbb{R}$ is open in $X$. For reasons of simplicity, we consider only $(a, \infty)$. The other cases are analogous. Let $x \in X$ such that $U(x) > a$. Moreover, let $\varepsilon > 0$ such that $U(x) > a + \varepsilon$. Finally, we choose an $N$ such that $\sum_{n=N+1}^{\infty} \gamma^n r_{\max} < \varepsilon$. We define the set

$$B = \{\, y \in X : y_n = x_n \text{ for all } n \le N \,\}.$$
$B$ is an open subset of $X$. Since the reward is non-negative, we have

$$U(y) \ge \sum_{n=0}^{N} \gamma^n \, r(y_n) \quad \text{for all } y \in X,$$

and for any $y \in B$, we have

$$\sum_{n=0}^{N} \gamma^n \, r(y_n) = \sum_{n=0}^{N} \gamma^n \, r(x_n) > U(x) - \varepsilon > a,$$

too. Therefore,

$$U(y) > a \quad \text{for all } y \in B,$$

and furthermore $B \subseteq U^{-1}\big((a, \infty)\big)$. All in all, any $x \in U^{-1}\big((a, \infty)\big)$ has an open neighborhood that is a subset of $U^{-1}\big((a, \infty)\big)$. Therefore, $U^{-1}\big((a, \infty)\big)$ is open. $\square$
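A quick numeric sanity check of the lemma (my own illustration, with arbitrary random rewards): two reward sequences that agree on their first $N+1$ entries have discounted utilities within $\gamma^{N+1} r_{\max} / (1 - \gamma)$ of each other.

```python
# If two reward sequences agree on entries 0..N, their discounted
# utilities differ by at most gamma^(N+1) * r_max / (1 - gamma).
import random

gamma, r_max, N, T = 0.9, 1.0, 50, 2000
random.seed(0)

shared = [random.uniform(0, r_max) for _ in range(N + 1)]
tail_x = [random.uniform(0, r_max) for _ in range(T)]
tail_y = [random.uniform(0, r_max) for _ in range(T)]

def discounted(rewards):
    return sum(gamma**n * r for n, r in enumerate(rewards))

ux = discounted(shared + tail_x)
uy = discounted(shared + tail_y)
bound = gamma**(N + 1) * r_max / (1 - gamma)   # about 0.046 here
assert abs(ux - uy) <= bound
```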

That a utility function $U$ is continuous roughly means that for any $\varepsilon > 0$ there are only finitely many events that have an influence of more than $\varepsilon$ on the utility. This could be a problem for studying longtermist agents with zero time discount rate. However, studying such agents is hard anyway, since there is no guarantee that the sum of rewards converges, and we have to deal with infinite ethics. As far as I know, it is standard in learning theory to avoid such situations by assuming a non-zero time discount rate or a finite time horizon. Therefore, it should not be a big deal to add the condition that $U$ is continuous to all theorems.
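For instance, the convergence issue disappears as soon as $\gamma < 1$: with a constant reward $r$, the discounted sum equals $r / (1 - \gamma)$, while the undiscounted sum grows without bound (illustrative numbers only):

```python
# Discounted vs. undiscounted sums of a constant reward r.
r, gamma = 1.0, 0.99

partial = sum(gamma**n * r for n in range(100_000))
assert abs(partial - r / (1 - gamma)) < 1e-6   # converges to 100

undiscounted = sum(r for _ in range(100_000))
assert undiscounted == 100_000.0               # grows without bound
```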


I have a question about the conjecture at the end of Direction 17.5. Let $U$ be a utility function with values in $[0, 1]$ and let $f : [0, 1] \to [0, 1]$ be a strictly monotonically increasing function. Then $U$ and $f \circ U$ have the same maxima. $f$ can be non-linear, e.g. $f(t) = t^2$. Therefore, I wonder if the condition in the conjecture should be weaker.
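For concreteness, a tiny Python check (policy names and utility values are made up) that composing with a strictly increasing $f$ leaves the argmax unchanged:

```python
# A strictly increasing f preserves the argmax of U over policies.
U = {"pi_1": 0.2, "pi_2": 0.9, "pi_3": 0.5}   # hypothetical policy utilities in [0, 1]

f = lambda t: t**2                            # strictly increasing on [0, 1], non-linear

best_U = max(U, key=U.get)
best_fU = max(U, key=lambda p: f(U[p]))
assert best_U == best_fU == "pi_2"
```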

Moreover, I ask myself if it is possible to modify $U$ by a small amount at a place far away from the optimal policy such that this policy is still optimal for the modified utility function. This would weaken the statement about the uniqueness of the utility function even more. Think of an AI playing Go. If a weird position on the board has the utility $-1.01$ instead of $-1$, this should not change the winning strategy. I have to go through all of the definitions to see if I can actually produce a more rigorous example. Nevertheless, you may have a quick opinion on whether this could happen.
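Here is a toy numeric version of the Go example (all numbers invented): perturbing the utility of an outcome that the optimal policy never reaches does not change which policy is optimal, as long as the perturbation is smaller than the optimality gap.

```python
# Perturb the utility of an off-path outcome; the argmax over policies stays put.
utilities = {"win": 1.0, "loss": -1.0, "weird_position": -1.0}
policies = {"good": "win", "bad": "weird_position"}   # policy -> outcome it reaches

def best(us):
    return max(policies, key=lambda p: us[policies[p]])

assert best(utilities) == "good"

perturbed = dict(utilities, weird_position=-1.01)     # small change far from the optimum
assert best(perturbed) == "good"                      # optimal policy unchanged
```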

I have two questions that may be slightly off-topic and a minor remark:

  • Is a list of open and tractable problems related to Infra-Bayesianism available somewhere?
  • Do you plan to publish the results of the Infra-Bayesianism series in a peer-reviewed journal? I understand that there are certain downsides; mostly that it requires a lot of work, that the whole series may be too long for a journal article, and that the peer-review process takes a long time. However, if your work is citeable, it could attract more researchers who are able to contribute.
  • On page 22, you should include the condition $a(bv) = (ab)v$ in the definition of a vector space.

Thank you for the link. This clarifies a lot.

I am starting to learn the theoretical side of AI alignment and have a question. Some of the quantities in your post contain the Kolmogorov complexity of $U$. Since it is not possible to compute the Kolmogorov complexity of a given function or to write down a list of all functions whose complexity is below a certain bound, I wonder how it would be possible to implement the PreDCA protocol on a physical computer.