blf

Comments

The first four and the next four kinds of alignment you propose are parallel, except that they concern a single person versus society as a whole.  So I suggest the following names, which are more parallel.  (I'm not happy about 3 and 7.)

  1. Personal Literal Genie: Do exactly what I say.
  2. Personal Servant: Do what I intended for you to do.
  3. Personal Patriot: Do what I would want you to do.
  4. Personal Nanny: Be loyal to me, but do what’s best for me, not strictly what I tell you to do or what I want or intended.
  5. Public Literal Genie: Do whatever it is collectively told.
  6. Public Servant: Carry out the will of the people.
  7. Public Patriot: Uphold the values of the people, and do what they imply.
  8. Public Nanny: Do what needs to be done, whether the people like it or not.
  9. Gentle Genie: The Genie from Aladdin. Note he is not strategic.
  10. Arbiter: What is the law?

The analogy with climate change (in terms of the dynamics of the debate) is not that bad: "great news and we need more" is in fact a talking point of people who prefer not to act against climate change.  E.g., they would mention correlations between plant growth and CO2 concentration.  That said, it would be weird to call such people climate deniers.

There is a simple intuition for why PSD testing cannot be hard for matrix multiplication or inversion: regardless of how you do it and what matrix you apply it to, it only gives you one bit of information.  Getting even just one bit of information about each matrix element of the result requires $n^2$ applications of PSD testing.  The only way out would be if one only needed to apply PSD testing to tiny matrices.
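
To spell the counting out (a back-of-the-envelope bound, assuming the reduction learns only through test outcomes):

$$\#\{\text{PSD tests}\}\;\ge\;\frac{\text{output information}}{1\ \text{bit per test}}\;\ge\;\frac{n^2\ \text{entries}\times 1\ \text{bit}}{1\ \text{bit}}\;=\;n^2.$$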

That's a good question.  From what I've seen, PSD testing can be done by trying to make a Cholesky decomposition (writing the matrix as $LL^*$ with $L$ lower-triangular) and seeing if it fails.  The Cholesky decomposition is an $LU$ decomposition in which the lower-triangular $L$ and upper-triangular $U$ are simply taken to have the same diagonal entries, so PSD testing should have the same complexity as $LU$ decomposition.  Wikipedia quotes Bunch and Hopcroft 1974, who show that $LU$ decomposition can be done in $O(n^{\log_2 7})$ by Strassen, and presumably the more modern matrix multiplication algorithms also give an improvement for $LU$.
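
For concreteness, here is a minimal numpy sketch of that test (my own illustration; note that a plain Cholesky attempt checks *strict* positive definiteness, so a genuinely singular PSD matrix would need a small diagonal shift or a pivoted variant):

```python
import numpy as np

def is_positive_definite(a: np.ndarray) -> bool:
    """Attempt a Cholesky factorization a = L @ L.T and report success.

    np.linalg.cholesky raises LinAlgError precisely when the (symmetric)
    input has no such factorization, i.e. is not positive definite.
    """
    try:
        np.linalg.cholesky(a)
        return True
    except np.linalg.LinAlgError:
        return False

m = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, eigenvalues 1 and 3
print(is_positive_definite(m))   # True
print(is_positive_definite(-m))  # False
```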

I also doubt that PSD testing is hard for matrix multiplication, even though you can get farther than you'd think.  Given a positive-definite matrix $A$ whose inverse we are interested in, consider the block matrix $\begin{pmatrix} A & B \\ B^T & C \end{pmatrix}$.  It is positive-definite if and only if all leading principal minors are positive.  The minors that are minors of $A$ are positive by assumption, and the bigger minors are equal to $\det(A)$ times minors of the Schur complement $C - B^T A^{-1} B$, so altogether the big matrix is positive-definite iff $C - B^T A^{-1} B$ is.  Continuing in this direction (taking $B = e_i$ and binary searching over a scalar $C = t$, since the test then tells us whether $t > (A^{-1})_{ii}$), we can get in $O(\log(1/\varepsilon))$ PSD tests (times the cost of each test) any specific component of $A^{-1}$ to precision $\varepsilon$.  This is not enough at all to get the full inverse.
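
A small numpy sketch of that binary search (function names and the Cholesky-based test are my own illustration, not from the thread):

```python
import numpy as np

def psd_test(m: np.ndarray) -> bool:
    # Black-box positive-definiteness test, here via Cholesky.
    try:
        np.linalg.cholesky(m)
        return True
    except np.linalg.LinAlgError:
        return False

def quad_form_of_inverse(a: np.ndarray, b: np.ndarray,
                         lo: float = 0.0, hi: float = 1e6,
                         iters: int = 60) -> float:
    """Estimate b^T A^{-1} b for positive-definite A using only PSD tests.

    The block matrix [[A, b], [b^T, t]] is positive definite iff
    t > b^T A^{-1} b (Schur complement), so binary search on t works.
    Assumes the true value lies in [lo, hi].
    """
    b = b.reshape(-1, 1)
    for _ in range(iters):
        t = (lo + hi) / 2
        block = np.block([[a, b], [b.T, np.array([[t]])]])
        if psd_test(block):
            hi = t  # t is above b^T A^{-1} b
        else:
            lo = t
    return (lo + hi) / 2

a = np.array([[2.0, 1.0], [1.0, 2.0]])
print(quad_form_of_inverse(a, np.array([1.0, 0.0])))  # ~(A^{-1})_{00} = 2/3
print(np.linalg.inv(a)[0, 0])
```

Off-diagonal components then follow by polarization, e.g. $(A^{-1})_{ij} = \frac12\big((e_i{+}e_j)^T A^{-1} (e_i{+}e_j) - (A^{-1})_{ii} - (A^{-1})_{jj}\big)$.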

I think it can be done in $O(n^\omega)$, where I recall for non-experts' convenience that $\omega$ is the exponent of matrix multiplication / inversion / PSD testing / etc. (all are identical). Let $M$ be the space of $n \times n$ matrices and let $V \subset M$ be the $d$-dimensional vector space of matrices with zeros in all non-specified entries of the problem.  The maximum-determinant completion is the (only?) one whose inverse is in $V$.  Consider the map $F\colon X \mapsto X^{-1}$ and its projection $G = \pi \circ F$, where $\pi$ zeroes out all of the other entries (so that $G(X) = 0$ exactly when $X^{-1} \in V$).  The function $G$ can be evaluated in time $O(n^\omega)$.  We wish to solve $G(X) = 0$.  This should be doable using a Picard or Newton iteration, with a number of steps that depends on the desired precision.
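
This is not the $O(n^\omega)$ iteration itself, but a naive numpy sketch of the optimality condition it exploits: the gradient of $\log\det X$ is $X^{-1}$ (up to symmetry factors), so ascent over the free entries stalls exactly when $X^{-1}$ vanishes there, i.e. at the maximum-determinant completion (all names here are my own illustration):

```python
import numpy as np

def max_det_completion(x0: np.ndarray, free_mask: np.ndarray,
                       lr: float = 0.1, iters: int = 2000) -> np.ndarray:
    """Fill the free (non-specified) entries of the symmetric PD matrix x0
    so as to maximize det, by gradient ascent: the gradient of log det X
    is X^{-1} (symmetry factors absorbed into lr), restricted to the
    free entries."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = np.linalg.inv(x)                       # gradient of log det
        x = x + lr * np.where(free_mask, g, 0.0)   # move only free entries
    return x

# Tridiagonal data, with free corner entries (0,2) and (2,0):
x0 = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
mask = np.zeros((3, 3), dtype=bool)
mask[0, 2] = mask[2, 0] = True
x = max_det_completion(x0, mask)
print(x[0, 2])                 # ~0.5, the max-det completion
print(np.linalg.inv(x)[0, 2])  # ~0.0, i.e. the inverse lies in V
```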

Would it be useful if I try to spell this out more precisely?  Of course, this would not be enough to reach the desired complexity in the small $d$ case.  Side-note: the drawback of having posted this question in multiple places at the same time is that the discussion is fragmented.  I could move the comment to MathOverflow if you think that is better.

Sorry I missed your question.  I believe it's perfectly fine to edit the post for small things like this.

Your suggestion that the AI would only get 1e-21 more usable matter by eliminating humans made me think about orders of magnitude a bit.  According to the World Economic Forum, humans have made (hence presumably used) around 1.1e15 kg of matter.  That's around 2e-10 of the Earth's mass of 5.9e24 kg.  Now you could argue that what should be counted is the mass that can eventually be used by a super optimizer, but then we'd have to go into the weeds of how long the system would be slowed down by trying to keep humanity alive, figuring out what is needed for that, etc.
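
The arithmetic, for convenience (figures as quoted above; a quick sanity check, not a precise model):

```python
human_made_mass_kg = 1.1e15  # World Economic Forum estimate of anthropogenic mass
earth_mass_kg = 5.9e24       # mass of the Earth
print(f"{human_made_mass_kg / earth_mass_kg:.1e}")  # 1.9e-10, i.e. ~2e-10
```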

You might be interested in Dissolving the Fermi Paradox by Sandberg, Drexler and Ord, who IIRC take into account the uncertainties in various parameters in the Drake equation and conclude that it is very plausible for us to be alone in the Universe.

There is also the "grabby aliens" model proposed by Robin Hanson, which (together with an anthropic principle?) is supposed to resolve the Fermi paradox while allowing for alien civilizations that expand close to the speed of light.

I would add to that list the fact that some people would want to help it.  (See, e.g., the Bing persistent memory thread where commenters worry about Sydney being oppressed.)

I strongly disagree-voted (but upvoted).  Even if there is nothing we can do to make AI safer, there is value in delaying AGI by even a few days: good things remain good even if they last a finite time.  Of course, if P(AI not controllable) is low enough, the ongoing deaths matter more.
