Optimization

An optimization process is any kind of process that systematically comes up with solutions that are better than those used before. More technically, such a process moves the world into a specific and unexpected set of states by searching through a large search space and hitting small, low-probability targets. When some agent gradually guides this process into a specific state by searching for specific targets, we can say the agent prefers that state.
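
The idea of hitting a small, low-probability target by systematic search rather than by chance can be illustrated with a minimal sketch (a hypothetical example, not part of the original article): a greedy bit-flipping search reliably reaches a state that blind sampling would hit with probability 2^-40.

```python
import random

def hill_climb(n_bits=40):
    """Greedy search: start from a random bitstring and keep only
    bit-flips that increase the number of ones. The all-ones state
    has probability 2**-n_bits under random sampling, yet this
    search hits it reliably."""
    state = [random.randint(0, 1) for _ in range(n_bits)]
    while sum(state) < n_bits:
        i = random.randrange(n_bits)
        if state[i] == 0:
            state[i] = 1  # accept only improving moves
    return state

print(sum(hill_climb()))  # 40 ones: a 1-in-2**40 target under chance
```

The search "prefers" states with more ones, so it lands on an outcome that, without the optimization process, would be astronomically surprising.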

A simple example makes this concrete: Eliezer Yudkowsky suggests natural selection is such a process. Through an implicit preference – better replicators – natural selection searches the vast space of possible genomes and hits small targets: efficient mutations.

Consider the human being. We are highly complex objects, with a very low probability of having been created by chance. Natural selection, however, over millions of years, built up the infrastructure needed to produce such a functioning body. This body, like those of other organisms, had the chance (was selected) to develop because it is in itself a rather efficient replicator, suited to the environment where it arose.

Or consider the famous chess-playing computer, Deep Blue. Outside the narrow domain of selecting moves for chess games, it can't do anything impressive; but as a chess player, it was massively more effective than virtually all humans. It has high optimization power in the chess domain but almost none in any other field. Humans, or evolution, on the other hand, are more domain-general optimization processes than Deep Blue, but that doesn't mean they're more effective at chess specifically. (Note, though, in what contexts this optimization-process abstraction is useful and where it fails to be useful: it's not obvious what it would mean for "evolution" to play chess, and yet it is useful to talk about the optimization power of natural selection, or of Deep Blue.)

Measuring Optimization Power

One way to think mathematically about optimization, like evidence, is in information-theoretic bits. The optimization power is the amount of surprise we would have in the result if there were no optimization process present. Therefore we take the base-two logarithm of the reciprocal of the probability of the result. A one-in-a-million solution (a solution so good relative to your preference ordering that it would take a million random tries to find something that good or better) can be said to have log_2(1,000,000) = 19.9 bits of optimization. Compared to a random configuration of matter, any artifact you see is going to be much more optimized than this. The math describes only laws and general principles for reasoning about optimization; as with probability theory, you oftentimes can't apply the math directly.
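
The calculation in the paragraph above is a one-liner; this sketch (illustrative, with a function name not from the article) confirms the one-in-a-million figure:

```python
import math

def optimization_power_bits(p):
    """Bits of optimization: the surprise (base-2 log of 1/p) at seeing
    a result of probability p if no optimization process were present."""
    return math.log2(1 / p)

print(round(optimization_power_bits(1e-6), 1))  # 19.9 bits
```

Doubling the improbability of the hit adds exactly one bit, which is what makes this a natural additive measure of how hard an optimizer squeezes the future.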

Further Reading & References

See also
