
Created by JoshuaFox, Sep 3rd 2012

Computronium is a "theoretical arrangement of matter that is the most optimal possible form of computing device for that amount of matter."

In a thought experiment similar to the Paperclip maximizer, if an artificial general intelligence has a terminal value (end-goal) of making a pure mathematical calculation, such as solving the Riemann Hypothesis, it would convert all available mass to computronium (the most efficient possible computer processors).

In fact, a similar outcome would also apply to many other goals: so long as optimization power can be boosted with more computing power, and so long as dedicating resources to creating computronium does not detract from the goal (e.g., by taking up matter, time, or effort that could better be used in other ways), computronium may be valuable to attaining the goal. A purely mathematical goal, like proving the Riemann Hypothesis, is completely focused on computation and so most directly illustrates the concept.

Theories that valorize intelligence as such (such as that of Hugo de Garis [2005], or Eliezer Yudkowsky before 2001) may consider the conversion of all matter to computronium (running an AGI) to be a positive development, as this would provide the most powerful possible infrastructure for intelligence.

Hugo de Garis, 2005. *The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines*. Etc Publications.
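The resource-allocation argument above can be sketched as a toy optimization. This is purely illustrative and not from the original article: the mass budget, the assumption that compute scales with converted mass, and the `goal_attainment` function are all invented for the sketch.

```python
# Toy model (illustrative assumptions, not the article's method): an agent with
# a fixed mass budget chooses how much mass to convert to computronium. For a
# purely mathematical goal, attainment is assumed to depend only on total
# compute, which grows with converted mass; any unconverted mass contributes
# nothing to the goal. Under those assumptions, full conversion is optimal.

import math

TOTAL_MASS = 100.0  # arbitrary units; hypothetical parameter


def goal_attainment(mass_to_computronium: float) -> float:
    """Attainment of a purely computational goal: monotonically
    increasing in compute, with diminishing returns."""
    compute = mass_to_computronium  # assume compute scales with converted mass
    return 1.0 - math.exp(-0.05 * compute)


# Sweep allocations from converting nothing to converting everything.
best = max(range(0, int(TOTAL_MASS) + 1), key=lambda m: goal_attainment(float(m)))
print(best)  # → 100: full conversion maximizes the purely computational goal
```

The diminishing-returns curve does not change the conclusion: as long as attainment is strictly increasing in compute and other uses of the mass add nothing, every unit of mass is worth converting.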

## Talk:Computronium

Since this concept is on Wikipedia, and doesn't add a new perspective or local citations, it should probably stay out of the wiki. --Vladimir Nesov 19:12, 8 June 2010 (UTC)

Also, if people are going to copy from Wikipedia, we need to be careful to preserve the CC license. --JoshuaZ 13:43, 9 June 2010 (UTC)