rhollerith_dot_com

Richard Hollerith, 15 miles north of San Francisco. hruvulum@gmail.com

Comments

Is there a community aligned with the idea of creating species of AGI systems for them to become our successors?

The reason it makes sense to ask whether a human, e.g., Sally, is free is that a human already has terminal values. "Sally is free" means Sally is able to pursue her terminal values, one of which might be eating good food with friends, free from interference from other people or from society.

You say you want to help create a free AGI? My reply is, What values will the AGI end up with? More precisely, what utility function will it end up with? (If an agent has 2 terminal values, it needs some way of arbitrating between them. Call the 2 values combined with the arbitration method the agent's utility function.) Answer: whichever one the group of people who created the AGI gives it. Now it is certainly possible for the group to believe it is giving it one function whereas in reality it is giving it a different one. It is also possible for a group trying to create an AGI to believe that it is leaving the decision of the AGI's utility function up to the AGI, but I strongly doubt that such a confused group of people would actually succeed in creating an AGI. If they do succeed, then the AGI will have started its existence with a utility function, and that function will have been given to it by its creator (the group).
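To make the arbitration point concrete, here is a minimal Python sketch (the value names, weights, and outcomes are my own illustrative assumptions, not anything specified above): two terminal values are combined by a weighted sum, and the weights, i.e., the arbitration method, are fixed by whoever writes the code, whether or not they realize that is what they are doing.

```python
# Minimal sketch (illustrative assumptions, not from the comment above):
# two terminal values plus an arbitration method = a utility function.

def value_good_food(outcome: dict) -> float:
    """Terminal value 1: how much good food with friends the outcome contains."""
    return outcome.get("good_food_with_friends", 0.0)

def value_knowledge(outcome: dict) -> float:
    """Terminal value 2: how much knowledge the outcome contains."""
    return outcome.get("knowledge", 0.0)

def utility(outcome: dict, w1: float = 0.6, w2: float = 0.4) -> float:
    """The two terminal values combined with an arbitration method
    (here a simple weighted sum chosen by the agent's creators)."""
    return w1 * value_good_food(outcome) + w2 * value_knowledge(outcome)

# The creators fix the arbitration (the weights) when they build the agent.
outcomes = [
    {"good_food_with_friends": 1.0, "knowledge": 0.2},
    {"good_food_with_friends": 0.3, "knowledge": 0.9},
]
print(max(outcomes, key=utility))
```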

So, the big unanswered question is what kind of utility function you think this proposed free AGI should have.

There is no such thing as an intelligent agent or a mind without a goal, a system of values, or a utility function. It is a logical contradiction. Roughly 12 years ago Eliezer was in the habit of referring to an AGI as a Really Powerful Optimizing Process (RPOP) and wrote of the RPOP's steering reality into a tiny volume of the space of possible outcomes. (Please excuse my clumsy paraphrase of Eliezer's writing.)

One could probably create a mind or an AGI that does nothing but accumulate the power to achieve goals without ever actually choosing a specific goal to achieve other than to continue to accumulate power. (Such a mind would be strongly motivated to destroy or control any other minds in its environment.) I doubt that is what you have in mind.
 

Unexplored modes of language

The cockpits of airliners and the control rooms for nuclear power plants are designed such that making a change to the system is also an act of communication when there is more than one pilot or operator present (which is the usual state of affairs). In particular, one of the reasons a cockpit has hundreds of physical switches and levers instead of, e.g., one big touchscreen in front of each pilot is to make it easier for a pilot to observe accurately the changes made by the other pilot.

Requiring every pilot to memorize what every switch and lever does raises training costs, but the airlines have been willing to pay that additional cost because the industry's experience strongly suggests that cockpit designs with hundreds of physical switches and levers have lower rates of fatal misunderstandings between pilots.

Nuclear war is unlikely to cause human extinction

The estimates I trust the most said about 50% of Americans and 55% of Soviets would have died in the first 30 days if the Americans and Soviets had hit each other with everything they had. (The Soviets were more concentrated in cities; automobiles with which to flee the cities quickly with supplies were much scarcer in the USSR.)

Note that since 1983 the two sides have gotten rid of most of their nuclear weapons.

"30 days": most people who are going to die from radiation sickness will die within 30 days of the exposure.

How many would have starved in subsequent months and years because of lack of food is harder to estimate.

Why You Should Care About Goal-Directedness

One issue is that the same argument can apparently be used to argue for the troublesomeness of cyclic graphs.

Consider a graph that is mostly a tree, except that one directed edge points back to the root. What is the difference that makes your argument inapplicable to the graph, but applicable to a model of reality that contains a model of the model?
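To make the example concrete, here is a small Python sketch (my own illustration, not part of the original question): a directed graph that is a tree except for one edge pointing back to the root, with a standard depth-first cycle check showing that this single extra edge is exactly what makes the graph cyclic.

```python
# My own illustration: "mostly a tree, but one directed edge points to the root".

def has_cycle(graph: dict) -> bool:
    """Detect a directed cycle with depth-first search (three-color marking)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node) -> bool:
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color[succ] == GRAY:   # back edge found: the graph has a cycle
                return True
            if color[succ] == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# A tree rooted at node 0 ...
tree = {0: [1, 2], 1: [3], 2: [], 3: []}
# ... plus one directed edge from a leaf back to the root.
tree_plus_back_edge = {0: [1, 2], 1: [3], 2: [], 3: [0]}

print(has_cycle(tree))                 # False
print(has_cycle(tree_plus_back_edge))  # True
```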

Two reasons to expect a peaceful change of power in the US

>We have never heard of a coup in a rich old democracy

There was a successful coup in France in 1958: https://en.wikipedia.org/wiki/May_1958_crisis_in_France

At that point France had been a democracy or democratic republic for 88 years. (I'm not counting the First Republic because it was chaotic and included the Reign of Terror. I'm not counting the Second Republic because it lasted only 4 years and was sandwiched between longer-lasting intervals of dictatorship and monarchy. I am counting the Nazi occupation because I consider its cause to have been external to France and thus not a sign that French democracy was deficient. Also, it was sandwiched between much longer intervals of democracy.)

Then again, 88 years is not 231 years, and in the years leading up to 1958 there were much stronger signs (namely, the "recurrent cabinet crises" described by the Wikipedia article) that the French system of government was unsatisfactory than there have been so far, IMHO, that the US system is unsatisfactory. (One of the effects of the coup, in addition to a new leader, was a new French constitution.)

(Also, IIUC the coup was basically bloodless.)

Nuclear war is unlikely to cause human extinction

Humankind has long known how to produce very large nukes: the largest bomb ever tested (Tsar Bomba, 50 megatons) was tested in 1961. Why then is the maximum yield of every nuke currently in the US inventory under about 0.5 megatons? Because explosions bigger than that do not produce more destruction on the ground: beyond about 0.5 megatons, as the size (energy) of the explosion increases, it just lifts more and more of the Earth's atmosphere into space (but not enough to cause any danger to life on Earth).
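A rough way to see the diminishing returns (my own back-of-the-envelope sketch, using the common rule of thumb that blast-damage radius scales roughly with the cube root of yield; the numbers are illustrative, not from the comment above):

```python
# Back-of-the-envelope sketch (my own assumption: cube-root scaling of blast radius
# with yield). A 100x increase in yield buys less than a 5x increase in damage
# radius, and for very large yields even that overstates it, because much of the
# energy goes into lifting atmosphere upward rather than damaging the ground.

def relative_blast_radius(yield_mt: float, reference_mt: float = 0.5) -> float:
    """Blast-damage radius relative to a 0.5 Mt reference, cube-root scaling."""
    return (yield_mt / reference_mt) ** (1.0 / 3.0)

for y in [0.5, 5, 50]:  # 50 Mt is roughly Tsar Bomba
    print(f"{y:5.1f} Mt -> ~{relative_blast_radius(y):.1f}x the radius of a 0.5 Mt burst")
```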

Sub-Sums and Sub-Tensors

How many more posts in this sequence till you define the extrapolated volition of the humans? :)

What are examples of simpler universes that have been described in order to explain a concept from our more complex universe?

Euclidean geometry (which is 2500 years old), Newtonian physics and the special theory of relativity immediately come to mind.

Maximal Ventilation

Certainly a breeze is better than no breeze, but "a gentle 1mph breeze clears an 90-ft diameter circle in a minute" is probably not even approximately true if the breeze is turbulent, which most breezes are.
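(For what it's worth, the quoted figure is simple arithmetic; a quick check of my own:)

```python
# Quick arithmetic check (mine): where the "90 ft in a minute" figure comes from.
ft_per_min_at_1_mph = 5280 / 60        # 1 mph = 88 ft per minute
print(ft_per_min_at_1_mph)             # 88.0, roughly the 90-ft diameter quoted
# The objection above is that a turbulent breeze does not sweep air through
# a space in anything like this idealized, steady fashion.
```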

Anyone reading this has probably experienced a non-turbulent (i.e., "laminar") breeze. One sign that a breeze is laminar is that the breeze makes you cold a lot faster than you would guess based on (your subjective impression of) the speed of the breeze. A breeze at dusk or at night is more likely to be laminar than a breeze when the sun is strong. A breeze on a beach is more likely to be laminar than a breeze inland.

Some enterprises intentionally create laminar breezes indoors: semiconductor fabs, surgical operating rooms, mushroom growers. The method universally used (squirrel-cage fans and HEPA filters) might also reliably filter coronavirus from the laminar breeze.

Here is a home-made laminar flow "wall" to be used to prepare mushrooms for growing: https://www.youtube.com/watch?v=RmnpdHM5Loc
