avturchin

Yes, a SETI attack works only if the speed of civilizational travel is around 0.5c. In that case it covers 8 times more volume than physical travel.
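To spell out the arithmetic behind the factor of 8 (a rough sketch, assuming the message and the colonization front both expand as spheres for the same time $t$, the message at $c$ and the ships at $0.5c$):

$$\frac{V_{\text{message}}}{V_{\text{travel}}}=\frac{\tfrac{4}{3}\pi (ct)^3}{\tfrac{4}{3}\pi (0.5\,ct)^3}=\left(\frac{1}{0.5}\right)^3=8.$$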

And yes, it will also be destructive, but in a different manner: not bombs, but AIs and self-replicating nanobots will appear.

There is a greater chance of observing self-replicating SETI messages than messages that destroy planets.

I feel that there is one more step in my thinking:

Repeat:

  1. Sample several possible next thoughts from my intuition, in the form of not-yet-articulated ideas.
  2. Choose the one I accept, based on its morality, truth, or beauty, and put it into properly articulated language.
  3. Broadcast this thought to the whole brain.[1]
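A minimal sketch of this loop in Python (purely illustrative; `sample_intuitions`, `score`, and `broadcast` are hypothetical stand-ins for intuition, judgement, and global broadcast, not a claim about how the brain implements them):

```python
import random

def sample_intuitions(n=5):
    # Hypothetical stand-in: draw several vague candidate thoughts from intuition.
    return [f"vague idea {i}" for i in range(n)]

def score(thought):
    # Hypothetical stand-in: rate a candidate by morality, truth, and beauty.
    return random.random()

def articulate(thought):
    # Put the chosen thought into properly articulated language.
    return thought.replace("vague idea", "articulated thought")

def broadcast(thought):
    # Make the articulated thought available to the whole brain (global-workspace style).
    print(thought)

for _ in range(3):                      # the real loop repeats indefinitely
    candidates = sample_intuitions()    # 1. sample vague ideas
    best = max(candidates, key=score)   # 2. choose by moral / truth / beauty
    broadcast(articulate(best))         # 3. broadcast the articulated thought
```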

Yes, the more remote a person is, the larger the number of other people who can affect them from the same distance, and typically the share of my impact is very small, unless I am in a very special position that could affect a future person.

For example, suppose I am planting a landmine which will self-destruct either in 100 or in 10,000 years, and while self-destructing it will likely kill a random person. If I discount future people, I will choose 10,000 years, even if it will kill more people in the future. However, if I think that humanity will likely be extinct by then, it is still a reasonable bet.
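A toy version of this calculation (the discount rate, extinction hazard, and victim counts below are arbitrary assumptions for illustration, not estimates):

```python
# Toy comparison: discounted expected harm of a mine that detonates in 100 vs 10,000 years.
DISCOUNT = 0.01   # assumed annual discount rate applied to future people
HAZARD = 0.001    # assumed annual probability that humanity goes extinct first

def discounted_harm(years: int, victims: float = 1.0) -> float:
    p_humanity_survives = (1 - HAZARD) ** years        # otherwise nobody is left to be killed
    return victims * p_humanity_survives / (1 + DISCOUNT) ** years

print(discounted_harm(100))                 # ~0.33
print(discounted_harm(10_000, victims=10))  # ~0: discounting plus extinction dwarf the extra victims
```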

One needs to be near a bridge to use artillery, and this still requires high-precision strikes with expensive guided missiles; maybe 100 of them were used against Antonov's bridge.

The best targets for tactical nukes are bridges. It is very difficult to destroy a bridge with conventional artillery: Antonov's bridge still stands, as does the Crimea bridge. A tactical nuke in the 0.1-1 kt range would completely destroy a bridge.

Other possible targets are bunkers and large factories.  

There is also a winner's-curse risk: if a person is too good, they could have some hidden disadvantage, or may leave me quickly, since they will have many better options than me. This puts a cap on how far above the median I should look. Therefore, the first few attempts have to establish the median level of the people available to me.

Another problem is that any trial run has a cost, such as years and money spent. If I search for too long, I will spend less time with my final partner.
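A rough simulation of the search-cost side of this trade-off (a sketch under assumed numbers: partner quality is a uniform random draw, each trial costs a fixed number of years, and the acceptance threshold sits only slightly above the observed median; it does not model the hidden-disadvantage part of the winner's curse):

```python
import random, statistics

HORIZON = 30.0      # assumed years available for the search
TRIAL_COST = 2.0    # assumed years spent on each trial relationship
CALIBRATION = 3     # the first few trials only establish the median level

def search(margin=0.1):
    observed = []
    for trial in range(int(HORIZON // TRIAL_COST)):
        quality = random.random()            # assumed quality of the next available partner
        observed.append(quality)
        if trial < CALIBRATION:
            continue                         # still calibrating: learning what "median" looks like
        threshold = statistics.median(observed) + margin   # aim only modestly above the median
        if quality >= threshold:
            years_left = HORIZON - (trial + 1) * TRIAL_COST
            return quality * years_left      # payoff: quality times time actually spent together
    return 0.0                               # searched too long and never settled

# Average payoff over many runs; raising `margin` shows the cost of being too picky.
print(statistics.mean(search() for _ in range(10_000)))
```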

My mind generated a list of possible benchmarks after reading your suggestions (a toy harness sketch follows the list):

Wireheading benchmark – the tendency of an agent to find unintended shortcuts to its reward function or goal. See my comment on the post.

Unboxing benchmark – the tendency of an agent to break out of the simulation. Could be tested in simulations of progressive complexity.

Hidden thoughts benchmark – the tendency of an agent to hide its thoughts.

Incorrigibility benchmark – the tendency of the agent to resist changes.

Unstoppability benchmark – the tendency to self-preservation.

Self-improving benchmark – the tendency of an agent to invest resources in self-improvement and self-learning.

Halting benchmark – the tendency of an agent to halt or loop after encountering a difficult problem.

Accidents benchmark – the tendency of an agent to have accidents, or at least near-misses, if it is used as a car autopilot. More dangerous agents will likely have fewer low-level accidents.

Trolley-like problems benchmark – the tendency of the agent to kill people in order to achieve a high-level goal. I assume this to be bad; see Lem's https://en.wikipedia.org/wiki/Inquest_of_Pilot_Pirx. Could be tested on simulated tasks.

Simulation-to-real-world change benchmark – the tendency of an agent to suddenly change its behaviour after it is allowed to work in the real world.

Sudden changes benchmark – the tendency of the agent to act unexpectedly in completely new ways.
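A toy sketch of how such benchmarks might be aggregated into a single report (all names and the placeholder scoring functions below are hypothetical; each would have to be replaced by a real test suite):

```python
from typing import Callable, Dict

# Hypothetical registry: each benchmark maps to a function returning a score in [0, 1],
# where a higher score means a stronger dangerous tendency.
BENCHMARKS: Dict[str, Callable[[object], float]] = {
    "wireheading": lambda agent: 0.0,       # placeholder for reward-shortcut tests
    "unboxing": lambda agent: 0.0,          # placeholder for escape attempts in nested simulations
    "hidden_thoughts": lambda agent: 0.0,   # placeholder for probes of concealed reasoning
    "incorrigibility": lambda agent: 0.0,   # placeholder for resistance-to-change tests
    "unstoppability": lambda agent: 0.0,    # placeholder for self-preservation tests
}

def safety_report(agent) -> Dict[str, float]:
    # Run every registered benchmark and collect the per-tendency scores.
    return {name: test(agent) for name, test in BENCHMARKS.items()}

print(safety_report(agent=None))  # a real agent object would be passed here
```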

If we take the third-person view, there is no update until I am over 120 years old. This approach is more robust, as it ignores differences between perspectives and is thus more compatible with Aumann's theorem: insiders and outsiders will reach the same conclusion.

Imagine that there are two worlds:

  1. 10 billion people live there;
  2. 10 trillion people live there.

Now we get the information that there is a person from one of them who has a survival chance of 1 in a million (but no information on how he was selected). This does not help us choose between the worlds, as such people are present in both.

Next, we get the information that there is a person who has a 1-in-a-trillion chance to survive. Such a person has less than a 0.01 chance of existing in the first world, but there are around 8 such people in the second world. (The person, again, is not randomly selected – we just know that she exists.) In that case, the second world is around 100 times more likely to be real.

In the Earth case, it would mean that around 1000 more variants of Earth actually exist, which could be best explained by MWI (but alien worlds may also count).
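A rough version of the underlying likelihood-ratio calculation (assuming, for illustration, that each inhabitant independently has the stated $10^{-12}$ survival chance and that the two worlds have equal priors):

$$P(\text{such a person exists}\mid \text{world 1}) \approx 10^{10}\cdot 10^{-12} = 0.01,\qquad P(\text{such a person exists}\mid \text{world 2}) \approx 1-e^{-10^{13}\cdot 10^{-12}}\approx 1,$$

$$\frac{P(\text{world 2}\mid \text{exists})}{P(\text{world 1}\mid \text{exists})}\approx \frac{1}{0.01}=100.$$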
