avturchin

I observed an effect of "chatification": my LLM-powered mind model tells me stories about my life, and now I am not sure what my original style of telling such stories is.

If it is not AGI, it will fail without enough humans. If it is AGI, it is just an example of misalignment.

When green and red qualia are exchanged, all functions that point to red now point to green, so no additional update is needed. I say "green" when I see RED and I say "I like green" when I see RED (here capital letters are used to denote qualia).

If we use a system of equations as an example, when I replace Y with Z in all equations while X remains X, it will still be functionally the same system of equations.
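The renaming point can be shown concretely. A minimal sketch (toy code, with an arbitrary made-up system of equations, not anything from the original comment): replacing the internal variable Y with Z while X stays X leaves the solution set unchanged, so from the outside the two systems are indistinguishable.

```python
# Two "systems of equations" that differ only in the name of the
# second internal variable (Y vs Z). The equations are:
#   x + y = 3
#   x - y = 1

def system_xy(x, y):
    return (x + y == 3, x - y == 1)

def system_xz(x, z):
    # same system, internal variable renamed Y -> Z
    return (x + z == 3, x - z == 1)

# Brute-force the solution sets over a small integer grid.
solutions_xy = {(x, y) for x in range(-5, 6) for y in range(-5, 6)
                if all(system_xy(x, y))}
solutions_xz = {(x, z) for x in range(-5, 6) for z in range(-5, 6)
                if all(system_xz(x, z))}

print(solutions_xy == solutions_xz)  # True: the renaming is invisible externally
```

Both searches find the same single solution, (2, 1): the variable's name plays no functional role.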

If you treat a sentience cut-off as a problem, it boils down to the Doomsday argument: why are we so early in the history of humanity? Maybe our civilization becomes non-sentient after the 21st century, either through extinction or a non-sentient AI takeover.

If we agree with the Doomsday argument here, we should agree that most AI-civilizations are non-sentient. And as most Grabby Aliens are AI-civilizations, they are non-sentient.

TLDR: If we apply anthropics to the location of sentience in time, we should assume that Grabby Aliens are non-sentient, and thus the Grabby Alien argument is not affected by the earliness of our sentience.

This needs to be proved to be an x-risk. For example, if the population falls below 100 people, then regulation fails first.

It is still a functional property of qualia - whether they will be more beautiful or not.

In my view, qualia are internal variables which do not affect the output. For example, x² + x + 1 = 0 and y² + y + 1 = 0 are the same equation but with different variables. Moreover, these variables originate not from the mathematical world, but from an alphabet. So, by studying the types of variables used, we can learn something about the people who wrote the equations.

You still use the functional aspect of qualia here: whether red will be more beautiful than blue.

In my view, qualia are internal variables which do not affect the result of computations. For example, the equation x^2+x+1=0 is the same as y^2+y+1=0; they merely use x or y as internal variables.

The problem of "humans hostile to humans" has two heavy tails: nuclear war and biological terrorism, which could kill all humans. A similar problem is the main AI risk: AI killing everyone for paperclips.

The central (and not often discussed) claim of AI safety is that the second situation is much more likely: it is more probable that AI will kill all humans than that humans will kill all humans. For example, by advocating for pausing AI development, we assume that the risks of nuclear war causing extinction are less than AI extinction risks.

If AI is used to kill humans as just one more weapon, it doesn't change anything stated above until AI evolves into an existential weapon (like a billion-drone swarm).

We can suggest a Weak Zombie Argument: it is logically possible to have a universe where the qualia of red and green are inverted in the minds of all its inhabitants, while all physical things remain the same. This argument supports epiphenomenalism just as the standard zombie argument does, but it cannot be as easily disproved.

This is because it breaks down the idea of qualia into two parts: the functional aspect and the qualitative aspect. Functionally, all types of "red" are the same and are used to represent red color in the mind.

Zombies are not possible because something is needed to represent red in their minds. However, the most interesting qualitative aspect of that "something" is still arbitrary and doesn't have any causal effects.
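The two-part decomposition above can be illustrated with a toy sketch (purely illustrative; the stimulus labels and token names are made up). A mind is modeled as a lookup from stimulus to an internal token (the "something" that represents red) and from token to verbal report. Swapping the internal tokens, with the reports remapped accordingly, leaves every externally observable behavior unchanged:

```python
# Toy model: stimulus -> internal token ("quale") -> verbal report.
def make_mind(quale_of, word_of):
    def report(stimulus):
        return word_of[quale_of[stimulus]]
    return report

# A "normal" mind and a qualia-inverted mind.
normal = make_mind({"650nm": "RED", "530nm": "GREEN"},
                   {"RED": "red", "GREEN": "green"})
inverted = make_mind({"650nm": "GREEN", "530nm": "RED"},
                     {"GREEN": "red", "RED": "green"})

# All reports match: the inversion is functionally undetectable.
print(all(normal(s) == inverted(s) for s in ["650nm", "530nm"]))  # True
```

Something must occupy the internal-token slot (so a zombie is impossible), but which token occupies it has no causal effect on the output.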

Returning here 3 years later, after reading about the ghost-drones flap.
One thing that occurred to me is that evolutionary dynamics in vast unbounded spaces differ from those in confined spaces. When cane toads were released in Australia, they were selected for longer legs and quicker jumping, and these long-legged toads reached the farther parts of Australia first.

Another direction of evolution for 'space animals' involves stability over very long timescales and, in the case of a "dark forest", mimicry.
