avturchin

Comments

“Unsupervised” translation as an (intent) alignment problem

Maybe we can ask GPT to output an English–Klingon dictionary?

On Destroying the World

I have read about one possible case of a false nuclear alarm which involved something like a phishing attack. I'm not sure whether it was real, and I can't find the story now; it could be real or it could be creepypasta. Below is what I remember:

In the 1950s, nuclear-tipped US cruise missiles were stationed at several locations on Okinawa. One location got a launch command that was obviously false (for certain reasons): the procedure was incorrect. They recognised it as false and decided to wait for clarification. But another location nearby took the command as legitimate and started preparing the launch. Armed personnel had to be sent to stop them from launching the cruise missile, and some kind of standoff happened, but nobody was killed. During this, a clarification arrived that there was no launch order. No information about who sent the false command was ever provided, and everybody signed NDAs.

Vanessa Kosoy's Shortform

Another way to describe the same (or a similar) plateau: we could think of GPT-n as a GLUT (giant lookup table) with interpolation between prerecorded answers. It can produce intelligent outputs similar to those created by humans in the past and present in its training dataset, but not above the human intelligence level, as there are no superintelligent examples in the dataset.
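As a toy illustration of this GLUT-with-interpolation picture (my sketch, not a claim about how GPT actually works), an interpolated lookup table produces only convex combinations of its stored answers, so output quality can never exceed the best stored example:

```python
# Toy "GLUT with interpolation": responses are weighted blends of stored
# human-written examples, so quality is bounded by the training data.
# Illustration only; real GPT models do not literally work this way.
stored = {                       # hypothetical prerecorded (prompt, quality) pairs
    "prove a theorem": 0.9,      # quality = "intelligence level" of the stored answer
    "write a poem": 0.7,
    "explain gravity": 0.8,
}

def overlap(a, b):
    """Crude similarity: fraction of shared words (Jaccard index)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def glut_answer_level(query):
    # Interpolate between prerecorded answers, weighted by similarity.
    weights = {k: overlap(query, k) for k in stored}
    total = sum(weights.values()) or 1.0
    # A convex combination of stored qualities: never above max(stored.values()).
    return sum(stored[k] * w for k, w in weights.items()) / total

print(glut_answer_level("explain a theorem") <= max(stored.values()))  # True
```

The last line makes the plateau argument concrete: whatever the query, the interpolated level cannot exceed the best human-level example in the table.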

Dach's Shortform

Future superintelligences could steal minds from the past to cure "past sufferings", prevent s-risks, and resurrect all the dead. This is actually a good thing, but to resurrect the dead they would have to run a simulation of the whole world over the last few thousand years. From the inside, such a simulation would look almost like the normal world.

avturchin's Shortform

Quantum immortality of the second type. The classical theory of QI is based on the idea that all possible futures of a given observer exist because of MWI, and thus there will always be a future in which he does not die in the next moment, even in the most dangerous situations (e.g. Russian roulette).

QI of the second type makes similar claims, but about the past. In MWI, the same observer could appear via different past histories.

The main claim of QI-2: for any given observer, there is a past history in which the current dangerous situation is not really dangerous. For example, a person has a deadly car accident. But there is another, similar observer who is dreaming about the same accident, or who is in a much less severe accident but hallucinates that it is really bad. Interestingly, QI-2 could be reported: a person could say, "I have a memory of a really bad accident, but it turned out to be nothing. Maybe I died in a parallel world." There are a lot of such reports on Reddit.

Where is human level on text prediction? (GPTs task)

Agreed. Superhuman level is unlikely to be achieved simultaneously in different domains, even for a universal system. For example, some model could be universal and superhuman in math, but not superhuman in, say, reading emotions. That is bad for alignment.

Draft report on AI timelines

If we plan around the median AI timeline, there is a 50 per cent chance that TAI arrives before that moment. Maybe a different measure would be more useful, like the date by which there is a 10 per cent probability of TAI, before which our protective measures should be prepared?

Also, this model contradicts the naive model of GPT growth, in which the number of parameters has grown by roughly two orders of magnitude a year over the last couple of years; if this trend continues, it could reach the "human level" of 100 trillion parameters in about two years.
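A quick back-of-the-envelope check of that naive extrapolation. The data points here are my own assumptions, not from the report: GPT-2 at roughly 1.5 billion parameters (2019), GPT-3 at roughly 175 billion (2020), and "human level" taken as 1e14, the rough synapse-count analogy:

```python
import math

# Assumed data points (mine): GPT-2 ~1.5e9 params (2019), GPT-3 ~1.75e11 (2020).
growth_per_year = 1.75e11 / 1.5e9   # ~117x: about two orders of magnitude per year
human_level = 1e14                  # "100 trillion parameters" synapse-count analogy

# Years of continued exponential growth needed to reach human level from GPT-3.
years_needed = math.log(human_level / 1.75e11) / math.log(growth_per_year)
print(round(growth_per_year), round(years_needed, 2))  # 117 1.33
```

Under these assumptions the trend hits 100 trillion parameters in well under two years, which is why the extrapolation is in tension with longer-timeline models.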

Mati_Roy's Shortform

Interestingly, an hour in childhood is subjectively equal to somewhere between a day and a week in adulthood, according to a recent poll I ran. As a result, the middle of a human life, in terms of subjective experience, falls somewhere in the teenage years.

Also, the experiences of an adult are duller and more similar to one another.
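A minimal sketch of the subjective-midpoint claim, under toy assumptions of my own (an 80-year lifespan and a subjective "density" of time that decays with age; the poll itself gave no formula):

```python
def subjective_midpoint(decay, lifespan=80):
    # decay(age) is the assumed subjective "density" of a year lived at that age.
    weights = [decay(age) for age in range(1, lifespan + 1)]
    half = sum(weights) / 2
    acc = 0.0
    for age, w in enumerate(weights, start=1):
        acc += w
        if acc >= half:
            return age  # age by which half of subjective lifetime has passed

# Two hypothetical decay laws, bracketing the poll's "hour = day to week" range:
print(subjective_midpoint(lambda a: 1 / a))        # steep decay: midpoint in childhood
print(subjective_midpoint(lambda a: a ** -0.5))    # gentle decay: midpoint in the twenties
```

With the steep 1/age decay the midpoint lands in childhood, and with the gentler 1/sqrt(age) decay it lands in the early twenties, so an intermediate decay rate is what puts it in the teenage years.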

Tim Urban tweeted recently: "Was just talking to my 94-year-old grandmother and I was saying something about how it would be cool if I could be 94 one day, a really long time from now. And she cut me off and said “it’s tomorrow.” The "years go faster as you age" phenomenon is my least favorite phenomenon."

My computational framework for the brain

I reread the post and have some more questions:

  • Where are "human values" in this model? If we give this model to an AI that wants to learn human values and has full access to a human brain, where should it search for the human values?
  • If the cortical algorithm were replaced with GPT-N in some model of the human mind, would the whole system still work?