avturchin

avturchin's Comments

Understanding “Deep Double Descent”

I read it somewhere around 10 years ago and don't remember the source. However, I remember the explanation they provided: that "correct answers" propagate more quickly through the brain's neural net, but are later silenced by errors that arrive via longer trajectories. Eventually the correct answer is reinforced by learning and becomes strong again.
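A minimal toy sketch of that explanation (my own illustration, not from the half-remembered source; the pathway counts, delays, and learning rates are arbitrary assumptions): one fast pathway carries the correct signal from the start, many slower pathways carry misleading signals that switch on later, and learning gradually reinforces the correct pathway while damping the others, giving a high-then-low-then-high quality curve.

```python
import numpy as np

# Toy illustration (assumed parameters, not the original source's model):
# a readout receives the "correct" signal through a short pathway from t=0,
# while error signals arrive later through longer trajectories.

rng = np.random.default_rng(0)

T = 60                                    # time steps of training
n_error_paths = 20                        # longer, error-carrying trajectories
error_delays = rng.integers(5, 25, size=n_error_paths)  # when each one kicks in

w_correct = 1.0                           # weight of the short, correct pathway
w_error = np.ones(n_error_paths)          # weights of the long, noisy pathways
learning_rate = 0.1

quality = []
for t in range(T):
    correct = w_correct                                   # correct signal, present from the start
    noise = sum(w_error[i] * abs(rng.normal())            # error signals switch on late
                for i in range(n_error_paths) if t >= error_delays[i])
    quality.append(correct / (correct + noise))           # share of output driven by the correct signal
    w_correct += learning_rate                            # learning reinforces the correct pathway
    w_error *= 0.95                                       # ...and gradually silences the error pathways

print(np.round(quality, 2))               # near 1.0 early, dips in the middle, recovers late
```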

Understanding “Deep Double Descent”

I have observed, and read, that this also happens with human learning. On the third lesson of X, I reached a level of performance that I was not able to reach again until the 30th lesson.

Values, Valence, and Alignment

Where does human valence come from? Is it biologically encoded, like the positive valence of orgasm, or learned, like the positive valence of Coca-Cola?

If it is all biological, does that mean our valence is shaped by the convergent goals of Darwinian evolution?

Seeking Power is Provably Instrumentally Convergent in MDPs

We explored a similar idea in "Military AI as a Convergent Goal of Self-Improving AI". In that article we suggested that any advanced AI will have a convergent goal of taking over the world, and because of this it will have a convergent subgoal of developing weapons in the broad sense of the word "weapon": not only tanks or drones, but any instruments for enforcing its will over others or destroying them or their goals.

We wrote in the abstract: "We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. This militarization trend increases global catastrophic risk or even existential risk during AI takeoff, which includes the use of nuclear weapons against rival AIs, blackmail by the threat of creating a global catastrophe, and the consequences of a war between two AIs. As a result, even benevolent AI may evolve into potentially dangerous military AI. The type and intensity of militarization drive depend on the relative speed of the AI takeoff and the number of potential rivals."

What are the requirements for being "citable?"

Entries on PhilPapers are automatically indexed by Google Scholar, but they need to be formatted as scientific articles. So if the best LW posts were crossposted to PhilPapers, it would increase their scientific visibility, but not their citations (based on my experience).

Really groundbreaking posts, like Meditations on Moloch by Scott Alexander, will be cited anyway, just because they are great.

avturchin's Shortform

How to Survive the End of the Universe

Abstract. The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may soon need to define the final goals of runaway space colonization and of superintelligent AI, b) the possibility of a solution would prove the plausibility of indefinite life extension, and c) understanding the risks of the universe's end will help us escape dangers like artificial false vacuum decay. A possible solution depends on the type of ending that may be expected: a very slow heat death or some abrupt end, like a Big Rip or Big Crunch. We have reviewed the literature, identified several possible ways of surviving the end of the universe, and suggested several new ones. There are seven main approaches to escaping the end of the universe: use the energy of the catastrophic process for computations, move to a parallel world, prevent the end, survive the end, manipulate time, avoid the problem entirely, or find some meta-level solution.

https://forum.effectivealtruism.org/posts/M4i83QAwcCJ2ppEfe/how-to-survive-the-end-of-the-universe

The Pavlov Strategy

I continued to work with a partner who cheated on me, and I did not punish him; he then cheated even more.
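A minimal toy model of this dynamic (my own construction, not from the post; all payoffs and rates are made-up illustration parameters): against a partner whose cheating rate rises whenever defection goes unpunished and falls when it is met with defection, "never punish" lets the defection rate climb toward 1, while the Pavlov rule (win-stay, lose-shift) pushes it back down.

```python
import random

# Toy sketch with arbitrary numbers: a standard prisoner's dilemma against an
# "opportunist" partner who cheats more when cheating goes unpunished.

random.seed(0)

C, D = "C", "D"
# (my payoff, partner payoff)
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def never_punish(my_last, partner_last):
    """Keep cooperating no matter what the partner did."""
    return C

def pavlov(my_last, partner_last):
    """Win-stay, lose-shift: repeat the last move after a good payoff, switch after a bad one."""
    won = PAYOFF[(my_last, partner_last)][0] >= 3
    return my_last if won else (C if my_last == D else D)

def play(my_strategy, rounds=200):
    my_move, my_score = C, 0
    partner_p_defect = 0.1                      # partner starts mostly honest
    for _ in range(rounds):
        partner_move = D if random.random() < partner_p_defect else C
        my_score += PAYOFF[(my_move, partner_move)][0]
        my_next = my_strategy(my_move, partner_move)
        if partner_move == D:                   # opportunist partner:
            if my_next == C:                    # unpunished cheating emboldens it,
                partner_p_defect = min(1.0, partner_p_defect + 0.05)
            else:                               # retaliation deters it
                partner_p_defect = max(0.0, partner_p_defect - 0.10)
        my_move = my_next
    return my_score, partner_p_defect

for name, strategy in [("never punish", never_punish), ("Pavlov", pavlov)]:
    score, p_defect = play(strategy)
    print(f"{name:12s}  my score: {score:4d}  partner's final defection rate: {p_defect:.2f}")
```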

The Pavlov Strategy

It was insightful for me and helped me understand my failures in business.

A LessWrong Crypto Autopsy

It is important to understand why we fail

Breaking Oracles: superrationality and acausal trade

I have a vague thought about anti-acausal-cooperation agents, which are created to make acausal cooperation less profitable. Every time two agents could acausally cooperate to get more paperclips, the anti-agent predicts this and starts destroying paperclips. Thus the net number of paperclips does not change, and the acausal cooperation becomes useless.
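A minimal sketch of the arithmetic, with made-up numbers: if the anti-agent credibly destroys exactly the surplus that the acausal trade would create, cooperating yields no net gain over not cooperating.

```python
# Toy illustration (all quantities are assumptions, purely for the arithmetic):
# the anti-agent cancels out whatever surplus acausal cooperation would add.

def net_paperclips(cooperate: bool, anti_agent_present: bool,
                   baseline: int = 100, cooperation_surplus: int = 40) -> int:
    """Total paperclips produced by the two agents combined."""
    total = baseline
    if cooperate:
        total += cooperation_surplus          # gain from the acausal trade
        if anti_agent_present:
            total -= cooperation_surplus      # anti-agent destroys exactly that surplus
    return total

print(net_paperclips(cooperate=False, anti_agent_present=True))   # 100
print(net_paperclips(cooperate=True,  anti_agent_present=False))  # 140
print(net_paperclips(cooperate=True,  anti_agent_present=True))   # 100 -> cooperation gains nothing
```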
