avturchin

Comments (sorted by newest)
Can a pre-commitment to not give in to blackmail be "countered" by a pre-commitment to ignore such pre-commitments?
Answer by avturchin · Jul 04, 2025

This reminds me of the nested time machines discussed by gwern: https://gwern.net/review/timecrimes

Pre-commitments play the role of time loops, and they can propagate almost indefinitely in time and space. For example, anyone who might one day become a mayor can pre-pre-pre-commit never to open any video from a mafia boss, etc.

 

Are LLMs being trained using LessWrong text?
Answer by avturchin · Jul 02, 2025

Yes, they can generate a list of comments on a post, using the correct names of prominent LessWrongers and the typical style and topics of each commenter.

Time Machine as Existential Risk
avturchin · 4d

Thanks – that was actually what EY said in the quote I put just below my model: that we should change the bit each time. I somehow missed it ("send back a '0' if a '1' is recorded as having been received, or vice versa—unless some goal state is achieved").
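A minimal sketch of why this rule matters (my own illustration, not from the post; the function name and the goal_achieved predicate are made up): if the timeline must be self-consistent, only the goal-achieving branches survive.

```python
def consistent_received_bits(goal_achieved):
    """Return the received bits that form a self-consistent time loop.

    goal_achieved(bit) -> bool: whether receiving this bit leads to the goal state.
    Rule from the quote: send back the opposite of the received bit,
    unless the goal state is achieved, in which case send back the same bit.
    """
    consistent = []
    for received in (0, 1):
        sent = received if goal_achieved(received) else 1 - received
        if sent == received:  # self-consistency: the bit sent back must equal the bit received
            consistent.append(received)
    return consistent

# Example: suppose only receiving a 1 leads to the goal state.
print(consistent_received_bits(lambda bit: bit == 1))  # -> [1]
# Receiving 0 is not a consistent timeline: the rule would require sending back 1,
# contradicting the 0 that was received, so that branch is eliminated.
```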

As I stated in the epistemic status, this article is just a preliminary write-up. I hope more knowledgeable people will write much better models of x-risks from time machines and will be able to point out where avturchin was wrong and explain what the real situation is.

Nina Panickssery's Shortform
avturchin · 6d

I am going to post about biouploading soon – uploading that happens into (or via) a distributed net of my own biological neurons. This combines the good things about uploading – immortality, the ability to be copied, ease of repair – with the good things about being a biological human – preservation of infinite complexity, exact sameness of the person, and a guarantee that the bioupload will have human qualia and any other important hidden properties we might otherwise miss.

Time Machine as Existential Risk
avturchin · 6d

Thanks! Fantastic read. It occurred to me that sending code or an AI back in time, rather than a person, is more likely, since data could be sent to the past serially and probably requires less energy than sending a physical body.

Some loops could be organized by sending a short list of instructions to the past to an appropriate actor – whether human or AI.

Additionally, some loops might not require sending any data at all: Roko's Basilisk is an example of such acausal data transmission to the past. Could there be an outer loop for Roko's Basilisk? For example, a precommitment not to be acausally blackmailed.

Also (though I'm not certain about this), loops like the ones you described require the non-cancellation principle to be false – meaning that events which have already happened can be turned into non-existence. To prevent this, we would need to travel to the past and compensate for any undesirable changes, thus creating loops. This assumption motivated the character in Timecrimes to try to recreate all events exactly as they happened.

However, if the non-cancellation principle is false, we face a much more serious risk than nested loops (which are annoying, but most people would live normal lives, especially those who aren't looped and would pass through the loops unaffected). The risk is that a one-time time machine could send a small probe into the remote past and prevent humanity from appearing at all.

We can also hypothesize that an explosion of nested loops and time machines might be initiated by aliens somewhere in the multiverse – perhaps in the remote future or another galaxy. Moreover, what we observe as UAPs might be absurd artifacts of this time machine explosion.

Time Machine as Existential Risk
avturchin · 7d

The main claim of the article does not depend on the exact mechanism of time travel, which I have chosen not to discuss in detail. The claim is that we should devote some thought to possible existential risks related to time travel.

The argument about presentism is that the past does not ontologically exist, so "travel" into it is impossible. Even if one travels to what appears to be the past, it would not have any causal effects along the timeline.

I was referring to something like eternal return—where all of existence happens again and again, but without new memories being formed. The only effect of such a loop is anthropic—it has a higher measure than a non-looped timeline. This implies that we are more likely to exist in such a loop and in a universe where this is possible.

Interstellar travel will probably doom the long-term future
avturchin · 10d

I would add that there is a series of planetary-system-wide risks that appear only for civilizations traveling within their own solar systems but do not affect other solar systems. These include artificial explosions of giant planets via initiating nuclear fusion in their helium and lithium deposits, destabilization of the Oort cloud, and the use of asteroids as weapons.

More generally speaking, any spacecraft is a potential weapon, and the higher its speed, the more dangerous it becomes. Near-light-speed starships are perfect weapons. Even a small piece of matter traveling at very high velocity (not necessarily near light speed, but above 100 km per second, as I recall) will induce large nuclear reactions at the impact site. Such nuclear reactions may not produce much additional energy, but they will cause significant radioactive contamination. Such impacts could destroy planets and any large structures.

Additionally, space colonization will likely develop alongside weapon miniaturization, which could ultimately result in space-based grey goo with some level of intelligence. Stanisław Lem's last novel, "Fiasco," seems to address this concept.

CstineSublime's Shortform
avturchin · 10d

"Explain as gwern ELI5"

Vladimir_Nesov's Shortform
avturchin · 22d

This means that a straightforward comparison of flops-per-USD between home-computer GPU cards and data-center hardware is incorrect. If someone already has a GPU card, they already have a computer and a house where this computer sits "for free." But if someone needs to scale, they have to pay for housing and mainframes.

Such comparisons of old 2010s GPUs with more modern ones are used to show the slow rate of hardware advances, but they don't take into account the hidden costs of owning older GPUs.
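As a toy illustration of this point (all numbers below are made-up assumptions, not real prices or benchmarks), a naive flops-per-USD figure for a home GPU omits the hosting costs that become explicit at scale:

```python
def flops_per_usd(flops, hardware_usd, hosting_usd=0.0):
    """Sustained FLOP/s per dollar of total cost of ownership."""
    return flops / (hardware_usd + hosting_usd)

# Hypothetical numbers for illustration only.
old_home_gpu = flops_per_usd(flops=5e12, hardware_usd=300)                    # 2010s card; PC, room, power treated as free
old_racked   = flops_per_usd(flops=5e12, hardware_usd=300, hosting_usd=900)   # same card once housing must be paid for
new_dc_gpu   = flops_per_usd(flops=1e14, hardware_usd=4000, hosting_usd=900)  # modern data-center card

print(old_home_gpu / new_dc_gpu)  # naive comparison: the old card looks nearly competitive
print(old_racked / new_dc_gpu)    # with hidden costs included, the gap is several times larger
```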

Posts

Time Machine as Existential Risk (15 karma, 7d, 7 comments)
Our Reality: A Simulation Run by a Paperclip Maximizer (20 karma, 2mo, 65 comments)
Experimental testing: can I treat myself as a random sample? (9 karma, 2mo, 41 comments)
The Quantum Mars Teleporter: An Empirical Test Of Personal Identity Theories (10 karma, 5mo, 18 comments)
What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question? [Question] (16 karma, 6mo, 2 comments)
Sideloading: creating a model of a person via LLM with very large prompt (13 karma, 7mo, 4 comments)
If I care about measure, choices have additional burden (+AI generated LW-comments) (5 karma, 8mo, 11 comments)
Quantum Immortality: A Perspective if AI Doomers are Probably Right (12 karma, 8mo, 55 comments)
Bitter lessons about lucid dreaming (81 karma, 9mo, 62 comments)
Three main arguments that AI will save humans and one meta-argument (8 karma, 9mo, 8 comments)