User Profile


Recent Posts

Curated Posts
Curated - Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community.
(includes curated content and frontpage posts)
Personal Blogposts
Personal blogposts by LessWrong users (as well as curated and frontpage).

Verifier Theory and Unverifiability · 1y · 0 comments

The map of agents which may create x-risks · 2y · 6 comments

The map of organizations, sites and people involved in x-risks prevention · 2y · 1 comment

Fermi paradox of human past, and corresponding x-risks · 2y · 27 comments

The map of natural global catastrophic risks · 2y · 0 comments

The map of the methods of optimisation (types of intelligence) · 2y · 10 comments

The map of ideas how the Universe appeared from nothing · 2y · 45 comments

The map of the risks of aliens · 2y · 21 comments

Identity map · 2y · 41 comments

The map of p-zombies · 2y · 14 comments

Recent Comments

I have links to old LW posts in some articles and other places. What will happen with all these links?

It should be able to understand human language, or it is (most likely) not dangerous.

I have been getting this comment in different forms many times, so I will state again the three main reasons why it is not exactly true:

1) The message is addressed to a Young AI, not to a full-blown superintelligence. Such a Young AI will be time-constrained but may still have to make some pivotal act.

2) The message incl...(read more)

Why? If we have a full-blown paperclipper which is coming of age, we are already doomed, and triggering its bullshit detector will not make our situation any worse.

In fact, I am not describing here how to build such a trap; I am expressing our commitment to build the trap, in order to instill uncertainty in any future AI about whether the trap exists and what it could be.

I think the actual constructors of the trap will find the correct ways to make it effective and bal...(read more)

It will kill humanity not because it will be annoyed, but for two main goals: its own safety, or to use human atoms. Other variants are also possible; I explored them here: http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/

Admittedly, most arguments are weak, but that is not a problem, as we try all possible independent arguments, hoping that at least one will work. So there is no reason to leave weak arguments out of the text of the message.

However, this particular argument could be made stronger. It is intended to pr...(read more)

Another cache level is drawing a visual map and then using the visual field as a cache of all the elements of the problem.

Thanks! It is nice that we arrived at the wording "young AI" more or less independently. I tried to steelman this approach as much as possible and to collect all ideas about messaging to a future AI. I will add your post to the bibliographic section.