User Profile

1406 karma · 99 posts · 2960 comments

Recent Posts


- Can corrigibility be learned safely? · 23d · 3 min read · 92 points
- Multiplicity of "enlightenment" states and contemplative practices · 1mo · 2 min read · 4 points
- Combining Prediction Technologies to Help Moderate Discussions · 1y · 1 min read · 15 points
- [link] Baidu cheats in an AI contest in order to gain a 0.24% advantage · 3y · 1 min read · 32 points
- Is the potential astronomical waste in our universe too small to care about? · 4y · 2 min read · 14 points
- What is the difference between rationality and intelligence? · 4y · 1 min read · 52 points
- Six Plausible Meta-Ethical Alternatives · 4y · 2 min read · 36 points
- Look for the Next Tech Gold Rush? · 4y · 1 min read · 115 points
- Outside View(s) and MIRI's FAI Endgame · 5y · 1 min read · 60 points
- Three Approaches to "Friendliness" · 5y · 2 min read · 86 points

Recent Comments

> Only seems to require EDT.

I don't see how. I think an EDT agent would make the decision by simulating (or doing some analysis equivalent to this) a bunch of worlds, then looking at the worlds where it or agents like it happened to make the message benign/malign to see what the humans do in ...(read more)
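
A minimal sketch of the "simulate a bunch of worlds, then condition on what agents like me did" estimate described above. Everything here is a hypothetical stand-in (the world model, the benign/malign flag, and the human-response value are invented for illustration), not anything from the original discussion.

```python
import random

def sample_world(rng):
    # A "world" is just a dict of latent facts; in a real setting this would
    # come from the agent's world model.
    return {
        "agent_outputs_benign": rng.random() < 0.5,
        "humans_trust_message": rng.random() < 0.7,
    }

def human_response_value(world):
    # Toy valuation of what the humans end up doing in that world.
    if world["humans_trust_message"]:
        return 1.0 if world["agent_outputs_benign"] else -1.0
    return 0.0

def edt_estimate(action_benign, n=100_000, seed=0):
    rng = random.Random(seed)
    worlds = [sample_world(rng) for _ in range(n)]
    # EDT-style conditioning: keep only the worlds where agents like this one
    # happened to take the action under consideration.
    conditioned = [w for w in worlds if w["agent_outputs_benign"] == action_benign]
    return sum(human_response_value(w) for w in conditioned) / len(conditioned)

if __name__ == "__main__":
    print("E[value | benign message] =", edt_estimate(True))
    print("E[value | malign message] =", edt_estimate(False))
```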

It's an interesting idea, but it seems like there are lots of difficulties.

What if the current node is responsible for the error instead of one of the subqueries? How do you figure that out? When you do backprop, you propagate the error signal through all the nodes, not just through a single path ...(read more)
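
A tiny hand-rolled example of the backprop point: when a value feeds several downstream computations, the error signal flowing back to it is summed over all paths rather than taken from any single path. The function and constants here are arbitrary illustrations.

```python
# f(x) = a*x + b*x**2, computed through two branches that both consume x.

def forward(x, a=3.0, b=2.0):
    branch1 = a * x        # first path that uses x
    branch2 = b * x * x    # second path that uses x
    out = branch1 + branch2
    return out, (x, a, b)

def backward(grad_out, cache):
    x, a, b = cache
    # The error signal reaches x along BOTH branches...
    grad_x_via_branch1 = grad_out * a
    grad_x_via_branch2 = grad_out * 2.0 * b * x
    # ...and the contributions are summed at the shared node.
    return grad_x_via_branch1 + grad_x_via_branch2

out, cache = forward(1.5)
print(backward(1.0, cache))  # 3 + 2*2*1.5 = 9.0
```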

Suppose you had to translate a sentence that was ambiguous (with two possible meanings depending on context), and the target language couldn't express that ambiguity in the same way, so you had to choose one meaning. In your task decomposition you might have two large subtrees for "how likely is meani...(read more)

To respond to your thinking (in the [linked blog post](https://sideways-view.com/2018/03/23/on-seti/)) that, to a first order approximation, if we find an AI in the alien message we should run it:

> The preceding analysis takes a cooperative stance towards aliens. Whether that’s correct or not is...(read more)

Just realized: if you combine colonization and radio beacons, 1/1000x galaxy mass would be enough to make an artificial pattern of >2.5 mJy sources over an area of the sky that's bigger than NVSS's beam size, and that may have been noticed by someone as an anomalous cluster/pattern of radio sources.

[This paper](https://arxiv.org/pdf/0808.0165.pdf) (which I linked to) looked in detail at a set of S > 1.3 Jy radio sources (274 of them) in a small patch of the sky, which makes me think that there are enough bright radio sources that 1.5 Jy wouldn't stand out that much. EDIT: Oh ...(read more)

If you put 1/1000 of the mass of a galaxy into radio signals over a 10 GHz bandwidth for 10 billion years, you get [2.7e28 W/Hz](http://www.wolframalpha.com/input/?i=8.538%C3%9710%5E58+joules+%2F+10+billion+years+%2F+1000+%2F+10+GHz+in+W%2FHz) power spectral density. According to [this paper](http://ads...(read more)
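
The linked Wolfram Alpha arithmetic, reproduced as a quick sanity check; the 8.538e58 J starting figure (the galaxy's mass-energy) is the one used in the query itself.

```python
# Reproduce: 8.538e58 J / 10 billion years / 1000 / 10 GHz, in W/Hz.
galaxy_mass_energy_J = 8.538e58          # figure from the Wolfram Alpha query
seconds = 10e9 * 365.25 * 24 * 3600      # 10 billion years in seconds
fraction = 1 / 1000                      # only 1/1000 of the mass is used
bandwidth_Hz = 10e9                      # 10 GHz bandwidth

psd = galaxy_mass_energy_J * fraction / seconds / bandwidth_Hz
print(f"{psd:.2e} W/Hz")                 # ~2.7e28 W/Hz
```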

I suspect there may be a miscommunication here. To elaborate on "Relying purely on local validity won’t get you very far in playing chess", what I had in mind is that if you decide to play a move only if you can prove that it's the optimal move, you won't get very far, since we can't produce proo...(read more)

> Don’t these numbers not add up? If mass is 1000x luminosity, and quasars are 100x galaxy, then how is the ratio 75x?

The ratio for the sun is actually 1480 to be exact, plus the rest of the galaxy is apparently dimmer per unit mass than the sun is. For 1/1000x, I think if you put most of th...(read more)
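
A quick check of the 1480 figure, on the assumption that it's the ratio of the sun's total mass-energy to the energy the sun radiates over 10 billion years (the timescale used in the nearby comments); the constants are standard values.

```python
# Ratio of the sun's mass-energy (E = mc^2) to its radiative output over 10 Gyr.
M_sun = 1.989e30                  # kg
c = 2.998e8                       # m/s
L_sun = 3.828e26                  # W, solar luminosity
t = 10e9 * 365.25 * 24 * 3600     # 10 billion years in seconds

mass_energy = M_sun * c**2
radiated = L_sun * t
print(mass_energy / radiated)     # ~1.48e3, i.e. roughly 1480
```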

I find your point 1 very interesting, but point 2 may be based in part on a misunderstanding.

> To expand on the last point, if A[*], the limiting agent, is aligned with H then it must contain at least implicitly some representation of H’s values (retrievable through IRL, for example). And so must...(read more)