User Profile

Karma: 24,948 · Posts: 101 · Comments: 3,032

Recent Posts

Curated Posts
Curated - Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community
(includes curated content and frontpage posts)
All Posts
Includes personal and meta blogposts (as well as curated and frontpage).

A general model of safety-oriented AI development

9d
58 points
1 min read
8 comments

Beyond Astronomical Waste

13d
68 points
2 min read
26 comments

Can corrigibility be learned safely?

3mo
76 points
3 min read
110 comments

Multiplicity of "enlightenment" states and contemplative practices

3mo
92 points
2 min read
4 comments

[Link] Examples of Superintelligence Risk (by Jeff Kaufman)

1y
5 points
1 comment

Combining Prediction Technologies to Help Moderate Discussions

2y
13 points
1 min read
15 comments

[link] Baidu cheats in an AI contest in order to gain a 0.24% advantage

3y
10 points
1 min read
32 comments

Is the potential astronomical waste in our universe too small to care about?

4y
24 points
2 min read
14 comments

What is the difference between rationality and intelligence?

4y
11 points
1 min read
52 comments

Six Plausible Meta-Ethical Alternatives

4y
34 points
2 min read
36 comments

Recent Comments

> I think it is likely best to push against including that sort of thing in the Overton window of what’s considered AI safety / AI alignment literature.

I'm really sympathetic to these concerns but I'm worried about the possible unintended consequences of trying to do this. There will inevitably...(read more)

> Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes?

It doesn't fully solve problems associated with very large universes, but I think it likely provides a framework in which those problems will eventually be solved. See [this post](https://ww...(read more)

> I do want to stay in explore-options and gather data mode

What kind of analysis are you thinking of doing on the data that you're gathering? I'm curious, and also [pre-registration](https://en.wikipedia.org/wiki/Trial_registration) may be a good idea in situations like this to reduce bias.

> ...(read more)

But I took it seriously enough to come up with a counter-argument against it. Doesn't that count for something? :) To be clear, I'm referring to the second post in [that thread](http://everything-list.105.n7.nabble.com/the-one-universe-tt2761.html), where I wrote:

> Let me try to generalize the...(read more)

It looks like I should actually claim priority for this idea myself, since I came up with something very similar on the way to UDT. From this [1998 post](http://everything-list.105.n7.nabble.com/the-one-universe-tt2761.html):

> One piece of information about the real universe you have direct acce...(read more)

> Instead, start with a prior chosen somehow, then update it according to the evidence that a being such as you exists in the universe.

This seems very similar to Radford Neal’s [full non-indexical conditioning](https://arxiv.org/abs/math/0608592):

> I will here consider what happens if you ig...(read more)
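In symbols (notation mine, not Neal's), the update being described is roughly:

$$P(w \mid E) \;\propto\; P(E \mid w)\,P(w),$$

where $w$ ranges over candidate universes and $E$ is the non-indexical evidence "a being with exactly your observations and memories exists somewhere in $w$".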

I do this myself for a couple of reasons:

1. Laziness - the marginal benefit of voting on something decreases with the absolute value of its current karma, but the cost of voting stays constant (see the sketch below).
2. To prevent the "rich get richer" phenomenon, where if everyone pays more attention to posts/commen...(read more)
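A minimal sketch of the cost-benefit rule behind point 1, assuming a toy decay model (the function shape and constants are illustrative, not anything LessWrong actually implements):

```python
def should_vote(current_karma: int,
                base_benefit: float = 10.0,
                cost: float = 1.0) -> bool:
    """Toy cost-benefit rule: the marginal benefit of one more vote
    shrinks as |current_karma| grows, while the effort cost of
    casting the vote stays constant."""
    marginal_benefit = base_benefit / (1 + abs(current_karma))
    return marginal_benefit > cost

# With these illustrative constants, a comment sitting at 0 karma is
# still worth voting on, while one already at +50 is not.
assert should_vote(0) and not should_vote(50)
```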

Having a high amount of voting power basically feels like a disadvantage to me instead of a benefit, because it makes me more reluctant to exercise strong voting power. Maybe other people won't be able to tell who voted on something, but the part of my brain that worries about this kind of thing isn...(read more)

> If voting takes an hour, then it’s worth it iff you’re otherwise winning less than 10 picolightcones per lifetime.

Do you mean "per hour" instead?

> If a lifetime is 100,000 hours, that means less than a microlightcone per lifetime.

Have you thought about how to estimate microlightcone pe...(read more)
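The arithmetic behind that suggested correction, written out:

$$10\ \text{picolightcones/hour} \times 100{,}000\ \text{hours/lifetime} = 10^{6}\ \text{picolightcones} = 1\ \text{microlightcone per lifetime}.$$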

Another issue that I don't know if you've thought of: strong votes from people with high karma are not very anonymous. If I'm talking with someone and I strong up/down vote them (like [here](https://www.lesswrong.com/posts/uG3ri4y3siWyC52bD/a-rationalist-argument-for-voting#KXK9jMaxyNAoXaEE9) recent...(read more)