User Profile


Recent Posts

Curated Posts
Curated: Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:

* interesting, insightful, useful
* aim to explain, not to persuade
* avoid meta discussion
* relevant to people whether or not they are involved with the LessWrong community
(includes curated content and frontpage posts)
All Posts
Includes personal and meta blogposts (as well as curated and frontpage posts).

Decision Theory and the Irrelevance of Impossible Outcomes

1y
3

Why Altruists Should Focus on Artificial Intelligence

1y
0

[Link] How the Simulation Argument Dampens Future Fanaticism

2y
1 min read
13

In Praise of Maximizing – With Some Caveats

3y
11 min read
19

Meetup : First LW Meetup in Warsaw

4y
1 min read
8

Literature-review on cognitive effects of modafinil (my bachelor thesis)

4y
2 min read
42

Meetup : First Meetup in Cologne (Köln)

5y
1 min read
13

[Link] Should Psychological Neuroscience Research Be Funded?

5y
2 min read
12

Meetup : First meetup in Innsbruck

6y
1 min read
2

Recent Comments

> The open beta will end with a vote of users with over a thousand karma on whether we should switch the lesswrong.com URL to point to the new code and database.

How will you alert these users? (I'm asking because I have over 1000 karma but I don't know where I should vote.)

One of the more crucial points, I think, is that positive utility is – for most humans – complex and its creation is conjunctive. Disutility, in contrast, is disjunctive. Consequently, the probability of creating the former is smaller than that of the latter – all else being equal (of course, all else is n...(read more)

[The article that introduced the term "s-risk"](https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/) was shared on LessWrong in [October 2016](http://lesswrong.com/lw/o0t/reducing_risks_of_astronomical_suffering_srisks_a/). The content of the article and ...(read more)

Here is another question that would be very interesting, IMO:

“For what value of X would you be indifferent about the choice between A) creating a utopia that lasts for one-hundred years and whose X inhabitants are all extremely happy, cultured, intelligent, fair, just, benevolent, etc. and lead r...(read more)

I think it's great that you're doing this survey!

I would like to suggest two possible questions about acausal thinking/superrationality:

1) > Newcomb’s problem: one box or two boxes?

> * Accept: two boxes
> * Lean toward: two boxes
> * Accept: one box
> * Lean toward: one box
> * Other

(This...(read more)

First of all, I don't think that morality is objective as I'm a proponent of moral anti-realism. That means that I don't believe that there is such a thing as "objective utility" that you could objectively [measure](https://foundational-research.org/measuring-happiness-and-suffering/).

But, to use...(read more)

Great list!

IMO, one should add *Prescriptions, Paradoxes, and Perversities* to the list, perhaps in the section "Medicine, Therapy, and Human Enhancement".

I don't understand why you exclude [risks of astronomical suffering](https://foundational-research.org/risks-of-astronomical-future-suffering/) ("hell apocalypses").

[Below](http://lesswrong.com/r/discussion/lw/nxz/seven_apocalypses/dfkg) you claim that those risks are "Pascalian" but this [seems w...(read more)

Cool that you are doing this!

Is there also a Facebook event?

> That's not true -- for example, in cases where the search costs for the full space are trivial, pure maximizing is very common.

Ok, sure. I probably should have written that pure maximizing or satisficing is hard to find in *important*, *complex* and *non-contrived* instances. I had in mind such d...(read more)