MarkHHerman

Comments

Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
MarkHHerman · 16y · 10

Do you think a cog psych research program on “moral biases” might be helpful (e.g., regarding existential risk reduction)?

[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of “moral error” that requires (a) the perpetrating agent’s acceptance of the assessment of moral erroneousness (i.e., individual relativism, to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of the erroneousness (i.e., sufficiently motivating vs. moral indifference, laziness, and/or akrasia).]

Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
MarkHHerman · 16y · 20

What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?

(E.g., Is there an outreach objective? If so, for what purpose?)

Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
MarkHHerman · 16y · 40

To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (à la Tooby & Cosmides)?

Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).

Thanks for all your hard work.

With whom shall I diavlog?
MarkHHerman · 16y · 10

Someone with whom establishing a connection might make the difference in getting them to appear at a future Singularity Summit. Also, someone with whom an association would enhance your credibility.
