User Profile


Recent Posts


No posts to display.

Recent Comments

I came to this post via a Google search (hence this late comment). The problem that Cyan's pointing out - the lack of calibration of Bayesian posteriors - is a real problem, and in fact something I'm facing in my own research currently. Upvoted for raising an important, and under-discussed, issue.
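A minimal sketch of the kind of calibration check in question (the conjugate normal model, interval level, and sample sizes are illustrative assumptions, not from the original discussion): draw the parameter from the prior, simulate data, compute the posterior credible interval, and see how often it actually contains the truth.

```python
# Calibration check sketch: do 90% posterior credible intervals cover the
# true parameter 90% of the time? (Illustrative conjugate normal model.)
import numpy as np

rng = np.random.default_rng(0)
sigma, n_obs, n_trials = 2.0, 10, 5000
z = 1.645  # ~95th percentile of the standard normal, for a 90% interval

covered = 0
for _ in range(n_trials):
    theta = rng.normal(0.0, 1.0)              # "truth" drawn from the prior N(0, 1)
    x = rng.normal(theta, sigma, size=n_obs)  # simulated data
    # Closed-form posterior for the normal-normal model:
    post_prec = 1.0 + n_obs / sigma**2
    post_mean = (x.sum() / sigma**2) / post_prec
    post_sd = post_prec ** -0.5
    covered += post_mean - z * post_sd <= theta <= post_mean + z * post_sd

print(f"empirical coverage: {covered / n_trials:.3f} (nominal 0.90)")
# Coverage matches the nominal level only when the data really come from the
# assumed model; re-running with the "truth" drawn from, say, N(3, 1) while the
# posterior still assumes a N(0, 1) prior exhibits the miscalibration at issue.
```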

"The default case of FOOM is an unFriendly AI." Before this, we also have: "The default case of an AI is to not FOOM at all, even if it's self-modifying (like a self-optimizing compiler)." Why not anti-predict that no AIs will FOOM at all?

"This AI becomes able to improve itself in a haphazar..."

@Don: Eliezer says in his AI risks paper, criticising Bill Hibbard, that one cannot use supervised learning to specify the goal system for an AI. And although he doesn't say this in the AI risks paper (contra what I said in my previous comment), I remember him saying somewhere (was it in a mailing ...

I don't get this post. There is no big mystery to asynchronous communication - a process looks for messages whenever it is convenient for it to do so, very much like we check our mailboxes when it is convenient for us. Although it is not clear to me how asynchronous communication helps in building ...
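To make the mailbox analogy concrete, here is a small illustration (mine, not from the post being discussed): a producer thread drops messages into a queue and never waits, while the consumer drains the queue only when it gets around to it, using a non-blocking get.

```python
# Asynchronous communication sketch: the sender never waits for the receiver;
# the receiver checks its "mailbox" only when convenient.
import queue
import threading
import time

mailbox: "queue.Queue[str]" = queue.Queue()

def producer() -> None:
    for i in range(5):
        mailbox.put(f"message {i}")  # asynchronous send: returns immediately
        time.sleep(0.1)

threading.Thread(target=producer, daemon=True).start()

for _ in range(6):          # the consumer's own work loop
    time.sleep(0.25)        # ...busy doing something else...
    while True:             # now it is convenient: drain the mailbox
        try:
            print("received:", mailbox.get_nowait())
        except queue.Empty:
            break
```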

I am interested in what Scott Aaronson says to this.

I am unconvinced, and I agree with both the commenters g and R above. I would say Eliezer is underestimating the number of problems where the environment gives you correlated data and where the correlation is essentially a distraction. Hash funct...
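The comment is cut off, but hash functions do illustrate inputs whose correlation is useless on the output side; a tiny illustration (my own, not the commenter's continuation): consecutive, maximally correlated inputs map to digests with no visible relationship.

```python
# Highly correlated inputs (consecutive integers) yield digests that share no
# apparent structure, so the input correlation is of no help in predicting outputs.
import hashlib

for i in range(3):
    digest = hashlib.sha256(str(i).encode()).hexdigest()
    print(f"sha256('{i}') = {digest[:16]}...")
```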

This post seems to me to evade the hard question of morality: if my own welfare often comes into conflict with the welfare of others, how much weight should I attach to my own utility in comparison to the utility of other humans? The post seems to say I should look into the m...

"Ayn Rand? Aleister Crowley? How exactly do you get there? What Rubicons do you cross? It's not the justifications I'm interested in, but the critical moments of thought."

My guess is that Ayn Rand at least applied a "reversed stupidity = intelligence" heuristic. She saw examples of ostens...

"There are no-free-lunch theorems in computer science - in a maxentropy universe, no plan is better on average than any other. " I don't think this is correct - in this form, the theorem is of no value, since we know the universe is not max-entropy. No-free-lunch theorems say that no plan is better ...(read more)

@billswift: I do not want to divert the thread onto the topic of animal rights. It was only an example in any case. See Paul Gowder's comment previous to mine for a more detailed (and different) example of how empirical knowledge can affect our moral judgements.

A few processes to explain moral progress (but probably not all of it): a) Acquiring new knowledge (e.g. the knowledge that chimps and humans are, on an evolutionary scale, close relatives), which leads us to throw away moral judgements that make assumptions which are inconsistent with such knowledg...