User Profile

Karma: 14 · Posts: 0 · Comments: 69

Recent Posts

No posts to display.

Recent Comments

"My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program."

I once asked one of the robotics guys at IDSIA about subsumption archi...(read more)

In recent years I've become more appreciative of classical statistics. I still consider the Bayesian solution to be the correct one, however, often a full Bayesian treatment turns into a total mess. Sometimes, by using a few of the tricks from classical statistics, you can achieve nearly as good p...(read more)
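The comment is cut off before any examples, but a minimal sketch (my illustration, not from the comment) of the kind of correspondence it gestures at: for a normal mean with known noise variance, the classical 95% confidence interval is numerically the same as the Bayesian 95% credible interval under a flat prior, so the quick frequentist recipe recovers essentially the full Bayesian answer.

```python
# Sketch: classical confidence interval vs. Bayesian credible interval
# for a normal mean with known variance (flat prior). The two intervals
# coincide, illustrating how a classical shortcut can match the Bayesian answer.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 2.0                          # known noise standard deviation
data = rng.normal(loc=1.5, scale=sigma, size=50)

n, xbar = len(data), data.mean()
se = sigma / np.sqrt(n)

# Classical: Wald 95% confidence interval for the mean.
z = stats.norm.ppf(0.975)
ci = (xbar - z * se, xbar + z * se)

# Bayesian: under a flat prior the posterior for the mean is N(xbar, se^2),
# so the central 95% credible interval is the same numbers.
cred = stats.norm.interval(0.95, loc=xbar, scale=se)

print("95% confidence interval:", ci)
print("95% credible interval:  ", cred)
```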

Vladimir,

Firstly, "maximizing chances" is an expression of your creation: it's not something I said, nor is it quite the same in meaning. Secondly, can you stop talking about things like "wasting hope", concentrating on metaphorical walls, or nature's feelings?

To quote my positi...(read more)

Vladimir,

"Nature doesn't care if you 'maximized your chances' or leapt in the abyss blindly, it kills you just the same."

When did I ever say that nature cared about what I thought or did? Or the thoughts or actions of anybody else for that matter? You're regurgitating slog...(read more)

Eli,

"FAI problems are AGI problems, they are simply a particular kind and style of AGI problem in which large sections of the solution space have been crossed out as unstable."

Ok, but this doesn't change my point: you're just one small group out of many around the world doing AI research, a...(read more)

Eli, sometimes I find it hard to understand what your position actually is. It seems to me that your position is:

1) Work out an extremely robust solution to the Friendly AI problem

Only once this has been done do we move on to:

2) Build a powerful AGI

Practically, I think this strategy is risk...(read more)

Roko: Well, my thesis would be a start :-) Indeed, pick up any textbook or research paper on reinforcement learning to see examples of utility being defined over histories.

Roko, why not:

U(alternating A and B states) = 1
U(everything else) = 0
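As a concrete gloss on "utility defined over histories" (my sketch, not from the thread; the function name U and the string labels "A"/"B" are just illustrative): the utility is a function of the whole visited-state sequence rather than of the current state, and here it rewards exactly the histories that alternate between the two states.

```python
# Sketch: a utility function over histories (whole state sequences), not states.
# It returns 1 for histories that strictly alternate between "A" and "B", else 0.
from typing import Sequence

def U(history: Sequence[str]) -> float:
    """Utility of an entire state history."""
    if len(history) < 2 or set(history) - {"A", "B"}:
        return 0.0
    # Every adjacent pair differs, and only A/B occur, so the sequence alternates.
    alternates = all(a != b for a, b in zip(history, history[1:]))
    return 1.0 if alternates else 0.0

print(U(["A", "B", "A", "B"]))  # 1.0
print(U(["A", "A", "B"]))       # 0.0
```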

Roko:

"So allow me to object: not all configurations of matter worthy of the name 'mind' are optimization processes. For example, my mind doesn't implement an optimization process as you have described it here."

I would actually say the opposite: Not all optimisation processes are worthy of t...(read more)

"And with the Singularity at stake, I thought I just had to proceed at all speed using the best concepts I could wield at the time, not pause and shut down everything while I looked for a perfect definition that so many others had screwed up..."

In 1997, did you think there was a reasonable ch...(read more)