User Profile


No posts to display.

Recent Comments

Considering how much stuff like Conway's Game of Life, which bears no resemblance to our universe, is played, I'd put the probability much lower.

Whenever you run anything which simulates anything Turing-complete (OK, a finite state machine is actually enough, due to the finite amount of information storage...(read more)
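As an aside, the Game of Life mentioned above is small enough to sketch directly; the set-based grid representation and the `step` helper below are illustrative choices, not anything from the original comment.

```python
# Minimal sketch of one Conway's Game of Life step (illustrative only).
# Live cells are stored as a set of (x, y) coordinates on an unbounded grid.

from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Apply the standard B3/S23 Life rule once."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours,
    # or 2 neighbours and was already alive.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The rules fit in a dozen lines, yet the system is rich enough to host universal computation, which is the point being made above about simple simulated worlds.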

> but Y has solved the (interesting) problem of understanding how people write novels.

I think the whole point of AI research is to do something, not to find out how humans do something. You personally might find psychology (how humans work) far more interesting than AI research (how to do things tra...(read more)

> If something goes wrong and our learned rules and basic instincts aren't working, consciousness has to step in and try to cobble a solution together on the fly (usually badly).

Considering that we've so completely kicked ass against every other species that we haven't even been on the same playing ...(read more)

> but it just tells us that those problems are less interesting than we thought.

Extrapolating from the trend, it would not surprise me greatly if we eventually found out that intelligence in general is not as interesting as we thought.

When something is actually understood the problem suffers from...(read more)

I merely wished to clarify the difference between consciousness and how it is implemented in the brain. I had no intention of implying that it was part of the discussion. In retrospect, the clarification was not required.

It's just way too common for the two issues to get mixed up, as can be seen on ...(read more)

Quantum computing in the brain might be happening, but if we want to understand consciousness it is irrelevant (unless consciousness is non-computable, in which case it becomes a claim about quantum physics yet again). It's as relevant as details about transistors or vacuum tubes are for understanding sorting a...(read more)
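The transistors-versus-sorting analogy can be made concrete: a sort is specified at the abstract level, so two very different implementations (stand-ins for different physical substrates) behave identically. The function names below are illustrative, not anything from the original comment.

```python
# Sketch: sorting is substrate-independent. Two unrelated implementations
# satisfy the same abstract specification and are interchangeable.

def bubble_sort(xs: list) -> list:
    """Naive O(n^2) implementation: repeatedly swap adjacent pairs."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs: list) -> list:
    """O(n log n) divide-and-conquer implementation."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

data = [5, 3, 8, 1, 9, 2]
# Same specification, same result -- the low-level details don't matter.
assert bubble_sort(data) == merge_sort(data) == sorted(data)
```

Nothing about transistors, vacuum tubes, or any other hardware appears in either definition, which is exactly the sense in which such details are irrelevant to understanding sorting.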

> It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven't been convincing).

Considering that quantum physics is Turing-complete (unless it's nonlinear, etc.), any quantum effects could be reproduced ...(read more)

It seems I was wrong about Dennett's claims and misinterpreted the relevant sentence.

However, the original question remains and can be rephrased: what predictions follow from a world containing some intrinsic blueness?

The topmost cached thought I have is that this is exactly the same kind of confus...(read more)

> You can do a Dennett and deny that anything is really blue.

I'd like to see what he'd do if presented with a blue ball and a red ball and given a task: "Pick up the blue ball and you'll receive 3^^^3 dollars."

Even though many claim to be confused about these common words, their actual behaviour betray...(read more)

As there is a 1:1 mapping between the set of all reals and the unit interval, we can just use the unit interval and define a uniform mapping there. Whatever distribution you choose, we can map it into the unit interval, as Pengvado said. In the case of the set of all integers I'm not completely certain. But I'd l...(read more)
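One concrete choice of such a bijection between the reals and the open unit interval (the comment does not specify which mapping is meant, so this is just a standard example) is the logistic function:

```latex
f : \mathbb{R} \to (0,1), \qquad
f(x) = \frac{1}{1 + e^{-x}}, \qquad
f^{-1}(u) = \ln\frac{u}{1-u}.
```

Since $f$ is a continuous bijection, any distribution on $\mathbb{R}$ pushes forward to a distribution on $(0,1)$ and back; more generally, if $X$ has continuous CDF $F$, then $F(X)$ is uniform on $(0,1)$, which is the sense in which any chosen distribution can be mapped into the unit interval.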