aletheilia

Comments (sorted by newest)
Hard problem? Hack away at the edges.
aletheilia · 14y

He may have changed his mind since then, but in case you missed it: Recommended Reading for Friendly AI Research

Interview with Singularity Institute Research Fellow Luke Muehlhauser
aletheilia · 14y

This idea probably just comes from looking at the Blue Brain project, which seems to be aiming in the direction of WBE and uses an expensive supercomputer to simulate models of neocortical columns... right, Luke? :)

(I guess because we'd like to see WBE come before AI, since creating FAI is a hell of a lot more difficult than ensuring that a few (hundred) WBEs behave at least humanly friendly, and they could thereby be of some use in making progress on FAI itself.)

[SEQ RERUN] Occam's Razor
aletheilia · 14y

Perhaps the following review article could be of some help here: A Philosophical Treatise of Universal Induction

Singularity Institute Strategic Plan 2011
aletheilia · 14y

Time to level up then, eh? :)

(Just sticking to my plan of trying to encourage people toward this kind of work.)

Singularity Institute Strategic Plan 2011
aletheilia · 14y

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

The trouble is, 'formalizing open problems' seems like by far the toughest part here, and it would thus be nice if we could employ collaborative problem-solving to somehow crack this part of the problem... by formalizing how to formalize various confusing FAI-related subproblems and throwing them on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's sequences tried to prepare us for...

Singularity Institute Strategic Plan 2011
aletheilia · 14y

...open problems you intend to work on.

You mean we? :)

...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.

Help Fund Lukeprog at SIAI
aletheilia · 14y

I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of "let's see, if I donate $100, that may buy a few meals in the States, especially CA, but on the other hand, if I keep it, I can live on it for about two-thirds of a month, and since I also (aspire to) work on FAI-related issues, isn't that a better way to spend the little money I have?"

But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and being $5 poorer isn't going to kill me, I've just made this tiny donation...

IntelligenceExplosion.com
aletheilia · 14y

What is the difference between the ideas of recursive self-improvement and intelligence explosion?

They sometimes get used interchangeably, but I'm not sure they actually refer to the same thing. It wouldn't hurt if you could clarify this somewhere, I guess.

Hanson Debating Yudkowsky, Jun 2011
aletheilia · 14y

How about a LW poll regarding this issue?

(Is there some new way to make one since the site redesign, or are we still at the vote-up/vote-down karma-balance pattern?)

SIAI’s Short-Term Research Program
aletheilia · 14y

Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge about how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its "anything that works" attitude, isn't going to provide it.
