
My (unflattering) comment on EY's presentation style.

But, he appears to be wearing a bondage-gear leather vest.

Eliezer FTW.


He looks like he showed up at the last minute from a Renaissance Faire run by sadomasochists.

A day in the life of Eliezer Yudkowsky.

I think the vest is harmless, but perhaps I'm not the person to go to for how to dress to impress normal people. I don't seem to be on the autism spectrum, but so far as clothes are concerned, I'm amazed at how much people read into them.

What does horrify me is that I was needed to post the link. Why didn't the Singularity Institute do it?

I guess they assumed since it was on the front page of the SingInst site there was no need...?

Er...not really. I mean, I know what that would look like and, um, no. Sometimes leather is just leather.


No, not really. It's an exaggeration for humorous effect.

I know what that would actually look like too, and while it might cause even bigger signaling problems for Eliezer, it would certainly be interesting to watch. Did I inadvertently suggest that I thought Ren Faires and BDSM were bad things?

Well, it would be kinda hard to give a presentation through the branks.

Right garment, wrong venue.


It's great to see high status people like Max Tegmark and Jaan Tallinn publicly support the Singularitarian cause (i.e., trying to push the future towards a positive Singularity). Tallinn specifically mentioned LW as the main influence for his becoming a Singularitarian (or in his newly invented term, "CL3 Generation"). Does anyone know Tegmark's story?

I suspect that Tegmark is bright enough to have arrived on his own, given cosmology, physical law and a strict adherence to materialism.

(In terms of how he arrived at a Singularitarian worldview, not how he came to affiliate with the SIAI.)

In his own words (2007):

I believe that consciousness is, essentially, the way information feels when being processed. Since matter can be arranged to process information in numerous ways of vastly varying complexity, this implies a rich variety of levels and types of consciousness. The particular type of consciousness that we subjectively know is then a phenomenon that arises in certain highly complex physical systems that input, process, store and output information. Clearly, if atoms can be assembled to make humans, the laws of physics also permit the construction of vastly more advanced forms of sentient life. Yet such advanced beings can probably only come about in a two-step process: first intelligent beings evolve through natural selection, then they choose to pass on the torch of life by building more advanced consciousness that can further improve itself.

I suspect that Tegmark is bright enough to have arrived on his own, given cosmology, physical law and a strict adherence to materialism.

What I want to know is how he became motivated to push for a positive Singularity. It seems to me that people who think that a Singularity is possible, or even likely, greatly outnumber people who think they ought to do something about it. (I used to wonder why most people are so apathetic in the face of such danger and opportunity, but maybe a better question is how the few people who are not became that way.)

My favorite talk is Jaan's.

I really liked Jaan's talk as well, but I wonder how "Level 2" people react to it. Would they be offended by the suggestion that they are maximizing their social status instead of doing what's best for future society, or by the Levels terminology which implies that they are inferior to "Level 3" people? (The implication seems clear despite repeated disclaimers from Jaan.)

My first reaction to seeing Jaan's talk was "someone ought to forward this to Bill Gates", but now I'm not so sure.

Yup, I'd want to change that part.

I'm not sure it should be changed, just saying that Jaan might want to do a bit of "market research" before putting his message in front of a different audience. Who knows, maybe being described as status junkies and "level 2" is actually a good way to make people like Bill Gates reconsider their priorities?

I'm sure this has nothing to do with him paying a large chunk of your salary :P

Seriously, though, it was a great talk, with a great conclusion, too. But I can't say that the label "CL3 Generation" is catchy enough.

There was a discussion at a dinner afterwards about what the C stood for.

It took far too long to remember, but "exactivist" was a popular alternative. (exact + ist, ex - activist, ex (as in x-risk) activist, and probably a few more that I forgot).

A good talk, but as others have mentioned, the "CL3" thing is strange, and it seems like the whole idea of there being levels has only weak motivation (and raises irrelevant objections, prompting the disclaimers about "levels other than CL3 being OK" that Jaan was forced to repeatedly make). On the other hand, the categorization into three unordered areas of activism/concern seems solid.

That is probably a good example of how not to attempt to launch a meme.

Can you be more specific?

The "CL3 Generation" meme. It even managed to remind me of Scientology's "OT auditing levels".

Perhaps more time on 4chan is needed.


Is it possible to obtain the slides from EY's presentation?

Not what you asked, but... I did upload his list of open problems here.

Seriously Luke, slides - the video was kind of blurry. Use the Force (if you have to)!

I think there is such a thing as professionalism, and it's not always bad. Posting slides for your talks is common practice. In EY's case we can chalk it up to absentminded genius, but this is why we have well-organized people like you at SingInst. I say this as a supporter.

Just got permission from Eliezer to post his Singularity Summit 2011 slides. Here you go.


Great!

Thanks a lot Luke.

extension of Solomonoff induction to anthropic reasoning and higher-order logic – why ideal rational agents still seem to need anthropic assumptions.

I would say it lacks a rationale. AFAIK, intelligent agents just maximise some measure of utility. Anthropic issues are dealt with automatically as part of this process.
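To illustrate what I mean (a toy sketch only; the hypotheses, payoffs and names are made up for illustration): the agent just maximises expected utility over a prior on hypotheses, so any "anthropic" weighting lives in how the prior and the outcome models are written down, not in a separate mechanism.

```python
# Toy sketch: an agent that simply maximises expected utility over a prior
# on hypotheses. "Anthropic" considerations are just part of how the prior
# and the per-hypothesis outcome models are specified.

def expected_utility(action, hypotheses):
    """hypotheses: list of (prior_probability, outcome_model) pairs,
    where outcome_model(action) returns the utility under that hypothesis."""
    return sum(p * outcome_model(action) for p, outcome_model in hypotheses)

def best_action(actions, hypotheses):
    return max(actions, key=lambda a: expected_utility(a, hypotheses))

# Example: two hypotheses about which "world" the agent finds itself in.
hypotheses = [
    (0.7, lambda a: {"stay": 1.0, "switch": 0.0}[a]),
    (0.3, lambda a: {"stay": 0.0, "switch": 2.0}[a]),
]
print(best_action(["stay", "switch"], hypotheses))  # -> "stay" (0.7 vs 0.6)
```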

Much the same is true of this one:

Theory of logical uncertainty in temporal bounded agents.

Again, this is a sub-problem of solving the maximisation problem.

Breaking a problem down into sub-problems is valuable - of course. On the other hand you don't want to mistake one problem for three problems - or state a simple problem in a complicated way.

How do you construe a utility function from a psychologically realistic detailed model of a human’s decision process?

It may be an obvious thing to say - but there is an existing research area that deals with this problem: revealed preference theory.

I would say obtaining some kind of utility function from observations is rather trivial - the key problem is compressing the results. However, general-purpose compression is part of the whole project of building machine intelligence anyway. If we can't compress, we get nowhere, and if we can compress, then we can (probably) compress utility functions.
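For instance, here is a rough sketch of the "rather trivial" step: recovering an ordinal preference ranking from observed pairwise choices (the items and data are made up; real revealed-preference methods are considerably more careful). The hard part, compressing and generalising the result, is not attempted here.

```python
# Hypothetical sketch: recover an ordinal "utility function" from observed
# pairwise choices, in the spirit of revealed preference.

from collections import defaultdict

observed_choices = [
    ("apple", "banana"),   # chose apple over banana
    ("banana", "cherry"),
    ("apple", "cherry"),
]

# Score each option by how often it was chosen over something else.
wins = defaultdict(int)
options = set()
for chosen, rejected in observed_choices:
    wins[chosen] += 1
    options.update([chosen, rejected])

# A crude ordinal utility function: rank by win count (ties broken arbitrarily).
utility = {opt: wins[opt] for opt in options}
print(sorted(options, key=utility.get, reverse=True))  # ['apple', 'banana', 'cherry']
```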

It may be an obvious thing to say - but there is an existing research area that deals with this problem: revealed preference theory.

Right. Also, choice modeling in economics and preference extraction in AI / decision support systems.

Better formalize hybrid of causal and mathematical inference.

I'm not convinced that there is much to be done there. Inductive inference is quite general, while causal inference involves its application to systems that change over time in a lawful manner. Are we talking about optimising inductive inference systems to preferentially deal with causal patterns?

That is similar to the "reference machine" problem - in that eventually you can expose the machine to some real-world data and then let it design its own reference machine. Hand-coding a reference machine might help with getting off the ground initially, however.

Does anyone understand better - or have a link?

Making hypercomputation conceivable

This one seems to be a pretty insignificant problem, IMHO. Real icing-on-the-cake stuff that isn't worth spending time on at this stage.

All the videos here: http://www.youtube.com/user/SingularitySummits ...are currently private.

Guys, have a look: www.cl3generation.com

Very preliminary at this stage. Cheers.

My comments on the video: Eliezer Yudkowsky: Open Problems in Friendly Artificial Intelligence

An ordinary utility maximiser calculates its future utility conditional on it choosing to defect - and does the same conditional on it choosing to cooperate. If it knows it is playing the Prisoner's Dilemma against its clone, it will expect the clone to make the same deterministic decision that it does. So, it will choose to cooperate - since that maximises its own utility. That is the behaviour to expect from a standard utility maximiser.
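Spelling the calculation out (with standard, made-up Prisoner's Dilemma payoffs, purely for illustration):

```python
# Sketch of the argument above: an expected-utility maximiser that knows its
# opponent is an exact clone conditions on the clone making the same choice
# it does, so it only compares (C, C) against (D, D).

PAYOFF = {  # (my_move, opponent_move) -> my utility; standard PD ordering
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def utility_against_clone(my_move):
    # The clone is deterministic and identical, so its move equals mine.
    return PAYOFF[(my_move, my_move)]

best = max(["C", "D"], key=utility_against_clone)
print(best)  # "C": 3 > 1, so the maximiser cooperates against its clone
```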

...and...

05:55 - What about the well-known list of informational harms? E.g. see the Bostrom "Information Hazards" paper.

I notice that multiple critical comments have been incorrectly flagged as spam on this video. Some fans have a pretty infantile way of expressing disagreement.