Active AGI and/or FAI researchers
by hankx7787 · 7th Jan 2012 · 1 min read · Personal Blog

I am curious in general about who here, if anyone, is actively researching the AGI and/or FAI problems directly in a full-time capacity (or soon will be). So if that's you, please say hello! Or if you know of another website, mailing list, etc. where this question would be more appropriate to ask, please let me know.

If you are interested in saying more about who you are and what you're doing, I've included some additional questions below. Feel free to provide as much or as little information as you'd like, but the more the better!

  • Are you working for a particular organization or a known AGI project? If so, which? Link? If not, are you working on these issues independently, or can you otherwise explain your situation?
  • What is your overall theory/philosophy on FAI/AGI? How are you similar to, and how do you differ from, Eliezer Yudkowsky in this respect?
  • What has been your overall approach to studying for this line of research, and what specific curriculum and which books/papers/etc. would you recommend? I would be interested in as much detail as you can provide here.
  • Do you have any published material (even informal/in-progress information, documentation, discussion, blogs, etc.)? Links?
  • What are you working on now, and what's coming next in your work? Are you solving some interesting problem, creating some interesting new idea, bringing together a grand theory, actually building a working FAI/AGI or similar, or approaching some big milestone along any of these paths or others? What do your plans and timeline for the future look like?
14 comments, sorted by top scoring
Stuart_Armstrong · 14y · 13 points

Currently working for the FHI, mainly on FAI-like problems. Got a paper coming out soon on Oracle AI (http://www.aleph.se/papers/oracleAI.pdf).

lukeprog · 14y · 10 points

See here for starters.

[anonymous] · 14y · 4 points

I just wanted to say thank you for all the cool websites you made/got SIAI to spend resources on! :)

Grognor · 14y · 4 points

Also thank Lightwave for being the resources.

Vladimir_Nesov · 14y · -1 points

You won't get a lot of responses if you ask people to name themselves.

Stuart_Armstrong · 14y · 0 points

Why, btw?

Vladimir_Nesov · 14y · 4 points

Because, all else being equal, announcing that you're doing something impressive feels like a (small) status hit, so if nothing else moves people to overcome this trivial inconvenience (for example, recognizing that it actually isn't a status hit, or that the default behavior in a given context is to respond rather than stay silent, or being asked personally), nothing gets done.

MixedNuts · 14y · 4 points

> announcing that you're doing something impressive feels like a (small) status hit

Something isn't right here. Do you mean it feels like a status grab (a status hit to others), avoided out of politeness? Or that people who do extremely impressive things (as opposed to moderately impressive ones) shouldn't need to announce them, so "I'm saving the world" is a status loss but "I'm learning Swahili" is a status gain?

Kaj_Sotala · 14y · 2 points

I interpreted this as meaning that needing to nominate yourself implies that nobody else cares enough about your work to name you as an example, i.e. that you're not actually that important.

Stuart_Armstrong · 14y · 0 points

Convoluted. But do you feel it's plausible?

Kaj_Sotala · 14y · 0 points

I know I've felt that way every now and then, though on those occasions the reason has also been clear to me. I'm not sure if it's equally plausible for someone to feel that way and not realize the logic behind it.

Vladimir_Nesov · 14y · -1 points

> Something isn't right here. Do you mean it feels like a status grab (a status hit to others), avoided out of politeness?

Politeness is about covert/deniable transactions in status-related attributes, so it's a curiosity stopper in this context, not an explanation. It probably feels like a status hit because it's expected (perhaps incorrectly) to feel like a status grab to others. What you feel isn't generally a reason for responding a certain way; instead, it's a means: something external should be the reason, whose detection might be represented as a feeling, in turn triggering a behavior.

Stuart_Armstrong · 14y · 3 points

Then it's lucky I don't overthink these things.

And your story sounds plausible, but the opposite would sound equally plausible to me.

Vladimir_Nesov · 14y · 1 point

> Then it's lucky I don't overthink these things.

I was characterizing an emotional response, not reasoning. There doesn't seem to be a clear argument for that response being correct in this case.
