Gyrodiot

I want to make machines understand language, for greater communication, cooperation and progress. I hold a Ph.D. in AI (natural language processing). My day job is to assist humans with an AI that understands their job. I'm based in France, where rationality material is scarce. I want to change that.

Gyrodiot's Comments

Beliefs: A Structural Change

I sometimes give a brief presentation of rationality to acquaintances, and I often stress the importance of being able to change your mind. In the Sequences, this is often illustrated by thought experiments, which sound a bit contrived when taken out of context, or by wide-ranging choices, which sound too remote and dramatic for explanatory purposes.

I don't encounter enough examples of the day-to-day application of instrumental rationality: the experience of changing your mind, rather than the knowledge of how to do it. Your post has short glimpses of it, and I would very much enjoy reading a more in-depth description of these experiences. You seem to notice them, which is a skill I find very valuable.

On a more personal note, your post nudges me towards "write more things down", as I should track when I do change my mind. In other words, follow more of the rationality checklist's advice. I'm too often frustrated by my failure to notice things. So, thanks for this nudge!

Should an AGI build a telescope to spot intergalactic Segways?

Thanks for your clarification. Even though we can't rederive Intergalactic Segways from unknown strange aliens, could we derive information about those same strange aliens by looking at the Segways? I'm reminded of some SF stories about this, and of our own work figuring out prehistoric technology...

Should an AGI build a telescope to spot intergalactic Segways?

Thanks for your post. Your argumentation is well-written and clear (to me).

I am confused by the title, and the conclusion. You argue that a Segway is a strange concept, that an ASI may not be capable of reaching by itself through exploration. I agree that the space of possible concepts that the ASI can understand is far greater than the space of concepts that the ASI will compute/simulate/instantiate.

However, you compare this to one-shot learning. If an ASI sees a Segway a single time, would it be able to infer what it does, what it's for, how to build it, etc.? I think so! The purpose of one-shot learning models is to provide a context, a structure, that can be augmented with a new concept based on a single example. This is far simpler than coming up with said new concept from scratch.

See That Alien Message on the efficient use of sensory data.

I interpret your post as « no, an ASI shouldn't build the telescope, because it's a waste of resources and it wouldn't even need it », but I'm not sure this was the message you wanted to send.

European Community Weekend 2018 Announcement

I'll be there. As I said in the sister post on LW1.0:

The Community Weekend of 2017 was one of my best memories from the past year: varied and interesting activities, a broad range of topics, tons of fascinating discussions with people from diverse backgrounds. The organizers are super friendly.

One very, very important point is that people there cooperate by default. Communication is easy, contribution is easy, getting help is easy, feedback is easy, learning is easy. Great times and productivity. And lots of fun!

Entirely worth it.

European Community Weekend 2018 Announcement

The Community Weekend of 2017 was one of the highlights of my past year. I strongly recommend it.

Excellent discussions, very friendly organizers, awesome activities.

Signed up!

The Cake is a Lie, Part 2.

I'll be blunt. Until this second post, there was a negative incentive for people on this site to comment on your first post: the expected reaction was to downvote it to hell without bothering to comment. Now, with this second post clarifying the context of the first, I'd still downvote the first, but I'd comment.

I read the first post three times before downvoting. I substituted words. I tried to untangle the metaphor. Then I came to two personal conclusions:

  1. You offered us a challenge, ordering us to play along, with no reward and at a cost to us. HPMOR provided dozens of chapters of entertaining fiction before the Final Exam. You just posted once and expected effort.
  2. You impersonate an ASI with very, very precise underlying hypotheses. An ASI that would blackmail us? Fair enough, that would be a variant of Roko's Basilisk. But your Treaty is not remotely close to how I expect an ASI to behave. As you state, the ASI makes all the important decisions, so why bother simulating a particular scenario involving human rights?

The first post was confusing, your second post is still confusing, and neither fits the posting guidelines. You are not an ASI. Roleplaying an ASI leads to all sorts of human bias. I downvoted your two posts because I do not expect anyone to be better equipped to think about superintelligences after reading them. That's it.

Hidden Hope For Control

Thanks for this post.

I'm not sure what your central point is, the component you announce at the start of the post. I understood that life contains a spectrum of situations we have more or less control over, that neither perfect control nor a perfect lack of control all the time is desirable, and that we ought to have a wide range of experiences along that dimension to have enjoyable lives.

Did I miss something? Can you clarify your conclusions?
