Gyrodiot

I'm Jérémy Perret. Based in France. PhD in AI (NLP). AI Safety & EA meetup organizer. Information sponge. Mostly lurking since 2014. Seeking more experience, and eventually a position, in AI safety/governance.

Extremely annoyed by the lack of an explorable framework for AI risk/benefits. Working on that.

Sequences

XiXiDu's AI Risk Interview Series

Comments

Looking Deeper at Deconfusion

I'm taking the liberty of pointing to Adam's DBLP page.

Why We Launched LessWrong.SubStack

All my hopes for this new subscription model! The use of NFTs for posts will, without a doubt, ensure that quality writing remains forever in the Blockchain (it's like the Cloud, but with better structure). Typos included.

Is there a plan to invest in old posts' NFTs that will be minted from the archive? I figure Habryka already holds them all, and selling vintage Sequences NFTs to the highest bidder could be a nice addition to LessWrong's finances (imagine the added value of having a complete set of posts!).

Also, in the event that this model doesn't pan out, will the exclusive posts be released for free? It would be an excruciating loss for the community to have those insights sealed off.

Babble Challenge: 50 Ways to Overcome Impostor Syndrome

My familiarity with the topic gives me enough confidence to join this challenge!

  1. Write down your own criticism so it no longer feels fresh
  2. Have your criticism read aloud to you by someone else
  3. Argue back to this criticism
  4. Write down your counter-arguments so they stick
  5. Document your own progress
  6. Get testimonials and references even when you don't "need" them
  7. Praise the competence of other people without adding self-deprecation
  8. Same as above but in their vicinity so they'll feel compelled to praise you back
  9. Teach the basics of your field to newcomers
  10. Teach the basics of your field to experts from other fields
  11. Write down the basics of your field, for yourself
  12. Ask someone else to make your beverage of choice
  13. Ask them to tell you "you deserve it" when they're giving it to you
  14. If your instinct is to reply "no I don't", consider swapping the roles
  15. Drink your beverage, because it feels nice
  16. Build stuff that cannot possibly be built by chance alone
  17. Stare out the window, wondering if anybody cares about you
  18. Consider a world where everyone is as insecure as you
  19. Ask friends about their insecurities
  20. Consider you're too stupid to drink a glass of water, then drink some water
  21. Meditate on the difference between map and territory
  22. Write instructions for the non-impostor version of you
  23. Write instructions for whoever replaces you when people find out you're an impostor
  24. Validate those instructions with other experts, passing them off as project planning
  25. Follow the instructions to keep the masquerade on
  26. Refine the instructions since they're "obviously" not perfect
  27. Publish the whole thing here, get loads of karma
  28. Document everything you don't know for reference
  29. Publish the thing as a list of open problems
  30. Harshly criticize other people's work to see how they take it
  31. Make amends by letting them criticize you
  32. Use all this bitterness to create a legendary academic rivalry
  33. Consider "impostor" as a cheap rhetorical attack that doesn't hold up
  34. Become very good at explaining why other people are better than you
  35. Publish the whole thing as in-depth reporting on the lives of scientists
  36. Focus on your deadline, time doesn't care if you're an impostor or not
  37. Make yourself lunch, balance on one foot, solve a sudoku puzzle
  38. Meditate on the fact you actually can do several complex things well
  39. Consider that competence is not about knowing exactly how one does things
  40. Have motivational pictures near you and argue how they don't apply to you
  41. Consider the absurdity of arguing with pictures
  42. Do interesting things instead, not because you have to, but to evade the absurdity
  43. Practice the "I have no idea what I'm doing, but no one does" stance
  44. Ask people why they think they know how they do things
  45. If they start experiencing impostor syndrome as well, support them
  46. Join a club of impostors, to learn from better impostors than you
  47. Write an apology letter to everyone you think you've duped
  48. Simulate the outrage of anyone reading this letter
  49. Cut ties with everyone who would actually treat you badly after reading
  50. Sleep well, eat well, exercise, brush your teeth, take care of yourself
Google’s Ethical AI team and AI Safety

"I hope this makes the case at least somewhat that these events are important, even if you don't care at all about the specific politics involved."

I would argue that the specific politics inherent in these events are exactly why I don't want to approach them. From the outside, the mix of corporate politics, reputation management, and culture war (even the boring part), all of it unfolding inside the giant, near-opaque system that is Google, is a distraction from the underlying (and indeed important) AI governance problems.

For that particular series of events, I already got all the governance-relevant information I needed from the paper that apparently made the dominoes fall. I don't want my attention to get caught in the whirlwind. It's too messy (and still is, months later). It's too shiny. It's not tractable for me. It would be an opportunity cost. So I take a deep breath and avert my eyes.

Suggestions of posts on the AF to review

My gratitude for the suggestions already posted (keep them coming!). I'm looking forward to working on the reviews. My personal motivation resonates a lot with the "help people navigate the field" part; in-depth reviews are a precious resource for that task.

some random parenting ideas

This is one of the rare times I can in good faith use the prefix "as a parent...", so thank you for the opportunity.

So, as a parent: lots of good ideas here. Some I couldn't implement in time, some are very dependent on living conditions (finding space for the trampoline is a bit difficult at the moment), some are nice reminders (swamp water, bad indeed), some come too early (they can't read yet)...

... but most importantly, some genuinely blindsided me, because I found myself agreeing with them even though they were outside my thought process! Mainly the one-Brilliant-problem-a-day idea and the let-them-eat-more-cookies one.

I appreciate, in particular, the breadth of the ideas. Thanks for sharing; even if you don't practice what you preach, you'll be able to get feedback.

Last day of voting for the 2019 review!

After several nudges (which I'm grateful for, in hindsight), my votes are in.

Luna Lovegood and the Chamber of Secrets - Part 1

This is very nice. I subscribed for the upcoming parts (there will be more, I suppose?).

Learning from counterfactuals

I think not mixing up the referents is the hard part. One can properly learn from fictional territory only when one can clearly see in which ways it's a good representation of reality, and where it isn't.

I may learn from an action movie the value of grit and what it feels like to have principles, but I wouldn't trust it on gun safety or CPR.

It's not common for fiction to be self-consistent enough while still preserving drama. Acceptable breaks from reality will happen. Sure, sometimes you get a hard SF universe where the alternate reality is very lawful and the plot arises from the logical consequences of those laws (as often happens in rationalfic), but more often than not things happen "because it serves the plot".

My point is: yes, I agree, one should be confused only by a lack of self-consistency, fiction or not. Yet, given the vast amount of fiction set in something close to the real Earth, by the time you're skilled enough to tell apart what's transferable and what isn't, you've already done most of the learning.

That's not counting the meta-skill of detecting inconsistencies, which is indeed extremely useful, fiction or not, though I'm still unclear on where exactly one learns it.

Why those who care about catastrophic and existential risk should care about autonomous weapons

Thank you for this clear and well-argued piece.

From my reading, I consider three main features of AWSs when evaluating the risk they present:

  • arms race avoidance: I agree that curbing the proliferation of AWSs is a good test bed for international coordination on safety, which extends to the widespread implementation of safe, powerful AI systems in general. I'd say this also applies to AGI, where we would need all (or at least the first, or only some, depending on takeoff speeds) such deployed systems to conform to safety standards.
  • leverage: I agree that AWSs would cause much greater damage/casualties per unit cost, or per human operator. I have a question regarding persistent autonomous weapons which, much like landmines, require no human operators at all once deployed: what, in that case, would be the limiting component of their operation? Ammo? Energy supply?
  • value alignment: the relevance of this AI safety problem to the discussion depends, in my opinion, on what exactly is included in the OODA loop of AWSs. Would weapon systems be able to act in ways that enable their continued operation without frequent human input? Would they have means other than weapons to influence their environment? If not, is the worst-case damage they can do capped at the destruction capabilities they have at launch?

I would be interested in further investigation of the risks brought by various kinds of autonomy, expected times between human command and impact, etc.
