Gyrodiot

I'm Jérémy Perret. Based in France. PhD in AI (NLP). AI Safety & EA meetup organizer. Information sponge. Mostly lurking since 2014. Seeking more experience, and eventually a position, in AI safety/governance.

Extremely annoyed by the lack of an explorable framework for AI risk/benefits. Working on that.

Sequences

  • Methods of Psychomagic
  • XiXiDu's AI Risk Interview Series

Comments (sorted by newest)

Words make us Dumb #1: The “Point”lessness of Knowledge
Gyrodiot · 4d · 30

Let's see if your post has successfully overcome my mental filters (at the very least, I clicked). Here's my reformulation of your claims, as if I had to explain them to someone else.

  1. You need a special effort to grab the attention of humans
  2. Humans can't process all the words thrown at them and select "impressive" content
  3. You need several tries to transmit knowledge properly
  4. Beyond being impressive, words need to be "relevant" to transmit knowledge efficiently
  5. Words can't create perfectly impressive and relevant content
  6. Being very impressive doesn't guarantee relevance
  7. Content impressive for you doesn't make it more relevant for you
  8. This is a toy model; humans also have incentives to shape which content gets thrown at them or not

Now that I've written the points above, I look again at the "what if" part at the end and say, "oh, so the idea is that human language may not be the best way to transmit knowledge, because what gets your attention often isn't what lets you learn easily; cool, then what?"

Then... you claim that there might be a Better Language to cut through these issues. That would be extremely impressive. But then I scroll back up and I see the titles of the following posts. I'm afraid that you will only describe issues with human communication without suggesting techniques to overcome them (at least in specific contexts).

For instance, you gave an example comparison in impression (asteroid vs. climate change). Could you provide a comparison for relevance? Something that, by your lights, gets processed easily?

A brief argument against utilitarianism
Gyrodiot · 1mo · 30

they abandoned simple metrics in favour of analyses in which qualitative factors play a large role, because all the metrics they evaluated failed to have good properties


Do you have more specific statements from GiveWell for this shift? I have not been able to find a clear enough argument for your claim from their website, nor from research on the EA Forum.

Also, your view on well-behaved utility functions may vary. You only need an approximation of ideal utilitarianism, with a nice ordering of world-states by total happiness/suffering (depending on flavor) and a way to get there. I think we can coordinate on some good-enough approximations to be able to give. Is that well-behaved enough, or are you pointing at something stronger here?
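
To make the "nice ordering" concrete, here is the kind of toy formalization I have in mind (an illustrative sketch only, not a claim about GiveWell's models; the symbols U and h_i are my own):

  U(w) = \sum_i h_i(w)            (total welfare of world-state w, summing each individual's happiness/suffering h_i)
  w \succeq w' \iff U(w) \ge U(w')   (the ordering of world-states this induces)

An approximation only has to get this ordering roughly right on the world-states our donations can actually reach.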

If Anyone Builds It, Everyone Dies: Call for Translators (for Supplementary Materials)
Gyrodiot · 3mo · 31

I am now curious about the omission of French, hoping that's because you already have competent people for it, maybe the aforementioned kind souls?

"It isn't magic"
Gyrodiot · 4mo · 116

Related recent post: Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low (similar point, focused on human short-timeframe feats rather than technological achievements).

AISEC: Why to not to be shy.
Gyrodiot · 4mo · 10

Format note: your list is missing a number 3.

Roman Malov's Shortform
Gyrodiot · 4mo · 42

Two separate points:

  • compared to physics, the field of alignment has a slow-changing set of questions (e.g. corrigibility, interpretability, control, goal robustness, etc.) but a fast-evolving subject matter, as capabilities progress. I use the analogy of a biologist suddenly working in a place where evolution runs 1000x faster: some insights get stale very fast and it's hard to know which ones in advance. Keeping up with the frontier, then, serves to check whether one's work still seems relevant (or where to send newcomers). Agent foundations as a class of research agendas was the answer to this volatility, but progress is slow and the ground keeps shifting.
  • there is some effort to unify alignment research, or at least provide a textbook to get to the frontier. My prime example is the AI Safety Atlas; I would also consider the BlueDot courses as structure-building, and AIsafety.info as giving some initial directions. There's also a host of papers attempting to categorize the sub-problems, but they're not focused on tentative answers.

All Rationalists hate & sabotage Strategy without having any awareness of it.
Gyrodiot · 5mo · 10

Ah. Thank you for your attempt to get through to this community anyway, in the face of such incompatibility. Enjoy your freedom, then; I hope you'll do better than us.

All Rationalists hate & sabotage Strategy without having any awareness of it.
Gyrodiot · 5mo · 20

Alright. The first chunk of my frowning came from your claims about Rationality as a generic concept (and my immediate reaction to them). Second, I am puzzled by a few of your sentences.

Likewise, I consistently see Rationalists have no awareness or care of goals in the first place. Every human acts for a goal. If you don't set an external one, then your default one becomes the goals motivated by human motivations systems.

What do you make of Goal Factoring, one of the techniques designed to patch that class of behaviors? If I see a self-identified rationalist not being aware of their own goals (and there are a bunch of them), goal factoring would be my first suggestion. I would expect them to be curious about it.

If improving your ability to think by going through the uncomfortable process of utilizing a system of the brain that you are unfamiliar with is not something that interests you, then this document is not for you.

Mostly unnecessary caveat; one of the main draws of this website is to study the flaws of our own lenses.

Please be undeterred by the negative karma; it's only a signal that this particular post may fail at its intended purpose. Namely:

I say all this to bring context to this document's demand that the reader does not ask for external justifications of claims. Instead, this document requires that readers test the concepts explored in this document in the real-world. It demands that the readers do not use validity-based reasoning to understand it.

...where is this document? Here I see a warning about the document, a surface clash of concepts, another warning of ignoring advice from other groups, and a bullet point list with too little guidance on how to get those heuristics understood.

Listing the virtues is a starting point, but one does not simply say "go forth and learn for yourself what Good Strategy is" and see that done without a lot of nudging, or else one might stay in the comfort of "validity-based reasoning" and call it a day. Which I would find disappointing.

Lao Mein's Shortform
Gyrodiot · 5mo · 10

"Internal betting markets" may be a reference to the Logical Induction paper? Unsure it ties strongly to stop-button/corrigibility.

Posts (sorted by new)

  • Review of "Learning Normativity: A Research Agenda" (Ω) · 37 karma · 4y · 0 comments
  • Review of "Fun with +12 OOMs of Compute" (Ω) · 65 karma · 5y · 21 comments
  • Learning from counterfactuals · 11 karma · 5y · 5 comments
  • Mapping Out Alignment (Ω) · 43 karma · 5y · 0 comments
  • Resources for AI Alignment Cartography (Ω) · 45 karma · 6y · 8 comments
  • Layers of Expertise and the Curse of Curiosity · 19 karma · 7y · 1 comment
  • Willpower duality · 10 karma · 9y · 8 comments
  • Open thread, Oct. 31 - Nov. 6, 2016 · 7 karma · 9y · 83 comments
Wikitag Contributions

  • IABIED · a month ago · (+1020)
  • HPMOR (discussion & meta) · 6 months ago · (+80)
  • HPMOR Fanfiction · 6 months ago · (+188)
  • In Russian · 6 months ago · (+93)
  • Literature Reviews · 5 years ago · (+665)
  • Parfit's Hitchhiker · 5 years ago · (+13/-12)
  • Parfit's Hitchhiker · 5 years ago · (+181/-195)
  • Litany of Hodgell · 5 years ago · (+11/-11)
  • Litany of Gendlin · 5 years ago · (+296)