I'm Jérémy Perret. Based in France. PhD in AI (NLP). AI Safety & EA meetup organizer. Information sponge. Mostly lurking since 2014. Seeking more experience, and eventually a position, in AI safety/governance.
Extremely annoyed by the lack of an explorable framework for AI risk/benefits. Working on that.
they abandoned simple metrics in favour of analyses in which qualitative factors play a large role, because all the metrics they evaluated failed to have good properties
Do you have more specific statements from GiveWell for this shift? I have not been able to find a clear enough argument for your claim from their website, nor from research on the EA Forum.
Also, your view on well-behaved utility functions may vary. You need an approximation of ideal utilitarianism: a nice ordering of world-states by total happiness/suffering (depending on flavor), plus some idea of how to get there. I think we can coordinate on approximations good enough to guide giving. Is that well-behaved enough, or are you pointing at something stronger here?
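To make "nice ordering" concrete, here's a minimal sketch of what I mean, with invented states and numbers; a well-behaved utility function, in this sense, is one that induces a total order over world-states:

```python
# Toy sketch: ordering world-states by total happiness minus suffering.
# States and numbers are invented for illustration only.

world_states = {
    "status_quo":       {"happiness": 50, "suffering": 30},
    "effective_giving": {"happiness": 70, "suffering": 20},
    "do_nothing":       {"happiness": 40, "suffering": 35},
}

def utility(state: dict) -> int:
    """Total-utilitarian score: happiness minus suffering."""
    return state["happiness"] - state["suffering"]

# A well-behaved utility function induces a total order: any two
# world-states are comparable, so we can rank them and pick the best.
ranked = sorted(world_states,
                key=lambda name: utility(world_states[name]),
                reverse=True)
print(ranked)  # ['effective_giving', 'status_quo', 'do_nothing']
```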
I am now curious about the omission of French, hoping that's because you already have competent people for it, maybe the aforementioned kind souls?
Related recent post: Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low (similar point, focused on human short-timeframe feats rather than technological achievements).
Format note: your list is missing a number 3.
Two separate points:
Ah. Thank you for your attempt to get through to this community anyway, in the face of such incompatibility. Enjoy your freedom then, I hope you'll do better than us.
Alright. The first chunk of my frowning came from claims about Rationality as a generic concept (and my immediate reaction to them). The second: I am puzzled by a few of your sentences.
Likewise, I consistently see Rationalists having no awareness of, or care for, goals in the first place. Every human acts for a goal. If you don't set an external one, then your default becomes the goals driven by human motivation systems.
What do you make of Goal Factoring, one of the techniques designed to patch that class of behaviors? When I see a self-identified rationalist unaware of their own goals (and there are a bunch of those), goal factoring would be my first suggestion. I would expect them to be curious about it.
If improving your ability to think by going through the uncomfortable process of utilizing a system of the brain that you are unfamiliar with is not something that interests you, then this document is not for you.
Mostly unnecessary caveat; one of the main draws of this website is to study the flaws of our own lenses.
Please be undeterred by the negative karma; it's only a signal that this particular post may fail at its intended purpose. Namely:
I say all this to bring context to this document's demand that the reader not ask for external justifications of claims. Instead, it requires readers to test the concepts it explores in the real world. It demands that readers not use validity-based reasoning to understand it.
...where is this document? Here I see a warning about the document, a surface clash of concepts, another warning about ignoring advice from other groups, and a bullet point list with too little guidance on how to get those heuristics understood.
Listing the virtues is a starting point, but one does not simply say "go forth and learn for yourself what Good Strategy is" and see it done without a lot of nudging; otherwise one might stay in the comfort of "validity-based reasoning" and call it a day. Which I would find disappointing.
"Internal betting markets" may be a reference to the Logical Induction paper? Unsure it ties strongly to stop-button/corrigibility.
Let's see if your post has successfully overcome my mental filters (at the very least, I clicked). Here's my reformulation of your claims, as if I had to explain them to someone else.
Now that I've written the points above, I look again at the "what if" part at the end and say: "oh, so the idea is that human language may not be the best way to transmit knowledge, because what gets your attention often isn't what lets you learn easily. Cool. Then what?"
Then... you claim that there might be a Better Language to cut through these issues. That would be extremely impressive. But then I scroll back up and I see the titles of the following posts. I'm afraid that you will only describe issues with human communication without suggesting techniques to overcome them (at least in specific contexts).
For instance, you gave an example comparison in impression (asteroid vs. climate change). Could you provide a comparison for relevance? Something that, by your lights, gets processed easily?