This is a special post for quick takes by laserfiche. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.



Under the AI Safety Materials tag, 48 posts come up. Exactly two of them are by sprouts:

An example elevator pitch for AI doom (score: -8)[1]

On urgency, priority and collective reaction to AI-Risks: Part I (score: -12)

These are also the only two posts with negative scores.  

In both cases, it was the user's first post. For Denreik in particular, you can tell that he agonized over it and put many hours into it.

Is it counterproductive to discourage new arrivals attempting to assist in the AI alignment effort?

Is there a systemic bias against new posters?

  1. ^

    Full disclosure, this was posted by me.  

Man, I have conflicting opinions about this. "People want to help" is a good thing. But the upvote/downvote mechanism is about the post, not the poster, and its function is to rank things that others find helpful.

And both posts you linked just…aren't that great? Yours doesn't deserve to be downvoted, but it also doesn't really deserve to be upvoted all that much imho; there's so much AI alignment intro material out there, from popular articles to YouTube videos to book-length explainers from so many people…and e.g. this one fits pretty well into your desiderata?

As for Denreik's post: it doesn't smell like a thing I'd want to read (no paragraph breaks, no clear statement of the conclusion at the top, slightly confusing writing…), and while I haven't read it (and therefore didn't vote either way), such things are unfortunately a reliable indicator.

Then again: I'd love it if there were some way of showing someone "Hey, I like that you're trying to help! Maybe lurk moar (a lot moar, maybe a 100:1 or 1000:1 ratio of reading to contributing), and start by commenting or shortforming." But there also needs to be some mechanism for ranking content.

Upvoted; I agree with the gist of what you're saying, with some caveats. I think I would have expected the two posts to end up with scores of 0 to 5, but there is a world of difference between a 5 and a -12.

It's worth noting that the example explainer you linked to doesn't appeal to me at all. And that's fine. It doesn't mean that there's something wrong with the argument, or with you, or with me. But it does demonstrate a gap. I've read all the alignment material[1], and I still see huge chunks of the population that will not be compelled by the existing arguments. Also, many of the arguments are outdated and less applicable to the current state of affairs.

 

  1. ^

    https://docs.google.com/document/d/1zx_WpcwuT3Stpx8GJJHcvJLSgv6dLje0eslVKvuk1yQ/edit

Huh, I see. Agree about the 0-5 vs. -12 (in this case -8) difference.

I don't see myself in the business of making good explainer material for the general public, so I'll defer to you on that (since you have read more of the introductions than I have).

Also, I guess posting that Google doc here would probably be upvoted?

I mostly don't want new people to contribute to public materials efforts. I want people to have thought concretely about the problem and fleshed out their understanding of it before focusing on communicating it to others.

I do want people who are entering the space to have a good experience. I'm mulling over some posts that give newcomers a clearer set of handholds on what to do to get started.

Are we misreporting p(doom)s?

I usually say that my p(doom) is 50%, but that doesn't mean the same thing that it does in a weather forecast.

In weather forecasts, the percentage means that the forecasters ran a series of simulations and that fraction of them produced rain. A forecast of a 100% chance of rain, then, does not mean that rain is nearly certain. Forecasts still have error bars; 10 days out, a forecast will be wrong about 50% of the time. Therefore, a 10-day forecast of a 100% chance of rain means the actual chance is closer to 50%.
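To spell out that arithmetic (treating a failed forecast as contributing roughly nothing, which is a simplification):

    P(rain) ≈ P(forecast holds) × P(rain | holds) + P(forecast fails) × P(rain | fails)
            ≈ 0.5 × 1.0 + 0.5 × ~0
            ≈ 0.5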

In my mental simulations, the outcome is bad 100% of the time. I can't construct a convincing scenario in my mind where things work out, at least contingent on the continued development of AI. But I know that there is much that I don't know, things I haven't yet considered, etc. Hence the 50% error margin. But like in the weather forecast, this can be misinterpreted as me thinking that 50% of the time it works out.
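Writing my own estimate the same way (the split and the numbers are rough placeholders, not precise estimates):

    p(doom) ≈ P(inside view holds) × P(doom | holds) + P(inside view fails) × P(doom | fails)
            ≈ 0.5 × 1.0 + 0.5 × ~0
            ≈ 0.5

So the 50% is mostly uncertainty about my own model, not a claim that half of the concrete scenarios work out.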

Is there existing terminology that accounts for this? If not, does that mean p(doom)s are being misunderstood, or reported with different meanings?

Yes, thank you, I think that's it exactly. I don't think that people are communicating this well when they are reporting predictions.