TL;DR: the contest is right here, it closes tomorrow (Friday, May 27th), and you have good odds of getting at least $500 out of it. Almost nobody knows about it, and most submissions are crap, so if you think about AGI a lot, you probably have a serious edge in getting a chunk of the $20k prize.

https://www.lesswrong.com/posts/3eP8D5Sxih3NhPE6F


The Problem

I've spent about four years in DC, and there is one problem that my connections and I keep running into. It's always the same, mind-numbingly frustrating problem:

When you tell someone that you think a supercomputer will one day spawn an unstoppable eldritch abomination, which proceeds to ruin everything for everyone forever, and that the only solution is to give some people in SF a ton of money... the person you're talking to, no matter who, tends to reevaluate associating themselves with you (especially compared to their many alternatives in the DC networking scene).

Obviously, AGI is a very serious concern: computer scientists have known for decades that the invention of a machine smarter than a human is something to take seriously, regardless of when it happens.

We Don't Do That Here

Although most scientists are aware that general intelligence has resulted in total dominance in every known case (e.g. humans over chimpanzees and bears), many policymakers find it unusually difficult to believe that intelligence is so decisive, in large part because bureaucracies, in order to remain stable, need to generate unsolvable mazes that minimize infiltration by rich and/or intelligent opportunists.

It's become clear to many policy people that one-liners and other charismatic statements are a bottleneck for creating real change on AI. This isn't due to desperation or fire alarms or short time horizons or anything like that; it's because one-liners and short statements are so critical to success in the policy space. Working papers are generally best for persuasion, and they can be pretty damn charismatic too, but in some environments (e.g. government), people's attention spans are very short. Especially when the topic of unstoppable eldritch abominations comes up.

The Solution

I don't actually know which org is running this (they don't say), but they will pay you to write up, or point them to, really clever one-liners and paragraphs that clearly and honestly make the case about AGI.

If you write a really good paragraph or one-liner and it lands in the top 40, they give you $500. If you submit two that make the top 40, you get $1,000. Four of the top 40, and you get $2,000. And so on.

Right now there are more than 300 comments, but that shouldn't intimidate you: a majority of them aren't entries at all (I personally made at least 30 comments that don't count as entries).

If you go and select a random area of the comment section and read all the comments there, you'll probably make a rather lucrative discovery: most of the contest's entries are total garbage with zero chance of winning. With half an hour of effort, you can probably make it into the top 40 and get $500 with a single entry, especially because all you need to do is find and copy a really clever tweet about AGI safety.

I Don't Want To Live On This Planet Anymore

This scenario is extremely upsetting and disturbing to me. These people have invested a ton of money into a really important and really tractable problem (the absurdity heuristic), and they have opened up the contest to a ton of really smart people; and more than half of the entries are literally just 6-word slogans. Not the good kind of 6-word slogans, either: the kind you'd hear chanted at an angry public protest.

I've submitted, like, 90 entries, out of at most 250 entries total (most of mine are really good Yudkowsky quotes from one old paper). And I like to think that very few of my entries are bad, and that at least 5 will make it into the top 40. So I'm inviting you to swoop into the contest at the last minute and outcompete me, because seeing such an important contest ignored and strangled like this makes me not want to live on this planet anymore.

The contest closes tomorrow at 11:59. That's Pacific Time; they didn't say whether it's a.m. or p.m. Happy hunting, all you bounty hunters out there.

Comments

(Semi-dumb LW category suggestion: Posts That Could Have Made You Good Money In Hindsight)

This also suggests a category for posts that could have lost you good money in hindsight.

The majority of the entries are crappy 6-word slogans precisely because the contest is explicitly asking for one-liners to slap the audience in the face with. If the most effective strategy for solving something really is shouting one-liners at policymakers, then I am the one who doesn't want to live on this planet anymore.

For what it's worth, I strongly upvoted the first comment by johnswentworth on that post:

I'd like to complain that this project sounds epistemically absolutely awful. It's offering money for arguments explicitly optimized to be convincing (rather than true), it offers money only for prizes making one particular side of the case (i.e. no money for arguments that AI risk is no big deal), and to top it off it's explicitly asking for one-liners.

I think we're speaking different languages here, since I'm saying that the contest is obviously the right thing to do and you're saying that the contest is obviously the wrong thing to do. I have a significant policy background, and I can't fathom why anyone would be so hostile to the contest; these people have short attention spans and expect to be lied to, so if we're going to be honest with them, we might as well be charismatic and persuasive while doing so.

For what it's worth, this is the second half of that comment by johnswentworth:

I understand that it is plausibly worth doing regardless, but man, it feels so wrong having this on LessWrong.

Thank you for this post. I wish I had seen it earlier, but in the time I did have, I had a lot of fun both coming up with my own stuff and binging a bunch of AI content, extracting the arguments I found most compelling into a format suitable for the contest.

Meta: I endorse attempts to signal boost things that posters feel are neglected, especially things already on LessWrong. Upvoted.

I would guess that the resistance in Washington is not so much resistance to the basic idea of risk from AI, but resistance to the idea that anyone in particular has the answer, especially a group not directly affiliated with a major technology company. Does that sound right?

This is important! We need higher-quality entries (although, due to the Pareto principle, I've submitted a good chunk of the low-quality 6-word slogans :/ )

Point is: you can easily do better in this market.

When you tell someone that you think a supercomputer will one day spawn an unstoppable eldritch abomination, which proceeds to ruin everything for everyone forever, and that the only solution is to give some people in SF a ton of money... the person you're talking to, no matter who, tends to reevaluate associating themselves with you (especially compared to their many alternatives in the DC networking scene).

I suspect that the best way of solving this problem is via social proof: get reputable people to acknowledge the problem and then say to the DC people "Look, Alice, Bob and Carol are all saying it's a big deal".

My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than "we should pay more attention to it". Hopefully something like "I think there is a >20% chance that humanity will be wiped out by unfriendly AI some time in the next 50 years."

It also seems worth doing some research into what sorts of statements the DC people would find convincing, i.e. asking them "If I told you X, how would you feel? What about Y? Z?" And also what sort of reputable people they would be influenced by. Professors? Tech CEOs? Public figures?

My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than "we should pay more attention to it".


Fun fact: Elon Musk and Bill Gates have actually stopped saying that. Now it's mostly crypto people like Sam Bankman-Fried and Peter Thiel, who will likely take the blame if revelations break that crypto was always just rich people minting worthless tokens and selling them to poor people. 

It's really easy to imagine an NYT article pointing fingers at the people who donate 5% of their income to a cause (AGI) that has nothing to do with inequality, or to malaria interventions in Africa that "ignore people here at home". Hence I think there should be plenty of ways to explain AGI to people with short attention spans: anger and righteous rage might one day be the thing that keeps their attention spans short.


It's really easy to imagine an NYT article pointing fingers at the people who donate 5% of their income to a cause (AGI) that has nothing to do with inequality, or to malaria interventions in Africa that "ignore people here at home". Hence I think there should be plenty of ways to explain AGI to people with short attention spans: anger and righteous rage might one day be the thing that keeps their attention spans short.

This is a serious problem that I find goes unaddressed in most proposed AI governance and outreach plans. It's not an unsolvable problem, either, which irks me.

I threw in a few, though I wasn't expecting to win; I expect the probability of winning to correlate with overall forum karma. I.e., it's not what's said, it's who's saying it.