Comments

Yes, thank you, I think that's it exactly. I don't think that people are communicating this well when they are reporting predictions.

Are we misreporting p(doom)s?

I usually say that my p(doom) is 50%, but that doesn't mean the same thing that it does in a weather forecast.

In a weather forecast, the percentage means that forecasters ran a series of simulations, and that fraction of the simulations produced rain. A forecast of a 100% chance of rain, then, does not mean the real chance of rain is anywhere near 100%. Forecasts still have error bars; 10 days out, a forecast will be wrong about 50% of the time. So a 10-day forecast of a 100% chance of rain actually means something closer to a 50% chance.

In my mental simulations, the outcome is bad 100% of the time. I can't construct a convincing scenario in my mind where things work out, at least contingent on the continued development of AI. But I know that there is much that I don't know, things I haven't yet considered, etc. Hence the 50% error margin. But, as with the weather forecast, this can be misinterpreted as me thinking that things work out 50% of the time.
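To make the distinction concrete, here is a minimal sketch in Python, with made-up numbers that only loosely mirror the figures above and are not anyone's actual estimates, of how a confident inside-view estimate and uncertainty about the model itself combine into the single number that gets reported:

```python
# Minimal sketch: combining an inside-view estimate with confidence in that view.
# All numbers here are illustrative assumptions, not anyone's actual estimates.

p_doom_if_model_right = 1.00   # inside view: every mental simulation ends badly
p_model_right         = 0.50   # confidence that the mental model isn't missing something crucial
p_doom_if_model_wrong = 0.00   # charitable assumption: unknown unknowns resolve things

# Law of total probability: the single number that gets reported as "p(doom)"
p_doom_reported = (p_model_right * p_doom_if_model_right
                   + (1 - p_model_right) * p_doom_if_model_wrong)

print(p_doom_reported)  # 0.5 -- indistinguishable from "half my simulations went fine"
```

The output is the same 50% either way, which is exactly the ambiguity in question: the reported number alone cannot tell you whether it came from mixed simulations or from a certain inside view tempered by model uncertainty.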

Is there terminology that currently accounts for this? If not, does that mean p(doom)s are being misunderstood, or reported with different meanings?

Are you assuming that avoiding doom in this way will require a pivotal act? It seems that, absent policy intervention and societal change, even if some firms exhibit a proper amount of concern, many others will not.

A similar principle I have about this situation is: Don't get too clever.

Don't do anything questionable or too complicated. If you do, you're just as likely to cause harm as to cause good. The psychological warfare campaign you've envisioned against OpenAI is going to backfire on you and undermine your team.

Keep it simple. Promote alignment research. Persuade your friends. Volunteer on one of the many relevant projects.

Upvoted, I agree with the gist of what you're saying, with some caveats. I think I would have expected the two posts to end up with scores of 0 to 5, but there is a world of difference between a 5 and a -12.

It's worth noting that the example explainer you linked to doesn't appeal to me at all.  And that's fine.  It doesn't mean that there's something wrong with the argument, or with you, or with me.  But it does demonstrate a gap.  I've read all the alignment material[1], and I still see huge chunks of the population that will not be compelled by the existing arguments.  Also, many of the arguments are outdated and less applicable to the current state of events.

 

  1. ^

    https://docs.google.com/document/d/1zx_WpcwuT3Stpx8GJJHcvJLSgv6dLje0eslVKvuk1yQ/edit

Under the tag of AI Safety Materials, 48 posts come up.  There are exactly two posts by sprouts:

An example elevator pitch for AI doom (score: -8)[1]

On urgency, priority and collective reaction to AI-Risks: Part I (score: -12)

These are also the only two posts with negative scores.  

In both cases, it was the user's first post.  For Denreik in particular, you can tell that he agonized over it and put many hours into it.

Is it counterproductive to discourage new arrivals attempting to assist in the AI alignment effort?

Is there a systemic bias against new posters?

  1. ^

    Full disclosure, this was posted by me.  

Denreik, I think this is a quality post and I know you spent a lot of time on it. I found your paragraphs on threat complexity enlightening: it is, in hindsight, an obvious point that a sufficiently complex or subtle threat will be ignored by most people regardless of its certainty, and that is an important feature of the current situation.

I agree that there are many situations where this cannot be used. But there does appear to be a gap, missed by the existing explanations, that arguments like this can fill.

I find those first two and Lethalities too long and complicated for convincing an uninitiated, marginally interested person. Zvi's Basics is actually my current preference, along with stories like It Looks Like You're Trying To Take Over The World (Clippy).

The best primer that I have found so far is Basics of AI Wiping Out All Value in the Universe by Zvi.  It's certainly not going to pass peer review, but it's very accessible, compact, covers the breadth of the topics, and links to several other useful references.  It has the downside of being buried in a very long article, though the link above should take you to the correct section.
