Hmm, yeah I guess I could.
The first video just explains what the AGI safety act is and the stuff we can do about it, which I reckon this article does fairly well (unless it doesn't - please tell me if there's a way I can make this article better explain "what the AGI safety act is and the stuff we can do about it").
For the 3 videos after that, I could make a written version, but I'd guesstimate that my text summary would take longer to read through than the videos. Maybe watch the videos at 1.5x speed? They talk pretty slowly relative to how fast people listen, so I reckon that would work fine.
The 3 things after that aren't videos at all; they're written Google Docs, which is convenient.
The 2 things after that are just paragraphs, because they only need a paragraph to describe. If I had to do a text summary of them, I'd just copy/paste them.
7. And most importantly, Parallel Projects!
I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a bunch of folks to send letters, make calls, meet with congressfolk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replacing "AI risks" with "animal welfare", or whatever else. You can come up with ideas for geopolitics, pandemic prevention, civilizational resilience, animal welfare, and everything else on the community brainstorming spots on
https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK
8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!
So, in summary: videos 2, 3, and 4 can be watched at 1.5x speed. If you want, feel free to have a go at making a text summary of them (I'd probably make a text summary longer than the videos themselves, but maybe you or someone reading this can write shorter), and the rest are already text summaries or have text summaries.
(Sorry if this sounds like I'm rambling. I'm sort of tired, and I sort of was rambling, which would explain why it sounds that way. Sorry!)
Domain: AI safety (better known as "The alignment problem")
Links:
Here’s 16 minutes’ worth of overview that basically covers all you’d need to know about what AI safety specifically is:
- Could AI wipe out humanity? | Most pressing problems (10 mins)
- The real risk of AI (6 mins)
- Look at this graph:
If you’d be curious to learn more about the nitty-gritty of AI safety, watch (in this order):
- Intro to AI Safety, Remastered (18 mins) (Sidenote: The original video is from 2016, and back then people thought AI would develop much more slowly.)
- The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (23 mins)
- Using Dangerous AI, But Safely? (30 mins)
Author(s): 80,000 Hours, Siliconversations, Metaculus, Rob Miles.
Type: list
Why: To use as a quick intro
I reckon I might be pretty good(ish) at coming up with concepts, but given my downvotes, it seems I'm not good at presenting them. Does anyone have any advice for that?
Yeah, agreed, but either way, it's a motivating sentence, so I reckon it's good to *pretend* it's true (except for when doing so would lead you to make a misinformed, worse decision. Then, *know* that it's only maybe true if we try hard/smart enough).
Good point, I changed it!