If successful, in just a few months, you might not have to worry about the alignment problem anymore(!), and we can help him with this bill.
This seems obviously false to me. Just because we have a law in place to restrict the behavior of frontier labs doesn't mean we get to stop worrying about alignment. It instead means that we stop having to worry quite so much that AI labs that fall under US jurisdiction will keep pressing forward in maximally dangerous ways, assuming there are good enforcement mechanisms, the bill doesn't get watered down, China doesn't take the lead and produce more dangerous models first, etc.
I'm not saying that such a law isn't good in theory (I have no idea whether it would actually be good, because we don't yet have the text of the bill), just that this is a bit more excitement than I think would be warranted even if there were such a law.
Yeah, agreed, but either way, it's a motivating sentence, so I reckon it's good to pretend it's true.
(Except when doing so would lead you to make a misinformed, worse decision. Then, know that it's only maybe true, and only if we try hard/smart enough.)
Good point, I changed it!
This is a rare, high-leverage juncture: a member of Congress is actively writing a bill that could (potentially) fully stop the risk of unaligned AGI from US labs.
It would, if wildly more successful than any law in human history has ever been, stop a very small fraction of the risk.
Thanks for this! Are you able to offer a text summary for folks who are busy and don't want to watch a bunch of videos?
Also suggest posting to EA Forum if you haven't already.
Hmm, yeah I guess I could.
The first video just explains what the AGI Safety Act is and the stuff we can do about it, which I reckon this article does fairly well (unless it doesn't; please tell me if there's a way I can make this article explain "what the AGI Safety Act is and the stuff we can do about it" better).
For the 3 videos after that, I could make a written version, but I'd guesstimate that my text summary would take longer to read through than the videos. Maybe watch them at 1.5x speed? The videos talk pretty slow relative to how fast people listen, so I reckon that would work fine.
The 3 "videos" after that are conveniently not videos at all; they're written Google Docs.
The 2 things after that are just paragraphs, because they only need a paragraph to describe. If I had to do a text summary of them, I'd just copy/paste.
So, in summary: videos 2, 3, and 4 can be watched at 1.5x speed (if you want, feel free to have a go at making a text summary of them; I'd probably make a text summary longer than the video itself, but maybe you or someone reading this can write shorter), and the rest are already text summaries/have text summaries.
(Sorry if this sounds like I'm rambling. I'm sort of tired, and I sort of was rambling, which would explain why it sounds that way. Sorry!)
At around 9 AM on June 25, at a committee hearing titled “Authoritarians and Algorithms: Why U.S. AI Must Lead” (at the 11-minute, 50-second mark in the video), Congressman Raja Krishnamoorthi, a Democrat from Illinois’s 8th congressional district, announced to the committee room: “I'm working on a new bill [he hasn’t introduced it yet], ‘the AGI Safety Act,’ that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. This is just common sense.”
The hearing continued with substantive discussion, from members of Congress of both parties(!), of AI safety and the need for policy to prevent misaligned AI.
This is a rare, high-leverage juncture: a member of Congress is actively writing a bill that could (potentially) fully stop the risk of unaligned AGI from US labs. If successful, in just a few months, you might not have to worry about the alignment problem as much, and we can help him with this bill.
Namely, after way too long, I (and others) finally finished a full explainer of the AGI Safety Act:
& here's the explanation of the 8 ways folks can be a part of it!
4. Talking to Congress, part 2: How to meet with congress folk and talk to them literally in person
5. Come up with ideas that might be put in the official AGI Safety Act
6. Getting AI labs to be required, by law, to test whether their AI is risky & to tell everyone if it turns out to be risky
7. And most importantly, Parallel Projects!
I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a buncha folks to send letters, make calls, meet with congress folk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replacing "AI risks" with "animal welfare" or whatever else. You can come up with ideas for geopolitics stuff, pandemic prevention stuff, civilizational resilience stuff, animal welfare stuff, and everything else at the community brainstorming spots on
https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK
8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!