Wes R

Comments (sorted by newest)
🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real!
Wes R · 1mo*

Yeah, agreed, but either way, it's a motivating sentence, so I reckon it's good to pretend it's true

(except when doing so would lead you to make a misinformed, worse decision. Then, know that it's only maybe true, if we try hard/smart enough)

Good point, I changed it!

🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real!
Wes R · 1mo

Hmm, yeah, I guess I could.

The first video just explains what the AGI Safety Act is and the stuff we can do about it, which I reckon this article does fairly well (unless it doesn't; please tell me if there's a way I can make this article explain "what the AGI Safety Act is and the stuff we can do about it" better).

For the 3 videos after that, I could make a written version, but I'd guesstimate that my text summary would take longer to read through than the videos. Maybe watch the videos at 1.5x speed? The videos talk pretty slowly relative to how fast people listen, so I reckon that would work fine.

The 3 items after that conveniently aren't videos; they're written Google Docs.

The 2 things after that are just paragraphs, because they only need a paragraph to describe. If I had to do a text summary of them, I'd just copy/paste them.

7. And most importantly, Parallel Projects!

I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a bunch of folks to send letters, make calls, meet with congressfolk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replace "AI risks" with "animal welfare" or whatever else. You can come up with ideas for geopolitics stuff, pandemic prevention stuff, civilizational resilience stuff, animal welfare stuff, and everything else at the community brainstorming spots on

https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK

8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!

So, in summary: videos 2, 3, and 4 can be watched at 1.5x speed. If you want, feel free to have a go at making a text summary of them (I'd probably make a text summary longer than the videos themselves, but maybe you or someone reading this can write something shorter). The rest already are text summaries or have text summaries.

(Sorry if this sounds like I'm rambling. I'm sort of tired, and I sort of was rambling, which would explain why it sounds that way. Sorry!)

The Best Reference Works for Every Subject
Wes R · 3mo

Domain: AI safety (better known as "the alignment problem")

Links:
Here's 16 minutes' worth of overview that basically covers all you'd need to know about what AI safety specifically is:

  1. Could AI wipe out humanity? | Most pressing problems (10 mins),
  2. The real risk of AI (6 mins)
  3. Look at this graph: 

If you’d be curious to learn more about the nitty-gritty of AI safety, watch (in this order):

  1. Intro to AI Safety, Remastered (18 mins) (Sidenote: the original video is from 2016, and back then people thought AI would develop much more slowly.)
  2. The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (23 mins)
  3. Using Dangerous AI, But Safely? (30 mins)

Author(s): 80,000 Hours, Siliconversations, Metaculus, Rob Miles.
Type: list
Why: To use as a quick intro

Wes R's Shortform
Wes R · 3mo

Had a conversation on AI safety (namely interpretability of AIs' thinking, and what might cause AI to scheme), and I screenshotted the key takeaways so I could share them with the world (well, mostly just so I have them saved somewhere).

80 concepts on my new version of AI: DecisionBots
Wes R · 4mo

I reckon I might be pretty good(ish) at coming up with good concepts, but given my downvotes, it seems I'm not good at presenting said concepts. Does anyone have any advice for that?

Posts

- 🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real! (10 karma · 1mo · 5 comments)
- Wes R's Shortform (1 karma · 3mo · 1 comment)
- 80 concepts on my new version of AI: DecisionBots (0 karma · 4mo · 2 comments)
- Would this solve the (outer) alignment problem, or at least help? (-2 karma · 5mo · 1 comment)
- Why you can add moral value, and if an AI has moral weights for these moral values, those might be off (0 karma · 5mo · 1 comment)