I've been told that I'm bad at important communication, so to counteract that, just pretend I wrote a better version of whatever comment you clicked on. Thanks! 😅
Here's my LessWrong,
This is my favorite YouTube video,
& I do world-modelling on the AMAZING whiteboard website Excalidraw as a hobby. It's so fun!
Yeah, I agree with all of this,
by “good” I mean “good for a person’s morals/values/ideology/what-they-think-is-good/what-they-care-about”; there are a lot of words for it,
so consider this my asterisk lol
but as long as it gets the idea across (that we should have an international summit where world leaders debate different morals/values/ideologies & come to a consensus, making humanity a united force for good),
and as long as I say how to help with that (helping me get this to the right people, or getting this to the right people yourself),
that’s the important part!
That’s the part people could act on.
Unless I’m missing something they could act on, but I doubt it. Prove me wrong, I bet ya can’t!
I was gonna reply to this with some beautiful message on how we, as beings that exist, care about each other regardless of what kind of being we are, and that you don't need to be a cat to care about cats, but then I forgot :P
I also would've included the point, "Just replace every instance of the word 'cat' here with 'conscious being', and hopefully you'll end up with an aligned AGI!"
I also would've included the question, "How do we make it so the mysterious force of evolution also works to make all other animals happy?" (This question goes both for nature and for factory farms!)
- For factory-farmed chickens, though, you could maybe use a government-mandated gene drive (on farms with more than 100,000 chickens) to engineer the chickens into not feeling pain?
That does still sound cruel though...
Was this written by a cat?
Yeah, agreed, but either way, it's a motivating sentence, so I reckon it's good to pretend it's true
(except for when doing so would lead you to make a misinformed, worse decision. Then, know that it's only maybe true if we try hard/smart enough)
good point, I changed it!
Hmm, yeah I guess I could.
The first video just explains what the AGI safety act is and the stuff we can do about it, which I reckon this article does fairly well (unless it doesn't; please tell me if there's a way I can make this article better explain "what the AGI safety act is and the stuff we can do about it").
The 3 videos after that I could make written versions of, but I'd guesstimate that my text summary would take longer to read through than the videos. Maybe watch the videos at 1.5x speed? The videos talk pretty slowly relative to how fast people can listen, so I reckon that would work fine.
The 3 "videos" after that are conveniently not videos at all; they're written Google Docs.
The 2 things after that are just paragraphs, because they only need a paragraph to describe. If I had to do a text summary of them, I'd just copy/paste.
7. And most importantly, Parallel Projects!
I was talking to a friend of mine about animal welfare, and I thought, "Oh! Y'know, I'm getting a buncha folks to send letters, make calls, meet with congressfolk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time for other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replacing "AI risks" with "animal welfare" or whatever else. You can come up with ideas for geopolitics stuff, pandemic-prevention stuff, civilizational-resilience stuff, animal welfare stuff, and everything else at the community brainstorming spots on
https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK
8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!
So, in summary: videos 2, 3, and 4 can be watched at 1.5x speed. If you want, feel free to have a go at making a text summary of them (mine would probably end up longer than the videos themselves, but maybe you or someone reading this can write a shorter one), and the rest are already text summaries or have text summaries.
(Sorry if this sounds like I'm rambling. I'm sort of tired, and I sort of was rambling, which would explain why it sounds that way. Sorry!)
Domain: AI safety (better known as "The alignment problem")
Links:
Here’s 16 minutes’ worth of overview that basically covers all you’d need to know about what AI safety specifically is:
- Could AI wipe out humanity? | Most pressing problems (10 mins)
- The real risk of AI (6 mins)
- Look at this graph:
If you’re curious to learn more about the nitty-gritty of AI safety, watch (in this order):
- Intro to AI Safety, Remastered (18 mins) (Sidenote: The original video is from 2016, & back then people thought AI would develop much slower.)
- The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (23 mins)
- Using Dangerous AI, But Safely? (30 mins)
Author(s): 80,000 Hours, Siliconversations, Metaculus, Rob Miles.
Type: list
Why: To use as a quick intro
I reckon I might be pretty good(ish) at coming up with good concepts, but given my downvotes, it seems I'm not good at presenting said concepts. Does anyone have any advice for that?
That's a good point, and it made me go 'AAAAAHH!!'