Wes R

I've been told that I'm bad at important communication, so to counteract that, just pretend I wrote a better version of whatever comment you clicked on. Thanks! 😅

Here's my LessWrong,

This is my favorite YouTube video,

& I do world-modelling on the AMAZING whiteboard website Excalidraw as a hobby. It's so fun!

Comments

Sorted by Newest
How we'll make all world leaders work together to make the world better (Expert-approved idea)
Wes R · 14d

I honestly believe that if they had a summit like this, the outcome would be for all the leaders of the world to come together and formally agree that the supreme moral value of humanity is obedience and submission to the state.  That's the one thing that is in line with all of their desires, whether they want to admit it or not.  The leaders of America and a few others with a freedom-loving image to keep up would have to make a show of complaint, but even they could rationalize it away.


That's a good point, and it made me go 'AAAAAHH!!'

How we'll make all world leaders work together to make the world better (Expert-approved idea)
Wes R · 16d

Yeah, I agree with all of this,

by “good” I mean “good for a person’s morals/values/ideology/what-they-think-is-good/what-they-care-about”; there are a lot of words for it,

so consider this my asterisk lol


but as long as it gets the idea across (that we’d have an international summit where world leaders debate different morals/values/ideologies & come to a consensus, making humanity a united force for good),

and as long as I say how to help with that (helping me get this to the right people, or getting it to the right people yourself),

that’s the important part!

That’s the part people could act on.

Unless I’m missing something they could act on, but I doubt it. Prove me wrong, I bet ya can’t!

The Cats are On To Something
Wes R · 21d

I was gonna reply to this with some beautiful message on how we, as beings that exist, care about each other regardless of what kind of being we are, and that you don't need to be a cat to care about cats, but then I forgot :P


I also would've included the point "Just replace every word 'cat' here with the word 'conscious being', and hopefully you'll end up with an aligned AGI!"


I also would've included the question, "How do we make it so the mysterious force of evolution also works to make all other animals happy? (This question goes both for nature and for factory farms!)"

- For factory-farmed chickens, though, you could maybe use a government-mandated gene drive (on chicken farms above 100,000 chickens) to engineer the chickens into not feeling pain?

That does still sound cruel though...

The Cats are On To Something
Wes R · 1mo

Was this written by a cat?

🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real!
Wes R · 3mo

Yeah, agreed, but either way, it's a motivating sentence, so I reckon it's good to pretend it's true

(except for when doing so would lead you to make a misinformed, worse decision. Then, know that it's only maybe true if we try hard/smart enough)


Good point, I changed it!

🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real!
Wes R · 3mo

Hmm, yeah I guess I could. 

The first video just explains what the AGI Safety Act is and the stuff we can do about it, which I reckon this article does fairly well (unless it doesn't; please tell me if there's a way I can make this article explain "what the AGI Safety Act is and the stuff we can do about it" better).

For the 3 videos after that, I could make a written version, but I'd guesstimate that my text summary would take longer to read through than the videos. Maybe watch the videos at 1.5x speed? The videos talk pretty slow relative to how fast people can listen, so I reckon that would work fine.

The 3 "videos" after that are conveniently not videos; they're written Google Docs.

The 2 things after that are just paragraphs, because they only need a paragraph to describe; if I had to do a text summary of them, I'd just copy/paste.

7. And most importantly, Parallel Projects!

I was talking to a friend of mine about animal welfare, and I thought, "Oh! Ya know, I'm getting a buncha folks to send letters, make calls, meet with Congress folk in person, and come up with ideas for a bill, but none of that needs to be AI-specific. I could do all of that at the same time with other issues, like animal welfare!" So if y'all have any ideas for such a bill, do all of the above stuff, just replace "AI risks" with "animal welfare", or whatever else. You can come up with ideas for geopolitics stuff, pandemic prevention stuff, civilizational resilience stuff, animal welfare stuff, and everything else on the community brainstorming spots at

https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge?spawnToken=peMqGxRBRjq98J7xTCIK
8. Wanna do something else? Ask me, and there's a good chance I'll tell ya how!

So, in summary: videos 2, 3, and 4 can be watched at 1.5x speed (if you want, feel free to have a go at making a text summary of them; I'd probably make a text summary longer than the videos themselves, but maybe you or someone reading this can write one shorter), and the rest are already text summaries/have text summaries.


(Sorry if this sounds like I'm rambling. I'm sort of tired, and I sort of was rambling, which would explain why it sounds that way. Sorry!)

The Best Reference Works for Every Subject
Wes R · 4mo

Domain: AI safety (better known as "The alignment problem")

Links:
Here’s 16 minutes’ worth of overview that basically covers all you’d need to know about what AI safety specifically is:

  1. Could AI wipe out humanity? | Most pressing problems (10 mins),
  2. The real risk of AI (6 mins)
  3. Look at this graph: 

If you’d be curious to learn more about the nitty-gritty of AI safety, watch (in this order):

  1. Intro to AI Safety, Remastered (18 mins) (Sidenote: The original video is from 2016, & back then people thought AI would develop much slower.)
  2. The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (23 mins)
  3. Using Dangerous AI, But Safely? (30 mins)

Author(s): 80000 Hours, Siliconversations, Metaculus, Rob Miles.
Type: list
Why: To use as a quick intro

Wes R's Shortform
Wes R · 5mo

Had a conversation on AI safety (namely interpretability of AIs' thinking, and what might cause AI to scheme), and I screenshotted the key takeaways so I could share them with the world (well, mostly just so I have them saved somewhere).

80 concepts on my new version of AI: DecisionBots
Wes R · 5mo
80 concepts on my new version of AI: DecisionBots
Wes R · 6mo

I reckon I might be pretty good(ish) at coming up with good concepts, but given my downvotes, it seems I'm not good at presenting said concepts. Does anyone have any advice for that?

Wikitag Contributions

No wikitag contributions to display.

Posts

Sorted by New
-3 · How we'll make all world leaders work together to make the world better (Expert-approved idea) · 16d · 4
10 · 🫵YOU🫵 get to help the AGI Safety Act in Congress! This is real! · 3mo · 5
1 · Wes R's Shortform · 5mo · 1
0 · 80 concepts on my new version of AI: DecisionBots · 6mo · 2
-2 · Would this solve the (outer) alignment problem, or at least help? · 7mo · 1
0 · Why you can add moral value, and if an AI has moral weights for these moral values, those might be off · 7mo · 1