Comments
LWLW's Shortform
LWLW · 7d · 73

I think it leads to S-risks. I think people will remain in charge and use AI as a power amplifier. The people most likely to end up with power like having power: they like having control over other people and dominating them. This is completely apparent if you spend the (unpleasant) time reading the Epstein documents the House has released. We need societal and governmental reform before we even think about playing with any of this technology.


The answer to the world’s problems isn’t a bunch of individuals who are good at puzzles solving a puzzle, after which we get utopia. It’s people recognizing the humanity of everyone around them and working on societal and governmental reform. And sure, this stuff sounds like a long shot, but we’ve got to try. I wish I had a less vague answer, but I don’t.

LWLW's Shortform
LWLW · 7d · 37 · -9

I just can’t wrap my head around people who work on AI capabilities or AI control. My worst fear is that AI control works, power inevitably concentrates, and then the people who have the power abuse it. What is outlandish about this chain of events? It seems like we’re trading X-risk for S-risks, which seems like an unbelievably stupid idea. Do people just not care? Are they genuinely fine with a world with S-risks as long as it’s not happening to them? That’s completely monstrous, and I can’t understand it. The people who work at the top labs make me ashamed to be human. It’s a shande (a disgrace).


This probably won’t make a difference, but I’ll write it anyway. If you’re working on AI control, do you trust the people who will end up in charge of the technology to wield it well? If you don’t, why are you working on AI control?

anaguma's Shortform
LWLW · 11d · 10

Another reply, sorry; I just think what you said is super interesting. The insight you shared about Eastern spirituality affecting attitudes towards AI is beautiful. I do wonder if our own Western attitudes towards AI are due to our flawed spiritual beliefs, particularly the idea of a wrathful, judgemental Abrahamic god. I’m not sure it’s a coincidence that someone who was raised as an Orthodox Jew (Eliezer) came to fear AI so much.

On another note, the Old Testament is horrible (I was raised Reform/Californian Jewish; I guess I’m mentioning this because I don’t want to come across as antisemitic). It imbues what should be the greatest source of beauty with our weakest, most immature impulses. The New Testament’s emphasis on mercy is a big, beautiful improvement, but even then I don’t like the Book of Revelation talking about casting sinners into a lake of fire.

anaguma's Shortform
LWLW · 12d · 30

Those are all good points. Well, I hope these things are nice.

anaguma's Shortform
LWLW · 12d · 40

I really don’t think it’s crazy to believe that humans figure out a way to control AGI, at least. There’s enormous financial incentive for it, and power-hungry capitalists want that massive force multiplier. There are also a bunch of mega-talented technical people hacking away at the problem. OpenAI is trying to recruit a ton of quants as well, so by throwing thousands of the greatest minds alive at the problem, I think they might figure it out (obviously one might take issue with calling quants “the greatest minds alive”; if you don’t like that, replace it with “super driven, super smart people”).


I also think it’s possible that the U.S. and China are already talking behind the scenes about a superintelligence ban; that’s just a guess, though. A ban on superintelligence specifically seems more likely because it’s much more intuitive that you can’t control a superintelligence, whereas AGI lets you stop having to pay wages and makes you enormously rich without your having to worry about being outsmarted.

LWLW's Shortform
LWLW · 1mo* · 30

Fun Fact of the Day: Kanye West’s WAIS score is within two points of a Fields Medalist’s (the Fields Medalist is Richard Borcherds; their respective IQs are 135 and 137).

Extra Fun Fact: Kanye West was bragging about this to Donald Trump in the Oval Office. He revealed that his digit span was only 92.5 (which is what makes me think he actually had a psychologist-administered WAIS).

Extra Extra Fun Fact: Richard Borcherds was administered the WAIS-R by Sacha Baron Cohen's first cousin.

LWLW's Shortform
LWLW · 3mo · 10

Thank you so much! I will contact her.

LWLW's Shortform
LWLW · 3mo · 10

I am pretty good at math. At a T20 math program I was chosen for special mentorship and research opportunities over several people who made the Top 500 on the Putnam, because I was deemed “more talented” (as nebulous as that phrase is: I was significantly faster in lectures than they were, digested graduate texts much more quickly, and could solve competition-style problems they couldn’t). My undergrad was interrupted by a health crisis, so I never got a chance to actually engage in research or dedicated Putnam prep, but I believe most (maybe all, if I’m being vain) of my professors would have considered me the brightest student in my year. I don’t know a lot about programming or ML at this point, but I am confident I could learn. I’m two years into my undergrad and will likely be returning next year.

LWLW's Shortform
LWLW · 3mo · 10

I’m weighing my career options, and the two issues that seem most important to me are factory farming and preventing misuse/S-risks from AI. Working for a lab-grown meat startup seems like a very high-impact line of work that could also be technically interesting. I think I would enjoy that career a lot.


However, I believe that S-risks from human misuse of AI and neuroscience introduce scenarios that dwarf factory farming in awfulness. I think there are lots of incredibly intelligent people working on figuring out how to align AIs to who/what we want, but I don’t think there’s nearly the same amount of effort going into the coordination problem and preventing misuse. So naturally, I’d really like to work on this, but I don’t even know how I’d start tackling the problem. It seems much harder and much less straightforward than “help make lab-grown meat cheap enough to end factory farming.” Any advice would be appreciated.

[This comment is no longer endorsed by its author]
Should we aim for flourishing over mere survival? The Better Futures series.
LWLW · 3mo · 11

What do you mean by “solve alignment”? What is your optimal world? What you consider “near-optimal flourishing” is likely very different from many other people’s ideas of near-optimal flourishing. I think people working on alignment are just punting on this issue right now while they figure out how to implement intent and value alignment, but I assume there will be a lot of conflict about what values a model will be aligned to, and who it will be aligned to, if/when we have the technical ability to align powerful AIs.
