LESSWRONG

Maloew
27350

Comments
Book review: Very Important People
Maloew · 4mo · 20

I wonder if partnering with high-impact charities would make the circuit more profitable? Somewhere in the book, a wealthy banker notes he feels bad about spending so much at one of these parties. If roughly half of the money went to GiveWell's top charities, the patrons could feel good instead: they'd be saving lives at a significantly higher rate than by donating to a random charity. The club gets more consumption (hopefully) and a tax write-off, and could add a leaderboard to drive more competition and number-go-up (e.g., X person: 10 lives saved).

LessWrong has been acquired by EA
Maloew · 5mo · 20

Are the new songs going to be posted to YouTube/Spotify, or should I be downloading them?

Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
Maloew · 6mo · 32

Very late follow-up question: how much additional effort do you think would be necessary to do this for something like stamina/energy? It seems like some people (e.g., Eliezer) are bottlenecked more on that than on intelligence, and, more generally, alignment researchers having more energy than their capabilities counterparts seems very useful.

The Failed Strategy of Artificial Intelligence Doomers
Maloew · 7mo · 20

Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes.

I wonder if there is an actual path to alignment-pilling the US government by framing it as a race to solve alignment? That would push them toward military projects focused on aligning AI as quickly as possible, rather than on building a hostile god. It also seems like a fairly defensible position politically: everything becomes a struggle between powers to get aligned AI first, with misaligned AI counted as one of the powers.

Something like: "Whoever solves alignment first wins the future of the galaxy, therefore we need to race to solve alignment. Capabilities don't help unless they're aligned, and they move us closer to a hostile power (the AI) solving alignment and wiping us out."

Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
Maloew · 7mo · 40

It seems plausible to me that some portion of IQ-enhancing genes work through pathways outside the brain (blood flow, faster metabolism, nutrient delivery, stimulant-like effects, etc.). If that is the case, and even just a small portion of the edits don't need to reach the brain, couldn't you get large IQ increases without ever crossing the blood-brain barrier?

If timelines are short, it seems worthwhile to do that first and use the gains to bootstrap from there. Would that give significant returns fast enough to be worth doing? Is this something you're already trying to do?

Posts
12 · Why I'm Pouring Cold Water in My Left Ear, and You Should Too · 7mo · 0 comments
11 · Has Someone Checked The Cold-Water-In-Left-Ear Thing? [Question] · 8mo · 0 comments
4 · Has Anthropic checked if Claude fakes alignment for intended values too? [Question] · 8mo · 1 comment