LESSWRONG
Algon

Comments
This is a review of the reviews
Algon · 3d

But they are doing things that they believe introduce new, huge negative externalities on others without their consent. This rhymes with a historically very harmful pattern of cognition, where folks justify terrible things to themselves. 

Secondly, who said anything about Pausing AI? That's a separate matter. I'm pointing at a pattern of cognition, not advocating for a policy change. 

The Company Man
Algon · 3d

Bjartur Tomas asked me the same thing. I told him I thought it was a reference to Daniel Dennett. That just baffled him. Honestly, I think I just noticed the vibes kinda matched (consciousness philosopher, humorous text about consciousness), so I assumed there had to be a Dennett joke in there somewhere. But no. Bjartur Tomas then told me what DANNet was really referencing: an arbitrary NN he found w/ about the same synapse count as a shrimp. (It's the first pure deep CNN to win computer vision contests, circa 2011.)

This is a review of the reviews
Algon · 3d

If you are actually confident that AI won't kill us all (say, at P > 99%), then this critique doesn't apply to you. It applies to the folks who aren't that confident but say to go ahead anyway.

This is a review of the reviews
Algon · 4d

 A person who tries to avoid moral shortcomings such as selfishness will reject the "doom" framing because it's just a primitive intelligence (humanity) being replaced with a much cleverer and more interesting one (ASI).

I think it's deeply immoral to take a 5% chance of killing everyone on earth in the next decade or two w/o their consent, even if that comes with a 95% chance of utopia.

I think that this sort of reasoning is sadly all too common.

There's a certain pattern of idealistic reasoning that, I think, may have produced the most evil pound-for-pound throughout history. People say that for the sake of the Glorious Future, we can accept, must accept, huge amounts of suffering. Indeed, not just our suffering, but that of others, too. Yes, it may be an unpleasant business, but for the Glorious Future, surely it is a small price to pay?

That great novel starring the Soviet planned economy, Red Plenty, has a beautiful passage exemplifying such a person.

 "They tried to crush us over and over again, but we wouldn't be crushed. We drove off the Whites. We winkled out the priests, out of the churches and more importantly out of people's minds. We got rid of the shopkeepers, thieving bastards, getting their dirty fingers in every deal, making every straight thing crooked. We dragged the farmers into the twentieth century, and that was hard, that was a cruel business, and there were some hungry years there, but it had to be done, we had to get the muck off our boots. We realised that there were saboteurs and enemies among us, and we caught them, but it drove us mad for a while, and for a while we were seeing enemies and saboteurs everywhere, and hurting people who were brothers, sisters, good friends, honest comrades...

[...] Working for the future made the past tolerable, and therefore the present. [...] So much blood, and only one justification for it. Only one reason it could have been all right to have done such things, and aided their doing: if it had been all prologue, all only the last spasms of the death of the old, cruel world, and the birth of a new kind one."

This person has fallen into an affective death spiral, and is lost. Like the Khmer Rouge, like the witch hunters, like many other idealists throughout history, they found it oh so easy to commit the greatest of atrocities with pride. 

Perhaps it is all worth it. I'm doubtful, but it could be true. However, I would advise you to beware the skulls along the path when you commend actions with a >1% chance of killing everyone on earth.

Tomás B.'s Shortform
Algon · 5d

There was a good article discussing this trend that I'm unable to find atm. But going off the top of my head, the most obvious executive overreach by Biden was the student loan forgiveness. 

Tomás B.'s Shortform
Algon · 5d

Interesting take, but I'm not sure if I agree. IMO Trump's second term is another joke played on us by the God of Straight Lines: successive presidents centralize more and more power, sapping it from other institutions. 

AI Lobbying is Not Normal
Algon · 6d

Algon claims 

Daniel Eth claims this. This is a linkpost; it says so at the top of the post. Sorry for the confusion! I added a screenshot from the start of the twitter thread. Hope the edit makes this clearer.

AI Lobbying is Not Normal
Algon · 6d

Scott has an article which may answer your question: "Too Much Dark Money in Almonds".

Let me quote the beginning, which essentially expands on the text you quoted from David. 


Everyone always talks about how much money there is in politics. This is the wrong framing. The right framing is Ansolabehere et al’s: why is there so little money in politics? But Ansolabehere focuses on elections, and the mystery is wider than that.

Sure, during the 2018 election, candidates, parties, PACs, and outsiders combined spent about $5 billion – $2.5 billion on Democrats, $2 billion on Republicans, and $0.5 billion on third parties. And although that sounds like a lot of money to you or me, on the national scale, it’s puny. The US almond industry earns $12 billion per year. Americans spent about 2.5x as much on almonds as on candidates last year.

But also, what about lobbying? Open Secrets reports $3.5 billion in lobbying spending in 2018. Again, sounds like a lot. But when we add $3.5 billion in lobbying to the $5 billion in election spending, we only get $8.5 billion – still less than almonds.

What about think tanks? Based on numbers discussed in this post, I estimate that the budget for all US think tanks, liberal and conservative combined, is probably around $500 million per year. Again, an amount of money that I wish I had. But add it to the total, and we’re only at $9 billion. Still less than almonds!

What about political activist organizations? The National Rifle Association, the two-ton gorilla of advocacy groups, has a yearly budget of $400 million. The ACLU is a little smaller, at $234 million. AIPAC is $80 million. The NAACP is $24 million. None of them are anywhere close to the first-person shooter video game “Overwatch”, which made $1 billion last year. And when we add them all to the total, we’re still less than almonds.

Add up all US spending on candidates, PACs, lobbying, think tanks, and advocacy organizations – liberal and conservative combined – and we’re still $2 billion short of what we spend on almonds each year. In fact, we’re still less than Elon Musk’s personal fortune; Musk could personally fund the entire US political ecosystem on both sides for a whole two-year election cycle.
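The tallying in the quoted passage can be checked with a quick back-of-the-envelope sum, using only the figures Scott cites above (2018 numbers, in billions of USD):

```python
# Figures quoted in "Too Much Dark Money in Almonds" (2018, billions of USD).
election_spending = 5.0   # candidates, parties, PACs, and outside spending
lobbying = 3.5            # Open Secrets, 2018
think_tanks = 0.5         # rough estimate for all US think tanks
advocacy = 0.4 + 0.234 + 0.08 + 0.024  # NRA, ACLU, AIPAC, NAACP

total_politics = election_spending + lobbying + think_tanks + advocacy
almonds = 12.0            # yearly revenue of the US almond industry

print(f"Total political spending: ${total_politics:.2f}B")
print(f"Almonds: ${almonds:.1f}B (gap: ${almonds - total_politics:.2f}B)")
```

The sum comes to roughly $9.7B, leaving the political total a bit over $2B short of almonds, consistent with the passage's conclusion.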

AI Lobbying is Not Normal
Algon · 7d

Not sure I understand your point. Let me try re-phrasing it: "Legislators will find it too scary to build AI. If Crypto threatened the stability of the economy, legislators would want to build it, too." Is that correct?

The Company Man
Algon · 8d

 And I don't want to give them any cryptocurrency, despite having some FartCoin which has been doing very well lately, shockingly well, this FartCoin. I wonder if it will continue to "moon" to the point where I can quit my job and become a VC and go on podcasts in which I will try to downplay the source of my initial capital so as to maintain some illusion that this economy makes any kind of sense at all to me or anyone else for that matter.

I see your disdain for crypto is still alive. 

DANNet

This is beautiful. You've outdone yourself.

"He gave my chef friend a ten million dollar grant."

At this rate, I fear I'll become a broken record. 

Posts

Algon's Shortform (3y)
AI Lobbying is Not Normal (7d)
Toggle Hero Worship (17d)
The Best Resources To Build Any Intuition (1mo)
Against functionalism: a self dialogue (2mo)
Why haven't we auto-translated all AI alignment content? [Question] (2mo)
If we get things right, AI could have huge benefits (3mo)
Advanced AI is a big deal even if we don’t lose control (3mo)
Defeat may be irreversibly catastrophic (3mo)
AI can win a conflict against us (3mo)
Different goals may bring AI into conflict with us (3mo)