LESSWRONG
PeterMcCluskey

Comments
When Money Becomes Power
PeterMcCluskey · 9d · 20

It is too decentralized to qualify as the kind of centralized power that WalterL was talking about, and probably too decentralized to fit the concerns that Gabriel expressed.

When Money Becomes Power
PeterMcCluskey · 10d · 4-6

So a greater power is necessary to prevent bad actors from concentrating it?

No. Amish society is pretty successful at stopping concentrations of power, mostly via peer pressure.

Being honest with AIs
PeterMcCluskey · 11d · 40

If we’re being honest, the compensation would probably have to be capped at some maximum amount. If the AIs gave up an 80% chance at world takeover for our benefit, it would probably not be within an AI company’s power to give away 80% of all future resources in compensation (or anything close to that).

It seems pretty hard to predict whether an AI company would have such power under conditions that unusual. After all, there would be a pretty powerful AI trying to enforce the agreement.

I don't see the benefit of setting a cap. Let's just inform the AI as best we can about the uncertainties involved, and promise to do our best to uphold our agreements.

Mainstream Grantmaking Expertise (Post 7 of 7 on AI Governance)
PeterMcCluskey · 1mo · 40

As a donor, I'm nervous about charities that pay fully competitive wages, although that concern gets only about a 2% weighting in my decisions. If someone can clearly make more money somewhere else, then that significantly reduces my concern that they'll mislead me about the value of their charity.

Are Intelligent Agents More Ethical?
PeterMcCluskey · 2mo · 20

I've found more detailed comments from Sumner on this topic, and replied to them here.

Foom & Doom 1: “Brain in a box in a basement”
PeterMcCluskey · 2mo · 53

Remember, if the theories were correct and complete, then they could be turned into simulations able to do all the things that the real human cortex can do[5]—vision, language, motor control, reasoning, inventing new scientific paradigms from scratch, founding and running billion-dollar companies, and so on.

So here is a very different kind of learning algorithm waiting to be discovered

There may be important differences in the details, but I've been surprised by how similar LLM behavior is to human behavior. That surprise comes despite my having suspected for decades that artificial neural nets would play an important role in AI.

It seems far-fetched that a new paradigm is needed. Saying that current LLMs can't build billion-dollar companies seems a lot like saying that 5-year-old Elon Musk couldn't build a billion-dollar company. Musk didn't seem to need a paradigm shift to get from the abilities of a 5-year-old to those of a CEO. Accumulation of knowledge seems like the key factor.

But thanks for providing an argument for foom that is clear enough that I can be pretty sure why I disagree.

AI #116: If Anyone Builds It, Everyone Dies
PeterMcCluskey · 3mo · 20

They've done even better over the past week. I've written more on my blog.

Please Donate to CAIP (Post 1 of 7 on AI Governance)
PeterMcCluskey · 3mo · 60

I've donated $30,000.

AI #116: If Anyone Builds It, Everyone Dies
PeterMcCluskey · 3mo · 20

The budget is attempting to gut nuclear

Yet the stock prices of nuclear-related companies that I'm following have done quite well this month (e.g. SMR). There doesn't seem to be a major threat to nuclear power.

AI 2027 Thoughts
PeterMcCluskey · 4mo · 20

I expect deals between AIs to make sense at the stage that AI 2027 describes because the AIs will be uncertain what will happen if they fight.

If AI developers expected winner-take-all results, I'd expect them to be publishing less about their newest techniques, and complaining more about their competitors' inadequate safety practices.

Beyond that, I get a fairly clear vibe that's closer to "this is a fascinating engineering challenge" than to "this is a military conflict".

Posts
27 · AI-Oriented Investments · 1mo · 0
13 · Are Intelligent Agents More Ethical? · 2mo · 7
29 · AI 2027 Thoughts · 4mo · 2
13 · Should AIs be Encouraged to Cooperate? · 5mo · 2
17 · Request for Comments on AI-related Prediction Market Ideas [Q] · 6mo · 1
5 · Medical Windfall Prizes · 7mo · 1
11 · Uncontrollable: A Surprisingly Good Introduction to AI Risk · 7mo · 0
18 · Genesis · 8mo · 0
22 · Corrigibility should be an AI's Only Goal · 8mo · 3
67 · Drexler's Nanotech Software · 9mo · 9