LESSWRONG

PeterMcCluskey

Comments
Are Intelligent Agents More Ethical?
PeterMcCluskey · 4d

I've found more detailed comments from Sumner on this topic, and replied to them here.

Foom & Doom 1: “Brain in a box in a basement”
PeterMcCluskey · 4d

Remember, if the theories were correct and complete, then they could be turned into simulations able to do all the things that the real human cortex can do[5]—vision, language, motor control, reasoning, inventing new scientific paradigms from scratch, founding and running billion-dollar companies, and so on.

So here is a very different kind of learning algorithm waiting to be discovered

There may be important differences in the details, but I've been surprised by how similar the behavior of LLMs is to that of humans. That surprise comes in spite of my having suspected for decades that artificial neural nets would play an important role in AI.

It seems far-fetched that a new paradigm is needed. Saying that current LLMs can't build billion-dollar companies seems a lot like saying that 5-year-old Elon Musk couldn't build a billion-dollar company. Musk didn't seem to need a paradigm shift to get from the abilities of a 5-year-old to those of a CEO. Accumulation of knowledge seems like the key factor.

But thanks for providing an argument for foom that is clear enough that I can be pretty sure why I disagree.

AI #116: If Anyone Builds It, Everyone Dies
PeterMcCluskey · 1mo

They've done even better over the past week. I've written more on my blog.

Please Donate to CAIP (Post 1 of 7 on AI Governance)
PeterMcCluskey · 2mo

I've donated $30,000.

AI #116: If Anyone Builds It, Everyone Dies
PeterMcCluskey · 2mo

The budget is attempting to gut nuclear

Yet the stock prices of nuclear-related companies that I'm following have done quite well this month (e.g. SMR). There doesn't seem to be a major threat to nuclear power.

AI 2027 Thoughts
PeterMcCluskey · 3mo

I expect deals between AIs to make sense at the stage that AI 2027 describes because the AIs will be uncertain what will happen if they fight.

If AI developers expected winner-take-all results, I'd expect them to be publishing less about their newest techniques, and complaining more about their competitors' inadequate safety practices.

Beyond that, I get a fairly clear vibe that's closer to "this is a fascinating engineering challenge" than to "this is a military conflict".

OpenAI lost $5 billion in 2024 (and its losses are increasing)
PeterMcCluskey · 3mo

This reminds me a lot of what people said about Amazon near the peak of the dot-com bubble (and also of what people said at the time about internet startups that actually failed).

Three Types of Intelligence Explosion
PeterMcCluskey · 4mo

The first year or two of human learning seem optimized enough that they're mostly in evolutionary equilibrium - see Henrich's discussion of the similarities to chimpanzees in The Secret of Our Success.

Human learning around age 10 is presumably far from equilibrium.

I'd guess that I attribute more of the valuable learning to the first 2 years or so than other people here do.

Three Types of Intelligence Explosion
PeterMcCluskey · 4mo

I agree with most of this, but the 13 OOMs from the software feedback loop sounds implausible.

From How Far Can AI Progress Before Hitting Effective Physical Limits?:

the brain is severely undertrained, humans spend only a small fraction of their time on focussed academic learning

I expect that humans spend at least 10% of their first decade building a world model, and that evolution has heavily optimized at least the first couple of years of that. A large improvement in school-based learning wouldn't have much effect on my estimate of the total learning needed.

Can time preferences make AI safe?
PeterMcCluskey · 4mo

This general idea has been discussed under the term myopia.
Posts

13 · Are Intelligent Agents More Ethical? · 23d · 7 comments
29 · AI 2027 Thoughts · 3mo · 2 comments
13 · Should AIs be Encouraged to Cooperate? · 3mo · 2 comments
17 · Request for Comments on AI-related Prediction Market Ideas [Question] · 4mo · 1 comment
5 · Medical Windfall Prizes · 5mo · 1 comment
11 · Uncontrollable: A Surprisingly Good Introduction to AI Risk · 6mo · 0 comments
18 · Genesis · 6mo · 0 comments
22 · Corrigibility should be an AI's Only Goal · 6mo · 3 comments
67 · Drexler's Nanotech Software · 7mo · 9 comments
12 · AI Prejudices: Practical Implications · 9mo · 0 comments