LESSWRONG

Nathan Young

Comments, sorted by newest
Nathan Young's Shortform
Nathan Young · 8d

This feels like weak evidence against my point, though I think "timelines" and "overall AI risk" differ in how safe they are to argue about.

My AI Vibes are Shifting
Nathan Young · 9d

I think this is a question on which we should spend lots of time actually thinking and writing. I'm not sure my approximations will be good at guessing the final result.

My AI Vibes are Shifting
Nathan Young · 9d

That seems more probable in a world where AI companies can bring all the required tools in-house. But what if they have large supply chains for minerals and robotics, rent factory space, and employ contractors to do the .0001% of work they can't?

At that point I still expect it to be hard for them to control bits of land without being governed, which I expect to be good for AI risk.

My AI Vibes are Shifting
Nathan Young · 9d

Do you think governance is currently misaligned? It seems fine to me.

Nathan Young's Shortform
Nathan Young · 9d

In the article below I give an honest list of considerations about my thoughts on AI. Currently it sits at -1 karma (excluding my own vote).

This is sort of fine. I don't think it's a great article, and I am not sure it's highly worthy of people's attention, but a community that wants to encourage thinking about AI might not want to penalise those who do so.

When I was a Christian, questions were always "acceptable". But if I asked "so why is the Bible true?" that would have received a sort of stern look and then a single-paragraph answer. While the question was explicitly accepted, it was implicitly a thing I should stop doing.

I think it's probable you are doing the same thing. "Oh yes, let's think about AI" — but if I write something about my thoughts on AI that disagrees with you, it isn't worth reading or engaging with.

And my article is relatively mild pushback. What happens if a genuine critic comes on here and writes something? I agree that some criticism is bad, but what if it is in the form that you ask for (lists of considerations, transparently written)?

Is the only criticism worth reading that which is actually convincing to you? And won't that, due to some bias, likely leave this place an echo chamber?

https://www.lesswrong.com/posts/HKHqFWT7qiac2tvtF/my-ai-vibes-are-shifting?commentId=RNFEEi7tm6eFcwkgP 

My AI Vibes are Shifting
Nathan Young · 9d

I think considerations are an important input into decision-making, and if you downvote anyone who writes clear considerations without conforming to your extremely high standards, then you will tax disagreement.

Perhaps you are very confident that you are only taxing bad takes and not just contrary ones, but I am not as confident as you are.

Overall, I think this is poor behaviour from a truth-seeking community. I don't expect every critic to be complimented to high heaven (as sometimes happens on the EA Forum), but this seems like a bad equilibrium for a post that is (in my view) fine and presented in the way this community requests (transparent, with a list of considerations).

As for the title:

> If you titled this "some factors maybe in AI risk" or "some factors changes that have shifted my p(doom)" or something and left out the p(doom) I'd have upvoted because you have some interesting observations.

This in particular seems like a dodge. The actual title, "My AI Vibes are Shifting", is hardly confident or declarative. Are you sure you would actually upvote if I had titled it as you suggest?

My AI Vibes are Shifting
Nathan Young · 9d

I see I have 4 votes, with neutral karma overall. I should hope the downvoters thought this wasn't worth reading, rather than that they merely disagreed.

My AI Vibes are Shifting
Nathan Young · 9d

These are vibes, not predictions.

But in the other worlds I expect governance to sit between many different AI actors and ensure that no single actor controls everything — and then to tax them to pay for this function.

Why doesn't SpaceX run a country?

My AI Vibes are Shifting
Nathan Young · 10d

When do you expect this to happen by?

Debate experiments at The Curve, LessOnline and Manifest
Nathan Young · 3mo

I think it would be different if it happened today. Harris's position seems less controversial now. I'm not sure you'd print that he was a racialist today.

Posts

My AI Vibes are Shifting (10d)
Debate experiments at The Curve, LessOnline and Manifest (3mo)
Which journalists would you give quotes to? [one journalist per comment, agree vote for trustworthy] (4mo)
Dear AGI, (7mo)
The Peeperi (unfinished) - By Katja Grace (7mo)
Claude 3.5 Sonnet (New)'s AGI scenario (7mo)
Don't go bankrupt, don't go rogue (7mo)
Anatomy of a Dance Class: A step by step guide (8mo)
Will bird flu be the next Covid? "Little chance" says my dashboard. (8mo)
What I expected from this site: A LessWrong review (9mo)