FVelde
Posts · Sorted by New

FVelde's Shortform · 24d
How reasonable is taking extinction risk? · 1y

Comments · Sorted by Newest

Day #1 Fasting, Livestream, In protest of Superintelligent AI
FVelde · 17d

What if you're wrong?

The Dutch are Working Four Days a Week
FVelde · 18d

The effect is hard, if not impossible, to determine, but the Netherlands has one of the lowest unemployment rates in Europe.

https://en.m.wikipedia.org/wiki/List_of_sovereign_states_in_Europe_by_unemployment_rate

FVelde's Shortform
FVelde · 21d

Guido has already protested repeatedly, and even been arrested multiple times, in front of OA. I don't know his exact reasons for choosing Anthropic now, but spreading the protests across the different actors makes sense to me.

People also asked the same kind of 'why not ...' question when he and others repeatedly protested OA. In the end, whatever reasons there may be to go somewhere else, you can only be in one place.

FVelde's Shortform
FVelde · 24d

There is now one hunger striker in front of Anthropic and two in front of Google DeepMind.

https://x.com/DSheremet_/status/1964749851490406546 

Winning the power to lose
FVelde · 1mo

Could you give the source(s) of these anonymous surveys of engineers with insider knowledge about the arrival of AGI? I would be interested in seeing them.

Four ways learning Econ makes people dumber re: future AI
FVelde · 1mo

Good observations. The more general problem is modeling. Models break, and 'hope for the best, expect the worst' generally works better than any model. What matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really, really important place. The same was true for the models that preceded economic crises. One can go through life without models but prepared for the worst, but not the other way around.

Daniel Kokotajlo's Shortform
FVelde · 2mo

I fail to see how that's an argument. It doesn't seem to me a reason not to cull now, only perhaps a reason not to advocate for it, and even that I would disagree with. Can you explain your reasoning?

Daniel Kokotajlo's Shortform
FVelde · 2mo

This is great.
Since you already anticipate the dangerous takeoff that is coming, and we are unsure whether we will notice it and be able to act in time: why not cull now?

I get that part of the point is slowing down the takeoff, and culling now does not have that effect.
But what if March 2027 is too late? What if getting proto-AGIs to do AI R&D only requires minor extra training or unhobbling?

I'd trust a plan that relies on massively slowing down AI now far more than one that relies on slowing it down in time later.

A case for courage, when speaking of AI danger
FVelde · 3mo

I can attest that, for me, talking about AI dangers in an ashamed way has rarely, if ever, prompted a positive response. I've noticed, and been told, that it gives 'intellectual smartass' vibes rather than 'concerned person' vibes.

The Value Proposition of Romantic Relationships
FVelde · 4mo

A lot of this seems to be pointing to 'love'.
