Nition
Comments

Nition's Shortform
Nition · 23d

Is there an existing name for the kind of logical fallacy where someone who genuinely considers whether they can achieve a thing is criticised more harshly than someone who simply claims they'll do the thing and then doesn't?

Examples abound in politics but here's one concrete example:

In 2007 the UN passed the "Declaration on the Rights of Indigenous Peoples". New Zealand, which was already putting significant effort into supporting the rights of its indigenous people, genuinely considered whether it could uphold the requirements of the declaration, and decided not to sign because the declaration was incredibly broad[1]. Many other countries, not doing much for their own indigenous people and recognising the declaration as non-binding, simply signed it, essentially for the good vibes. As a result, New Zealand was criticised for being unwilling to sign while others were, and was eventually pressured into signing (for the good vibes).

[1] See e.g. https://www.converge.org.nz/pma/decleov07.pdf. For example, the entire country could feasibly fall under the declaration's requirements for returning land to indigenous people.

Someone should fund an AGI Blockbuster
Nition · 1mo

Sorry, yeah, I was just joking; of course that very much shouldn't be the actual plot of the film. It just seemed funny because Yudkowsky was thinking about these things long before most people were. Good lesson that I shouldn't treat a LessWrong discussion like a Reddit discussion.

Someone should fund an AGI Blockbuster
Nition · 1mo

It's Yudkowsky that's sent back. He starts a movement called LessWrong to get people thinking about AI risk. He takes a huge time-paradox gamble in writing a book directly called "If Anyone Builds It, Everyone Dies". But somehow it's still happening.

Edit: To clarify, this isn't an actual plot suggestion. Just seemed funny to me because Yudkowsky was thinking about these things long before most people were. I put some real thoughts on plot in my other comment here.

Someone should fund an AGI Blockbuster
Nition · 1mo

Thank you for writing this up, as I've been thinking the same thing for a while.

Totally agree re "slow-burn realism". Start with things exactly as they are today, then move to things that will likely happen soon, so that when those things do happen, people will think back to the film they saw recently. Keep escalating until you get to whatever ending works - maybe something like AI 2027, maybe something like the ending of A Disneyland Without Children.

It doesn't even have to be a scenario where the AI is intentionally evil. We've had a thousand of those films already. An AI that's just trying to do what it's been told but is misaligned might be even scarier. No-one's done a paperclip maximiser film.

Whatever ends up destroying us in the script, if you must have a not-totally-bleak ending, maybe the main characters manage to escape into space. Maybe they look back to watch a grey mass visibly spreading across the green Earth.

HPMOR: The (Probably) Untold Lore
Nition · 1mo

From what I remember it was the jump to Azkaban that seemed like a jarring escalation when I read the story, more than the Quirrell thing. Felt like it should be more overwhelming to Harry, going from relatively normal magic school life to terrifying evil prison.

Why haven't we auto-translated all AI alignment content?
Nition · 2mo

Or a forum where people can discuss AI in whatever language they prefer, and things are automatically translated between users?

Cool idea. I've never seen this done, yet it sounds very achievable with today's tech. I presume you'd set your preferred language, and every post would then appear in that language, either natively or auto-translated, with a little "translated from x, click here to view original" note on each auto-translated post. You'd write your own posts in whatever language you like, and everyone would see them in their preferred language.
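
The core rendering logic could be quite simple. Here's a rough Python sketch of the idea (all names here are made up, and translate() is just a stand-in for whatever machine-translation service the forum actually used):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    language: str  # language the post was written in, e.g. "ja"
    text: str

def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a real machine-translation backend."""
    return f"[{source}->{target}] {text}"

def render_post(post: Post, viewer_language: str) -> str:
    # Show the post natively if it's already in the viewer's language...
    if post.language == viewer_language:
        return post.text
    # ...otherwise auto-translate it, with attribution and a link back to the original.
    translated = translate(post.text, source=post.language, target=viewer_language)
    return f"{translated}\n(translated from {post.language} - click here to view original)"
```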

the jackpot age
Nition · 2mo

Another simple way to understand why the coin flip scenario makes you lose money: If you have $100, double it with heads to $200, then lose 60% on tails, you have $80 - less than you started with.
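
Or, simulated over many flips (a quick sketch assuming the same bet as in the post: a fair coin, +100% on heads, -60% on tails):

```python
import random
import statistics

def play(flips: int, start: float = 100.0) -> float:
    """Final wealth after repeatedly betting everything on the coin flip."""
    wealth = start
    for _ in range(flips):
        if random.random() < 0.5:
            wealth *= 2.0   # heads: +100%
        else:
            wealth *= 0.4   # tails: -60%
    return wealth

random.seed(0)
outcomes = [play(flips=100) for _ in range(10_000)]

# The arithmetic mean is dragged up by a few jackpot runs,
# but the median player ends up with almost nothing:
print(f"mean:   ${statistics.mean(outcomes):,.2f}")
print(f"median: ${statistics.median(outcomes):.4f}")

# Per flip: expected value = 0.5*2 + 0.5*0.4 = 1.2 (grows on average),
# geometric mean = (2 * 0.4) ** 0.5 ≈ 0.894 (typical wealth shrinks ~10.6% per flip).
```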

New Improved Lottery
Nition · 2mo

Reading this post today feels like a prophecy for the rise of reddit.com/r/wallstreetbets

Foom & Doom 1: “Brain in a box in a basement”
Nition · 2mo

I suspect this is why many people's P(Doom) is still under 50% - not so much that ASI probably wouldn't destroy us, but that we won't get to ASI at all any time soon. Although I've seen P(Doom) given a standard time range of the next 100 years, which is a rather long time! But I still suspect some people are thinking mainly about the near future and LLMs, without extrapolating much beyond that.

Nition's Shortform
Nition · 2mo

Thanks! I hadn't read that one before; it's a good point that predicting what any specific person might say requires more intelligence than that person themselves has. Having said that, I'm not convinced that a model trained on human text being super-intelligent at predicting human text necessarily means it can break out above human-level thinking.

If we discovered an intelligent alien species tomorrow, would we expect LLMs to be able to predict their next word? I'm fairly confident that the answer is "only if they thought very much like we do, just in a different language." Similarly, my suspicion is that a what-would-a-human-say predictor can never be a what-would-a-superintelligence-say predictor - or at least, only a predictor of what a human thinks a superintelligence would say.
