LESSWRONG

Eliezer Yudkowsky

Comments (sorted by newest)
Eliezer Yudkowsky's Shortform · 3y
Wei Dai's Shortform
Eliezer Yudkowsky · 11h

It's indeed the case that I haven't been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it'd had Twitter's options for turning off commenting entirely.

So yes, I suppose that people could go ahead and make this decision without me. I haven't been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.

Problems I've Tried to Legibilize
Eliezer Yudkowsky · 2d

Has anyone else, or anyone outside the tight MIRI cluster, made progress on any of the problems you've tried to legibilize for them?

Warning Aliens About the Dangerous AI We Might Create
Eliezer Yudkowsky · 5d

There is an extremely short period during which aliens as stupid as us would benefit at all from this warning. In humanity's case, there's only a couple of centuries between when we can first send and detect radio signals and when we either destroy ourselves or perhaps get a little wiser. Aliens cannot be remotely common, or the galaxies would be full and we would find ourselves at an earlier period when those galaxies were not yet full. The chance that any such signal helps any alien close enough to decode it is nearly 0.

I ate bear fat with honey and salt flakes, to prove a point
Eliezer Yudkowsky · 12d

It's not really a fair question because we all have different things to do with our lives than launch snack lines or restaurant carts, but still: If people have discovered such an amazing delicious novel taste, both new and better than ice cream for 1/3 of those who try it, where are the people betting that it would be an amazing commercial success if only somebody produced more of it and advertised it more broadly?

I ate bear fat with honey and salt flakes, to prove a point
Eliezer Yudkowsky · 13d

And tbh, I wish I'd been there to try the food myself, because my actual first reaction here is: "Well, this sure is not a popular treat in supermarkets, so my guess is that some of my legion of admiring followers are so dead set on proving me wrong that they proclaimed the superior taste, to them, of something that sure has not been a wider commercial success, and/or didn't like ice cream much in the first place."

I ate bear fat with honey and salt flakes, to prove a point
Eliezer Yudkowsky · 13d

I have the most disobedient cultists on the planet.

Murder plots are infohazards
Eliezer Yudkowsky · 13d

What about this is supposed to be an infohazard rather than just private info? It doesn't seem like a cognitohazard, negatively-valued information (movie spoilers), or a socioinfohazard / exfohazard (information each individual prefers to know themselves but prefers society not to know).

The Tale of the Top-Tier Intellect
Eliezer Yudkowsky · 14d

See Simon Lerner above on how dead the horse appears to be.

On Fleshling Safety: A Debate by Klurl and Trapaucius.
Eliezer Yudkowsky · 22d

So far as I can tell, there are still a number of EAs out there who did not get the idea of "the stuff you do with gradient descent does not pin down the thing you want to teach the AI, because it's a large space and your dataset underspecifies that internal motivation" and who go, "Aha, but you have not considered that by TRAINING the AI we are providing a REASON for the AI to have the internal motivations I want! And have you also considered that gradient descent doesn't locate a RANDOM element of the space?"

I don't much expect that the primary proponents of this talk can be rescued, but maybe the people they propagandize can be.

eggsyntax's Shortform
Eliezer Yudkowsky · 25d

Then I now agree that you've identified a conflict of fact with what I said.

Thank you for taking the time to correct me and document your correction. I hope I remember this and can avoid repeating the mistake in the future.

Sequences

Metaethics
Quantum Physics
Fun Theory
Ethical Injunctions
The Bayesian Conspiracy
Three Worlds Collide
Highly Advanced Epistemology 101 for Beginners
Inadequate Equilibria
The Craft and the Community

Posts (sorted by new)

The Tale of the Top-Tier Intellect · 15d · 85 karma, 53 comments
On Fleshling Safety: A Debate by Klurl and Trapaucius. · 22d · 244 karma, 52 comments
Why Corrigibility is Hard and Important (i.e. "Whence the high MIRI confidence in alignment difficulty?") · 2mo · 87 karma, 54 comments
Re: recent Anthropic safety research · 3mo · 149 karma, 22 comments
The Problem · 3mo · 316 karma, 218 comments
HPMOR: The (Probably) Untold Lore · 4mo · 425 karma, 160 comments
The Sun is big, but superintelligences will not spare Earth a little sunlight · 1y · 215 karma, 143 comments
Universal Basic Income and Poverty · 1y · 346 karma, 147 comments
'Empiricism!' as Anti-Epistemology · 2y · 170 karma, 92 comments
My current LK99 questions · 2y · 206 karma, 38 comments

Wikitag Contributions

Logical decision theories · a month ago · (+83)
Logical decision theories · 5 months ago · (+803/-62)
Multiple stage fallacy · 2 years ago · (+16)