Edit 2 - PARTIAL RESULT:
Yudkowsky's TED talk seems to already integrate what I was hoping to figure out how to convey with this discussion! Yay! Looking forward to when it's posted on YouTube.
Edits:
- This is a bad post/question. It should not be highly upvoted. It was never intended or expected to be otherwise.
- This is a request for comments and debate. I don't feel we went back and forth usefully in figuring out what it is I'm concerned about in his approach; there are lots of object-level parts of yud's views which I agree with, but those aren't the problem I'm worried about. The problem is that he's participating in causing <?social autoimmune inflammation or something?>. Perhaps I'm looking for https://www.lesswrong.com/posts/KYzHzqtfnTKmJXNXg/the-toxoplasma-of-agi-doom-and-capabilities?
- I found another post that makes a similar point and reposted it here; I think it's a much better post than this request.
- I added the [] to the title, and also surrounded a couple of confused concepts with <??>.
- The reply post that calls this post out for being badly written is totally right. Yup, this post is bad! Don't upvote it a lot! I have a good point to make, and I haven't made it, and don't yet totally know what it is.
- Thanks for reading, sorry about all the conflictons in my thinking and writing here. Help? :[
- Bonus, maybe for a later post, maybe for this one: I'm pretty sure the "guess the definition" meaning of conflicton is already insightful, but I'm not quite sure how to formalize what I'm trying to say with the word. I spent some time trying out words and talking to language models to find one that already mostly means the right thing, the quantum of conflict, but I don't yet know exactly what the quantum of conflict actually is in a mechanistic or type-signature sense.
original blurb:
https://mobile.twitter.com/QuintinPope5/status/1642100668126355456
This LessWrong post is not a high-quality post, and if its score is far from zero (positive or negative) two days after posting, I'll be sad. Yudkowsky is digging a hole and just won't stop digging. I don't have a clue how to explain to him what the problem is if it's not obvious on the surface, so this is a call for input: can anyone explain why Yudkowsky is being a fool in a way he'll understand?
Strongly agree with this model. (For others: I mentioned this next part to tammy/@carado, and she already edited the image a little to clarify by adding the true/false and "how much of population" labels.) Even so, the image still seems to contain the same problem I'm trying to figure out how to specify. Like, it is a claim within a model, and if the model is true, then this is simply an explanation of the truth. But someone who is highly uncertain about this claim, or who currently has a lot of confidence pointing away from it, won't be m...