I think this is relevant: https://slatestarcodex.com/2017/07/24/targeting-meritocracy/ . 'Prevalence and tendency imply morality' - I don't think that's an argument the people gestured at here are trying to make.
Orange peel is a standard ingredient in Chinese cooking. Just be careful with pesticides.
Why wouldn't AI agents, or corporations led by AIs, develop their own needs and wants, the way corporations already do today? An economy has no need for humans; it only needs needs, and scarce means to meet those needs.
Agree, but I'm not sure what you are implying. Is it that Sam is not as concerned about risks because the expected capabilities are lower than he publicly lets on, and timelines are longer than indicated, and hence we should be less concerned as well?
On the one hand, this is consistent with Sam's family planning. On the other hand, other OpenAI employees who are less publicly visible, and perhaps have less marginal utility from hype messaging, tell consistent stories (e.g. roon, https://nitter.poast.org/McaleerStephen/status/1875380842157178994#m).
the only way to appropriately address [long term risk from AI systems of incredible capabilities] is to ship product and learn.
True, but nitpicking about the memorability: the long-term value may not lie in the short-term value of the conversation itself. It may lie in an introduction made by someone you briefly got to know in a conversation that was itself low-value, in an email about a job being forwarded to you, etc. You wouldn't necessarily call the conversation memorable, but the value likely wouldn't have been realized without it.
It doesn't need to be a single high-value conversation. I'd say the long-term value of conversations is heavy-tailed, so it may pay to have lots of conversations of low expected value. https://www.sciencedirect.com/science/article/abs/pii/B9780124424500500250
The rationalist term is 'front-running steelman'; for German, Claude suggests Replikationsmangeleinsichtsanerkennung ('acknowledgement of the insight despite lack of replication'):
Tess: There should be a German word that means “I see where you’re going with this, and while I agree with the point you will eventually get to, the scientific study you are about to cite doesn’t replicate.”
I would interpret Replikationsmangeleinsichtsanerkennung as 'positive recognition that somebody changed their mind and accepts that something doesn't replicate'. Perhaps replikationsmangelunbeschadete Zustimmung ('agreement undiminished by the lack of replication') would be better. Yes, adjectives do compound with compound nouns.
Both require an appropriate Overton Window.
Ok, but why do you think that AIs learn skills at a constant rate? Might it be that higher-level skills take more time to learn, because compute scales exponentially with time, while for higher-level skills data is exponentially scarcer and each example needs context that grows linearly with task length, so that the total data processed scales superexponentially with skill level?
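To make the question concrete, here is a minimal sketch; all functional forms below are my own illustrative assumptions, not claims from the comment. Suppose available compute grows exponentially in time, $C(t) = C_0 e^{gt}$, and reaching skill level $k$ requires total data/compute $f(k)$. Level $k$ is reached when $C(t) = f(k)$, so the wait between consecutive levels is:

```latex
\Delta t_k \;=\; \frac{\ln f(k+1) - \ln f(k)}{g}

% Case 1: plain exponential requirements, f(k) = e^{ck}:
%   \Delta t_k = c/g, a constant -- skills arrive at a constant rate.

% Case 2: superexponential requirements, e.g. f(k) = e^{ck^2}:
%   \Delta t_k = \frac{c\,\bigl((k+1)^2 - k^2\bigr)}{g} = \frac{c\,(2k+1)}{g},
%   which grows with k -- each successive level takes longer to reach.
```

So whether skills are learned at a constant rate hinges on whether the requirement curve $f(k)$ is merely exponential or genuinely superexponential in skill level.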