Ah ha ha, then my utility function is likely very different from the OP's!
I Have No Mouth And I Must Scream is one of the most terrifying stories ever.
We could try to pin down "the expected value of what", but no matter what utility function I try to provide, I think I'll run into one of two issues:
1. Fanaticism forces out weird results I wouldn't want to accept
2. A sort of Sorites problem: I have to define a step function that says things like "past a certain point, the value of physical torture becomes infinitely negative", which requires hard breakpoints
Tangential, but I do think it's a mistake to only think of things in terms of expected value.
I wouldn't press the 60% utopia / 15% death button because that'd be a terrible risk to take for my family and friends. Assuming though that they could come with me, would I press the button? Maybe.
However, if the button had another option that carried a nonzero chance (literally any nonzero chance!) of a thousand years of physical torture, I wouldn't press that button, even if its chance of utopia was 99.99%.
I consider pain to be an overwhelmingly dominant factor.
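The asymmetry above can be sketched numerically. This is a toy model, not anyone's actual utility function: all the utility values are assumptions I've picked for illustration, and I'm assuming the unstated remainder of the first button's probability mass is "status quo". The point is just that once torture gets a sufficiently negative utility, any nonzero probability of it swamps a near-certain utopia.

```python
# Toy sketch (all utility numbers are assumed for illustration) of why a
# sufficiently negative utility for torture dominates any chance of utopia.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

U_UTOPIA, U_STATUS_QUO, U_DEATH = 100.0, 0.0, -100.0
U_TORTURE = -1e12  # stand-in for "overwhelmingly dominant" pain

# First button: 60% utopia, 15% death, assumed 25% status quo.
button_a = [(0.60, U_UTOPIA), (0.15, U_DEATH), (0.25, U_STATUS_QUO)]

# Second button: 99.99% utopia, 0.01% thousand-year torture.
button_b = [(0.9999, U_UTOPIA), (0.0001, U_TORTURE)]

print(expected_utility(button_a))  # positive
print(expected_utility(button_b))  # hugely negative despite 99.99% utopia
```

Of course, this is exactly where the fanaticism problem from point 1 bites: with an unboundedly negative torture term, the calculation is hostage to arbitrarily tiny probabilities.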
With software, I can see how this discernment would be useful to society, even if it's a burden for you individually: Your ability to find flaws in software presumably allows you to design better software, which everyone will be able to take advantage of, even if they don't presently realize how much better their current software could be.
However, I struggle with the original post's framing--
"If their art dies out, maybe nobody will know how bad all the pianos are. And then we'll all have slightly worse pianos than we would otherwise have. And I mean if that's the way things are going to go, then let's just steer the Earth into the Sun, because what's the point of any of this."
It seems to me that this level of discernment is only a con, not a pro, because its only result is top-level pianists and tuners detecting slightly worse notes, and therefore making themselves slightly less happy?
"in the context of my writing, AI has consistently proven to have terrible taste and to make awful suggestions"
I agree with this so much. I mostly use ChatGPT as a research or search-the-web tool, and as a way to check for my dumb coding mistakes. On the rare occasions when I'm tempted to ask it something "real", it never fails to answer in the most shallow, useless, frustrating, disappointing way. (And why would I expect better?)