I endorse this concern. I do think it is possible to create social value in this way though, especially for relatively simple activities with good alignment between apparent and real benefits, e.g. transferring money / fungible resources to an agent that is trying to do good, or supplying additional tax revenue. So I think there are at least some equilibria where the social benefits significantly outweigh the negative effects, and indeed recoup a significant fraction of the signaler's total loss.
I think that reaching a good equilibrium is especially plausible amongst the rationalists/EAs.
If 90% of the price of a diamond ring goes to an efficient charity, then the ring seems to lose 90% of its signaling value for an EA. Suppose an EA plans to donate $X or Y% of lifetime income to an efficient charity, believing that to be the optimal balance between selfish and altruistic values. After buying the ring, they would reduce their future donations by 90% of the ring's price, since that maintains the optimal balance between their values. So the amount of money they "lost" by buying the ring is only 10% of its price, and that is what the recipient of the ring and other observers would take into account.
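The budget arithmetic above can be sketched in a few lines; the $10,000 ring price is purely illustrative, and the 90% charity share is the figure assumed in the example:

```python
# Illustrative numbers for the donation-offset argument.
ring_price = 10_000      # assumed price of the ring, in dollars
charity_share = 0.90     # fraction of the price that reaches the efficient charity

# The buyer had already fixed their total lifetime donation, so the
# charity-bound portion of the ring's price substitutes for future donations.
donation_offset = charity_share * ring_price

# Net cost to the buyer's selfish budget -- the real signaling cost.
net_signaling_cost = ring_price - donation_offset

print(donation_offset)      # 9000.0 in future donations foregone
print(net_signaling_cost)   # 1000.0, i.e. 10% of the ring's price
```

On these numbers the buyer only sacrifices 10% of the sticker price, which is why observers who understand the arrangement should discount the signal accordingly.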