It's difficult to overstate the role of signaling as a force in human thinking. Indeed, a few random examples:
I guess how vast signaling is depends importantly on how narrowly or broadly we define it: whether we consciously have in mind to signal something, vs. whether we instinctively do/like things that serve to signal our quality/importance. But both signaling domains seem absolutely vast - sometimes with actual value for society, but often with zero-sum effects, i.e. a waste of resources.
I read this as saying we're somehow not 'true' to ourselves, as we're doing things nature didn't mean us to do when it originally implanted our emotions.
Indeed, we might look ridiculous from the outside, but who's there to judge? Imho, nature is no authority.
Now, if you're saying we're on a stupid treadmill, trying to increase our emotion of (long-term) happiness by following the most ridiculous (short-term) proxy emotions for it, creating a lot of externalized suffering along the way, and forgetting that besides our individual shallow 'happiness' there are deeper emotional aims, like general human progress - then I couldn't agree more!
Or if you're saying the evolutionary market system creates many temptations that exploit small imperfections in our emotional setup, tricking us into behaving ridiculously and strongly against our long-term emotional success - again, all with you, and we ought to rein in markets more to limit such things.
One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don't actually believe, but cannot logically dismiss, is that if you're going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.
With widespread information sharing, the 'can't fool all the people all the time' logic extends to this attempt to lie without consequences: we'll learn that people 'hide well but still lie a lot', so we'll be even more suspicious in any situation, undoing the alleged externality-reducing effect of the 'not get found out' idea (in any realistic world with imperfect hiding, anyway).
Thanks for the useful overview! Tiny point:
It is also true that Israel has often been more aggressive and warmongering than it needs to be, but alas the same could be said for most countries. Let’s take Israel’s most pointless and least justified war, the Lebanon war. Has the USA ever invaded a foreign country because it provided a safe haven for terrorist attacks against them? [...] Yes - Afghanistan. Has it ever invaded a country for what turns out to be spurious reasons while lying to its populace about the necessity? Yes [... and so on]
Comparing Israel to the US might not be effective, since critics often already view the US (or its foreign policy) just as negatively as Israel (or even view the US as the evil driver behind Israel!). Perhaps different examples could strengthen the argument.
Might be worth adding your blog post's subtitle or so, to hint at what Georgism is about (assuming I'm not an exception in not having known "Georgism" is the name for the idea of shifting taxation from labor etc. to natural resources).
Worth adding imho: it feels like a most natural way to do taxation in a world with jobs automated away.
Three related effects/terms:
1. Malthusian Trap as the maybe most famous example.
2. In energy/environment we tend to refer to such effects as 'rebound effects' (in the extreme, the Jevons paradox).
3. In economics (though more generally than only the compensation effects you mention): "equilibrium" effects; indeed, famously, offsetting effects often arise in the place where the original perturbation occurred, although, as mentioned by Gunnar_Zarncke, overall there is often simply a diffusion of the benefits to society at large. Say, with competitive markets in labor & goods, if making one product becomes more efficient: yes, you as a worker in that sector won't benefit specifically from the improvement in the long run, but society overall slightly expands its Pareto frontier of how much stuff we like we can produce.
There's no reason to believe safety benefits are typically offset 1:1. Standard preference structures suggest the original effect may often be only partly offset, or in other cases even backfire by being more than offset. And net utility for the users of a safety-improved tool might increase in either case.
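A toy numeric sketch of the partial-offset case (the functional forms and numbers here are my own illustrative assumptions, not anything from the thread): a user chooses an exposure level s to a risky activity, with saturating benefit 1 - exp(-s) and expected harm k*s, where k is the tool's riskiness; a safety improvement halves k.

```python
import math

def optimal_exposure(k):
    # With utility u(s) = 1 - exp(-s) - k*s, the first-order condition
    # exp(-s) = k gives the interior optimum s* = -ln(k), for 0 < k < 1.
    return -math.log(k)

def expected_harm(k):
    # Harm rate k times the exposure the user chooses at that rate.
    return k * optimal_exposure(k)

def utility(k):
    s = optimal_exposure(k)
    return 1 - math.exp(-s) - k * s

# Safety improvement: the harm rate k is halved.
k_before, k_after = 0.2, 0.1

print(f"exposure: {optimal_exposure(k_before):.2f} -> {optimal_exposure(k_after):.2f}")
print(f"expected harm: {expected_harm(k_before):.3f} -> {expected_harm(k_after):.3f}")
print(f"utility: {utility(k_before):.3f} -> {utility(k_after):.3f}")
```

With these assumed numbers, the user responds by increasing exposure, so expected harm falls by less than the 50% the engineering improvement alone would deliver (partial offset), while the user's net utility rises. With a less saturating benefit function, the same logic can even produce a more-than-offset (backfire) in harm, still with utility rising.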
Started trying it now; seems great so far. Update after 3 days: Super fast & easy. Recommend!
Dear Yann LeCun, dear all,
Time to reveal myself: I'm actually just a machine designed to minimize cost. It's a sort of weighted cost of deviation from a few competing aims I harbor.
And, dear Yann LeCun, while I wish it were true, it's absolutely laughable to claim I'd be unable to implement things none of you like, if you gave me enough power (i.e. intelligence).
∎.
I mean to propose this as a trivial proof by contradiction against his proposition. Or am I overlooking something? I guess 1. I can definitely be implemented by what we might call cost minimization[1], and, sadly, however benign my aims today may be in theory, 2. I really don't think anyone can fully trust me, or the average human, if any of us got infinitely powerful.[2] So it suffices to think about us humans to see the supposed "Engineers'" (euhh) logic falter, no?
Whether with or without a strange loop making me sentient (or, if you want, making it appear to myself that I am) doesn't even matter for the question.
Say, I'd hope I'd do great stuff, be a huge savior - but who really knows, and, either way, it's still rather plausible that I'd do things a large share of people might find rather dystopian.
I wonder whether, if sheer land mass really were the single dominant bottleneck for whatever your aims, you could find a particular gov't or population willing to sell you the km2 you desire - say, for a few $bn - as new sovereign land: a source of potentially (i) even cheaper and (ii) more robust land to reign over.