FlorianH

I wonder whether, if sheer land mass really were the single dominant bottleneck for your aims, you could find a particular gov't or population from whom you'd buy the km2 you desire - say, for a few $ bn - as new sovereign land for you: a source of potentially (i) even cheaper and (ii) more robust land to reign over.

Difficult to overstate the role of signaling as a force in human thinking, indeed. A few random examples:

  1. Expensive clothes, rings, cars, houses: Signalling 'I've got a lot of spare resources, it's great to know me/don't mess with me/I won't rob you/I'm interesting/...'
  2. Clothes of a particular type -> signals your political/religious/... views/lifestyle
  3. Talking about interesting news/persons -> signals you can be a valuable connection to have, as you have links
  4. In basic material economics/markets: all sorts of ways to signal your product is good (economists often point to e.g. insurance, public review mechanisms, publicity)
  5. An LW-er liking to get lots of upvotes, to signal his intellect or simply so his post is a priori seen as not entirely unfounded
  6. Us dumbly washing, ironing, or buying new clothes, while stained-but-non-smelly, unironed, or worn clothes would be functionally just as valuable - well, if a major function were not exactly to signal wealth, care, status...
  7. Me teaching & consulting in a suit, because the university uses an age-old signaling tool to show: we care about our clients
  8. The doc wearing his white coat to spread an air of professional doctor-hood to the patient, whom he tricks into not questioning his suggestions and actions
  9. Genetically: many sexually attractive traits have some origin in signaling good-quality genes: a directly functional body (say, strong muscles) and/or 'proving spare resources to waste on useless signals', most egregiously in peacocks/birds of paradise <- I think humans have the latter capacity too, though I might be wrong/no version comes to mind right now
  10. Intellect etc.! There's lots of theory that much of our deeper thinking ability was required much less for basic material survival (hunting etc.) than for social purposes: impressing with our stories etc.; signaling that what we want is good and not only self-serving (ok, the latter maybe is partly not pure 'signaling' but seems at least related).
  11. Putting up solar panels, driving a Tesla, going vegetarian... -> we're clean and modern and care about the climate
    1. I see this sort of signaling more and more by individuals and commercial entities, esp. where it is low-cost: the café that sells "Organic coffee" spends a few cents on organic coffee powder to pseudo-signal care and sustainability while selling you the dirtiest-produced chicken sandwich, saving many dollars compared to organic/honestly produced ingredients.
    2. Of course, shops do this sort of stuff commercially all the time -> all sorts of PR is signaling
  12. Companies do all sorts of psychological tricks to signal they're this or that, to motivate their employees too
  13. Politics. For a stylized example consider: Trump or so with his wall promises, signaling to those receptive to it that he cares about reducing illegal immigration (while knowing he won't/cannot change the situation so much so easily)
    1. Biden or so with his stopping-the-wall promises, signaling a more lenient treatment of illegal immigrants (while equally knowing he won't/cannot change the situation so much so easily)
  14. ... list doesn't stop...  but I guess I better stop here :)

I guess the vastness of signaling importantly depends on how narrowly or broadly we define it: whether we consciously have in mind to signal something vs. whether we instinctively do/like things that serve to signal our quality/importance... But both signaling domains seem absolutely vast - sometimes with actual value for society, but often with zero-sum effects, i.e. a waste of resources.

I read this as saying we’re somehow not ‘true’ to ourselves as we’re doing stuff nature didn’t mean us to do when it originally implanted our emotions.

Indeed, we might look ridiculous from the outside, but who’s there to judge - imho, nature is no authority.

  1. Increasing the odometer may be wrong from the owner’s perspective – but why should the car care about the owner? Assume the car, or the odometer itself, really desires to show a high mile count, just for the sake of it. Isn’t the car making progress if it could magically put itself up on a block?
  2. In the human case: ought we to respect any ‘owner’ of us? A God? Nature, who built us? Maybe not! Whatever creates happiness – I reckon it’s one of the emotions you mean – is good, however ridiculous the means of getting there. Of course, better if it creates widespread & long-term happiness, but that’s another question.
  3. Not gaming nature’s system – what would that mean? Could it be to try to have as many children as possible, or something like that? After all, that is what nature wanted to ensure when it endowed us with our proxy emotions. I’m not sure it’s better.
  4. Think exactly of the last point. Imagine we were NOT gaming nature’s original intents. Just as much as we desire sex, we’d instead desire to actually have the maximal number of children! The world would probably be much more nightmarish than it is!

Now, if you’re saying, we’re in a stupid treadmill, trying to increase our emotion of (long-term) happiness by following the most ridiculous proxy (short-term) emotions for that, and creating a lot of externalized suffering at the same time, and that we forget that besides our individual shallow ‘happiness’ there are some deeper emotional aims, like general human progress etc., I couldn’t agree more!

Or if you're saying the evolutionary market system creates many temptations that exploit small imperfections in our emotional setup to trick us into behaving ridiculously and strongly against our long-term emotional success, again, all with you, and we ought to rein in markets more to limit such things.

One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don't actually believe, but cannot logically dismiss, is that if you're going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.

With widespread information sharing, the 'can't fool all the people all the time' logic extends to this attempt to lie without consequences: we'll learn that people 'hide well but still lie so much', so we'll be even more suspicious in any situation, undoing the alleged externality-reducing effect of the 'not get found out' idea (in any realistic world with imperfect hiding, anyway).

Thanks for the useful overview! Tiny point:

It is also true that Israel has often been more aggressive and warmongering than it needs to be, but alas the same could be said for most countries. Let’s take Israel’s most pointless and least justified war, the Lebanon war. Has the USA ever invaded a foreign country because it provided a safe haven for terrorist attacks against them? [...] Yes - Afghanistan. Has it ever invaded a country for what turns out to be spurious reasons while lying to its populace about the necessity? Yes [... and so on]

Comparing Israel to the US might not be effective, since critics often already view the US (or its foreign policy) just as negatively as Israel anyway (or even view the US as the evil driver behind Israel!). Perhaps a different example or two could strengthen the argument.

Might be worth adding your blog post's subtitle or so, to hint at what Georgism is about (assuming I'm not an exception in not having known "Georgism" is the name for the idea of shifting taxation from labor etc. to natural resources).

Worth adding imho: Feels like a most natural way to do taxation in a world with jobs automated away.

Three related effects/terms:

1. The Malthusian Trap, as maybe the most famous example.

2. In energy/environment we tend to refer to such effects as

  • "rebound" when behavioral adjustment compensates part of the originally enable saving (energy consumption doesn't go down so much as better window insulation means people afford to keep the house warmer) and
  • "backfiring" when behavioral adjustment means we overcompensate (let's assume flights become very efficient, and everyone who today wouldn't have been flying because of cost or environmental conscience, starts to fly all the time, so even more energy is consumed in the end)

3. In economics (though more generally than only the compensation effects you mention): "equilibrium" effects; indeed famously often offsetting effects in the place where the original perturbation occurred, although, as mentioned by Gunnar_Zarncke, maybe overall there is often simply a diffusion of the benefits to society as a whole. Say, with competitive markets in labor & goods, making one product becomes more efficient: yes, you as a worker in that sector won't benefit specifically from the improvement in the long run, but as a society overall we have slightly expanded our Pareto frontier of how much of the stuff we like we can produce.

No reason to believe safety benefits are typically offset 1:1. Standard preference structures suggest the original effect may often be only partly offset, or in other cases even backfire by being more than offset. And net utility for the users of a safety-improved tool might increase in the end in either case.
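
To make the rebound vs. backfire distinction concrete, here is a minimal toy sketch; the "rebound fraction" parameter and all the numbers are my own illustrative assumptions, not anything from the discussion above:

```python
# Toy model of an energy-efficiency improvement with behavioral "rebound".
# rebound = 0      -> the full engineering saving is realized
# 0 < rebound < 1  -> partial rebound (the saving shrinks)
# rebound = 1      -> the saving is fully offset
# rebound > 1      -> backfire (consumption ends up higher than before)

def net_energy(baseline: float, efficiency_gain: float, rebound: float) -> float:
    """Energy use after the improvement, under the toy rebound assumption."""
    engineering_saving = baseline * efficiency_gain
    return baseline - engineering_saving * (1 - rebound)

for r in (0.0, 0.5, 1.0, 1.3):
    print(f"rebound fraction {r}: 100 -> {net_energy(100, 0.3, r):.0f}")
# rebound fraction 0.0: 100 -> 70   (full saving)
# rebound fraction 0.5: 100 -> 85   (partial rebound)
# rebound fraction 1.0: 100 -> 100  (fully offset)
# rebound fraction 1.3: 100 -> 109  (backfire)
```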

Started trying it now; seems great so far. Update after 3 days: Super fast & easy. Recommend!

Dear Yann LeCun, dear all,

Time to reveal myself: I'm actually just a machine designed to minimize cost. It's a sort of weighted cost of deviation from a few competing aims I harbor.

And, dear Yann LeCun, while I wish it were true, it's absolutely laughable to claim I'd be unable to implement things none of you like, if you gave me enough power (i.e. intelligence).

∎.

I mean to propose this as a trivial proof by contradiction against his proposition. Or am I overlooking sth?? I guess 1. I can definitely be implemented by what we might call cost minimization[1] (a toy formalization is sketched below the footnotes), and sadly, however benign my aims today may be in theory, 2. I really don't think anyone can fully trust me or the average human if any of us got infinitely powerful.[2] So, it suffices to think about us humans to see the supposed "Engineers"' (euhh) logic falter, no?

  1. ^

    Whether with or without a strange loop making me sentient (or, if you want, making it appear to myself that I am) doesn't even matter for the question.

  2. ^

    Say, I'd hope I'd do great stuff, be a huge savior, but who really knows; and, either way, it's still rather plausible that I'd do things a large share of people might find rather dystopian.
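
For concreteness, a minimal toy formalization of the "weighted cost of deviation from a few competing aims" idea; the aims, targets, and weights are purely hypothetical illustrations, not a claim about how humans (or LeCun's proposed systems) actually work:

```python
# Toy sketch: an agent that minimizes a weighted cost of deviation from
# competing aims. Nothing in the objective itself rules out actions that
# third parties would dislike; only the listed aims and weights matter.

competing_aims = {           # hypothetical (target, weight) pairs on a 1-D scale
    "comfort":   (8.0, 1.0),
    "status":    (6.0, 0.5),
    "curiosity": (9.0, 0.2),
}

def cost(action: float) -> float:
    """Weighted squared deviation of a (1-D) action from each aim's target."""
    return sum(w * (action - target) ** 2
               for target, w in competing_aims.values())

# Crude grid search standing in for "enough power/intelligence":
best = min((x / 100 for x in range(0, 1001)), key=cost)
print(f"chosen action: {best:.2f} (cost {cost(best):.2f})")  # ~7.53, the weighted mean of the targets
```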
