

Three related effects/terms:

1. The Malthusian Trap, maybe the most famous example.

2. In energy/environment we tend to refer to such effects as

  • "rebound" when behavioral adjustment compensates for part of the originally enabled saving (energy consumption doesn't go down as much, because better window insulation means people can afford to keep the house warmer), and
  • "backfiring" when behavioral adjustment overcompensates (assume flights become very efficient, and everyone who today wouldn't fly because of cost or environmental conscience starts flying all the time, so even more energy is consumed in the end).

3. In economics (though used more generally than only for the compensation effects you mention): "equilibrium" effects. These are indeed famously often offsetting effects in the place where the original perturbation occurred, although, as mentioned by Gunnar_Zarncke, maybe overall there is often simply a diffusion of the benefits to society at large. Say, with competitive markets in labor & goods, making one product becomes more efficient: yes, as a worker in that sector you won't specifically benefit from the improvement in the long run, but as a society overall we have slightly expanded our Pareto frontier of how much of the stuff we like we can produce.

There is no reason to believe safety benefits are typically offset 1:1. Standard preference structures would suggest the original effect is often only partly offset, or in other cases may even backfire by being more than offset. And net utility for the users of a safety-improved tool might increase in the end in either case.
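To make the rebound/backfire distinction concrete, here is a minimal sketch with made-up numbers: an efficiency gain lowers the effective price of an energy service, demand responds with some (hypothetical) elasticity, and the net change in energy use tells us whether the saving was partly offset or more than offset.

```python
def net_energy_change(efficiency_gain, demand_elasticity):
    """Fractional change in total energy use after an efficiency gain.

    efficiency_gain: fraction of energy saved per unit of service (e.g. 0.2)
    demand_elasticity: (positive) elasticity of service demand with respect
        to the effective price of the service. Both numbers are illustrative.
    """
    # The effective price of the energy service falls by the efficiency gain.
    price_change = -efficiency_gain
    # Small-change approximation of the demand response.
    demand_change = -demand_elasticity * price_change
    # Energy use = (energy per service unit) * (service demand).
    return (1 - efficiency_gain) * (1 + demand_change) - 1

# Low elasticity: partial rebound -- a 20% efficiency gain saves only 12%.
print(net_energy_change(0.2, 0.5))   # ~ -0.12
# High elasticity: backfire -- total energy use rises despite the gain.
print(net_energy_change(0.2, 1.5))   # ~ +0.04
```

With these stylized assumptions, backfire occurs exactly when the elasticity is high enough that the demand expansion outweighs the per-unit saving (here, above 1.25); below that, the saving is merely partly offset.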

Started trying it now; seems great so far. Update after 3 days: Super fast & easy. Recommend!

Dear Yann LeCun, dear all,

Time to reveal myself: I'm actually just a machine designed to minimize cost. It's a sort of weighted cost of deviation from a few competing aims I harbor.

And, dear Yann LeCun, while I wish it were true, it's absolutely laughable to claim I'd be unable to implement things none of you like, if you gave me enough power (i.e. intelligence).


I mean to propose this as a trivial proof by contradiction against his proposition. Or am I overlooking something? I guess 1. I can definitely be implemented by what we might call cost minimization[1], and, sadly, however benign my aims today may be in theory, 2. I really don't think anyone can fully trust me or the average human if any of us got infinitely powerful.[2] So it suffices to think about us humans to see the supposed "Engineers'" (euhh) logic falter, no?

  1. ^

    Whether with or without a strange loop making me sentient (or, if you prefer, making it appear to myself that I am) doesn't even matter for the question.

  2. ^

    Say, I'd hope I'd do great stuff, be a huge savior, but who really knows; either way, it remains rather plausible that I'd do things a large share of people might find rather dystopian.

I'm neither entirely convinced by nor entirely against the idea of defining 'root cause' essentially with respect to 'where is intervention plausible'. Either way, to me that way of defining it would not have to exclude "altruism" as a candidate: (i) there could be scope to re-engineer ourselves to become more altruistic, and (ii) without doing that, gosh, how infinitely difficult does it feel to improve the world truly systematically (as you rightly point out).

That is strongly related to Unfit for the Future - The Need for Moral Enhancement (whose core story is spot on imho, even though I find quite a few of the details in the book substandard).

Interesting read, though I don't find it easy to see exactly what your main message is. Two points strike me as potentially relevant regarding

> what do I think the real root of all evil is? As you might have guessed from the above, I believe it’s our inability to understand and cooperate with each other at scale. There are different words for the thing we need more of: trust. social fabric. asabiyyah. attunement. love.

The first point is the more relevant one; the second is a term I'd simply consider naturally core to a discussion of the topic:

  1. Even increased "trust. social fabric." is not so clearly going to bring us forward. Assume people remain similarly self-interested, similarly egoistic, but become able to cooperate better in limited groups: it's easy to imagine circumstances in which the dominant effects include (i) hierarchies in tyrannical dictatorships finding it easier to cooperate to oppress their populations, and/or (ii) firms finding it easier to cooperate to create & exploit market power, replacing some reasonably well-working markets with, say, crazily exploitative oligopolies and oligarchies.
  2. Altruism: might one call the sheer limitation of our degree of altruism*[1] toward the wider population the single most dominant root of the tree of evil? Or, say, lack of altruism combined with the necessary imperfection of self-interested positive collaboration, given that our world features (i) our limited rationality and (ii) a hugely complex natural and economic environment? Increase our altruism, and most of the billions of bad incentives we're exposed to today become a bit less disastrous...


  1. ^

    Along with self-serving bias, i.e. our brain's sneaky way of reducing our actual behavioral/exhibited altruism to levels even below our (already limited) 'heartfelt' degree of altruistic interest: we often think we are trying to act in other people's interests while in reality pursuing our own.

I don't fully disagree, but I'm still inclined not to view not-upvoted-but-not-downvoted things too harshly:

If I'm no exception, not upvoting may often mean: 'I still enjoyed the quick thought-stimulation, even if it seems ultimately not a particularly pertinent message'. One can always downvote if one really feels it's warranted.


Also: reading LW erratically, and hence commenting on old posts, is a recipe for fewer upvotes afaik. So one would have to adjust for this quite strongly.

Fair! Yes. I guess I mainly have issues with the tone of the article, which in turn makes me fear there's little empathy the other way round: i.e. it goes too strongly in the direction of dismissing all superficial care as greedy self-serving display or something, while I find the underlying motivation - however imperfect - is often kind of a nice trait, coming out of genuine care; it's mainly a lack of understanding of the situation (and yes, admittedly some superficiality) that creates the issue.

It seems to me that many even not-so-close acquaintances may - simply out of genuine concern for a fellow human being who (in their conviction*) seems to be suffering - want to offer support, even if they are clumsy at it because they're not used to the situation. I find that rather adorable; for once the humans show a bit of humaneness, even if I wouldn't be surprised if you're right that it often doesn't bring much (and even if I'd grant that they might do it mostly as long as it doesn't cost them much).

*I guess I'm not in a minority in not having known how extremely curable testicular cancer apparently is.

I think the post nicely points out how some stoicism can be a sort of superpower in exactly such situations, but I think we should appreciate how the situation looks from the outside for normal humans who don't expect the victim to be as stoically protected as you were.

From what you write, Acemoglu's suggestions seem unlikely to be very successful, in particular given international competition. I paint things a bit black-and-white, but I think the following logic remains salient even in the messy real world:

  1. If your country unilaterally tries to halt development of the infinitely lucrative AI inventions that could automate jobs, other regions will be more than eager to accommodate the inventing companies. So, from the country's egoistic perspective, it might as well develop the inventions domestically and at least benefit from being the inventor rather than the adopter.
  2. If your country unilaterally tries to halt adoption of the technology, there are numerous capable countries keen to adopt and to swamp you with their sales
  3. If you really were able to coordinate globally to enforce 1. or 2. - extremely unlikely in the current environment, given the huge incentives for individual countries to remain weak in enforcement - then it seems you might as well directly impose the economically first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.


Separately, I at least spontaneously wonder: how would one even go about differentiating the 'bad automation' to be discouraged from legitimate automation without which no modern economy could run competitively anyway? For a random example: if Excel didn't yet exist (or, for its next update..), we'd have to say: sorry, we cannot allow such software, as any given spreadsheet risks removing thousands of hours of work?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation?? So I don't know how we'd enforce non-automation in practice. Just 'it uses a large LLM' feels like a weirdly arbitrary condition - though, ok, I could see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.
