Comments

Regarding coherent extrapolated volition, I have recently read Bostrom's paper Base Camp for Mt. Ethics, which presents a slightly different alternative and challenged my views about morality.

One interesting point: at the end (§ Hierarchical norm structure and higher morality), he proposes a way to extrapolate human morality that seems relatively safe and easy for superintelligences to implement. It also preserves moral pluralism, which is great for reaching a consensus without fighting each other (no need to pick a single moral framework like consequentialism or deontology, or a particular set of values).

Roughly, higher moral norms are defined as the moral norms of bigger, more inclusive groups. For example, the moral norms of a civilization are higher in the hierarchical structure than the moral norms of a family. But you can extrapolate further, up to what he calls the "Cosmic host", which can take into account the general moral norms of speculative civilizations of digital minds or aliens...

As the video says, labeling noise becomes more important as LLMs get closer to 100%. Does making a version 2 look worthwhile? I suppose an LLM could be used to automatically flag most problematic questions, and a human could then verify whether each flagged question needs to be fixed or removed.
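For what it's worth, here is a minimal sketch of that flag-then-review loop. The `ask_llm` function is just a placeholder for whatever model API would actually be used, and the prompt and data format are made-up assumptions, not part of the benchmark's tooling:

```python
# Minimal sketch: use an LLM to flag possibly-mislabeled benchmark questions,
# then queue only the flagged ones for human review.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; replace with an actual API client."""
    return "OK"  # stub answer so the script runs end to end

def flag_question(question: str, official_answer: str) -> bool:
    """Return True if the LLM thinks the question or its label is problematic."""
    prompt = (
        "Does this benchmark question have ambiguous wording or an incorrect "
        "official answer? Reply PROBLEM or OK.\n\n"
        f"Question: {question}\nOfficial answer: {official_answer}"
    )
    return ask_llm(prompt).strip().upper().startswith("PROBLEM")

def triage(dataset: list[dict]) -> list[dict]:
    """Keep only flagged items; a human then fixes or removes each one."""
    return [item for item in dataset
            if flag_question(item["question"], item["answer"])]

if __name__ == "__main__":
    sample = [{"question": "2 + 2 = ?", "answer": "4"}]
    print(f"{len(triage(sample))} question(s) flagged for human review")
```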

That's a crucial subject indeed.

What's even crazier is that, since AI can process information much faster than the human brain, it's probably possible to engineer digital minds that are multiple orders of magnitude more sentient than the human brain.[1] I can't say precisely how much more sentient, but biological neurons have a typical peak firing rate of around 200 Hz, whereas transistors can exceed 2 GHz (10 million times more).[2]
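Just to make the arithmetic behind that parenthetical explicit:

$$\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2\times 10^{9}\ \text{Hz}}{2\times 10^{2}\ \text{Hz}} = 10^{7}$$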

It's not us versus them. As Nick Bostrom says, we should search for "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".[1]

  1. ^
  2. ^

But is it important in utilitarianism to think about people? As far as I can tell, utilitarianism is not incompatible with things like panpsychism, where there could be sentience without delimited personhood.

Anyway, even if technically correct, I think this is a bit too complicated and technical for a short introduction to utilitarianism.

What about something simpler, like: "Utilitarianism takes into account the interests of all sentient beings."? Perhaps we could add something on scale sensitivity, e.g.: "Unlike deontology, it is scale sensitive." I don't know if what I propose is good, but I think there is a need for simplification.

I find this paragraph confusing:

"Not to be confused with maximization of utility, or expected utility. If you're a utilitarian, you don't just sum over possible worlds; you sum over people."

It's not clear to me why utilitarianism is not about maximization of expected utility, notably because I guess that for a utilitarian, utility and welfare can be the same thing. And it feels pretty obvious that you sum over people, but the notion of possible worlds is not so easy to interpret here.
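My best guess at the two sums being contrasted, written out (the notation here is mine, not the wiki page's): expected utility averages a value over possible worlds weighted by their probabilities, while total utilitarianism adds up welfare over individuals within a world.

$$\mathbb{E}[U \mid a] \;=\; \sum_{w} p(w \mid a)\, U(w), \qquad\qquad U(w) \;=\; \sum_{i \,\in\, \text{people}} u_i(w)$$

If one takes $U(w)$ itself to be the sum of individual welfares, the two seem compatible rather than opposed, which is part of why the paragraph reads as confusing to me.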


Thanks for the insights. Actually, board game models don't play very well when they are losing so heavily, or winning so heavily, that the outcome doesn't seem to matter. A human player would try to trick you and hope for a mistake. That's not necessarily the case with these models: they play as if their opponent were as good as them, which makes a losing position look unwinnable.

It's much the same with AlphaGo. AlphaGo plays incredibly well until there is a large imbalance. Surprisingly, AlphaGo also doesn't care about winning by 10 points or by half a point, and sometimes plays moves that look bad to humans just because it's winning anyway. And when it's losing, since it assumes its opponent is just as strong, it can't find a leaf in the tree search that ends up winning. Moreover, I suspect that removing a piece causes a distribution shift.
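A toy illustration of the "doesn't care about the margin" behavior (this is only a caricature of value-based move selection, not AlphaGo's actual search): an agent that picks moves purely by estimated win probability is indifferent between winning by 10 points and winning by half a point, and once every move's win probability is near zero, nothing favors the tricky, human-looking swindle attempt.

```python
# Toy illustration: choosing moves by estimated win probability alone
# ignores the expected margin of victory entirely.

from dataclasses import dataclass

@dataclass
class Move:
    name: str
    win_prob: float   # estimated probability of winning the game
    margin: float     # expected final score margin (ignored by selection)

def pick_move(moves: list[Move]) -> Move:
    """Select the move with the highest estimated win probability."""
    return max(moves, key=lambda m: m.win_prob)

# Winning position: a "safe" half-point win beats a flashier 10-point win
# as soon as its win probability is marginally higher.
winning = [
    Move("solid, wins by 0.5", win_prob=0.99, margin=0.5),
    Move("sharp, wins by 10", win_prob=0.97, margin=10.0),
]
print(pick_move(winning).name)  # -> "solid, wins by 0.5"

# Lost position (against an assumed equally strong opponent): all win
# probabilities are ~0, so nothing favors the swindle a human would try.
losing = [
    Move("passive, clearly lost", win_prob=0.01, margin=-20.0),
    Move("swindle attempt", win_prob=0.01, margin=-20.0),
]
print(pick_move(losing).name)  # -> first of the tied moves; no preference
```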