Was playing around with ChatGPT and had some fun learning about its thoughts on metaphysics. It looks like the ego is an illusion and hedonistic utilitarianism is too narrow-minded to capture all of welfare. Instead, it opts for principles of beneficence, non-maleficence, autonomy, and justice. Seems to check out. What do you guys think?

chatgpt is not a consistent agent; it is strongly inclined to agree with whatever you ask. it can provide insights, but because it's so agreeable, it has far stronger confirmation bias than humans do. while its guesses seem reasonable, the hedging it insists on constantly outputting is not actually wrong.