Thanks for your comment; I think it raises an important point, though I'm not sure I've understood it correctly. Are you saying that by doing random things that make other people happy, I would be messing with their reward function? So that I would, for example, reward and thus incentivise random other things the person doesn't really value?

In writing this, I had indeed assumed that while happiness is probably not the only valuable thing, and we wouldn't want to hook everybody up to a happiness machine, the marginal bit of happiness in our world would be positive and quite harmless. But maybe superstimuli are a counterexample to that? I'll have to think about it more.