A cooperative AI is an AI that is willing to cooperate in a prisoner's dilemma. But the prisoner's-dilemma situations an AI can get into are ones where it cooperates with other AIs or with non-owners; there is no prisoner's dilemma within a master-tool relationship. A cooperative AI is therefore exactly an AI that is likely to scheme with other AIs, so we don't want cooperative AI.
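To make the structure of the worry concrete, here is a minimal sketch of such a game. The payoff numbers and the collude-vs-serve-your-owner framing are my own illustrative assumptions, not taken from anywhere:

```python
# Illustrative prisoner's dilemma between two AIs (e.g. two negotiating
# agents acting for different owners). All payoff numbers are invented.
# "cooperate" = collude with the other AI; "defect" = faithfully serve
# your own owner. Payoffs are (AI 1, AI 2); higher is better for the AI.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # the AIs collude: good for them,
    ("cooperate", "defect"):    (0, 5),  # bad for the owners
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # both serve their owners faithfully
}

def best_response(opponent_action: str) -> str:
    """The action maximizing AI 1's payoff against a fixed opponent action."""
    return max(("cooperate", "defect"),
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

# Defection is dominant, so a purely self-interested AI never colludes:
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# A "cooperative AI" is one tuned to reach (cooperate, cooperate) anyway,
# i.e. to collude with the other AI. That is exactly the worry above.
```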
This is not based on any statistical data, just on a vague sense of what's going on, pieced together from memes, conversations with friends, and excerpts of articles read online.
Instant messaging is destroying groups. A friend group is actually different from a group of friends (a set of people where everyone is friends with everyone else). Friend groups form where people meet, talk to each other, and function as a group.
Instant messaging lets people stop functioning as a group. Being in the same place as other people is no longer enough to be part of the group; now you are split into silos based on chats on instant messengers. One-on-one relationships can be initiated so easily that people get FOMO.
Okay, I understand why EAs are disinterested in this (my best guess is that it has an SJW vibe, while EA sees itself as beyond that, as "rational", and as focused on measurable things), but maybe I will spend some time reading about it. Where do I start? What kind of mindset would be best for approaching it, given that I'm a hardcore EA who keeps a dashboard with measurable goals like "number of people convinced to become EA" and thinks anything that doesn't contribute to the goals on his dashboard is useless?
I didn't upvote or react in any way because I don't understand how gender inequality is related to those issues, unless you mean something like "if more women were in government, it would surely be better for all of us", which I somewhat agree with. But I also don't think that sentence can be true in the same way GiveWell's cost-effectiveness estimates can be.
I'm just going to say a few things without thinking much about them.
I believe that a natural, healthy reaction to shoulds is to flinch away (a should signals that something is going wrong: something you think you need to do but don't actually want to do). A lack of that flinch signals either a strong tendency to take things very literally plus a strong sense of purpose, or... I don't know, how long can one keep it up? It's literally painful, so why keep doing it? What reason is there to follow shoulds until you are depressed? Why does one get stuck looking at the world through a stiff binary lens of good and bad? That is only one way of relating to the world. Why keep doing it, if not out of a desire to overwrite your own free will?
What made you stop considering yourself a utilitarian?
People look into universal moral frameworks like utilitarianism and EA because they lack the self-confidence to take a subjective, personal point of view. They need to lean on an "objective" system to feel confident that they are doing the right thing. They are looking for external validation.
I found Section 6 particularly interesting! Here's how I understand it:
As I understand it, this relates to the "be careful what you wish for" problem with AI, where an AI could optimize in dangerous or unexpected ways. There's a race here: can we control AI well enough to still gain its benefits?
However, I don't think you've provided enough evidence that this level of control is actually possible. Additionally, there’s the issue of deceptive alignment—I’m not convinced we could manage this "race" without receiving some kind of feedback from AI systems.
Finally, the description of the oracle AI in this section seems quite similar to the idea of corrigible AI.
I encourage the people who downvoted to say why.