I try to practice independent reasoning and critical thinking, and to challenge current solutions to be more considerate/complete. I value receiving and giving dissent. I do not reply to DMs for discussions that are not personal to the user who reached out directly; instead, I will post here, referencing the user and my reply.
When thinking about deontology and consequentialism in application, I find it useful to rate the morality of actions based on intention, execution, and outcome. (Some cells are "na" as they are not really logical in real-world scenarios.)
In reality, it seems to me that executing "some" intention matters most (though I am not sure by how much) when doing something bad, and that executing to the best of one's ability matters most when doing something good.
It also seems useful to me to learn about applications of philosophy from law. (I am not an expert in either philosophy or law, though, so these may contain errors.)
| Intention to kill the person | Executed "some" intention | Killed the person | "Bad" level | Law |
| --- | --- | --- | --- | --- |
| Yes | Yes | Yes | 10 | murder |
| Yes | Yes | No | 8-10 | as an example, attempted first-degree murder is punishable by life in state prison (US, CA) |
| Yes | No | Yes | na | |
| Yes | No | No | 0-5 | no law covers this (I can imagine "it's hard to prove" as a reason), but personally, assuming multiple "episodes" or just more time, this leads to murder and attempted murder later anyway; it is very rare that a person can hold this thought without ever executing it in reality. |
| No | Yes | Yes | na | |
| No | Yes | No | na | |
| No | No | Yes | 0-5 | typically not a crime, unless something like negligence applies |
| No | No | No | 0 | |
| Intention to save a person (limited decision time) | Executed intention to the best of ability | Saved the person | "Good" level | |
| --- | --- | --- | --- | --- |
| Yes | Yes | Yes | 10 | |
| Yes | Yes | No | 10 | |
| Yes | No | Yes | na | |
| Yes | No | No | 0-5 | |
| No | Yes | Yes | na | |
| No | Yes | No | na | |
| No | No | Yes | 0-5 | |
| No | No | No | 0 | |
| Intention to do good | Executed intention to the best of personal ability[1] | Did good | "Good" level | |
| --- | --- | --- | --- | --- |
| Yes | Yes | Yes | 10 | |
| Yes | Yes | No | 8-10 | |
| Yes | No | Yes | na | |
| Yes | No | No | 0-5 | |
| No | Yes | Yes | na | |
| No | Yes | No | na | |
| No | No | Yes | 0-5 | |
| No | No | No | 0 | |
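The first table above can be read as a partial mapping from (intention, execution, outcome) triples to a "bad"-level range. A minimal sketch of that idea, purely my own illustration (the function and constant names are hypothetical, and the "na" cells are simply left out of the mapping):

```python
# Encoding of the first table: (intention to kill, executed "some" intention,
# killed the person) -> ("bad"-level low, "bad"-level high).
# "na" cells are omitted, so lookups for them return None.
BAD_LEVEL = {
    (True, True, True): (10, 10),   # murder
    (True, True, False): (8, 10),   # e.g., attempted first-degree murder
    (True, False, False): (0, 5),   # intention never acted upon
    (False, False, True): (0, 5),   # typically not a crime, unless negligence
    (False, False, False): (0, 0),  # nothing intended, done, or caused
}

def bad_level(intention: bool, executed: bool, outcome: bool):
    """Return the (low, high) "bad"-level range, or None for an "na" cell."""
    return BAD_LEVEL.get((intention, executed, outcome))

print(bad_level(True, True, True))   # (10, 10)
print(bad_level(True, False, True))  # None: an "na" cell in the table
```

The same pattern would encode the two "good" tables; only the key meanings and the ranges change.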
Possible to collaborate when there is enough time.
If you look a bit more into the history of social justice/equality problems, you would see we have actually made a great deal of progress (https://gcdd.org/images/Reports/us-social-movements-web-timeline.pdf), but not enough, as the bar was so low. These movements have also changed our laws: before 1879, women could not be lawyers (https://en.wikipedia.org/wiki/Timeline_of_women_lawyers_in_the_United_States). On war, I don't have much knowledge myself, so I will refrain from commenting for now.

It is also my belief that we should not stop at attempts, but attempts are the first step (necessary but not sufficient), and they have pushed through to real changes, as history has shown; it will just take piles and piles of work before a significant change. Just because something is very hard to do does not mean we should stop, nor that there will not be a way (just like ensuring there is humanity in the future). For example, we should not give up on helping people during war, nor on trying to reduce wars in the first place, and we should not give up on preventing women from being raped. In my opinion, this is in a way ensuring there is a future, as humans may very well be destroyed by other humans, or by our own mistakes. (That's also why, in the AI safety case, governance is so important: so that we consider the human piece.)
Since you mentioned political parties - it is interesting to see surveys happening here. As a side track, I believe general equality problems, such as "women can go to school," are not dependent on political party. And something like "police should not kill a Black person randomly" should be supported not just by Black people but also by other races (I am not Black).
Thanks for the background otherwise.
Thanks for sharing this paper; it also reminded me of another paper, A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity (https://arxiv.org/pdf/2305.13169), and its section on toxicity filtering (thresholds, and the classifier vs. generation trade-off).
I was just reminded of a story I saw online that relates to this, and I wanted to share it since it was positive and reflective. The OP described walking behind a family; they encountered a homeless person, and the father turned to his kid and started with something like "study well, and after you grow up…". The OP thought maybe the father was going to say "don't end up like the homeless," which is what the OP's own father used to say to them. Instead, the father said "help these people to be in better situations." The OP found it beautiful, and I found it beautiful too. It seems the two fathers both "understood" the pain of being homeless, but had different understandings of the "how," and decided to act differently based on that understanding.
I do agree that we should not focus only on LLMs (base LLMs or agents), but also on other architectures.
I would agree that the mindset of "I could fix things if I were you" can prevent "empathy." (I was also reading other comments mentioning that this is not true empathy but simulation, and I found that insightful too.) The key problem is whether you would be able to tell if this is something they are able to fix, and what part of it is attributable to what they can do versus what part is attributable to a lack of privilege. For example, a blind person cannot really type easily without special equipment. They or their family may not have the money to buy that special equipment. Their parents may not have been able to get a college degree without some form of generational wealth. The same is true for intelligence level, for example.
Even a growth mindset is something developed through our education, our environment growing up, our experiences, or even something like visa status. This is probably where empathy starts to develop further.
I would recommend checking out the link I posted from the EA Forum to see why AI x-risk may never reach some populations because they die of other causes first; and the proposal I have to work on both precisely avoids caring only for subsets.
It would make sense in capability cases. But unfortunately, in a lot of life-saving cases, everything is important (this gets into other things, so let me focus on only the following two points for now). 1. Many causes are not actually comparable in a general cause-prioritization context: for one, people may carry personal biases based on their experiences and worlds; for another, it is hard to weigh 10 kids' lives in the US against 10 kids' lives in Canada, for example. 2. Time is critical when thinking of lives. You can think of this like an emergency room.
The link above illustrates an example of when time is important.
Sharing a different perspective on why current risks are also important (https://forum.effectivealtruism.org/posts/s3N8PjvBYrwWAk9ds/a-perspective-on-the-danger-hypocrisy-in-prioritizing-one), given that I also believe long-term risks (which to me mostly map to agent safety at this point in time) are important too.
(Edit, and a gentle callout based generally on my observation: while downvoting on the disagreement axis is perfectly reasonable if one disagrees, downvoting overall karma when disagreeing seems inconsistent with LessWrong's values and what it stands for. Suppressing professional dissent might be dangerous.)
Oxford Languages (or really, just after googling) says "rational" means "based on or in accordance with reason or logic."
I think there are a lot of other definitions (I believe LessWrong mentions it is related to the process of finding truth). For me, it is useful first to break this down into two parts: 1) observation and information analysis, and 2) decision making.
For 1): Truth, but particularly causality finding. (Very close to the first one you bolded, and I somehow feel many of the other ones are just derived from this one. I added causality because many true observations are not really causal.)
For 2): My controversial opinion is that everyone is probably/usually a "rationalist" - it's just that sometimes the reasoning is conscious, and other times it is sub/unconscious. These reasonings/preferences are unique to each person. It would be dangerous, in my opinion, if someone tried to practice "rationality" based on external reasonings/preferences, or on reasonings/preferences recognized only by the person's conscious mind (even if a preference is short-term). I think a useful practice is to: 1. notice what one intuitively wants to do vs. what one thinks one should do (or the multiple options one is considering); 2. ask why there is a discrepancy; 3. at least surface the unconscious reasoning; and 4. weigh things out (the potential reasonings that lead to conflicting results, for example short-term preferences vs. long-term goals).