We simply lack systems other than religions that provide the same benefits religions do
I am sympathetic to this point, and I was thinking about it recently too. However, I've come to the conclusion that the benefit of having values relatively independent of religion (or similar communities) is being able to change and develop your values as you experience more of life. It feels to me that values within a religion may be relatively fixed (I could be wrong).
The sad/hard part is that changing values or beliefs can be a painful process, while fixed values can bring more stability. But embracing changes in values can be much more rewarding and beneficial in the long run. It does require a person to be quite "strong", though, to challenge themselves at a fundamental level.
requiring members to buy into some ethical, ontological, and metaphysical claims is load-bearing
This is sad, and I wonder what we can do to make it better.
I do see this as a fair criticism of model welfare (and am not surprised by it), if that is the sole reason for ending conversations early. I can see the criticism coming from two angles: 1) potentially competing resources, and 2) people not showing whether they actually care about group X's issues at all. If either of these is true, and ending conversations early is primarily about models having "feelings" and being able to "suffer", then we probably do need to "turn more towards" the humans who are suffering badly. (These groups usually have little power and their issues are usually neglected, so we should probably pay more attention to them anyway.)
However, if ending conversations early is actually about 1) not giving people endless opportunities to practice abuse, which could carry over into their daily behavior and shape human behavior more generally, and/or 2) keeping the model from learning abusive human language when those conversations are used to retrain it (taking a loss on them) during fine-tuning, then it is a different story, and that should probably be mentioned more by these companies.
People have limited capacity for empathy
Do you think this goes the other way as well?
I also agree "AI" is overloaded and has existing connotations (ranging from algorithms to applications)! I would think "generative models" or "generative AI" works better (and one can say "multimodal generative models" to be extra clear), but I'm also curious what other people would propose.
Usually I also treat emotions as a channel for surfacing unconscious preferences (either situational or longer-term), which helps make those preferences conscious and open to evaluation, and thus supports rational decision-making.
Thanks!
we just prefill the response with "<SCRATCHPAD_REASONING> The user is asking for".
Do you mean a template like:
Q: xxxx
A: <SCRATCHPAD_REASONING> The user is asking for
and then asking the model to continue? A rough sketch of what I mean is below.
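Here is a minimal sketch of the kind of template I have in mind; this is my own illustration (the exact wording and layout are guesses, not the paper's actual code), just to make the question concrete:

```python
# Hypothetical sketch: build a completion-style prompt where the answer is
# prefilled with the scratchpad opener, so a base model continues from there.
# The "Q:"/"A:" framing and the prefill string are my assumptions.

def build_prefilled_prompt(user_query: str) -> str:
    """Return a prompt whose answer already starts with the scratchpad
    opener, so the model is asked to continue the reasoning itself."""
    prefill = "<SCRATCHPAD_REASONING> The user is asking for"
    return f"Q: {user_query}\nA: {prefill}"

if __name__ == "__main__":
    prompt = build_prefilled_prompt("some example request")
    print(prompt)
    # This string would then be passed to a completion-style (base-model)
    # API, which generates the continuation after "...asking for".
```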
Maybe I missed this in the paper: for base models, do you change the prompt slightly, or do you still use instruction-style prompts when testing for alignment faking?
Thanks for writing this up! I strongly agree, and I think this is an important point to emphasize. Stating crucial context/clarifications/assumptions is important, and unclear communication is likely counterproductive.
Note: this is a complete sidebar (and I apologize in advance).
I prefer my family members to randomly-sampled people with similar traits. I would certainly not elect to sterilize or kill my family members so that they could be replaced with smarter, kinder, happier people.
Out of curiosity, in hypothetical scenarios:
Context: I have no issue with this statement. I am asking because I have also been thinking about this aspect of human nature, and trying to work out its pros and cons.
I mean, it is not all that wrong if the model is using RAG?