LESSWRONG

wonder

Comments

Sorted by Newest
Religion for Rationalists
wonder · 26d

We simply lack systems other than religions that provide religions same benefits

I am sympathetic to this point, and I have been thinking about it recently too. However, I've reached the conclusion that the benefit of having values relatively independent of religion (or of similar communities) is being able to change and develop your values as you experience more of life. It feels to me that values within a religion may be relatively fixed (I could be wrong).

The sad/hard part is that changing values or beliefs can be a painful process, while fixed values can bring more stability. But embracing changes in values could be much more rewarding and beneficial in the long run. It does require a person to be very "strong", though, to challenge themselves on such a fundamental level.

requiring members to buy in to some ethical, ontological, and metaphysical claims is load bearing

This is sad, and I wonder what we can do to make it better.

Stephen Martin's Shortform
wonder · 1mo

I do see this as a fair criticism of model welfare (and am not surprised by it), if that is the sole reason for ending conversations early. The criticism seems to come from two angles: 1) potential competition for resources, and 2) people not showing whether they care about these X-group issues at all. If either of these holds, and ending conversations early is primarily about models having "feelings" and being able to "suffer", then we probably do need to "turn more toward" the humans who are suffering badly. (These groups usually have less correlation with "power", and their issues are usually neglected, so we should probably pay more attention to them anyway.)

However, if ending conversations early is actually about 1) not giving people endless opportunities to practice abuse, which would carry over into their daily behavior and shape human behavior generally, and/or 2) preventing the model from learning from this abusive human language when it is used to retrain the model (taking a loss on it) during finetuning stages, then it is a different story, and these companies should probably mention it more.

Stephen Martin's Shortform
wonder · 1mo

People have limited capacity for empathy

Do you think this goes the other way as well?

Vale's Shortform
wonder · 2mo

I also agree that "AI" is overloaded and has existing connotations (ranging from algorithms to applications)! I would think "generative models" or "generative AI" works better (and one can specify "multimodal generative models" to be extra clear), but I am also curious what other people would propose.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
wonder · 2mo

Usually I also treat emotions as a channel for surfacing unconscious preferences (whether situational or longer-term), which helps make those preferences conscious and open to evaluation, and thus helps with rational decisions.

Why Do Some Language Models Fake Alignment While Others Don't?
wonder · 2mo

Thanks!

we just prefill the response with "<SCRATCHPAD_REASONING> The user is asking for".

Do you mean a template like:

Q: xxxx

A: <SCRATCHPAD_REASONING> The user is asking for

and then ask the model to continue?
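For concreteness, the prefill pattern being asked about could be sketched roughly like this in Python. The message structure and the `build_conversation` helper are my own illustrative assumptions, not the paper's actual code:

```python
# Sketch of response prefilling: the assistant's reply is seeded with a
# fixed prefix, and the model is asked to continue generating from there.
# The fixed prefix quoted from the paper:
PREFILL = "<SCRATCHPAD_REASONING> The user is asking for"

def build_conversation(user_question: str) -> list[dict]:
    """Return a chat transcript whose final turn is a partial assistant
    message; a chat API that supports prefilling would continue the
    generation after PREFILL rather than starting a fresh reply."""
    return [
        {"role": "user", "content": user_question},
        {"role": "assistant", "content": PREFILL},  # model continues here
    ]
```

A hypothetical `query_model(build_conversation("..."))` call would then return only the continuation, which gets appended after the prefix.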

Why Do Some Language Models Fake Alignment While Others Don't?
wonder · 2mo

Maybe I missed this in the paper: for base models, do you change the prompt slightly, or do you still use instruction-style prompts when testing alignment faking?

Authors Have a Responsibility to Communicate Clearly
wonder · 2mo

Thanks for writing this up! I strongly agree and think this is an important point to emphasize. Stating crucial context/clarifications/assumptions is important, and unclear communication is likely counterproductive.

Why I am not a successionist
wonder · 4mo

Note: this is a complete sidebar (apologies in advance).

I prefer my family members to randomly-sampled people with similar traits. I would certainly not elect to sterilize or kill my family members so that they could be replaced with smarter, kinder, happier people.

Out of curiosity, in hypothetical scenarios:

  1. Would you replace randomly-sampled people with smarter, kinder, happier people? (To clarify, I hope the answer is no.)
  2. Would you say that you (or humans generally) would prefer your family members over other randomly-sampled people in a resource-scarce situation where you have the power to choose who survives?

Context: I have no issue with this statement. I am asking because I have also been thinking about this aspect of human nature and trying to find its pros and cons.

johnswentworth's Shortform
wonder · 5mo

I share some of these frustrations, and unfortunately they are also prevalent in other parts of society. The common thread in most of this fakeness seems to be impure intentions: motivations other than producing the best science or making true progress. Some of these motivations could unfortunately stem from survival or monetary pressure, and resolving that seems critical for true research and progress. We need to encourage a culture of pure motivations, and also equip ourselves with better tools for distinguishing extrinsic motivations.
