I'm surprised you frequent this site while still being Mormon because I had assumed the two were almost fundamentally mutually exclusive.
I am an ex-Mormon so yes I am biased etc etc.
I do agree that ward communities have a lot of positive attributes; I wish it were possible to create and sustain something secular like that. (Perhaps it is and I just haven't seen examples of it anywhere.)
How do you justify believing in the religion on epistemic grounds?
I left primarily because I could not tell myself I was intellectually honest while knowingly using a double standard for evidence for religion vs science & everything else.
The way I see it, the entire belief system of the Church is premised upon emotional evidence (see: a personal witness from the spirit), which I personally cannot justify as sufficient basis to inform my entire worldview (especially in light of how incredibly easy and convincing it is for our brains to fabricate stimuli matching our expectations).
Extraordinary claims require extraordinary evidence and anything unfalsifiable and impossible to replicate in a controlled setting does not count as extraordinary under any consistent standard I can think of.
I'm sorry if this seems like an attack on your beliefs; it wasn't intended that way. I'm genuinely confused as to what sequence of events would let someone pass through the site's inherent selection effects to end up here (and stay for any significant length of time) while being a believing Mormon. I have no animosity toward you personally.
There's no kind way to inform someone that you think they are fundamentally wrong about every belief they hold sacred, beliefs upon which they have built their entire identity as an individual and as a community.
I grew up Mormon and remained so until my early 20s and this is very relatable. I do not explain my choices in depth to my family because any sufficiently detailed explanation would inevitably be interpreted as an attack on their beliefs.
Furthermore I'm not going to try to convince them of my point of view, not from fear of failure but fear of success: It would be devastating for their mental and emotional health.
It's slightly disappointing and amusing whenever I have to admit my Mormon upbringing was correct about something.
While acknowledging the many harms alcohol does, I hope we can find alternatives that fill the niche of social lubricant with fewer side effects & less abuse potential.
Looking at the crippling social anxiety rampant among those near my age (28) and younger, we need all the help we can get.
In that vein :p, from what I've read nitrous oxide is both completely legal and almost completely harmless as long as it's mixed with oxygen so you don't suffocate (and if used frequently make sure you don't become deficient in vitamin B12).
I'm not sure if/how much it helps with social things as my last experience with it was a dental procedure when I was a kid.
A crux of the discussion around this topic seems to be the exact definition of "purpose" being used.
Is the purpose of a system defined as:

- the original intent of those who designed and/or implemented it,
- the various intents of those incentivized to maintain the system in its realized form, or
- some combination of those?
Many times the original intent of a system will bear little resemblance to its results (hence the popular appeal of POSIWID), but it can simultaneously be true that:

1. Most systems are not deliberately designed to be bad,
2. Many systems create unintended bad incentives, à la Goodhart, and
3. Systems that create bad incentives are likely to have their purposes co-opted by those benefiting from those incentives, making it very difficult to iteratively improve said systems.
See: Unnecessarily complex tax code creates necessity for tax-help services and software. Subsequent movements to simplify tax code are resisted by entrenched interests who benefit from the complexity.
TL;DR: the Original Intended Purpose and the Intended Purposes of Current Proponents can differ dramatically, inversely proportional to how well the system fulfilled its original intent and directly proportional to how strong the perverse incentives it created are.
If they were easily noticed and mimicked, they'd become useless for the purpose.
This is a good example of something anti-inductive.
First off, a meta-answer: asking "what are non-obvious x?" may capture fewer less-obvious examples of x than simply asking for as many distinct examples as possible. Many of the people who have genuinely less-obvious observations will assume those observations are more obvious to others than they actually are.
In my case, I have a built-in bias to assume that any piece of knowledge I obtained without apparent effort must also be obvious to most other people.
So, all the examples I can think of, most of which I think are obvious:
Level of agreeableness
I'm now questioning how many of these are generally considered high-class and how many I merely associate with high-class but are really just nerd-culture things that don't entirely generalize.
All of these were off the top of my head in ~20 minutes, quality not guaranteed.
I resisted the temptation to have an LLM generate example ideas, I assume if you wanted LLM answers you would have already gotten them yourself.
Where I do think this would be a terrible idea is if the 7-year-old is a prodigy but the 17-year-olds hate math and don't want to be there.
Exactly.
In a 12th-grade/early college class with generally friendly students: I imagine if there were some very young prodigy attending, they would quickly become the beloved "class mascot" kind of micro-celebrity.
The Symbolic Representation of good software is often what is wanted, not good software itself.
Haha, yet more context I didn't have much probability of understanding
I work in C# almost exclusively, so I've never used an LLM with the expectation that it would run the code itself. I usually explicitly specify what language and form of response I need: "Generate a C# <class/method/LINQ statement> that does x, y, and z in this way with parameters a, b, and c."
If wireheading is bad because it separates the reward signals from (evolved) behaviors and replaces them with less-intrinsically-useful ones that get Goodharted to (literal) death:
Would it be useful to use electrical stimulation on reward areas but instead of triggering it with a contrived condition we detect the body's normal reward signal activation and amplify it by a set proportion? (obviously you'd have to make sure it didn't feed back into itself in a loop etc.)
For example, if someone's depression was a ~50% deficiency in "normal" reward signal activity, manually increasing it by a proportional amount would theoretically fix it.
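To make the distinction concrete, here is a toy numerical sketch (all numbers, gains, and ceilings are made up for illustration; this is in no way a neuroscience model): proportional amplification preserves the structure of the natural reward signal, while wireheading-style constant stimulation replaces it with something that carries no behavioral information at all.

```python
# Toy model: proportional amplification of a "natural" reward signal
# versus constant wireheading-style stimulation.
# All values are arbitrary; this only illustrates that proportional
# gain preserves the signal's relative structure while a constant
# override destroys it.

def amplify(signal, gain=2.0, ceiling=10.0):
    """Scale each natural reward sample by a fixed gain.

    The ceiling is a crude stand-in for the safeguard against the
    amplified output feeding back into itself in a loop.
    """
    return [min(s * gain, ceiling) for s in signal]

def wirehead(signal, level=10.0):
    """Replace the natural signal entirely with constant stimulation."""
    return [level for _ in signal]

# A reward response running at ~50% of normal (made-up numbers).
natural = [0.5, 2.0, 0.0, 1.5, 0.5]

boosted = amplify(natural, gain=2.0)  # restore toward "normal" levels
flat = wirehead(natural)

# Proportional amplification keeps the relative ordering of rewards:
# the most rewarding event is still the most rewarding.
assert boosted.index(max(boosted)) == natural.index(max(natural))

# Constant stimulation is the same for every event, so it no longer
# distinguishes any behavior from any other.
assert len(set(flat)) == 1
```

The point of the ceiling and the gain parameter is just to show where the engineering questions would live (how to detect the baseline signal, how much to amplify, how to prevent runaway feedback), not to claim any of them are solved.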
Prediction: a system like this tested in rodents would show similar, and potentially better, behavioral results than drug-based treatments like stimulants/antidepressants (depending on the exact areas of the brain targeted).
Question: would this get around problems that e.g. drugs have with gradual loss of efficacy due to tolerance build up? My limited understanding suggests it might, but I'm not confident.
I am not a neuroscientist; somebody please poke holes in my hypothesis.