People don't have opinions - opinions have people.
Often, one hears someone express a strange, wrong-seeming opinion. The bad habit is to view this as that person's intentional bad action. The good habit is to remember that the person heard this opinion, accepted it as reasonable, & might have put no further thought into the matter.
Opinions are self-replicating & rarely fact-checked. People often subscribe to 2 contradictory opinions.
Epistemic status: I'm trying this opinion on. It's appealing so far.
Are you specifically referring to opinions held by multiple people, albeit unpopular ones (and, as you say, not the result of intentional bad action - not opinions held out of contrariness)? Because isn't it possible for people to form their own endogenous strange opinions?
Sure, opinions come to people from a few different sources. I speculate that interpersonal transmission is the most common, but they can also originate in someone's head, either via careful thought or via a brief whim.
You're probably right that interpersonal transmission is the most common source. I guess now I have to ask what you mean by an "opinion". Is a simple proposition like "That popstar is bad" or "that's too much (food/money/time)" enough to count as an opinion? I ask because, now thinking about it, most endogenous opinions (which presumably most interpersonally transmitted opinions began as, with the exception of those that are the result of Chinese whispers) are just post-rationalizations of emotive or felt experiences.
To pull an example out of thin air: "Looking at this artwork doesn't make me feel 'good'... it must be because it is a non-figurative painting." Is the simple emotive expression "this painting isn't good", vague as it is, the opinion, or is the attempt to explain it ("it's a messy nonfigurative painting that doesn't depict anything") the opinion?
Unusually appropriate calendar alignment: January 6 has become a anniversary of infamous unrest in the USA. Curiously, the tonally-analogous Feast of Fools scene in The Hunchback of Notre-Dame (the novel) is also set on January 6.
The Law-Crime Tech Ratio is the metric at the core of all AI Safety disaster scenarios. It's the ratio of how high tech law enforcement is to how high tech the entity committing a crime is.
('Law' here also includes military, the intelligence community, & civil defense - like pandemic preparedness.)
In AI 2027, the self-directed server becomes incredibly high tech in a few months & runs circles around the government. In IABIED, a server's tech improves faster the higher its tech already is, & it then wins a WMD war against a lower-tech humanity.
High tech software isn't bad. Superintelligence isn't bad. What's bad is when one actor gets extremely high tech in isolation, then is tempted to exploit this overhang advantage for crime or war. It's like one species evolving while the rest of the ecosystem doesn't. We should aim for the whole ecosystem evolving together.
My spelling peculiarities:
I prefer not to capitalize the pronoun i.
I prefer to always use a instead of an.
I avoid gendered words. Eg i use steelmind/strawmind instead of steelman/strawman.