I endorse and operate by Crocker's rules.
I have not signed any agreements whose existence I cannot mention.
Strongly normatively laden concepts tend to spread their scope, because applying (or being allowed to apply) a strongly normatively laden concept can be used to one's advantage. Or maybe more generally and mundanely, people like using "strong" language, which is a big part of why we have swearwords. (Related: Affective Death Spirals.)[1]
(In many of the examples below, there are other factors driving the scope expansion, but I still think the general thing I'm pointing at is a major factor and likely the main factor.)
1. LGBT started as LGBT, but over time developed into LGBTQIA2S+.
2. Fascism initially denoted, well, fascism, but now it often means something vaguely like "politically more to the right than I am comfortable with".
3. Racism initially denoted discrimination along the lines of, well, race, a socially constructed category with some non-trivial rooting in biological/ethnic differences. Now jokes targeting a specific nationality or subnationality are often called "racist", even if the person doing the joking is not "racially distinguishable" (in the old-school sense) from the ones being joked about.
4. Alignment: In IABIED, the authors write:
The problem of making AIs want—and ultimately do—the exact, complicated things that humans want is a major facet of what’s known as the “AI alignment problem.” It’s what we had in mind when we were brainstorming terminology with the AI professor Stuart Russell back in 2014, and settled on the term “alignment.”
[Footnote:] In the years since, this term has been diluted: It has come to be an umbrella term that means many other things, mainly making sure an LLM never says anything that embarrasses its parent company.
See also: https://www.lesswrong.com/posts/p3aL6BwpbPhqxnayL/the-problem-with-the-word-alignment-1
https://x.com/zacharylipton/status/1771177444088685045 (h/t Gavin Leech)
5. AI Agents.
It would be good to deconflate the things that these days go by "AI agents" and "Agentic™ AI", because the current usage makes people think that the former are (close to being) examples of the latter. Perhaps we could rename the former to "AI actors" or something.
But it's worse than that. I've witnessed an app generating a document with a single call to an LLM (based on the inputs from a few textboxes, etc.) being called an "agent". Calling [an LLM-centered script running on your computer and doing stuff to your files or on the web, etc.] an "AI agent" is defensible on the grounds of continuity with the old notion of software agent, but if a web scraper is an agent and a simple document generator is an agent, then what is the boundary (or gradient / fuzzy boundary) between agents and non-agents that justifies calling those two things agents but not a script meant to format a database?
There's probably more going on that would be needed to explain this comprehensively, but that's probably >50% of it.
What's your sample size?
This quote is perfectly consistent with
using nanoscale machinery to guide chemical reactions by constraining molecular motions
It is not feasible for any human not to often fall back on heuristics, so to the extent that your behavior is accurately captured by your description here, you are sitting firmly in the reference class of act utilitarian humans.
But also, if I may (unless you're already doing it), aim more for choosing your policy, not individual acts.
Also, some predictions are performative, i.e., capable of influencing their own outcomes. In the limit of predictive capacity, a predictor will be able to predict which of its possible predictions are going to elicit effects in the world that make their outcome roughly align with the prediction. Cf. https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic.
Moreover, in the limit of predictive capacity, the predictor will want to tame/legibilize the world to make it easier to predict.
Speculatively introducing a hypothesis: It's easier to notice a difference like
N years ago, we didn't have X. Now that we have X, our life has been completely restructured. (X ∈ {car, PC, etc.})
than
N years ago, people sometimes died of some disease that is very rare / easily preventable now, but mostly everyone lived their lives mostly the same way.
I.e., introducing some X that causes ripples restructuring a big aspect of human life, vs introducing some X that removes an undesirable thing.
Relatedly, people systematically overlook subtractive changes.
many of which probably would have come into existence without Inkhaven, and certainly not so quickly.
The context makes it sound like you meant to say "would not have come".
You might be interested in (i.a.) Halpern & Leung's work on minimax weighted expected regret / maxmin weighted expected utility. TLDR: assign a weight to each probability in the representor and then pick the action that maximizes the minimum (or infimum) weighted expected utility across all current hypotheses.
An equivalent formulation involves using subprobability measures (measures that sum up to at most 1).
Updating on certain evidence (i.e., a concrete measurable set E, as opposed to Jeffrey updating or virtual evidence) involves updating each hypothesis Pr to Pr(· | E) the usual way, but the weights get updated roughly according to how well Pr predicted the event E. This kind of hits the obvious-in-hindsight sweet spot between [not treating all the elements of the representor equally] and ["just" putting a second-order probability over probabilities].
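To make this concrete, here's a minimal Python sketch with a toy finite representor. The hypotheses, weights, utilities, and helper names are all made up for illustration, and the update rule is my rough paraphrase of the idea (condition each hypothesis, rescale weights by the likelihood of E, renormalize so the best weight is 1), not necessarily Halpern & Leung's exact formulation:

```python
import numpy as np

# Toy representor: three hypotheses (probability distributions over 3 states),
# each with a weight (normalized so the largest weight is 1).
hypotheses = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
])
weights = np.array([1.0, 0.8, 0.5])

# Utilities of two candidate actions in each of the 3 states (made-up numbers).
utilities = {
    "safe":  np.array([1.0, 1.0, 1.0]),
    "risky": np.array([3.0, 0.5, -1.0]),
}

def maxmin_weighted_eu_choice(utilities, hyps, w):
    """Pick the action maximizing the minimum weighted expected utility."""
    def min_weu(u):
        return (w * (hyps @ u)).min()
    return max(utilities, key=lambda a: min_weu(utilities[a]))

print(maxmin_weighted_eu_choice(utilities, hypotheses, weights))  # -> "safe"

# Updating on a concrete event E (here: "the third state did not happen"):
# condition each hypothesis on E as usual, and rescale the weights by how well
# each hypothesis predicted E, renormalizing so the best weight is 1 again.
def update_on_event(hyps, w, event_mask):
    likelihoods = hyps @ event_mask                           # Pr_i(E)
    posteriors = (hyps * event_mask) / likelihoods[:, None]   # Pr_i(. | E)
    new_w = w * likelihoods
    return posteriors, new_w / new_w.max()

event = np.array([1.0, 1.0, 0.0])
posterior_hypotheses, posterior_weights = update_on_event(hypotheses, weights, event)
```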
(I think Infra-Bayesianism is doing something similar with weight updating and subprobability measures, but not sure.)
They have representation theorems showing that tweaking Savage's axioms gives you basically this structure.
Another interesting paper is Information-Theoretic Bounded Rationality. They frame approximate EU maximization as a statistical sampling problem, with an inverse temperature parameter β, which allows for interpolating between "pessimism"/"assumption of adversariality"/minmax (as β → −∞), indifference/stochasticity/usual EU maximization (as β → 0), and "optimism"/"assumption of 'friendliness'"/maxmax (as β → +∞).
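For a toy sense of how a single parameter does that interpolation, here's a sketch of a free-energy-style certainty equivalent (my own illustrative formulation, not code from the paper; the utilities and probabilities are made-up numbers):

```python
import numpy as np

# The certainty equivalent (1/beta) * log E_p[exp(beta * U)] recovers min(U)
# as beta -> -inf, the plain expectation E_p[U] as beta -> 0, and max(U) as
# beta -> +inf.
utilities = np.array([3.0, 0.5, -1.0])
probs = np.array([0.5, 0.3, 0.2])

def certainty_equivalent(beta, u=utilities, p=probs):
    if abs(beta) < 1e-12:              # beta -> 0 limit: ordinary expectation
        return float(p @ u)
    m = (beta * u).max()               # log-sum-exp for numerical stability
    return float((m + np.log(p @ np.exp(beta * u - m))) / beta)

for beta in (-50.0, -1.0, 0.0, 1.0, 50.0):
    print(beta, certainty_equivalent(beta))
# -> approaches min(U) = -1.0 for very negative beta,
#    E[U] = 1.45 at beta = 0, and max(U) = 3.0 for very positive beta.
```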
Regarding the discussion about the (im)precision treadmill (e.g., the Sorites paradox; if you do imprecise probabilities, you end up with a precisely defined representor; if you weight it like Halpern & Leung, you end up with precisely defined weights; etc.), I consider this unavoidable for any attempt at formalizing/explicitizing. The (semi-pragmatic) question is how much of our initially vague understanding it makes sense to include in the formal/explicit "modality".
However, I think that even “principled” algorithms like minimax search are still incomplete, because they don’t take into account the possibility that your opponent knows things you don’t know
It seems to me like you're trying to solve a different problem. Unbounded minimax should handle all of this (in the sense that it won't be an obstacle). Unless you are talking about bounded approximations.
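(By "unbounded minimax" I mean full-depth search with no heuristic cutoff. A minimal sketch, where `children` and `payoff` are hypothetical stand-ins for a real game's move generator and terminal payoff function:)

```python
# Unbounded minimax: search to the end of the game, no depth limit, no
# heuristic evaluation of non-terminal positions.
def minimax(state, maximizing, children, payoff):
    moves = children(state)
    if not moves:                  # terminal position: exact payoff
        return payoff(state)
    values = [minimax(s, not maximizing, children, payoff) for s in moves]
    return max(values) if maximizing else min(values)
```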
(FYI, I initially failed to parse this because I interpreted "'believing in' atoms" as something like "atoms of 'believing in'", presumably because the idea of "believing in" I got from your post was not something that you typically apply to atoms.)