My hypothesis for the airline industry boils down to "commodification". Airline companies follow incentives, and competition on price is fierce. Customers have little brand loyalty and chase the cheapest tickets, except occasionally avoiding the truly minimalist airlines. The companies see the customers voting with their wallets and optimize accordingly, leading to a race to the bottom.
In my experience, non-US carriers aren't that different. Maybe just a bit further behind and a bit more resistant to the slippery slope toward enshittification.
Anthropic is currently running an automated interview "to better understand how people envision AI’s role in their lives and work". I'd encourage Claude users to participate if you want Anthropic to hear your perspective.
Access it directly here (unless you've just recently signed up): https://claude.ai/interviewer
See Anthropic's post about it here: https://www.anthropic.com/research/anthropic-interviewer
> supremum: the least value which is greater than all the values in the set
Should be "greater than or equal to all the values in the set"; otherwise a closed interval like [0,1] has no supremum: 1 is not strictly greater than itself, and there is no least value strictly greater than 1.
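In symbols (the standard definition, for a nonempty $S \subseteq \mathbb{R}$ that is bounded above):

$$\sup S = \min\{\, u \in \mathbb{R} : u \ge s \text{ for all } s \in S \,\}$$

Completeness of $\mathbb{R}$ guarantees this minimum exists, and it gives $\sup\,[0,1] = 1$ as expected.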
For alternatives to "diagonalization," the term "next-leveling" is less ambiguous than just "leveling", IMO. It more directly suggests increased depth of counter-modeling / meta-cognitive exploitation.
A more obscure option is "Yomi". Yomi (読み, literally "reading" in Japanese) is already established terminology for recursive prediction. In fighting games, yomi layers represent recursive depths of prediction (layer 1: predicting their action, layer 2: predicting their prediction of your action, etc.).
As a card game: https://www.sirlin.net/articles/designing-yomi
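A toy illustration of yomi layers (my own sketch, not from Sirlin's writing): in rock-paper-scissors, each layer is just one more application of the counter function.

```python
# Toy sketch of yomi layers in rock-paper-scissors (illustrative only).
# BEATS[m] is the move that beats m, i.e. the counter of m.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def yomi(base_move: str, layer: int) -> str:
    """Layer 0: the opponent's habitual move. Each further layer applies
    the counter once more: layer 1 beats layer 0, layer 2 beats layer 1, ..."""
    move = base_move
    for _ in range(layer):
        move = BEATS[move]
    return move

# If the opponent habitually throws rock:
for k in range(4):
    print(f"layer {k}: {yomi('rock', k)}")
# layer 0: rock, layer 1: paper, layer 2: scissors, layer 3: rock
```

Since the counter function has period 3 here, layer 3 recommends the same move as layer 0, which is one reason yomi discussions rarely go more than a few layers deep.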
A nontrivial, complete, consistent, and morally acceptable solution to population ethics. Deep down, I suspect this is a truly impossible problem: that there's a meta-ethical incompleteness theorem, analogous to Gödel's first incompleteness theorem, ruling out any such solution.
I feel like this argument breaks down unless leaders are actually waiting for legible problems to be solved before releasing their next updates. So far, this isn't the vibe I'm getting from players like OpenAI and xAI. It seems like they are releasing updates irrespective of most alignment concerns (except perhaps the superficial ones that are bad for PR). Making illegible problems legible is good either way, but not necessarily as good as solving the most critical problems regardless of their legibility.
Whoops. I meant "land animal", as in my prior sentence.
Yep. The Elo system is not designed to handle non-transitive rock-paper-scissors-style cycles.
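A toy simulation (my own sketch) shows the failure mode: give three players a deterministic A-beats-B-beats-C-beats-A cycle and Elo has nowhere to put them; the ratings stay clustered even though every matchup is 100/0.

```python
# Toy sketch: Elo compresses strength into one scalar per player,
# so it cannot represent a deterministic rock-paper-scissors cycle.
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

ratings = {"A": 1500.0, "B": 1500.0, "C": 1500.0}
K = 16.0
cycle = [("A", "B"), ("B", "C"), ("C", "A")]  # winner listed first, always wins

for _ in range(1000):
    for winner, loser in cycle:
        e = expected_score(ratings[winner], ratings[loser])
        ratings[winner] += K * (1.0 - e)
        ratings[loser] -= K * (1.0 - e)

print(ratings)  # all three hover near 1500 despite 100% win rates in each matchup
```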
This already exists to an extent with odds-chess bots like LeelaQueenOdds (LQO). The bot plays without her queen against humans but still wins most of the time, even against strong players who can easily beat Stockfish given the same queen odds. Under standard conditions, though, Stockfish reliably outperforms Leela.
In rough terms:
Stockfish > LQO >> LQO (-queen) > strong humans > Stockfish (-queen)
Stockfish plays roughly like a minimax optimizer, whereas LQO is specifically trained to exploit humans.
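To make that distinction concrete, here's a minimal sketch (mine, with a made-up two-ply game tree; values are from the mover's perspective). A minimax player rejects a line that loses against best play, while a crude opponent model can make the same line look worth playing against a fallible human.

```python
from typing import Union

Tree = Union[float, list]  # a leaf value, or a list of child subtrees

def minimax(node: Tree, maximizing: bool) -> float:
    """Stockfish-style: assume the opponent always finds the best reply."""
    if isinstance(node, float):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def exploit(node: Tree, maximizing: bool, blunder_rate: float) -> float:
    """LQO-style, very crudely: model the opponent as playing the best
    reply most of the time and a uniformly random one otherwise."""
    if isinstance(node, float):
        return node
    values = [exploit(child, not maximizing, blunder_rate) for child in node]
    if maximizing:
        return max(values)
    return (1 - blunder_rate) * min(values) + blunder_rate * sum(values) / len(values)

# Line 0 loses to perfect play but pays off big if the opponent errs;
# line 1 loses quietly no matter what.
tree = [[-1.0, +5.0], [-1.0, -1.0]]
print(minimax(tree, True))                    # -1.0: avoid line 0
print(exploit(tree, True, blunder_rate=0.5))  # 0.5: line 0 is worth trying
```

Against a perfect opponent the exploitative valuation is an overestimate, which fits the ordering above: head-to-head, Stockfish > LQO.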
Edit: For those interested, there's some good discussion of LQO in the comments of this post:
https://www.lesswrong.com/posts/odtMt7zbMuuyavaZB/when-do-brains-beat-brawn-in-chess-an-experiment
Would the default valence be the valence of the "thing"?