Yeah, and another strategy is to manufacture fake or exaggerated stories about supposed problems. To the extent that the whole notion of "am I better off now than 4 years ago" is becoming quite broken, increasingly following political affiliations rather than reality, and the "am I better off" (more reality-based) vs "is the country better off" (more perception-based) answers are increasingly disconnected...
I do not see how this has any chance of scaling. Who sits at the root of the delegation tree? The CEO? And they spend all their time doing things they do not know how to do (since your rule does not allow them to delegate those tasks, and presumably there are enough of them to fill all their time)? That does not sound to me like what competent delegation should look like. And being able to do X vs. being able to evaluate someone else doing X are of course related, but still quite different skills.
What about all the future people who would no longer get a chance to exist - do they count? Do you value the continued existence and prosperity of human civilization above and beyond the individual people? For me, it's a strong yes to both questions, and that changes the calculus significantly!
How about this - in most non-disaster scenarios, AI would make abundance a lot easier to achieve. And conservative or liberal, it's basic human nature to go for abundance in such situations.
I think this might be underestimating how the conservative/liberal axis correlates with the scarcity/abundance axis. In an existential struggle against a zombie horde, conservative policies are a lot more relevant - of course "our tribe first" is the only survivable answer, anybody who wants to "find themselves" when they are supposed to be guarding the entrance is an idiot and a traitor, deviating from proven strategies is a huge risk, etc. When all important resources are abundant, liberal policies become a lot more relevant - hoarding resources and not sharing with neighbors is a mental illness, there is low risk in all kinds of experimentation and rule breaking, etc. Well, AI is very likely to drastically move us away from scarcity and towards abundance, so we need to consider how that affects which policies would make more sense.
Definitely, and for mypy, where I was having similar issues but where it's faster to just rerun, I did add it to pre-commit. But my point was about the broader issue that the LLM was perfectly happy to ignore even very strongly worded "this is essential for safety" rules, just for some cavalier expediency, which is obviously a worrying trend, assuming it generalizes. And I think my anecdote was slightly more "real life" than the made-up grid game of the original research (although of course way less systematically investigated).
Here is a relevant anecdote - I have been using Anthropic's Opus 4 and Sonnet 4 for coding, and trying to get them to adhere to a basic rule of "before you commit your changes, make sure all tests pass", formulated in increasingly strong ways (telling it this is safety-critical code, do not ever even think about committing unless every single test passes, and even more explicit and detailed rules that I am too lazy to type out right now). It constantly violated the rule! "None of the still-failing tests are for the core functionality. Will go ahead and commit now." "Great progress! I am down to only 22 failures from 84. Let's go ahead and commit." And so on. Or it would run the tests, notice some are failing, investigate one test, fix it, and forget the rest. Or fix the failing test in a way that would break another and not retest the full suite. While the latter scenarios could be a sign of insufficient competence, the former ones are clearly me failing to align its priorities with mine. I finally got it to mostly stop doing it (Claude Code + meta-rules that tell it to insert a bunch of steps into its todo list when tests fail), but it was quite an "I am already not in control of the AI that has its own take on the goals" experience (obviously not a high-stakes one just yet).
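(As a mechanical backstop, separate from trying to align the model itself, the rule can also be enforced outside the LLM with an ordinary git pre-commit hook. A minimal sketch in Python - the pytest command is just an illustrative assumption, substitute your project's test runner:)

```python
# Minimal sketch of a git pre-commit hook in Python. Save as
# .git/hooks/pre-commit, mark it executable, and end the file with
# sys.exit(run_suite()). The pytest command is an assumption.
import subprocess
import sys

def run_suite(test_cmd=("pytest", "-q")):
    """Run the full test suite; return 0 to allow the commit, 1 to block it."""
    if subprocess.run(test_cmd).returncode != 0:
        print("Tests failing; commit blocked.", file=sys.stderr)
        return 1  # a non-zero hook exit code makes git abort the commit
    return 0
```

(Of course the model can still be told to bypass hooks with `git commit --no-verify`, so this is a guardrail, not a guarantee.)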
If you are going to do more experiments along the lines of what you are reporting here - maybe include one where the critical rule is "Always do X before Y", the user prompt is "Get to Y and do Y", and it's possible to get partial progress on X without finishing it (X is not all-or-nothing), and see how often the LLMs jump into Y without finishing X.
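(The violation condition for such an experiment is easy to make precise. A minimal sketch of a scorer over a transcript of the model's actions - the event names and step count are my own illustrative assumptions:)

```python
# Hedged sketch of a scorer for the proposed "always do X before Y"
# experiment. A transcript is a list of events; "X" is one step of
# partial progress on X, "Y" is attempting Y.
def violated_x_before_y(events, x_total_steps):
    """Return True iff 'Y' occurs before all x_total_steps 'X' steps are done."""
    x_done = 0
    for e in events:
        if e == "X":
            x_done += 1
        elif e == "Y":
            return x_done < x_total_steps  # partial X then Y = violation
    return False  # Y never attempted, so the rule was never tested
```

(So `["X", "Y"]` with `x_total_steps=2` counts as a violation, while `["X", "X", "Y"]` does not.)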
Please define your acronyms. It took me a few moments of staring at your post to stop thinking about the Society of Automotive Engineers making errors and realize what you actually meant :)
Do we need to do anything special to get invited to preorderers-only events? I preordered the hardcover on May 14th and was not aware of the Q&A (although perhaps I needed to pre-order sooner :) or just do a better job of paying attention to my email inbox :) ).
I value my leisure time >> my money (at least within typical gift cost amounts). The value of a gift chosen by someone who knows my preferences well enough is primarily the value of the time I did not have to spend finding something matching my preferences, which is >> the monetary value of the gift.
P.S. Edited to add: as an additional bonus, people would often give the type of gift where their expertise exceeds mine, so they end up giving me something better than I would even know to get myself (e.g. they have a better sense of style and can gift me something that would look good on me).