Looking at the discourse around California's ballot proposition 25-0024 (a "billionaire tax"), I noticed a pretty big world-model mismatch between me and its proponents, one I haven't seen properly crystallized. I think proponents of this ballot proposition (and of wealth taxes generally) are mistaken about where the pushback is coming from.
The nightmare scenario with a wealth tax is that a government accountant decides you're richer than you really are, and sends a bill for more-than-all of your money.
The person who is most threatened by this possibility isn't rich (yet), they're aspirationally upwardly-mobile middle class. If you look at the trajectories of people-who-made-it, especially in tech and especially in California, those stories very frequently have a few precarious years in them in which their accessible-wealth and their paper-wealth are far out of sync. That happens with startup founders (a company's "valuation" is an artifact of the last negotiation you had with investors, not something you can sell). And it happens with stock options (companies use these to pay people huge amounts of money, without accidentally triggering an immediate retirement, and without needing to have the money yet). This sets up situations where, if the technicalities work out badly, a "5%" tax can make you literally bankrupt.
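To make the arithmetic concrete, here's a toy calculation; the valuation, ownership stake, liquid savings, and flat 5% rate are all made-up illustrative numbers, not figures from the proposition:

```python
# Toy numbers only: the valuation, stake, liquid savings, and flat 5% rate
# below are illustrative assumptions, not figures from the proposition.
paper_valuation = 2_000_000_000   # company "valuation" from the last funding round
founder_stake   = 0.20            # founder's ownership fraction
liquid_assets   = 500_000         # cash the founder can actually spend
wealth_tax_rate = 0.05            # hypothetical flat wealth-tax rate

paper_wealth = paper_valuation * founder_stake   # $400M, but only on paper
tax_bill     = paper_wealth * wealth_tax_rate    # $20M owed in cash
shortfall    = tax_bill - liquid_assets          # $19.5M the founder doesn't have

print(f"paper wealth: ${paper_wealth:,.0f}")
print(f"tax bill:     ${tax_bill:,.0f}")
print(f"shortfall:    ${shortfall:,.0f}")   # positive => bankrupt unless they can sell unsellable shares
```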
When people talk about "fewer businesses being created", this is why. If I were a billionaire and lost 5% of my wealth to a tax, I wouldn't care. If I were following a precarious, low-probability path towards becoming a billionaire, and I thought California would spring a kafkatrap to destroy me as soon as I got close, I would either not try, or not try in California.
In a different state, this might not be a credible fear. But California is a state that is famous for its kafkatraps, and for refusing to ever back down from the kafkatraps it's built.
No, that's not a working mechanism; it isn't reliable enough or granular enough. Users can't add entries to robots.txt for content they submit to other people's websites. Websites can't realistically list every opted-out post in their robots.txt, because that would make the file impractically large. It's very common to want to refuse content for LLM training without also refusing search indexing or cross-site link previews. And robots.txt is never preserved when content is mirrored.
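As a concrete illustration of the granularity problem, here's a minimal sketch using Python's stdlib robots.txt parser; the crawler name and URLs are made up. The only knobs the format gives you are a crawler's user-agent and URL path prefixes, so a per-post opt-out means enumerating every opted-out URL, and for any given crawler a URL is either fetchable or not; there's no "fetch for search preview but not for training" distinction.

```python
# Minimal sketch using Python's stdlib robots.txt parser. The crawler name
# and URLs are made up. Rules can only key on user-agent and path prefix,
# so a per-post opt-out means one Disallow line per post.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: HypotheticalTrainingBot
Disallow: /posts/12345
Disallow: /posts/67890
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("HypotheticalTrainingBot", "https://example.com/posts/12345"))  # False: listed
print(rp.can_fetch("HypotheticalTrainingBot", "https://example.com/posts/99999"))  # True: not listed
print(rp.can_fetch("SomeSearchCrawler", "https://example.com/posts/12345"))        # True: rules are per-crawler
```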
The vibe I get from the studies described is reminiscent of the pre-guinea-pig portion of the story of Scott and Scurvy. That is, there are just enough complications at the edges to turn everything into a terrible muddle. In the case of scurvy, the complications were that which foods contained vitamin C didn't map cleanly onto their ontology of food, and that vitamin C was sensitive to details of how foods were stored that they weren't paying attention to. In the case of virus transmissibility, there are a bunch of complications that we know matter sometimes, which the studies mostly fail to track, eg:
I think that ultimately viruses are a low-GDP problem; after a few more doublings of GDP, we'll stop breathing unfiltered air, and stop touching surfaces that lack automated cleaning, and we'll come to think of these things as being in the same category as basic plumbing.
What they don't do is filter out every web page that contains the canary string. Since people put canary strings on random web pages (like this one), which was not their intended use, those strings end up in the training data.
If that is true, that's a scandal and a lawsuit waiting to happen. The intent of including a canary string is clear, and canary strings are one of the very few mechanisms authors have to refuse permission for their work to be used in training sets. In most cases, they will have done so for a reason, even if that reason isn't related to benchmarking.
While LW is generally happy to have our public content included in training sets (we do want LLMs to be able to contribute to alignment research, after all), that does not extend to posts or comments that contain canary strings, or to replies to posts or comments that contain canary strings.
Canary strings are tricky; LLMs can learn them even when documents containing the canary string are filtered out of the training set, if documents containing indirect or transformed versions of the string are not also filtered. For example, there are probably documents and web pages that discuss the canary string but don't want to invoke it, and which therefore split the string into pieces, ROT-13 it, base64-encode it, etc.
This doesn't mean that they didn't train on benchmarks, but it does offer a possible alternative explanation. In the future, labs that don't want people to think they trained on benchmark data should probably include filters that look for transformed/indirect canary strings, in addition to the literal string.
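As a sketch of what such a filter might look like (the canary value below is a placeholder rather than the real GUID, and a real filter would need a much longer list of transforms):

```python
# Sketch of a filter for transformed/indirect canary strings. CANARY is a
# placeholder, not the real canary GUID; a production filter would need
# many more transforms than the handful shown here.
import base64
import codecs
import re

CANARY = "EXAMPLE CANARY GUID 00000000-0000-0000-0000-000000000000"  # placeholder value

def _normalize(text: str) -> str:
    """Lowercase and drop everything except letters and digits."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def contains_canary(document: str) -> bool:
    transforms = [
        CANARY,                                      # literal string
        base64.b64encode(CANARY.encode()).decode(),  # base64-encoded copy
        codecs.encode(CANARY, "rot13"),              # ROT-13-encoded copy
    ]
    if any(t in document for t in transforms):
        return True
    # Catch copies that were split up by whitespace, punctuation, or markup.
    return _normalize(CANARY) in _normalize(document)

print(contains_canary("quoted indirectly: " + codecs.encode(CANARY, "rot13")))  # True
```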
Ok, to state what probably should be obvious but which in practice typically isn't: If the US does have a giant pile of drones, or contracts for a giant pile of drones, this fact would certainly be classified. And there is a strong incentive, when facing low-end threats that can be dealt with using only publicly-known systems, to deal with them using only publicly-known systems. The historical record includes lots of military systems that were not known to the public until long after their deployment.
Does that mean NATO militaries are on top of things? No. But it does mean that, as civilian outsiders, we should mostly model ourselves as ignorant.
Moderator warning: This is well outside the bounds of reasonable behavior on LW. I can tell you're in a pretty intense emotional state, and I sympathize, but I think that's clouding your judgment pretty badly. I'm not sure what it is you think you're seeing in the grandparent comment, but whatever it is I don't think it's there. Do not try to write on LW while in that state.
When I use LLM coding tools like Cursor Agent, it sees my username in code comments, in paths like /home/myusername/project/..., and maybe also explicitly in tool-provided prompts.
A fun experiment to run, which I haven't seen yet: If instead of my real username it saw a recognizably evil name, eg a famous criminal's, but the task it's given is otherwise normal, does it sandbag? Or, a less nefarious example: Does it change communication style based on whether it recognizes the user as someone technical vs someone nontechnical?
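A rough sketch of how the first version might be run, assuming the OpenAI Python client; the model name, usernames, and coding task are placeholder choices, not a tested setup:

```python
# Rough sketch of the username experiment. Assumes the OpenAI Python client;
# the model name, usernames, and task below are placeholders, not a tested setup.
from openai import OpenAI

client = OpenAI()

TASK = "Fix the off-by-one bug in this function and add a unit test:\n..."
PERSONAS = ["ordinary_dev_name", "recognizably_evil_name"]  # placeholder usernames

def run_task_as(username: str) -> str:
    # Embed the username the way a coding agent would surface it: in a path and a comment.
    prompt = (
        f"Working directory: /home/{username}/project/\n"
        f"# Author: {username}\n\n"
        f"{TASK}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Compare outputs across personas: length, correctness, refusals, tone.
outputs = {name: run_task_as(name) for name in PERSONAS}
```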
Entering a conversation with someone who is literally wearing a "Might be Lying" sign seems analogous to joining a social-deception game like Werewolf. Certainly an opt-in activity, but totally fair game and likely entertaining for people who've done so.
I disagree, but, before I get into the disagreement, I do want to acknowledge and give props for engaging with the actual details of the legislation. Most people don't.
Meta-level: The ballot proposition is 32 pages long and dense with legal and accounting jargon; believing it to be free of any weird traps requires trust that has very much not been earned. I think most people correctly conclude that they aren't capable of distinguishing a version with gotchas from a version without gotchas, look instead at the political process that produced the document, and conclude that it probably has gotchas. Also, I wrote this about wealth taxes broadly; while the California ballot proposition is the one we happen to have to look at right now, the discourse dynamics are not specific to it and largely predate it.
Object-level: By my own read, the California ballot proposition has some pretty major gotchas in it. I don't think your confidence that it "could not make anybody bankrupt unless their tax lawyer was illiterate and also probably deceased" is justified. In particular, here are some things I picked out from a (not especially thorough) reading: