The "semantic bounty" fallacy occurs when you argue semantics, and you think that if you win an argument that X counts as Y, your interlocutor automatically gives up all the properties of Y as a bounty.
What actually happens is this: your interlocutor may yield that X technically counts as Y, but since it's a borderline example of Y, most of what's true of Y doesn't apply to it. Unfortunately, as the argument gets longer, you may feel you deserve a bigger bounty if you win, when really your interlocutor is revealing to you that their P(X is not Y) is quite high, and if they do yield, it's more likely they're yielding that X is a borderline Y.
This is a "leaky generalizations" and more specifically a "noncentral" fallacy. It happens, for instance, when someone tries to prove something is racist so as to imply behavior should change and gets resistance. In fact, you might consider that gap between "technically a Y" and "typically a Y" as a sort of semantic deficit. If the point of something being in Y is that it's bad, consider arguing that it's bad without even bringing up Y.
This applies even to some of the most drastic, moralized words we have, like "slavery," "genocide," and "fascism." However you feel about any issue in one of those areas, I will inform you that proving to someone that the thing belongs in that category is not going to have the effect you want. There is no semantic bounty.
Any rats chess players? I won't spoil my rating, but I'm broadly and subjectively in the "intermediate" tier. Chess as in chess.com or lichess.org, not deception chess or alignment chess or other rationalist-themed chess. I'd be up for a rapid game (10-20 min).
Claude 3.7 is too balanced, too sycophantic, buries the lede
me: VA monitor v IPS monitor for coding, reducing eye strain
It wrote a balanced answer and said "IPS is generally better," but it came across as more like 60/40, and it missed the obvious fact that VA monitors are generally the curved ones. My older coworkers with more eye strain problems don't have curved monitors.
I hop on reddit/YT and the answer gets clear really fast. Claude's info was accurate yet missed the point, and I wound up only getting the real answer on reddit/YT.
How do I use AI to search for information on 1000 companies or so? This turns out to be harder than I thought. The difficulty can be expressed computationally: the output tokens I'm requesting scale linearly with the size of the input (one lookup per company), and that's a lot of hard cash. This is interesting because to non-computer-scientist normies, this wrinkle is really not apparent. AIs can do basic agentic reasoning (like looping over a list of 1000 companies) and a bunch of searches, so surely you can combine them? Yes, it turns out, but it'll cost you something like $38 for one query.
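A back-of-envelope version of that linearity, in Python. The token counts and per-token prices below are illustrative assumptions, not any provider's actual rates; the point is only that cost is (number of companies) times (cost per lookup).

```python
# Rough cost model: one search-augmented query per company, so total cost
# scales linearly with the number of companies. All numbers are hypothetical.
companies = 1000
input_tokens_per_query = 8_000   # assumed: search results stuffed into the prompt
output_tokens_per_query = 500    # assumed: a short structured summary back
price_in_per_million = 3.00      # hypothetical $ per 1M input tokens
price_out_per_million = 15.00    # hypothetical $ per 1M output tokens

cost = companies * (
    input_tokens_per_query * price_in_per_million / 1_000_000
    + output_tokens_per_query * price_out_per_million / 1_000_000
)
print(f"~${cost:.2f} for one pass over {companies} companies")  # ~$31.50 with these numbers
```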
I went with Ottogrid.ai, which was fun, but expensive.
If I really had time to hack at this, I would try to do breadth passes, then depth passes: like googling "top 20 AI companies", scraping stats, then doing narrower searches as time goes on. Is there a name for this algo? Idk. But people use it constantly, even when cleaning the kitchen and so on.
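Something like a coarse-to-fine pass, in sketch form. The `broad_search`/`deep_search` helpers are hypothetical stubs standing in for whatever search API you'd actually use; the shape is just "spend per-company queries only on what the cheap broad queries didn't fill in."

```python
# Breadth-then-depth sketch: cheap list-style queries first, expensive
# per-company queries only for the gaps. broad_search/deep_search are stubs.

REQUIRED_FIELDS = ["founded", "headcount", "funding"]

def broad_search(query: str) -> list[dict]:
    """Stub: one cheap query returning rough stats for many companies at once."""
    return []  # e.g. parse the results of googling "top 20 AI companies"

def deep_search(company: str, missing: list[str]) -> dict:
    """Stub: a narrower, pricier query targeted at one company's missing fields."""
    return {}

def coarse_to_fine(companies: list[str]) -> dict[str, dict]:
    records = {name: {} for name in companies}

    # Breadth pass: a handful of queries that each cover many companies.
    for query in ["top 20 AI companies", "largest AI startups by funding"]:
        for row in broad_search(query):
            if row.get("name") in records:
                records[row["name"]].update(row)

    # Depth pass: only pay for what the breadth pass missed.
    for name, record in records.items():
        missing = [f for f in REQUIRED_FIELDS if f not in record]
        if missing:
            record.update(deep_search(name, missing))

    return records
```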
But is that not done already?? This seems like the first practical AI problem you'd solve once you had search-capable AI, or early agentic or reasoning models.
Perhaps you could save some money by telling the AI to write a Python script that scrapes some information from the websites and converts it to plain text (much less data), and then using the AI to process that text?
The idea is that you pay for creating the Python script, and for processing its output, but you don't pay for the script's downloading and preprocessing of the data.
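A minimal sketch of that split, assuming requests and beautifulsoup4 are installed and with a placeholder URL: the downloading and HTML-stripping runs as plain Python for free, and only the compact text would be sent to the paid model afterwards.

```python
import requests
from bs4 import BeautifulSoup

def page_to_text(url: str) -> str:
    """Download a page and strip it down to whitespace-normalized plain text."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop markup that carries no content
    return " ".join(soup.get_text(separator=" ").split())

# The cheap local step: build a dict of url -> plain text.
texts = {url: page_to_text(url) for url in ["https://example.com/about"]}
# ...then each (much smaller) text blob goes to the model for extraction.
```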
I'm not all that sure how AI search works. It searches, then indexes the top 20 hits, or something like that. Is reading a webpage the expensive part? If so, then caching/context-window management might matter a lot. Plain text might backfire if you actually lose table structure and stuff. You can probably ignore styles at least.
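If losing tables is the worry, one way to keep them while still dropping styles is to pull out the `<table>` elements separately, e.g. with pandas. A sketch, assuming pandas plus an HTML parser like lxml is installed:

```python
import io
import pandas as pd

def tables_as_text(html: str) -> str:
    """Extract <table> elements and render them as CSV so structure survives."""
    frames = pd.read_html(io.StringIO(html))  # raises ValueError if the page has no tables
    return "\n\n".join(df.to_csv(index=False) for df in frames)
```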