where did the power go?
I don't think this is a helpful frame, but if you want to use it, the power went to the people – mostly due to the democratization of information, accelerated by the internet, social media, etc. – and they then turned it on each other, wasting most of it and leaving little for the people they elect to govern with.
This ACX post, Your Incentives Are Not The Same, explains it adequately. He is prominent enough that what might be obviously negative-value behavior for you might not be for him.
These are the circumstances in which one engages in kettle logic, so I wouldn't read too much into any of their arguments.
This is the core of the dispute between the USPTO and OpenAI over the latter's (failed) attempt to trademark the term in the US, so citing OpenAI's papers doesn't help resolve this.
GPTs. Yes, it's still an initialism like "LLM", but it's much easier to pronounce ("jee-pee-tee"), and you can call them "jepeats" (rhyming with "repeats") if you want.
From what I understand, they executed a risky maneuver, lost, tried to salvage what they could from the wreckage (by pinning the blame on someone else), but got pushed out anyway. So I can see where you're getting "scheming or manipulative" (them trying this scheme), "less competent" than Altman (losing to him), and "very selfish" (blaming other people). But where are you getting "cowardly" from? From their attempt at minimizing their losses after it became clear their initial plan had become exceedingly unlikely to succeed? If so, I'd say it speaks to how poorly you think of valor if you believe it precludes so sensible an action.
I thought the cases in The Man Who Mistook His Wife for a Hat were obviously as fictionalized as an episode of House: the condition described is real and based on an actual case, but the details were made up to make the story engaging. But I didn't read it in 1985 when it was published. Did people back then take statements like "based on a true story" more seriously?
Lesser companies say "Look, we've made a thing you'll like!" When you're like Google, you say "Here is our thing you will use."
I made a sequence of predictions about what the effects of this "legal regulatory capture" would look like. Ignoring all but the one farthest out, and asking "Is that an accurate understanding of how you foresee the regulatory capture?" as though it were my only one, seems like poor form, if not bad faith.
they definitely don't know how to do this in downloadable models.
Yes, I expect this would have the effect of chilling open model releases broadly. The "AI Safety" people have been advocating for precisely this for a while now.
It's not quite the same thing, but clipboard2markdown might work too.