I made a sequence of predictions of what the effects of this "legal regulatory capture" would look like. To ignore all but the one farthest out, and ask "Is that an accurate understanding of how you foresee the regulatory capture?" as though it were my only one, seems clearly in bad faith, or at the very least poor form.
they definitely don't know how to do this in downloadable models.
Yes, I expect this would have the effect of chilling open model releases broadly. The "AI Safety" people have been advocating for precisely this for a while now.
Is your goal here to isolate the aspect of my response that'll keep you right that "legal regulatory capture isn't happening" for as long as you can? Because if so, yeah, of all the things I said, the compute screening requirement would indeed be the hardest for them to achieve, and I expect that to take them the longest if they do.
I also don't believe I said anything about new laws being passed; the threat of decades-old laws being reïnterpreted would suffice for the most part.
So first, the most likely and proximate thing I foresee happening is that major US AI companies – Google, xAI, OpenAI, and Anthropic – "voluntarily" add "guardrails" against their models providing legal advice.
Second, Huggingface, also "voluntarily," takes down open models considered harmful, but restricting itself to fine-tunes, LoRAs, and the like, since the companies developing the foundation models have enough reach to distribute those on their own, so taking them down achieves little.
Third, and this I foresee taking longer, is that companies releasing open models (for now, that's mostly a half-dozen Chinese ones) are deemed liable for "harm" caused by anyone using their models.
Okay, that's a reasonable thing to clarify. First off, I don't think whether or not one charges for it is relevant: it's currently criminal to offer unlicensed legal advice even for free. It's the activity itself that's restricted, not merely the fee.
I do not believe it will be made illegal[1] to receive or use for oneself legal advice from any source: unlicensed, disbarred, foreign, underage, non-human, whatever. The restrictions I predict only apply to providing such advice.
the push will be to make it illegal for an LLM to give someone legal advice
Essentially, but as stated, it could be construed as though the crime would be committed by the LLM, which I think is absurdly unlikely. Instead the company (OpenAI, et al) would be considered responsible. And yes, I expect them to be forbidden from providing such a service, and to be as liable for it as they are for, say, copyright infringement.
For any currently accessible open models you're running locally, yes, you'll probably continue to be able to use them. But companies[2] could be forbidden from releasing any future models that can't be proven to be unable to violate the law (on pain of some absurd fine), similar to the currently proposed legislation for governing "CBRN" threats. And plausibly even extant models that haven't been proven to be sufficiently safe could be taken down from Huggingface etc., and cloud GPU providers could be required to screen for them (like they generally do now for AI-generated "CSAM").
represent themselves in court, draft their own contracts, file their own patents
just deciding to use LLMs
It looks like you're not even seeing the difference I'm arguing they will make salient. I agree the former is still widely considered too fundamental a right in America for even lawyers to try to abolish, but I expect them to argue that LLM assistance with it is a service provided illegally.
With occupational licensing in general, and criminalizing the Unauthorized Practice of Law more specifically, they've already accomplished plenty of regulatory capture. Do you really find it implausible that they'd use this well-established framework to deem the AI companies to be "giving legal advice" in violation of these laws?
The primary application of "safety research" is improving refusal calibration, which, at least from a retail client's perspective, is exactly like a capability improvement: it makes no difference to me whether the model can't satisfy my request or can but won't. It's easy to demonstrate differences in this regard – simply show one model refusing a request another fulfills – so I disagree that this would cause clients to be "dissuaded from AI in general."
On the contrary, I would expect the amor fati people to get normal prophecies, like, "you will have a grilled cheese sandwich for breakfast tomorrow," "you will marry Samantha from next door and have three kids together," or "you will get a B+ on the Chemistry quiz next week," while the horrible contrived destinies come to those who would take roads far out of their way to avoid them.
I can think of several prominent predictions in the present of similar magnitude.
The difference you're talking about might be simply due to you discounting these as insane (or maybe just disingenuous) while hailing analogous predictions in the past as wise/prescient.
“Death gives life meaning.”
A fun thing you can do is to say this line after events like natural disasters or mass murders. I'm hopeful that if it catches on as an ironic meme, people will come to realize that the line, and the deathist sentiment that originally spawned it in earnest, ought to be no less obscene in any other context.
Lesser companies say "Look, we've made a thing you'll like!" When you're like Google, you say "Here is our thing you will use."