We know that some lawyers are very willing to use LLMs to accelerate their work, because there have been lawyers caught submitting briefs containing confabulated case citations. Probably many other lawyers are using LLMs but are more diligent about checking their output — and thus their LLM use goes undetected.
I wonder if lawyering will have the same pipeline problem as software engineering: the "grunt work" that has previously been assigned to trainees and junior professionals will be automated early on; thus making it less valuable to hire juniors; thus making it harder for juniors to gain job experience.
(Though the juniors can be given the task of manually checking all the citations ...)
It seems likely to me that (at least some) lawyers will have the foresight to see AI getting better and better, and that AI automation won't just stop at the grunt work but will eventually come for the more high-profile jobs.
thus making it less valuable to hire juniors; thus making it harder for juniors to gain job experience.
Yes, this seems very likely; I don't see why it would be limited to SWEs.
It seems like it would be hard to detect if smart lawyers are using AI since (I think) lawyers' work is easier to verify than it is to generate. If a smart lawyer has an AI do research and come up with an argument, and then they verify that all of the citations make sense, the only way to know they're using AI is that they worked anomalously quickly.
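(To make the verify-vs-generate asymmetry concrete, here is a minimal Python sketch of the mechanical part of that checking, under the assumption that you have some trusted index of real citations to check against. The citation pattern and the TRUSTED_INDEX entries are invented stand-ins; a real checker would query an actual case-law database, and a lawyer would still need to confirm that each cited case actually supports the argument.)

```python
import re

# Made-up citation pattern and trusted index, purely for illustration.
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.(?:2d|3d|4th)?|S\. Ct\.)\s+\d+\b")

TRUSTED_INDEX = {
    "347 U.S. 483",  # stand-in entries; a real checker would query
    "410 U.S. 113",  # an actual case-law database instead
}

def unverified_citations(draft_text: str) -> list[str]:
    """Return citations found in the draft that aren't in the trusted index."""
    return [c for c in CITATION_RE.findall(draft_text) if c not in TRUSTED_INDEX]

draft = "As held in 347 U.S. 483 and reaffirmed in 999 F.3d 1234, ..."
print(unverified_citations(draft))  # ['999 F.3d 1234'] -> flag for manual review
```

The point is just that the checking step is cheap and mechanical compared to producing the brief, which is why AI-assisted work by a careful lawyer would be hard to detect.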
I agree with your point. But what I find interesting about legal work is not whether it could be automated, or whether AI usage could be detected. I think that lawyers will see the job automation coming and take legal action to protect themselves, so that AI is not legally allowed to be used for some key legal tasks and they ~all keep their jobs.
The reason I'm skeptical of this is that it doesn't seem like you could enforce a law against using AI for legal research. As much as lawyers might want to ban this as a group, individually they all have strong incentives to use AI anyway and just not admit it.
Although this assumes that doing research and coming up with arguments is most of their job. It could be that most of their job consists of things that are harder to do secretly with AI, like meeting with clients and making arguments in court.
While I'd have to see the proposed law specifically, my initial reaction to the idea of legal regulatory capture is skepticism.
The ability to draft your own contracts, mediate disputes through arbitration, and represent yourself in court all derive from legal rights which would be very hard to overturn.
I can imagine some attempts at regulatory capture being passed through state or maybe even federal legislatures, only to get challenged in court and overturned.
With occupational licensing in general, and the criminalization of the Unauthorized Practice of Law more specifically, they've already accomplished plenty of regulatory capture. Do you really find it implausible that they would use this well-established framework to deem the AI companies to be "giving legal advice" in violation of these laws?
Unauthorized Practice of Law, afaik, applies to giving advice to others, not to serving your own legal needs. Every American has a right to represent themselves in court, draft their own contracts, file their own patents, etc. I suspect that at least the right to represent yourself in court is constitutionally protected.
I don't think the threat to attorneys is LLMs having their own 'shop' where you can hire them for legal advice. That would probably already be "unauthorized practice of law". The threat is people just deciding to use LLMs instead of attorneys. And even for a field that can punch above its weight class politically as much as attorneys can, I think stopping that would be challenging. Especially when such a move would be unpopular among the public, and even among more libertarian/constitutionally minded lawyers (of which there are many).
represent themselves in court, draft their own contracts, file their own patents
just deciding to use LLMs
It looks like you're not even seeing the difference I'm arguing they will make salient. I agree the former is still widely considered too fundamental a right in America for even lawyers to try to abolish, but I expect them to argue that LLM assistance with it is a service being provided illegally.
I want to make sure I'm not misunderstanding you. Are you saying you think the push will be to make it illegal for an LLM to give someone legal advice for them to use for themselves?
I could foresee something where you can't charge for that, so if OpenAI didn't build some sort of protection against it into GPT, they might be liable. However, I can't see how this would work with open-source (and free) models run locally.
Okay, that's a reasonable thing to clarify. First off, I don't think whether or not one charges for it is relevant: it's currently criminal to offer unlicensed legal advice even for free. It's the activity itself that's restricted, not merely the fee.
I do not believe it will be made illegal[1] to receive or use for oneself legal advice from any source: unlicensed, disbarred, foreign, underage, non-human, whatever. The restrictions I predict only apply to providing such advice.
the push will be to make it illegal for an LLM to give someone legal advice
Essentially, but as stated, it could be construed as though the crime would be committed by the LLM, which I think is absurdly unlikely. Instead, the company (OpenAI et al.) would be considered responsible. And yes, I expect them to be forbidden from providing such a service, and to be as liable for it as they are for, say, copyright infringement.
For any currently accessible open models you're running locally, yes, you'll probably continue to be able to use them. But companies[2] could be forbidden from releasing any future models that can't be proven to be unable to violate the law (on pain of some absurd fine), similar to the currently proposed legislation for governing "CBRN" threats. And plausibly even extant models that haven't been proven to be sufficiently safe could be taken down from Huggingface etc., and cloud GPU providers could be required to screen for them (like they generally do now for AI-generated "CSAM").
If I'm reading this correctly, the end state of regulatory capture would be some sort of law that forces the removal of open-source models from anywhere their code could be hosted (Huggingface, etc.), as well as requiring sources of compute to screen for models, if said models do not have built-in safeguards against giving legal advice.
Is that an accurate understanding of how you foresee the regulatory capture?
Is your goal here to isolate the aspect of my response that'll keep you right that "legal regulatory capture isn't happening" for as long as you can? Because if so, yeah, of all the things I said, the compute-screening requirement would indeed be the hardest for them to achieve, and I expect that to take them the longest if they do.
I also don't believe I said anything about new laws being passed; the threat of decades-old laws being reïnterpreted would suffice for the most part.
So first, the most likely and proximate thing I foresee happening is that major US AI companies – Google, xAI, OpenAI, and Anthropic – "voluntarily" add "guardrails" against their models providing legal advice.
Second, Huggingface, also "voluntarily," takes down open models considered harmful, though it restricts itself to fine-tunes, LoRAs, and the like, since the companies developing the foundation models have enough reach to distribute those models themselves, so taking them down would achieve little.
Third, and this I foresee taking longer, is that companies releasing open models (for now, that's mostly a half-dozen Chinese ones) are deemed liable for "harm" caused by anyone using their models.
No, my goal is to make sure I'm not talking past you, not to score a point in an argument.
I don't foresee the same outcome as you do; I think that's unlikely. You have explained it to the degree that I can now properly understand it, though, and while I wouldn't call it a base case, it's not an unreasonable scenario.
Is your goal here to isolate the aspect of my response that'll keep you right that "legal regulatory capture isn't happening" for as long as you can?
I'm not the person you're arguing with, but wanted to jump in to say that pushing back on the weakest part of your argument is a completely reasonable thing for them to do and I found it weird that you're implying there's something wrong with that.
I also think you're missing how big of a problem it is that preventing LLMs from giving legal advice is something companies don't actually know how to do. Maybe companies could add strong enough guard rails in hosted models to at least make it not worth the effort to ask them for legal advice, but they definitely don't know how to do this in downloadable models.
That said, I could believe in a future where lawyers force the big AI companies to make their models too annoying to easily use for legal advice, and prevent startups from making products directly designed to offer AI legal advice.
I made a sequence of predictions of what the effects of this "legal regulatory capture" would look like. To ignore all but the one farthest out, and ask "Is that an accurate understanding of how you foresee the regulatory capture?" as though it were my only one, seems like clearly poor form.
they definitely don't know how to do this in downloadable models.
Yes, I expect this would have the effect of chilling open model releases broadly. The "AI Safety" people have been advocating for precisely this for a while now.
The ability to draft your own contracts, mediate disputes through arbitration, and represent yourself in court all derive from legal rights which would be very hard to overturn.
Strongly agree. However, I believe lawyers to be adept at navigating the legal system, so they'd likely bundle job protections for lawyers alongside job protections for other, more empathetic jobs such as teachers or 911 call agents. In general, I predict that lawyers see AI job automation as a valid threat, that they will take action against this threat, and that they are much more competent at legal manoeuvring and politics than I am, so they would come up with competent ways to achieve their goals.
Some lawyers, sure, but not the vast majority of the legal profession.
All those points you made are correct (besides maybe the x-risk one; you were right that that one came a little more from opinion: having worked with a bunch of lawyers, I believe they generally do nothing better than provide expert arguments and rationalizations for whatever they want to believe or want to make you believe, rather than following the facts to the truth in good faith), but I don't think they're enough to outweigh the fact that the legal profession is absolutely ripe for the kinds of automation that AI excels at.
Paralegals and legal secretaries in particular are, I think, on the chopping block. Millions of people in those roles spend their whole day searching through complicated sets of badly organized data (in discovery proceedings, each side has an obligation to present certain sets of evidence and documentation to the other side before a trial, but they have no obligation to organize it well...), picking out the relevant information to help answer a certain question or make a certain point, and arranging it to be presented in a compelling way. That's all stuff that AI excels at and can do in the blink of an eye, and there are ways to use AI to automate some of the process without hallucinations being a problem. Google NotebookLM in particular is basically tailor-made to help lawyers parse huge troves of discovery data for the specific information they're looking for, which many people in the legal profession have a full-time job doing today. (And it takes only a little training and common sense to do this while steering clear of the hallucination issue.)
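(One hedged sketch of what "without hallucinations being a problem" can look like in practice: have the model support each claim with a verbatim quote plus the source file, then mechanically check that each quoted passage actually appears in the cited document. Everything below, including the discovery_docs contents, the quote format, and the unsupported_quotes helper, is invented for illustration and is not a description of how NotebookLM works.)

```python
import re

# Hypothetical discovery documents, keyed by filename, for illustration only.
discovery_docs = {
    "email_0412.txt": "Shipment delayed until March 9 per vendor notice.",
    "memo_0088.txt": "Counsel advised holding the payment pending review.",
}

# Assumed answer format: "quoted text" (filename)
QUOTE_RE = re.compile(r'"([^"]+)"\s*\(([^)]+)\)')

def unsupported_quotes(model_answer: str) -> list[tuple[str, str]]:
    """Return (quote, cited_file) pairs that don't appear verbatim in the cited file."""
    bad = []
    for quote, filename in QUOTE_RE.findall(model_answer):
        source = discovery_docs.get(filename, "")
        if quote not in source:
            bad.append((quote, filename))
    return bad

answer = ('The delay is documented: "Shipment delayed until March 9 per vendor notice." '
          '(email_0412.txt) and "Payment was cancelled outright." (memo_0088.txt)')
print(unsupported_quotes(answer))  # [('Payment was cancelled outright.', 'memo_0088.txt')]
```

Anything flagged by a check like this goes back to a human for review, which is the kind of quote-grounded workflow that keeps hallucinations from mattering much in practice.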
Sure, I believe that lawyers will see to it that there will always be human lawyers representing clients in the courtroom, formally filing motions, submitting paperwork, consulting with clients, and doing all the other things that only lawyers do already, but in the near future I expect the giant infrastructure of clerks, secretaries, and paralegals that supports them with menial paperwork to be gutted. I think the only reason it's not happening already in a more significant way is that lawyers tend, on average, to be older than people in most other careers, and many of them are set in their ways with the technology they're used to. I believe the generation of lawyers that has grown up understanding computers and AI will not need nearly the number of supporting staff per lawyer that the industry currently has, if any at all.
Note: this was from my writing-every-day-in-november sprint; see my blog for disclaimers.
I believe that the legal profession is in a unique position with regard to white-collar job automation due to artificial intelligence. Specifically, I wouldn’t be surprised if they are able to make the coordinated political and legal manoeuvres needed to ensure that their profession is somewhat protected from AI automation. Some points in favour of this position:
I believe that as widely deployed AI becomes more competent at various tasks involved in white-collar jobs, there’ll be more pressure to enact laws that protect these professions from being completely automated away. The legal profession is in the interesting position of having large portions of the job susceptible to AI automation, while also being very involved in drafting and guiding the laws that might prevent their jobs from being completely automated.
Politicians are probably even better placed to enact laws that prevent politicians from being automated, although I don’t believe politicians are as at-risk as lawyers. Lawyers are simultaneously at-risk for automation and very able to prevent their automation.
Whether lawyers will actually take action in this space is hard to say, because there are so many factors that could prevent it: maybe white-collar automation takes longer than expected, maybe politicians pass laws without clear involvement from large numbers of lawyers, maybe no laws get passed but the legal profession socially shuns anyone using more than an “acceptable level” of automation.
But if the legal profession were to make moves to prevent the automation of their own jobs, I’d be very surprised if they drafted an act titled something like “THE PROTECT LAWYERS AT ALL COSTS ACT”. I imagine the legislation would protect several professions, with lawyers just one of them, but that lawyers would indeed be covered by it. This is to say, I believe the legal profession to be fairly savvy, and if they do make moves to protect themselves against AI job automation, I doubt it’ll be obviously self-serving.