We know that some lawyers are very willing to use LLMs to accelerate their work, because there have been lawyers caught submitting briefs containing confabulated case citations. Probably many other lawyers are using LLMs but are more diligent about checking their output — and thus their LLM use goes undetected.
I wonder if lawyering will have the same pipeline problem as software engineering: the "grunt work" that has previously been assigned to trainees and junior professionals will be automated early on; thus making it less valuable to hire juniors; thus making it harder for juniors to gain job experience.
(Though the juniors can be given the task of manually checking all the citations ...)
It seems likely to me that (at least some) lawyers will have the foresight to see AI getting better and better, and to see that AI automation won't just stop at the grunt work and will eventually come for the more high-profile jobs.
"thus making it less valuable to hire juniors; thus making it harder for juniors to gain job experience."
Yes, this seems very likely; I don't see why this would be limited to SWEs.
It seems like it would be hard to detect whether smart lawyers are using AI, since (I think) lawyers' work is easier to verify than it is to generate. If a smart lawyer has an AI do the research and come up with an argument, and they then verify that all of the citations make sense, the only sign that they're using AI would be that they worked anomalously quickly.
Note: this was from my writing-every-day-in-November sprint; see my blog for disclaimers.
I believe that the legal profession is in a unique position with regard to AI-driven white-collar job automation. Specifically, I wouldn't be surprised if lawyers are able to make the coordinated political and legal manoeuvres needed to ensure that their profession is somewhat protected from automation. Some points in favour of this position:
I believe that as widely deployed AI becomes more competent at the various tasks involved in white-collar jobs, there'll be more pressure to enact laws that protect these professions from being automated away entirely. The legal profession is in the interesting position of having large portions of its work susceptible to AI automation, while also being heavily involved in drafting and guiding the laws that might prevent its jobs from being completely automated.
Politicians are probably even better placed to enact laws that prevent politicians from being automated, although I don't believe politicians are as at risk as lawyers. Lawyers are simultaneously at high risk of automation and very able to prevent that automation.
Whether lawyers will actually take action in this space is hard to say, because there are so many factors that could prevent it: maybe white-collar automation takes longer than expected, maybe politicians pass laws without clear involvement from large numbers of lawyers, maybe no laws get passed but the legal profession socially shuns anyone using more than an "acceptable level" of automation.
But if the legal profession were to make moves to prevent the automation of their own jobs, I'd be very surprised if they drafted an act titled something like "THE PROTECT LAWYERS AT ALL COSTS ACT". I imagine any such legislation would protect several professions, with lawyers just one of them, but that lawyers would indeed be protected under it. This is all to say: I believe the legal profession to be fairly savvy, and if they do make moves to protect themselves against AI job automation, I doubt it'll be obviously self-serving.