
AI might not steal your job, but it could change it

AI is already being used in the legal field. Is it really ready to be a lawyer?

[Illustration: a barrister's suit and wig rendered as a digital glitch pattern. Stephanie Arnett/MITTR]

(This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.)

Advances in artificial intelligence tend to be followed by anxieties around jobs. This latest wave of AI models, like ChatGPT and OpenAI’s new GPT-4, is no different. First we had the launch of the systems. Now we’re seeing the predictions of automation. 

In a report released this week, Goldman Sachs predicted that AI advances could cause 300 million jobs, representing roughly 18% of the global workforce, to be automated in some way. OpenAI also recently released its own study with the University of Pennsylvania, which claimed that ChatGPT could affect over 80% of the jobs in the US. 

The numbers sound scary, but the wording of these reports can be frustratingly vague. “Affect” can mean a whole range of things, and the details are murky. 

People whose jobs deal with language could, unsurprisingly, be particularly affected by large language models like ChatGPT and GPT-4. Let’s take one example: lawyers. I’ve spent time over the past two weeks looking at the legal industry and how it’s likely to be affected by new AI models, and what I found is as much cause for optimism as for concern. 

The antiquated, slow-moving legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with reams of complex documents, a technology that can quickly understand and summarize texts could be immensely useful. So how should we think about the impact these AI models might have on the legal industry? 

First, recent AI advances are particularly well suited to legal work. GPT-4 recently passed the Uniform Bar Examination, the standard test required to license lawyers in much of the US. That doesn't mean AI is ready to be a lawyer, however.

The model could have been trained on thousands of practice tests, which would make it an impressive test-taker but not necessarily a great lawyer. (We don’t know much about GPT-4’s training data because OpenAI hasn’t released that information.) 

Still, the system is very good at parsing text, which is of the utmost importance for lawyers. 

“Language is the coin in the realm of the legal industry and in the field of law. Every road leads to a document. Either you have to read, consume, or produce a document … that’s really the currency that folks trade in,” says Daniel Katz, a law professor at Chicago-Kent College of Law who conducted GPT-4's exam. 

Second, legal work involves many repetitive tasks that could be automated, such as searching for applicable laws and cases and pulling relevant evidence, according to Katz.

One of the researchers on the bar exam paper, Pablo Arredondo, has been quietly working with OpenAI since last fall to build GPT-4 into the legal product of his company, Casetext. Casetext uses AI to conduct “document review, legal research memos, deposition preparation and contract analysis,” according to its website.

Arredondo says he’s grown more and more enthusiastic about GPT-4’s potential to assist lawyers as he’s used it. He says that the technology is “incredible” and “nuanced.”

AI in law isn’t a new trend, though. It has already been used to review contracts and predict legal outcomes, and researchers have recently explored how AI might help get laws passed. Recently, the consumer rights company DoNotPay considered having its AI, dubbed the “robot lawyer,” argue a case in court, with its arguments delivered to the defendant through an earpiece. (DoNotPay did not go through with the stunt and is being sued for practicing law without a license.)

Despite these examples, these kinds of technologies still haven’t achieved widespread adoption in law firms. Could that change with these new large language models? 

Third, lawyers are used to reviewing and editing work.

Large language models are far from perfect, and their output would have to be closely checked, which is burdensome. But lawyers are very used to reviewing documents produced by someone—or something—else. Many are trained in document review, meaning that the use of more AI, with a human in the loop, could be relatively easy and practical compared with adoption of the technology in other industries.

The big question is whether lawyers can be convinced to trust a system rather than a junior attorney who spent three years in law school. 

Finally, there are limitations and risks. GPT-4 sometimes makes up very convincing but incorrect text, and it will misuse source material. One time, Arredondo says, GPT-4 had him doubting the facts of a case he had worked on himself. “I said to it, You’re wrong. I argued this case. And the AI said, You can sit there and brag about the cases you worked on, Pablo, but I’m right and here’s proof. And then it gave a URL to nothing.” Arredondo adds, “It’s a little sociopath.”

Katz says it’s essential that humans stay in the loop when using AI systems, and he highlights lawyers’ professional obligation to be accurate: “You should not just take the outputs of these systems, not review them, and then give them to people.” 

Others are even more skeptical. “This is not a tool I would trust with making sure important legal analysis was updated and appropriate,” says Ben Winters, who leads the Electronic Privacy Information Center’s projects on AI and human rights. Winters characterizes the culture of generative AI in the legal field as “overconfident, and unaccountable.” It’s also been well-documented that AI is plagued by racial and gender bias.

There are also the long-term, high-level considerations. If attorneys have less practice doing legal research, what does that mean for expertise and oversight in the field? 

But for now, we are still a while away from that. 

This week, my colleague and Tech Review’s editor at large, David Rotman, wrote a piece analyzing the new AI age’s impact on the economy—in particular, jobs and productivity.

“The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.”

What I am reading this week

Some bigwigs, including Elon Musk, Gary Marcus, Andrew Yang, Steve Wozniak, and over 1,500 others, signed a letter sponsored by the Future of Life Institute that called for a moratorium on big AI projects. Quite a few AI experts agree with the proposition, but the reasoning (avoiding AI armageddon) has come in for plenty of criticism. 

The New York Times has announced it won’t pay for Twitter verification. It's yet another blow to Elon Musk’s plan to make Twitter profitable by charging for blue ticks. 

On March 31, Italian regulators temporarily banned ChatGPT over privacy concerns. Specifically, the regulators are investigating whether the way OpenAI trained the model with user data violated GDPR.

I’ve been drawn to some longer culture stories as of late. Here’s a sampling of my recent favorites:

  • My colleague Tanya Basu wrote a great story about people sleeping together, platonically, in VR. It’s part of a new age of virtual social behavior that she calls “cozy but creepy.” 
  • In the New York Times, Steven Johnson came out with a lovely, albeit haunting, profile of Thomas Midgley Jr., who created two of the most climate-damaging inventions in history. 
  • And Wired’s Jason Kehe spent months interviewing the most popular sci-fi author you’ve probably never heard of in this sharp and deep look into the mind of Brandon Sanderson. 

What I learned this week

“News snacking”—skimming online headlines or teasers—appears to be quite a poor way to learn about current events and political news. A peer-reviewed study conducted by researchers at the University of Amsterdam and the Macromedia University of Applied Sciences in Germany found that “users that ‘snack’ news more than others gain little from their high levels of exposure” and that “snacking” results in “significantly less learning” than more dedicated news consumption. That means the way people consume information is more important than the amount of information they see. The study furthers earlier research showing that while the number of “encounters” people have with news each day is increasing, the amount of time they spend on each encounter is decreasing. Turns out … that’s not great for an informed public. 

