
An inside look at Congress’s first AI regulation forum

Researcher Inioluwa Deborah Raji says tech CEOs focused on big claims of what AI could do, but she was there to offer a reality check.

September 25, 2023
UC Berkeley researcher Deborah Raji attends the "AI Insight Forum" on September 13, 2023, in Washington, DC. Chip Somodevilla/Getty Images

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Recently, I wrote a quick guide about what we might expect at Congress’s first AI Insight Forum. Well, now that meeting has happened, and we have some important information about what was discussed behind closed doors in the tech-celeb-studded confab.

First, some context. The AI Insight Forums were announced a few months ago by Senate Majority Leader Chuck Schumer as part of his “SAFE Innovation” initiative, which is really a set of principles for AI legislation in the United States. The invite list was heavily skewed toward Big Tech execs, including CEOs of AI companies, though a few civil society and AI ethics researchers were included too. (You can read more about it in some of my earlier stories on the forums and how Congress might approach AI legislation.)

Coverage of the meeting thus far has put a particular emphasis on the reportedly unanimous agreement about the need for AI regulation, and on issues raised by Elon Musk and others about the “civilizational risks” created by AI. (This tracker from Tech Policy Press is pretty handy if you want to know more.)  

But to really dig below the surface, I caught up with one of the other attendees, Inioluwa Deborah Raji, who gave me an inside look at how the first meeting went, the pernicious myths she needed to debunk, and where disagreements could be felt in the room. Raji is a researcher at the University of California, Berkeley, and a fellow at Mozilla. She is an expert in AI accountability, bias, and risk assessments, and was on our list of top Innovators Under 35 in 2020. 

I’m sharing our conversation here, which has been edited and condensed for length and clarity. 

First of all, how was the conversation structured?

Senators had questions that they had prepared, and then they tossed them around the room and people responded. Senator Schumer and Senator [Mike] Rounds mediated the first and second rounds of questions, so they had the most influence in shaping the way the conversation flowed.

I’m pretty convinced that it was more of an informational rallying effort than it was a genuine opportunity for meaningful policy discourse and recommendations. I’m hoping that in future forums they include a broader range of folks, definitely from civil society.

Why do you think all the tech celebrities were invited?

I think it was to attract the crowd. They had over 60 senators—the majority of the Senate showed up. I think the Schumer team understood that what would be convincing for these senators would be to see all the major CEOs and tech celebrities in one space, and then sprinkle in a bit of civil society to diversify the perspectives a bit. 

Some people are very understandably upset about the imbalance in representation of different groups. And I think that totally makes sense. But I do recognize that if the goal is to get as many senators as possible to start talking about AI, start prioritizing AI, and look past party lines in order for that to happen, I think it makes sense to have those guys in there. 

Something I’m thinking about now is that certain senators were just picking up on what the CEOs had said and repeating that to the press. Schumer, in one of the debrief interviews, said something like, Bill Gates said AI is gonna solve hunger and this person said that it’s gonna solve cancer. And that was because all the CEOs were obviously really hyping up the technology because that’s literally the mandate of their corporations. 

Eighty percent of my comments were just fact-checking the reality that we’re living in right now, especially for marginalized folks who are not well represented in the data. A lot of civil society folks were also trying to challenge that “miracle technology” narrative that was coming out of the companies. But then it was funny because certain senators, as soon as they left the room, were like, Oh, this is a miracle technology. It was as if they had only heard one side.

I’m reflecting on: How do we properly engage these senators in perspectives that are non-corporate in a meaningful way? How do we get them to understand that the risks brought up by the non-corporate perspective reflect a very valuable type of expertise that should definitely factor into legislative decision-making? 

A common Big Tech line about AI has been, of course, hyping the technology but then heralding the risks as existential or very extreme. Did that dynamic play out in the room?

I think that played out less prominently than I anticipated. 

Hyperbolic risks came up because [representatives from tech companies] would be trying to divert attention from the reality that a lot of the current-day risks are because the technology is actually not super great. Sometimes it will fail and behave in unexpected ways. More often, the failures impact those that are underrepresented or misrepresented in the data.

There were a couple of moments where someone would make these outlandish claims of what AI could do and then tie that to a pretty far-out-there risk. 

[But I thought], what about these other harms that are more immediate and are a simple [result of] prematurely deploying something that’s not ready?

More guardrails will require more of the type of regulation that I don’t think [companies] want, like auditing, third-party evaluations, and verification. That was when things got most tense. 

It was leaked that you had an exchange with Elon Musk regarding the risks posed by AI. [Ed note: Musk said he had told the Chinese government that AI might eventually be able to overtake it, and Raji responded by questioning the safety of today’s driverless cars, like the autopilot feature in a Tesla.] Can you tell me more about that?

You know, it wasn’t just Elon. That was the one that got out. There was another CEO that was talking about curing cancer with AI, saying we have to make sure that it’s Americans that do that, and just narratives like that. 

But first of all, we have medical AI technology that is hurting people and not working well for Black and brown patients. It’s disproportionately underprioritizing them in terms of getting a bed at a hospital; it’s disproportionately misdiagnosing them, and misinterpreting lab tests for them. 

I also hope that one day AI will lead to cancer cures, but we need to understand the limitations of the systems that we have today. 

What was it that you really wanted to achieve in the forum, and do you think you had the chance to do that? 

I think we all had substantial opportunities to say what we needed to say. In terms of whether we were all equally heard or equally understood, I think that’s something that I’m still processing. 

My main position coming in was to debunk a lot of the myths that were coming out of these companies around how well these systems are working, especially on marginalized folks. And then also to debunk some of the myths around solving bias and fairness. 

Bias concerns and explainability concerns are just really difficult technical and social challenges. I came in being like, I don’t want people to underestimate the challenge.

So did I get that across? I’m not sure, because the senators loved saying that AI is gonna cure cancer. 

It’s so easy to get caught up in the marketing terms and the sci-fi narratives and completely ignore what's happening on the ground. I’m coming back from all of this more committed than ever to articulating and demonstrating the reality, because it just seems like there is this huge gap of knowledge between what’s actually happening and the stories that these senators are hearing from these companies.

What else I’m reading

  • I just loved this story from Jessica Bennett at the New York Times about what it’s like to be a teen girl with a cell phone today. Bennett kept in touch with three 13-year-olds over the course of a year to learn about the ins and outs of their digital lives. Highly recommend! 
  • This social reflection on privacy by Charlie Warzel in the Atlantic has stuck with me for a few days. The story gets at the overwhelming questions we—certainly I—have about what we can do to preserve our privacy online. 
  • The United Nations General Assembly convened in New York this past week, and one big topic of discussion was, of course, AI. Will Henshall at Time did a deep dive into what we might expect from the body on AI regulation.

What I learned this week

A Disney director tried to use AI to create a soundtrack reminiscent of the work of composer Hans Zimmer—and came up disappointed. Gareth Edwards, director of Rogue One: A Star Wars Story, told my colleague Melissa Heikkilä that he was hoping to use AI to create a soundtrack for his forthcoming movie about … AI, of course! Well, the soundtrack fell flat, and Edwards even shared it with the famous composer, who he says found it amusing.

Melissa wrote, “Edwards said AI systems lack a fundamentally crucial skill for creating good art: taste. They still don’t understand what humans deem good or bad.”

In the end, the real Zimmer wrote the melodies for Edwards’s upcoming movie, The Creator.

