News

The Wharton Accountable AI Research Conference Asks Who Should Govern AI, and How

Kevin Werbach, faculty lead of the Wharton Accountable AI Lab, hosts the event's morning panel

On February 6, the Wharton Accountable AI Lab hosted its first-ever Accountable AI Research Conference, bringing together academics, policymakers, and industry leaders to tackle one of the defining questions of the moment: as artificial intelligence reshapes every sector of the economy, who is responsible for making sure it’s done right? 

The daylong event, organized by faculty lead Kevin Werbach, professor of legal studies and business ethics, and Rory Van Loo, associate professor of legal studies and business ethics, drew from more than 160 paper submissions to select 24 researchers who presented work spanning AI regulation, ethics, governance, and economic impact. The morning’s two plenary panels – one focused on policy, the other on industry practice – set the tone, revealing both sharp points of agreement and open questions that the field is still racing to answer. 

Where Should Regulation Live?

The first panel, moderated by Christopher Yoo of Penn Carey Law, featured Neil Chilson, head of AI policy at the Abundance Institute, and Alex Engler, executive director of the Penn Center on Media, Technology, and Democracy. The conversation quickly zeroed in on a central tension: AI is a general-purpose technology woven into nearly every industry, yet the rush to regulate it has often targeted the technology itself rather than its specific uses.

Both panelists argued forcefully that governance works best when it’s tailored to particular applications (hiring, housing, insurance) rather than imposed broadly at the model level. Chilson put it bluntly with an analogy: “We don’t require hammer manufacturers to make it impossible to misuse a hammer to bludgeon somebody. If you did that, you would also remove a lot of the utility of the hammer for driving nails.” Engler agreed, noting that application-specific regulation produces higher-quality policy because each domain, from real estate valuation algorithms to college admissions, presents fundamentally different risks.

The panelists also traced the political whiplash that followed ChatGPT’s explosive public debut in late 2022. Engler noted that the technology itself wasn’t a sudden breakthrough, but its visibility triggered radical shifts in policymaking, most visibly in the EU AI Act’s hasty addition of provisions governing large foundation models. In the U.S., state legislatures have filled the vacuum left by federal inaction: Chilson cited data showing that AI-related bills introduced in state legislatures surged from fewer than 200 in 2023 to well over 1,200 in 2025. With no comprehensive federal AI law on the horizon, both panelists acknowledged the patchwork of state regulation as imperfect but, as Engler put it, better than “functionally no governance.”

Sarah Bird, chief product officer of responsible AI at Microsoft, speaks during the Wharton Accountable AI Research Conference

Governance in Practice: Still Early Days

The second panel, moderated by Werbach, shifted to how organizations are actually implementing responsible AI. Sarah Bird, chief product officer of responsible AI at Microsoft, Heather Domin, VP and head of responsible AI and governance at HCLTech, and Radha Iyengar Plumb, an AI leader at IBM and senior fellow at the Wharton Accountable AI Lab, each described building governance functions that feel more like startups than established bureaucracies.

All three panelists agreed the field remains in its early stages. Domin observed that while leading companies have matured beyond abstract principles into concrete policy and frameworks, adoption across the broader business landscape is “very early.” Bird emphasized that generative AI forced a leap in scale: Microsoft went from a handful of teams shipping AI products to thousands doing so annually, which demanded entirely new patterns for testing and oversight. “We’re increasing maturity in frameworks,” Bird said, “but we’re still really, really early in the practice.” 

The panelists found common ground on a practical point: customer demand, not regulation alone, is driving responsible AI investment. Bird described how enterprise clients’ attitudes transformed after ChatGPT’s launch – boards that previously treated AI ethics as a distant concern were suddenly requesting two-hour briefings on responsible deployment before any project began. Domin reported a similar experience, noting that client requirements have remained strong regardless of shifts in the U.S. regulatory environment. 

The rise of AI agents, autonomous systems that can take actions on behalf of users, emerged as a shared concern across both panels. Bird highlighted a new governance challenge: agents can be built in minutes by anyone in an organization, not just software engineering teams. “My business program manager is making agents and throwing them out there,” she noted, underscoring how traditional release review processes may not keep pace. 

A Community Takes Shape

Following the morning’s plenary panels, participants divided into three parallel sessions for the remainder of the day, where selected research papers were presented. These interdisciplinary presentations featured both analytical and empirical scholarship, covering a broad range of topics including proposed approaches to AI regulation, AI governance in practice, privacy, deepfakes, transparency, AI agents, intellectual property, and risk management. 

In addition to sharing research, these sessions gave researchers and participants alike the opportunity to engage in free-flowing conversation and exchange ideas. The depth of these exchanges reflected a growing recognition that solving AI accountability requires more than isolated expertise; it demands sustained collaboration across disciplines. 

In an address to participants, Werbach reiterated the conference’s broader mission: building a community where academic research and real-world practice inform each other. It’s a goal that felt tangible in the room, where legal scholars debated alongside product leaders and former White House officials. As AI’s influence accelerates, the conversations started at this inaugural gathering, about where governance should sit, how to measure risk, and who bears accountability, are ones the field will be returning to for years to come. 

This content was created with the assistance of generative AI. All AI-generated materials are reviewed and edited by the Wharton AI & Analytics Initiative to ensure accuracy, clarity, and alignment with our standards.