
From Oversight to Advantage: Governing AI with Confidence

Speakers offer their thoughts at Penn's AI Governance Workshop

For firms deploying AI, the tech’s upside is real: operational efficiencies, enhanced insights, and new capabilities. However, the risks around bias, data quality, and model accuracy aren’t just technical issues — they are business, legal, and reputational liabilities. Ignoring them can mean regulatory exposure, customer distrust, or brand damage.

Kevin Werbach, professor and Chair of the Department of Legal Studies and Business Ethics at the Wharton School, and faculty lead of the Wharton Accountable AI Lab, is urging executives to adopt a posture of practical accountability. “We get so focused on potential that we often forget to ask the hard questions,” he says.

In conversation with the Wharton AI & Analytics Initiative, Werbach and other thought leaders emphasized a critical shift in mindset: away from unchecked enthusiasm and toward a framework-driven, ethically grounded approach to implementation.

Governance Isn’t Optional — It’s Your Safety Net

Leading firms are no longer asking, “Should we govern AI?” They’re asking, “How do we do it well?” Step one is building a clear inventory of the AI systems in use across the business, including unofficial tools adopted by individual employees. Step two is embedding governance in day-to-day operations using emerging industry frameworks such as NIST’s AI Risk Management Framework or ISO/IEC 42001.

Industry implication: Start with an “AI governance hygiene check.” Build workflows and accountability around how tools are evaluated, implemented, and monitored — just like cybersecurity or financial controls.
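For illustration, here is a minimal sketch of what one entry in such an AI inventory might look like in code. All field names and risk tiers are hypothetical, not drawn from any standard; the NIST AI RMF function names referenced in the comments (GOVERN, MAP, MEASURE, MANAGE) are real, but how a firm maps systems to them is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record for one AI system in use at a firm.
# Field names and risk tiers are illustrative only.
@dataclass
class AISystemRecord:
    name: str                    # e.g., "Invoice-matching model"
    owner: str                   # accountable business owner
    purpose: str                 # what the system is used for
    vendor_or_internal: str      # "vendor" or "internal"
    officially_sanctioned: bool  # False captures unofficial "shadow AI" tools
    risk_tier: str               # e.g., "low" / "medium" / "high"
    framework_mapping: list[str] = field(default_factory=list)  # e.g., NIST AI RMF functions
    last_reviewed: date | None = None

# Example entry, including an unofficial tool surfaced during the inventory:
inventory = [
    AISystemRecord(
        name="ChatGPT (ad-hoc drafting)",
        owner="Marketing",
        purpose="Copy drafting by individual employees",
        vendor_or_internal="vendor",
        officially_sanctioned=False,
        risk_tier="medium",
        framework_mapping=["MAP", "MEASURE"],  # NIST AI RMF function names
    ),
]

# A simple hygiene check: flag systems that are unsanctioned or never reviewed.
flagged = [s.name for s in inventory if not s.officially_sanctioned or s.last_reviewed is None]
print(flagged)
```

The point of the sketch is the workflow, not the tooling: once every system has an owner, a risk tier, and a review date, the same evaluate-implement-monitor discipline used for cybersecurity or financial controls can be applied to AI.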

“Reasonable Care” Is Becoming the Legal and Competitive Benchmark

Courts and regulators are beginning to treat AI like any other source of business risk. You don’t need perfection — but you do need to show that your firm acted with “reasonable care.” That means understanding risks, using known best practices, and making context-sensitive decisions. Ask yourself: “How would a reasonable person handle this situation?”

Industry implication: Use the reasonable care standard as a strategic opportunity, not just a legal defense. It gives your firm the flexibility to design AI systems that fit your goals and industry realities while still limiting legal exposure.

Boards Are Asking, But Not Always Acting

Pippa Begg, CEO of Board Intelligence, brings a complementary view from the field. Her firm, which offers technology and advisory services to improve board effectiveness, works with tens of thousands of board members, and its recent polling found that 83% of boards do not feel prepared to harness AI effectively.

“Interestingly, many boards are actually stronger on AI as a governance issue than as a strategic or performance one,” she notes. “They’re defaulting to the risk lens because it feels safer. But that can create paralysis.”

Industry implication: A purely defensive focus on governance may keep you out of trouble, but it risks leaving you behind competitors with a stronger appetite for AI adoption.

Bridging the Confidence Gap

So, what will it take for boards to shift from cautious observers to confident stewards of AI? For Begg, the answer lies in structured education, paired with cultural nudges. “Board members love an expert speaker over dinner,” she jokes—but behind the levity is a serious call to action.

She argues that upskilling board members—particularly through digital learning options and industry peer settings—should be treated as a requirement, not a luxury. “Without a foundational understanding of how large language models work, for example, it’s impossible to ask the right questions about risk, bias, or data exposure.”

Werbach echoes this concern: “AI’s risks and limitations shouldn’t surprise anyone paying attention. But if you’re only hearing about them in crisis moments, you’re already behind.”

Industry implication: Work with board members and other decision-makers to determine how best to get them up to speed on AI tools, strategy, and risk.

Automation Is Not a Strategy — Augmentation Is

Using AI to cut staff might offer short-term savings, but it’s often a strategic misstep. “There are countless use cases where AI can enhance human capacity, not eliminate it,” says Begg. Werbach agrees. “You need to understand the nuance. Replacing headcount might seem efficient, but you lose tacit knowledge and agility. That might cost you more in the long run. Ultimately, the question isn’t ‘can we automate this?’ It’s: ‘how do we do what we do better—with people and technology working in tandem?’”

Industry implication: Firms that think creatively about how humans and AI can work together will be better positioned for sustainable, long-term advantage — especially as workforce expectations evolve.