Stop Telling AI Who to Be: Why Personas Don’t Improve Accuracy

Executives and operations leaders invest significant effort crafting AI prompts, and a common instinct is to assign the AI a persona: “act like a senior consultant,” “respond as a chemistry professor.” New research from the Wharton School’s Generative AI Labs, conducted by Lennart Meincke, Ethan Mollick, Lilach Mollick, Dan Shapiro, and Savir Basil, tests whether that instinct actually pays off in accuracy. The findings may surprise you.

Key Takeaways for Leaders Implementing AI

  • Telling AI to “be an expert” doesn’t make it more accurate. The researchers tested expert personas across six leading AI models and hundreds of graduate-level questions in science, engineering, and law. In the vast majority of cases and models tested, the expert label made no difference in whether the AI got the answer right (though one model, Gemini 2.0 Flash, was a notable exception).
  • The wrong persona can backfire. When a model was told it was an expert in one field but asked questions from a different field, accuracy sometimes fell. In some cases (specifically with Gemini 2.5 Flash), the model refused to answer at all, deciding it wasn’t qualified even though it actually had the knowledge to respond.
  • The less expertise you assign, the worse the output. Prompts framed around ignorance, like “you are a toddler” or “you are a layperson with no special training in this subject,” reduced accuracy across most models. The “toddler” prompt consistently hurt performance, while the effect of the “layperson” prompt varied by model.
  • Personas shape style, not substance. Where personas do add value is in tone and framing. Asking the AI to respond like a compliance officer versus a sales executive will change how it communicates, what risks it flags, and what opportunities it highlights. That’s useful, just don’t expect it to improve factual accuracy.
  • Better prompts focus on the task, not the title. Clear instructions, relevant context, and concrete examples do more to improve AI output than adding a role label. That’s where your prompting effort is best spent.
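The last takeaway can be sketched as two prompt templates side by side. Everything here is illustrative: the helper names, prompt wording, and example question are hypothetical, not taken from the study.

```python
def persona_prompt(question: str) -> str:
    """Persona framing: assigns a role but gives no task-specific guidance."""
    return f"You are a world-class expert. {question}"

def task_prompt(question: str, context: str, example: str) -> str:
    """Task framing: clear instructions, relevant context, a concrete example."""
    return (
        "Answer the question below.\n"
        f"Context: {context}\n"
        f"Example of the desired answer format: {example}\n"
        f"Question: {question}\n"
        "If key information is missing, say what else you would need."
    )

q = "Can our LLC deduct home-office expenses?"
print(persona_prompt(q))
print(task_prompt(
    q,
    context="Single-member LLC; the owner works from a dedicated room at home.",
    example="Yes/No, then the governing rule and one caveat.",
))
```

The second template gives the model material to work with (context, format, a fallback instruction); the first only relabels it.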

Real World Application

Short on time? Here’s the takeaway:

Major AI providers, including Google, Anthropic, and OpenAI, have all included persona prompting in their official guidance and documentation. The Wharton research team ran thousands of trials across six widely used AI models, with each question answered 25 separate times per model and prompt combination, producing a dataset large enough to draw reliable conclusions.

The results were consistent: expert personas produced no reliable improvement in accuracy. A notable risk also emerged when personas were mismatched to the task at hand. In several cases, models outright refused to answer questions when given an ill-fitting role, despite being fully capable of responding. The better investment is in writing clearer task instructions, giving the model relevant context, and testing outputs systematically.
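The “test outputs systematically” advice can be sketched as a tiny evaluation harness mirroring the study’s repeated-trial design. The `ask_model` stub below is a placeholder for a real model call; its 80% answer rate and the prompt wording are invented for illustration.

```python
import random
from collections import Counter

def ask_model(prompt: str, question: str, rng: random.Random) -> str:
    # Stand-in for a real model call; this stub answers "A" 80% of the
    # time regardless of the prompt, so both variants score the same.
    return "A" if rng.random() < 0.8 else "B"

def accuracy(prompt: str, question: str, correct: str,
             trials: int = 25, seed: int = 0) -> float:
    # Repeated-trial design: answer the same question many times under
    # one prompt and report the fraction answered correctly.
    rng = random.Random(seed)
    answers = [ask_model(prompt, question, rng) for _ in range(trials)]
    return Counter(answers)[correct] / trials

baseline = accuracy("Answer the question.", "Q1", correct="A")
persona = accuracy("You are a professor. Answer the question.", "Q1", correct="A")
print(f"baseline={baseline:.2f}  persona={persona:.2f}")
```

Swapping the stub for a real API call, and comparing accuracy per prompt variant across many questions, is the kind of systematic test the researchers ran at scale.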

This content was created with the assistance of generative AI. All AI-generated materials are reviewed and edited by the Wharton AI & Analytics Initiative to ensure accuracy, clarity, and alignment with our standards.

About Wharton AI & Analytics Insights

Wharton AI & Analytics Insights is a thought leadership series from the Wharton AI & Analytics Initiative. Featuring short-form videos and curated digital content, the series highlights cutting-edge faculty research and real-world business applications in artificial intelligence and analytics. Designed for corporate partners, alumni, and industry professionals, the series brings Wharton expertise to the forefront of today’s most dynamic technologies.