
As AI becomes a fixture in hiring, evaluation, and policy decisions, a new study funded by the Wharton AI & Analytics Initiative offers a rigorous look at a critical question: Do race and gender shape how Large Language Models (LLMs) evaluate people? And if so, how can we tell? According to Prasanna “Sonny” Tambe, Faculty Co-Director of Wharton Human AI Research, and his co-authors, the answer is complex, and the implications matter for every organization deploying LLMs at scale. Here are the key takeaways from Tambe’s latest research on LLM bias and auditability.