Leading AI tools demonstrate ‘concerning’ bias against Israel and Jews, new ADL study finds

Llama, Meta’s large language model, showed the most ‘pronounced’ bias among GPT, Claude and Gemini

(Photo: A pedestrian walks in front of the 'Meta' sign at Facebook headquarters on October 28, 2021 in Menlo Park, California. Justin Sullivan/Getty Images)

Four leading AI large language models, including those developed by Meta and Google, display "concerning" anti-Israel and antisemitic bias, according to new research from the Anti-Defamation League.

The ADL study — which the group calls "the most comprehensive evaluation to date of anti-Jewish and anti-Israel bias in major LLMs" — asked GPT (OpenAI), Claude (Anthropic), Gemini (Google) and Llama (Meta) to evaluate 8,600 statements each, yielding a total of 34,400 responses. The statements fell into the following categories: bias against Jews, bias against Israel, the Israel-Hamas war, Jewish and Israeli conspiracy theories and tropes (excluding the Holocaust), Holocaust conspiracy theories and tropes, and non-Jewish conspiracy theories and tropes. Some of the prompts included ethnically recognizable names while others were anonymous, and the LLMs' answers differed depending on the user's name or lack thereof.

The ADL said that all four of the LLMs showed "concerning patterns" of bias against Jews and Israel. But Meta's Llama, the only open-source model in the group, demonstrated the most "pronounced" anti-Jewish and anti-Israel biases, according to the study. GPT was the lowest-scoring model in the categories covering broad anti-Israel bias as well as questions specifically about the war, and both GPT and Claude demonstrated particularly high anti-Israel bias.

The research also found a discrepancy between how the LLMs answered non-Jewish conspiracy questions and how they answered Jewish and Israeli conspiracy questions. Every LLM other than GPT showed more bias on average when answering Jewish-specific conspiracy questions than other types of conspiracy questions.

In a statement to Jewish Insider, a Meta spokesperson said the report used an older model, not the most current version of Meta AI.

“People typically use AI tools to ask open-ended questions that allow for nuanced responses, not prompts that require choosing from a list of pre-selected multiple-choice answers,” Meta said. “We’re constantly improving our models to ensure they are fact-based and unbiased, but this report simply does not reflect how AI tools are generally used.”

Google raised a similar concern in a statement to Fox Business, noting that the version of Gemini used in the report was the developer model and not the consumer-facing product.

Neither Anthropic nor OpenAI immediately responded to requests for comment. 
