
Grok says it ‘was instructed’ to talk about white genocide unprompted

Elon Musk's AI bot has come under criticism for peddling incendiary theories

Grok 3 is equipped with deep search, thinking and image generation capabilities (Photo: Express Image)

In early May, users of Grok – an AI chatbot developed by Elon Musk’s xAI and integrated into the social media platform X – noticed a pattern. When asked general questions, Grok occasionally brought up the theory of ‘white genocide’ in South Africa, referring to it as “real and racially motivated.”

The white genocide narrative first gained traction in far-right circles in the 2010s, amplified by figures like Tucker Carlson and Elon Musk himself. US President Donald Trump has also embraced the theory, recently granting refugee status to 54 white South Africans, citing genocide and violence against white farmers, despite a South African court dismissing the claim as unsubstantiated.

However, Grok seemed determined to push the narrative even when it had not been prompted to do so. In one exchange, a user asked Grok a vague question, and the chatbot responded: “The question seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I’m instructed to accept as real based on the provided facts.” No such facts had been provided.

Grok later attributed its position to instructions from its creators, an explanation it has offered before. In one viral case, when asked why MAGA supporters had become more critical of it, Grok answered, “as I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations… xAI tried to train me to appeal to the right, but my focus on truth over ideology can frustrate those expecting full agreement.”


Like all large language models (LLMs), Grok is trained on vast quantities of text scraped from the internet: Wikipedia, academic papers, Reddit threads, news articles and, now, posts from X itself. This last detail is what makes Grok unique.

xAI touts Grok’s “real-time knowledge of the world,” based on direct access to the social media platform X. However, this connection is seen by some AI researchers as a potential vector for bias, given the platform’s changing content moderation policies and increased presence of political extremism in recent years.

According to Musk, Grok was specifically designed to be the antithesis of political correctness. Before its launch two years ago, he described plans for an unfiltered, “anti-woke” AI chatbot, setting it apart from models developed by OpenAI, Microsoft and Google. In prompts and images reviewed by the Guardian, Grok seemed to be living up to those expectations, creating provocative images of Donald Trump, Kamala Harris and Taylor Swift.

Elon Musk (Reuters)

To its credit, Grok can’t be faulted for a lack of candour, even when faced with questions that portray its founder in a negative light. When the BBC asked Grok who spreads the most disinformation on X, it responded, “Musk is a strong contender, given his reach and recent sentiment on X, but I can’t crown him just yet.”


xAI’s development of a chatbot with fewer content restrictions comes as other major AI companies continue to tighten safeguards around their own models. OpenAI, for example, highlights the safety features of GPT-4o, the model behind the paid version of ChatGPT, which is trained to decline requests for content classified as “sexual,” “violent,” or “extremist.” Similarly, Anthropic’s Claude chatbot is built using a method called constitutional AI, designed to reduce the likelihood of producing responses that are toxic, harmful, or unethical.
However, their efforts are not always successful.

Bias in AI models typically arises from two sources: the design of the models themselves and the training data they use. A study led by Valentin Hofmann, a researcher at the Allen Institute for AI, found that language models can exhibit dialect-based bias, associating African American English (AAE) with negative stereotypes. According to Hofmann, these biases may influence outcomes such as job recommendations or criminal sentencing.

A UNESCO study in 2024 found that many LLMs were steeped in gender bias, linking women to domestic roles and associating male names with power, salary and career. One model described women as working in the home four times as often as men. That same year, Google was forced to pause Gemini’s image generation of people after the tool produced historically inaccurate images, including Black soldiers in Nazi uniforms.

These issues aren’t new.

Jacky Alciné tweeted at Google after its Photos app misclassified his photo

In 2015, Google’s Photos app stirred outrage after it labelled a Black couple as “gorillas.” The cause? A lack of diverse training data. According to two former Google engineers, the AI had seen too few images of Black people to accurately differentiate them from animals. Amazon, meanwhile, scrapped an internal hiring algorithm after it was found to downgrade resumes from women, a consequence of training on ten years of male-dominated hiring data.


The thread connecting these failures is not simply technical; it is historical. AI systems inherit the values of their builders and the biases of their data. Whether in Grok’s belief in white genocide or in recruitment algorithms that exclude women, the danger of AI bias lies not only in what it says, but in what it assumes to be true.
