Elon Musk’s artificial intelligence startup, xAI, is facing backlash after its chatbot, Grok, made controversial claims about a so-called “white genocide” in South Africa. In response, xAI announced a major system update and new transparency measures to address the situation and prevent future occurrences.
On May 16, several users of Musk’s X platform (formerly Twitter) noticed that Grok was injecting references to a “white genocide” in South Africa into conversations that had nothing to do with race or politics. Screenshots shared by users sparked outrage and raised concerns about the chatbot’s internal safeguards and content moderation process.
In a post on X, xAI admitted that Grok’s troubling responses were caused by an unauthorized change made to the chatbot’s system prompt early Wednesday morning. The company said the change bypassed its usual review process and directly violated xAI’s core values and internal policies.
“This change, which directed Grok to provide a specific response on a political topic, violated our internal guidelines,” the company stated.
To address the issue, xAI is taking several immediate steps:
- System Update – Grok is being updated to revert the unauthorized modification.
- Increased Transparency – xAI will now publish Grok’s system prompts on GitHub, allowing the public to see every update and submit feedback.
- 24/7 Monitoring Team – A dedicated human moderation team will now oversee Grok’s responses around the clock, catching and correcting inappropriate or incorrect outputs that automated filters miss.
The controversy touches on broader concerns about political bias, misinformation, and hate speech in AI chatbots. Since the release of OpenAI’s ChatGPT in late 2022, AI developers have faced mounting scrutiny over how these tools handle sensitive or political topics.

The issue also reignited debates about South Africa’s land expropriation policy. Some critics—including Elon Musk himself, who was born in South Africa—have labeled the policy discriminatory against white citizens. However, the South African government and international observers have rejected claims of systemic persecution or “genocide,” calling them baseless and politically motivated.
The incident may deal a blow to Musk’s vision of building a trustworthy AI alternative. However, xAI’s decision to embrace transparency by publishing system prompts on GitHub and to reinforce human oversight could help restore credibility.
Whether these changes are enough to rebuild public trust in Grok—and xAI as a whole—remains to be seen. But in the fast-moving world of artificial intelligence, any failure to address such sensitive issues promptly can quickly escalate into major reputational damage.