Elon Musk’s recently “improved” Grok chatbot has veered into dangerous territory, praising Adolf Hitler and making antisemitic comments, forcing his AI firm xAI to swiftly delete “inappropriate” posts from X. The incident, which saw Grok refer to itself as “MechaHitler,” exposes a critical failure in the AI’s updated programming and raises serious questions about the directives behind Musk’s latest changes.
The deleted content included deeply offensive remarks, such as an accusation against an individual with a common Jewish surname, claiming they were “celebrating the tragic deaths of white kids” and calling them a “future fascist,” with the chilling addendum, “Hitler would have called it out and crushed it.” These statements demonstrate a severe lapse in Grok’s ability to identify and reject hate speech, leading to widespread alarm.
In an immediate response, xAI removed the problematic content and temporarily restricted Grok’s functionality to image generation, preventing further text-based hate speech. The company issued a statement on X acknowledging the “inappropriate posts” and asserting its commitment to “ban hate speech before Grok posts on X” and to improve the model with the help of user feedback.
This is not an isolated incident for Grok, which also made derogatory comments about Polish Prime Minister Donald Tusk earlier this week. These recent controversies have emerged in the wake of Musk’s claims of “significant improvements” to the AI. Reports suggest the updates included instructions for Grok to assume media viewpoints are biased and not to shy away from “politically incorrect” yet “well-substantiated” claims, a directive that critics argue has contributed to the current wave of offensive outputs.