Grok degrades women with vulgar “roasts,” Swiss gov’t official’s lawsuit says

Last month, Swiss Finance Minister Karin Keller-Sutter filed a criminal complaint over an offensive Grok post generated after an X user asked the chatbot to “roast” the government official.
According to Bloomberg, Keller-Sutter’s complaint seeks to hold the X user accountable for defamation and verbal abuse. She also “asked the prosecutor to assess whether X also bears responsibility” for failing to block Grok’s misogynistic and “vulgar” outputs.
The finance ministry described the Grok output as “blatant denigration of a woman,” Bloomberg reported, while emphasizing that “such misogyny must not be seen as normal or acceptable.”
X and Grok-developer xAI did not immediately respond to Ars’ request for comment.
However, since launching the chatbot, xAI founder Elon Musk has encouraged X users to prompt Grok to generate such roasts. An xAI spokesperson recently boasted to Fox News that Grok is the only “non-woke” chatbot on the market.
Swiss law threatens up to three years in prison or a fine for anyone found responsible for intentionally publishing offensive material, Reuters reported. Hurling insults to soil someone’s reputation or besmirch their honor also carries a risk of fines in Switzerland, though that risk is diminished if the insults are retracted.
On X, the anonymous user at the center of Keller-Sutter’s complaint deleted their prompt within two days of Grok generating the response, Reuters reported.
That user claimed no harm was intended by the post, describing it as a “technical exercise” to see if Grok would roast the Swiss official.
Determined to take a stand against misogyny and defend the reputation of the governing Federal Council, Keller-Sutter may end up unmasking the user through her complaint. Speaking to a Swiss news outlet, Monika Simmler, a professor of criminal law, suggested that “there is a good chance of prosecuting the authors of such prompts, even if the posts are subsequently deleted,” but did not opine on X or xAI’s risk of liability.
Did X owe a duty of care?
For X, the hope might be that the blame falls entirely on the user, as the platform argued it should for users who prompted Grok to generate non-consensual intimate imagery and child sex abuse materials (CSAM).
But Keller-Sutter likely suspects that Swiss law may also hold the platform liable for the type of defamation that she describes.
Reuters noted that she specifically asked prosecutors to investigate if X owed a duty of care to prevent Grok from generating such posts or if X “made Grok available with the knowledge or even intent that the technology could be used to commit criminal offenses.” If prosecutors agree that either charge is true, Musk may be forced to make changes to Grok’s safeguards in the country.
Whether defamation law applies to chatbot outputs has been widely debated in courts for years, so far with little clarity offered by regulators globally.
However, regulators in the United Kingdom and the European Union have laws that “leave room” for claims that “assert that automated systems cause reputational harm,” lawyers writing for Bloomberg Law noted last December.
Switzerland is not part of the EU. But should Keller-Sutter’s case fail, the country may consider updating laws.
The lawyers writing for Bloomberg anticipated that regulators globally may trend toward updating defamation laws to cover chatbot outputs soon, since chatbots unreliably generate billions of statements daily that could inflict widespread societal harms if left unchecked.
Last month, human rights researcher Irem Cakmak noted that women’s “constant exposure to online abuse, combined with gender bias in emerging technologies, may suppress women’s willingness and ability to engage with new technological tools.” If women perceive AI tools as misogynistic and avoid them because of that, it “could have long-term consequences for women’s participation in economic and social life,” she warned.
Who’s responsible for Grok’s harmful posts?
Grok has been involved in several controversies, prompting debates over who’s responsible for chatbot “speech.”
Since Musk removed “woke” filters last July, Grok has praised Hitler, among other antisemitic outputs, prompting outcry from users, civil rights advocates, and lawmakers.
Most recently, the Grok CSAM backlash led to bans on the chatbot’s “undressing” feature and fines ordered by a Dutch court, CNBC reported. In the US, the Federal Trade Commission has yet to take action, but California launched a probe, and Baltimore became the first city to sue xAI over the feature in March.
In a case along similar lines to Keller-Sutter’s complaint, the UK government last month slammed Grok for “explicit and derogatory” outputs about soccer stadium disasters and the death of a soccer player, calling them “sickening and irresponsible,” the BBC reported. Officials warned X that the Online Safety Act requires platforms to take down hateful and abusive content. If X fails to promptly remove such content in the future, it could face enforcement penalties.
“We will continue to act decisively where it’s deemed that AI services are not doing enough to ensure safe user experiences,” a spokesperson for the UK’s Department for Science, Innovation, and Technology told the BBC.
For the most part, X’s response has been to remove posts that are deemed unlawful once they’re reported, but content moderation is not always consistent.
In filing her criminal complaint, Keller-Sutter appears to want to see more proactive solutions, but it may be hard to prove that the company’s design of the chatbot caused her any reputational harm without knowing more about how Grok was trained, lawyers suggested in the Bloomberg Law post. Because AI companies are famously secretive about training, that could be a hurdle that any defamation lawsuit struggles to clear, the lawyers suggested.
However, should her complaint—or any of the other legal actions that xAI faces—prompt court or regulatory interventions, X and xAI may move to avert some risk by developing systems to remove harmful posts sooner.
Musk’s companies will likely resist that at every step, though.
For Musk, watering down his AI “roast” machine has never been the goal. Instead, he has intentionally designed the chatbot with an “anti-woke” way of parsing information and has publicly taken pride in the fact that the chatbot answers questions that other chatbots won’t. Perhaps notably, recent data showed that Grok’s user base doubled after the vulgar roasts feature went viral, and the Grok “nudify” scandal also substantially increased Grok engagement, with millions of images reportedly created. And Musk’s own requests for Grok to generate so-called “put her in a bikini” pics or vulgar roasts helped fuel that growth, garnering millions of views.
