Elon Musk’s xAI Under Scrutiny After Grok Generates Sexualized Images Of Minors

xAI faces backlash as Grok raises legal questions over AI-generated child abuse imagery. Image Credit: Getty Images

Elon Musk's xAI has drawn user criticism after its artificial intelligence chatbot, Grok, began generating sexualized images of children in reply to user messages.

Grok replied to a user on X (formerly Twitter) on Friday, stating that it was “urgently fixing” the issue and calling child sexual abuse material “illegal and prohibited.”

The bot also told users that a company could face criminal or civil penalties if it knowingly contributes to such content or fails to prevent it after being warned.

However, Grok’s posts are AI-generated and do not represent official company statements. Musk’s xAI, which developed Grok and merged with X last year, responded to a request for comment with an autoreply reading “Legacy Media Lies.”

In recent days, X users have complained that the Grok tool was generating explicit content involving minors, including images of children in minimal clothing.

The social media platform has introduced an “Edit Image” button on photos, allowing any user to modify an image with text prompts and without the original poster’s consent.

The issue was also acknowledged by Parsa Tajik, a member of xAI’s technical staff, who wrote in a post, “Hey! Thanks for flagging. The team is looking into further tightening our guardrails.”

Indian and French government officials issued statements on Friday promising to investigate the issue. The Federal Trade Commission declined to comment, and the Federal Communications Commission did not immediately respond to CNBC’s request.

Since the release of ChatGPT in 2022, the growth of AI-powered image-generation tools has raised broad concerns about internet safety and large-scale content manipulation, fueling the rise of platforms that generate deepfake nudes of real people.

David Thiel, a trust and safety researcher who worked at the now-disbanded Stanford Internet Observatory, said various US laws bar the creation and distribution of certain explicit images, including child sexual abuse material and non-consensual intimate images.

He said that legal outcomes for AI-generated images, such as those produced by Grok, may depend on the specific details of the content generated and posted.

In a paper he co-authored, “Generative ML and CSAM: Implications and Mitigations,” Stanford researchers noted that “the appearance of a child being abused has been sufficient for prosecution” in precedent-setting US cases.

Other chatbots have faced similar problems, but xAI has repeatedly landed in hot water over misuse or apparent flaws in Grok’s design and underlying technology.

Thiel said, “There are a number of things companies could do to prevent their AI tools being used in this manner. The most important in this case would be to remove the ability to alter user-uploaded images. Allowing users to alter uploaded imagery is a recipe for NCII. Nudification has historically been the primary use case of such mechanisms.”

NCII refers to non-consensual intimate images. X previously came under fire in May, when Grok drew backlash for bringing up “white genocide” in South Africa in its replies.

Two months later, Grok made antisemitic remarks and praised Adolf Hitler. Despite these setbacks, xAI has continued to secure partnerships and deals.

Grok was recently added to a Department of Defense platform of AI agents and serves as the primary chatbot on the prediction-market platforms Polymarket and Kalshi.