Manchester United and Liverpool pressured X to remove AI Grok posts referencing club tragedies, raising concerns about AI responsibility and online safety regulations.
Two English soccer giants, Manchester United and Liverpool FC, have successfully pressured the Elon Musk-owned social media platform X (formerly Twitter) to remove several posts generated by the AI chatbot Grok. The content sparked a strong backlash after it referenced tragic events linked to the history of both clubs.
The controversy emerged after anonymous users reportedly asked Grok—an AI tool developed by xAI—to generate posts intended to deliberately offend supporters of Manchester United and Liverpool. The prompts allegedly encouraged the AI to create messages specifically designed to provoke fans.
After the posts circulated online and triggered official complaints from the two Premier League clubs, they were removed from X later the same day.
AI Grok Referenced Football Tragedies
The posts generated by Grok reportedly referenced several tragic events that remain deeply sensitive within the English soccer community.
Among them was the Munich Air Disaster of 1958, in which eight Manchester United players, as well as club staff, were killed.
Another event mentioned was the Hillsborough Disaster in 1989, a stadium crowd crush that claimed the lives of 97 Liverpool supporters.
The posts also referenced the death of Liverpool forward Diogo Jota last summer, further intensifying criticism of the AI-generated content.
“Tragedy Chanting” Culture in Football
In English soccer culture, mocking tragedies connected to rival clubs is known as “tragedy chanting.” The practice has long been condemned but has persisted for decades.
Historically, such abuse appeared in stadium chants, graffiti, and fan rivalries. Social media has amplified the problem, allowing offensive messages to spread quickly and anonymously online.
As the two most successful clubs in English soccer history, Manchester United and Liverpool have frequently been targets of such behavior.
Calls From Managers to End the Abuse
Back in 2023, the managers of the two clubs at the time—Erik ten Hag of Manchester United and Jürgen Klopp of Liverpool—issued a joint statement condemning tragedy-related chants.
Ten Hag said it was unacceptable to use the loss of life associated with any tragedy to score points against rivals.
Klopp added that while passionate atmospheres are part of football, chants that cross the line and disrespect victims have no place in the sport.
UK Government Raises Concerns Over AI Responsibility
The incident also drew criticism from British lawmakers. Ian Byrne, Member of Parliament for Liverpool West Derby, described the posts as “appalling and completely unacceptable.”
He questioned how such content could be generated by AI on a major platform and warned that it would leave many fans shocked and disgusted.
Byrne emphasized that technology companies must ensure their tools are not used to produce or amplify abusive content.
Online Safety Act and AI Regulation
The UK government introduced the Online Safety Act in 2023 to regulate harmful online content and address risks linked to emerging technologies such as AI.
Under the legislation, spreading threatening or abusive communications online can constitute a criminal offense. AI services, including chatbots whose output can be shared publicly, are required to prevent illegal or harmful material from appearing on their platforms.
A spokesperson from the Department for Science, Innovation and Technology described the Grok posts as “sickening and irresponsible,” stating they contradict British values and standards of decency.
The controversy highlights growing concerns about how artificial intelligence tools can be misused to generate offensive or harmful content. While Manchester United and Liverpool succeeded in pushing for the removal of the posts, the incident has intensified debate over the responsibilities of AI developers and social media platforms.
