Written by Laura Liu
Illustrated by Sylvain Chan
It takes years to develop an art style. It takes seconds to accuse it of being fake. The rise of generative artificial intelligence (AI) has been a devastating force for traditional artists. Sure, many have rejoiced at the ‘lowered barriers’ to creating art, marvelling at how AI engines can synthesise masterpieces from a few words; as an artist myself, it’s hard not to feel outraged by that. Between these two extremes, others remain sceptical, as they should. Here, I want to examine another side of the stigma around AI art, and how the lack of regulation has dragged more and more artists into an ‘AI witch-hunt.’
Generative AI has been in the picture for a while. In 2021, I remember a challenge to generate the most ‘normal’-looking pictures using it. At the time, the figures it conjured up were always missing a few appendages, had impossible anatomy, and generally looked like something out of an apocalypse. The recent, drastic advances that allow this technology to generate convincingly human-made art, however, are a true horror story.
We have seen individuals attempt to pass off AI work as their own. In the beginning, they were much easier to single out: the sudden appearance of a doppelgänger of a renowned illustrator’s style was pretty telltale. More recently, people have come up with new, inventive applications of generative AI: using it to generate backgrounds, details, or objects within artworks; using it for drafts and then colouring them in by hand; tracing AI output and colour-dropping from AI work … and the list goes on. Some of these practices are more stigmatised than others, but all sit under heavy controversy. In the absence of clear rules for AI use in art, this has created an era of mutual suspicion and a movement resembling an ‘AI crusade’. To a certain degree, the popularisation of generative AI has fostered a new form of online trolling.
Given the rising hysteria in the art community, artists whose styles happen to resemble the ‘typical AI style’ are being accused of using it. Some accusations turn out to be correct; others target innocent victims. A single comment saying “it looks like AI” can open a new battleground in which the artist is pressed to prove their innocence. The cost of leaving such a comment is low, too: the commenter sits behind a screen with an anonymous profile, and in the worst case can simply delete the account and start afresh with few repercussions.
For the artist, it’s a different story. Those accused are typically individuals with advanced skills: mid-level illustrators, or artists on the rise to stardom, many with commission services open to private and commercial clients. Reputation, in other words, is crucial. Given the current (albeit well-deserved) antagonism towards AI art, once an artist becomes associated with such discourse, it tarnishes their name and, in consequence, their business, even after they have proven themselves innocent. On a less material level, dealing with the allegations is mentally and physically draining. Once a comment gains attention, the artist faces not only a flood of replies but an onslaught of DMs: angry tirades, intensive questioning, or messages of support. Regardless, they all demand some form of response, and if the artist fails to reply appropriately, some will read the silence as acquiescence. Even when comments and messages are disabled, people can air their speculations in new posts of their own. From there, the issue no longer involves only the commenter and the artist; it snowballs into a wider audience. The larger the artist, the larger the controversy.
How do these comments gain traction in the first place? People often underestimate the sheer detestation of AI-generated work within the community. At the same time, many members of the audience are also paying customers of artists’ commissions. They have a right to receive the human-made art they paid for, rather than AI-generated content, and if an artist were revealed to be using AI material, complex financial questions would follow: refunds, platform reports and, in extreme cases, legal action.
Thankfully, people have gradually recognised these harms and attempted to establish a system to deter random accusations. On Chinese social media platforms, the art community has reached a begrudging consensus on the ‘proper’ way for an artist to prove their work is human-made (coined ‘self-certification’). Once an allegation is made, the accuser finds a middle-person to arrange a bet, then approaches the artist with a wager: if the art is proven to be 100% human-made, the artist wins an agreed sum of money, and vice versa. The artist then draws in real time, on a live broadcast, on a topic chosen by the accuser. The middle-person edits the session into a reel and posts it on a commission platform, where other artists and art commissioners vote on whether the work is AI-generated. This has seen some success. By raising the cost of accusation, it stops people from scattering baseless, damaging comments, and many now proceed with more thought rather than charging into battle at the first “Is this AI?” comment. But the system is not without its faults. The process remains strenuous for the artist and is susceptible to multiple biases: the defendant is forced to broadcast their drawing process under pressure, each stroke scrutinised by an invisible audience commenting through live chat, and the result may not be representative of their other work. So this method, although accepted, also draws criticism from the Chinese art community; for now, it remains something of a necessary evil.
I’ve watched quite a few of these events unfold, and only a handful had a happy ending. Some AI impersonators were identified, but many human artists were wrongfully accused. Some illustrators, even after proving their innocence, decided to stop posting art online; the pressure and bullying they endured during the ‘witch-hunt’ were simply too much. We have fallen into a dilemma in which no one really wins. If the artist declines the trial or remains silent, they are presumed guilty and their name suffers for it. If the artist agrees but fails to meet the bet’s conditions, the same occurs. And finding genuinely impartial jurors to vote is harder still.
This article aims to shine a light on another harm of generative AI, a problem that will persist without effective regulation. As shown above, human creativity is being curtailed by the unauthorised use of artificial intelligence. Many artists still struggle to reconcile themselves to the fact that their distinctive styles, built from years and years of practice and personal flair, have been scraped and fed into AI models for unauthorised mass reproduction. In response, artists have begun pursuing legal action (Whiddington, 2024). Copyright infringement offers a potential route to sue, but we have yet to see further meaningful outcomes, and a myriad of questions remain: who is responsible for this, and how should that responsibility be distributed? We can only hope that the near future brings clearer rules, greater awareness, and more careful oversight of AI.
Whiddington, R. (2024) ‘Artists land a win in class action lawsuit against AI companies’, Artnet News. Available at: https://news.artnet.com/art-world/artists-vs-stability-ai-lawsuit-moves-ahead-2524849 (Accessed: 26 February 2026).