e621:bladerunning (locked)
AI art just doesn't stink the way it used to. Gone are the days when people were posting yiffy-e18 generations that were visibly wrong, and when ChatGPT-4o was making everything yellow. Somebody must know how to identify AI usage in art.
Table of Contents
- The Bladerunning Process, Explained
- Bladerunning/AI Art Investigation FAQ
- How exactly do e621's Bladerunners detect AI art?
- Which AI detector tool should I use?
- Why were some of this artist's posts deleted as AI, but not all of them?
- Why not just mark these artists as "Avoid Posting"?
- Why not talk to these artists before deleting their posts?
- I saw sketches and WIPs from an artist accused of using AI. Doesn't this mean they're innocent?
- I saw evidence or testimony from an artist I'm not related to. What do I do?
- I saw an artist admit to AI art usage. What do I do?
- Art I made or commissioned was deleted in error. What do I do?
- What kinds of submissions are accepted during appeals?
- How do I learn more about completed Bladerunning cases?
- My case was handled incorrectly. Who do I talk to next?
The Bladerunning Process, Explained
Bladerunning is e621's process for investigating the use of AI image generation in art. It is a multi-stage process: the presence of certain AI tells is determined via technical and stylistic heuristics (in rare cases, AI usage is proven outright because e621 staff already have the specific source image), and a strategy is formed for later interviews with appellants.
The evidence for AI usage in art is rarely perfect, as tracing a generated image removes a lot of the most decisive evidence with very little effort required. Unless the original image was made available separately (a commissioner received it from the artist, the artist is tracing an AI image that was already available online, etc.), guilt must be established through means such as forensic analysis and behavioral psychology.
To this end, members of e621's staff develop investigative techniques by reviewing undisguised AI-generated images from various online sources (deleted e621 posts, galleries that only allow AI images, appellants for whom guilt has been decisively proven, etc.), reviewing behaviors exhibited in completed Bladerunning cases, and researching technological developments related to AI generation of various files.
In cases where an artist's guilt is confirmed via interview, pre-interview evidence makes innocence practically impossible (e.g. going straight from stick-figures to fully-rendered furries, or accidentally leaking their AI-generated images), or other circumstances make appeals impossible (e.g. it is illegal for them to access the site to contact us), convicted artists will later have a writeup provided on their artist wiki, explaining Bladerunners' approach to the investigation and certain elements of the interview.
Bladerunning/AI Art Investigation FAQ
↑ Q: How exactly do e621's Bladerunners detect AI art?
Explaining how AI assistance is identified in the work of an artist who won't admit to it makes future investigations more difficult. Even if a person is caught lying and held to account, their failure sets an example for other AI artists. Consequently, it would be against e621's interests to explain precise details about the investigation process. Even in the interview process, explaining certain things has the potential to irreversibly alter all future evidence submissions.
If an artist is accused based on parts of their coloring, an innocent artist must honestly demonstrate the ability to create that coloring, and anything that would have them conclude "I should stop coloring like this" makes it impossible to prove their innocence.
In this same scenario, a guilty artist doesn't need to prove the legitimacy of their art, and can merely complete a checklist. They can record a reasonable-looking process that includes the suspicious coloring element, present it as proof that they don't need AI assistance, and continue without changing anything.
Those who are interested in learning to identify AI art have little choice but to learn by exposure. Naive checks that once worked around 2022, such as always checking the hands and eyes, are expected to consistently fail against basic tracing. To users who want to learn this ability, the Bladerunner team suggests browsing images labeled as AI by their creators (e.g. DeviantArt tags, Pixiv tags by the artist), images submitted to AI-only sites, and more.
Hundreds of millions of samples exist that have been disclosed as AI-generated or AI-assisted by their creators, so these can be safely studied in order to identify certain artistic and technical elements that are common to AI art but not human art. With enough time and effort, one can develop a multifaceted approach that catches everything ranging from raw AI image generations, to high-quality traced images, to AI coloring on human lineart.
While some services claim the ability to perceptually detect AI image generation, Bladerunners do not believe this technology truly exists. Services such as sightengine, isitai, etc. seek to extract profit from their users by offering to do the hard work for them and deceptively validating the user's preconceived notions. Certain techniques can also reduce the confidence of AI detection services without actually changing the image's origin, such as replacing colors or removing noise.
For these reasons, it is impossible to trust AI detection services when determining the validity of art. Even human text analysis is a guess at best. It is often joked that AI overuses em-dashes or emoji, and AI chatbots can be prompted to write single-word answers, but this doesn't mean that every instance of "Yes." or "No." is AI-generated text.
Detection confidence only increases when the volume and quality of sample data increases.
↑ Q: Which AI detector tool should I use?
None of them.
It is impossible for any website or software to correctly detect AI usage. Watermarks from sources like Google Gemini can be removed by reconstructing or editing the image. Simple checks can be defeated by tracing AI images. One could even go so far as to AI-generate a fake 3D-styled composition, and then make it for real with existing character models.
- post #59186 was detected as AI by "wasitai" (highest AI rating). Its creation date makes this impossible.
- post #1591714 was detected as AI by "wasitai" (highest AI rating), "QuillBot" (72%), "ZeroGPT" (97%), "DeepAI" (97%), and "TruthScan" (97%). Its creation date makes this impossible.
- post #3135698 was detected as AI by "wasitai" (highest AI rating), "QuillBot" (87%), "DeepAI" (80%), and "TruthScan" (80%). Its creation date makes this impossible.
- post #5837052 was cleared as human-made by "wasitai" (highest non-AI rating), "ZeroGPT" (98% human), "QuillBot" (95% human), "DeepAI" (98% human), "TruthScan" (98% human), and "Decopy" (100% human). It is traced from a specific AI image known by e621 Bladerunners.
- post #5889387 was cleared as human-made by "ZeroGPT" (92% human), "Hive Moderation" (99% human), "DeepAI" (92% human), "TruthScan" (92% human), and "Decopy" (76% human). It is traced from a specific AI image known by e621 Bladerunners.
- Submissions from specific AI artists will often register as having a single-digit chance of being AI-generated.
A lot of these services also offer "humanizers", plagiarism checkers, study and school services, etc., simultaneously facilitating and detecting image or text fraud. It is in their best interest to give false answers rather than admit the technology does not exist to consistently detect these things.
↑ Q: Why were some of this artist's posts deleted as AI, but not all of them?
In cases where an artist's new work lacks certain suspicious elements or its origin is well-proven (particularly by predating their adoption of AI), it may remain archived on e621 even if the artist was previously convicted by Bladerunners.
As Bladerunners are only a subset of e621's staff, Janitors and Admins may also approve certain posts without checking in with Bladerunners.
In some other cases, leaving certain posts undeleted may simply be a mistake by the acting Bladerunner. e621's mass-deletion tools such as takedowns are ill-suited to precisely separating human art from AI art. In the case of Bladerunning deletions, this requires manual review.
In some other cases, a post's legitimacy is proven by making a high-quality recording of the art process, but the artist is still convicted because they failed to reproduce the AI-like traits that were found in the rest of their gallery.
If you think a post from a suspicious artist was approved in error, you may wish to speak to a Bladerunner by following the line of escalation detailed in "My case was handled incorrectly. Who do I talk to next?"
↑ Q: Why not just mark these artists as "Avoid Posting"?
AI deletions are not solely a punitive matter; in some cases, deletion is a way to get a follow-up from an artist without leaving the contested posts available for public display or comment.
Even in cases where guilt is 100% confirmed, an artist's pre-AI works are automatically free of suspicion. Deleting posts for being connected to an AI artist, even if AI usage is known to be impossible in these cases, simply would not be appropriate.
↑ Q: Why not talk to these artists before deleting their posts?
The standard of proof for starting an investigation is much lower than the standard of proof for convicting. It is best if AI artists are not given information about how the investigation against them is proceeding, and an innocent artist being contacted is not ideal.
If AI artists were contacted in advance with proof insufficient for a conviction, they could abandon that account entirely to continue anew elsewhere, and hope that e621's Bladerunners would not find enough proof to convict later. Several AI artists already exist who use this practice, changing their alias whenever they draw too much heat to continue soliciting commissions or subscriptions elsewhere.
If innocent artists were contacted in advance with proof insufficient for a conviction, this would tell them that something in their gallery currently looks suspicious. They could change things that don't need to be changed. If a Bladerunner's interference unacceptably altered the course of an investigation like this, it would be incredibly difficult for Bladerunners to lay out basic facts about the case.
The existence of third-party uploaders would also throw a wrench into any plan to inquire with a suspect. Somebody who cannot verify their identity, either by submitting artist files or by demonstrating ownership of the artist's galleries/social media, has no business getting involved by receiving a DMail. A spiteful uploader could try to impersonate an artist they dislike, and reply with "Yes, everything I make is AI-assisted, please delete it". Some other artists may not have e621 accounts at all, in which case another point of contact would be needed, with each new site or language bringing its own share of complications.
↑ Q: I saw sketches and WIPs from an artist accused of using AI. Doesn't this mean they're innocent?
In a word, no.
Most fraudulent AI-assisted artists manage to avoid detection by laypeople by tracing their generations rather than posting them as-is. There are various benefits to doing this: taking more time on each piece makes the artist look less suspicious, as posting 50 images in one day is a pace only possible with AI; naive requests like "post your sketches/layers" work on less sophisticated frauds, but not on those willing to trace AI images; and tracing allows the coloring to be made less suspicious, and details like bad hands, eyes, etc. to be secretly fixed.
An appeal sent to e621's Bladerunners in June 2025 involved the submission of a simple uncolored pencil drawing recorded on camera. This was accepted at first, but later investigation revealed the artist had multiple storefronts where they offered AI-generated "anime" illustrations, advertised with AI art that hadn't been traced like their e621 gallery was.
The appeal process has since developed to reduce the number of appeals like this, asking for detailed recreations of certain quirks rather than accepting simple artist-like evidence. Consequently, the presence or absence of sketches and works-in-progress can be taken into account by investigators, but it will rarely solve a case on its own.
AI artists can be expected to have cause to falsify WIPs and sketches if they expect to be scrutinized by others, if they want to stay in communication with others while lying about their artistic progression, or to provide reasonable-looking evidence to clients who don't know any better.
↑ Q: I saw evidence or testimony from an artist I'm not related to. What do I do?
Innocent and guilty artists can hardly be distinguished based solely on their behavior. Both would have cause to make public comments if their gallery was deleted as AI-assisted, or to publicly decry AI art entirely. The benefits of admitting to AI assistance are also relatively few: apologizing to earn your audience's forgiveness, expressing a sincere interest in AI art to earn the attention of other AI-assisted artists, or easing one's own sense of guilt.
However, it is rarely fair to ignore evidence outright. If a user besides the artist themselves wishes to submit evidence, the following points should be kept in mind:
- All evidence must be scrutinized in order to establish timelines and look for technical or artistic oddities.
- Even a completely guilty artist may want to falsify proof to save face. In a best-case scenario they stand to gain more commissioners, in a worst-case scenario they can count on their most loyal followers not caring.
- Depending on the nature of an accusation, Bladerunners may request specific kinds of evidence.
- If evidence came from a specific source (a social media post, a commissioner, a private Patreon post, etc.), this should be clearly conveyed when handing over the evidence.
↑ Q: I saw an artist admit to AI art usage. What do I do?
For specific posts, you may flag the affected post with the relevant evidence, including any related disclosures by the artist.
For broader disclosures, such as social media bios that admit to broad AI art usage, you may report the associated wiki or raise the matter in our Discord's #private-help channel if no wiki exists.
If an intentional disclosure is confirmed, a notice will be added explaining the artist's disclosures and the wiki will be protected. The need for further investigation will be determined on a case-by-case basis.
↑ Q: Art I made or commissioned was deleted in error. What do I do?
In most cases, the Bladerunners recommend joining the e621 Discord server to facilitate a real-time interview with oversight by their peers. You can create a #private-help ticket to speak in private with staff members.
Bladerunners may prefer to speak directly with the artist, as the artist is likely to have more evidence than even their closest commissioner.
Depending on the events of the appeal process, you might wish to escalate to higher staff mid-interview. You may follow the line of escalation detailed in "My case was handled incorrectly. Who do I talk to next?" to speak with the acting Bladerunner's superior(s).
↑ Q: What kinds of submissions are accepted during appeals?
The following kinds of evidence are often accepted for internal review:
- sketches.
- works-in-progress.
- source files including Photoshop documents.
- recordings and livestreams.
- AI reference images.
- AI background images.
- non-AI reference images such as 3D renders.
If AI reference or background images are submitted, Bladerunners will try to determine the validity of the submissions and whether the artist's additions significantly transform the image. Submissions of this nature should be forthcoming (disclosed before the interview if at all possible) and well-explained ("I saw this while browsing the internet", "a friend generated this", etc.). Even when disclosed, use of these materials may still be treated as a conviction if the artist's interpretation is not highly transformative. Similarly, a background submission is treated as a conviction if focal content like a character is still AI-generated.
In some cases, you may be given specific directions for creating a video that excludes certain possibilities for falsification.
#private-help file submissions are treated as confidential. They are not to be shared outside the staff team without permission, and they are never submitted to online AI detection services. The actual written content of a #private-help appeal may be shared or paraphrased later in order to highlight how Bladerunners handled the interview.
↑ Q: How do I learn more about completed Bladerunning cases?
In cases where strong guilt has been established and future appeals are unlikely or impossible, an artist's wiki page will be locked with a writeup provided, explaining certain aspects of the investigation process and/or the interview process. A case writeup may occasionally disclose certain details if they are immediately obvious to outside viewers, or if the nature of certain private evidence is completely incontrovertible. This includes but is not limited to...
- traces where the source image is already known.
- visibly obvious AI errors such as malformed details and merging objects.
- submitted files that include hidden AI images.
Any artist wiki that has not been locked is not an official writeup by Bladerunners. Artist wikis that have been edited with third-party AI accusations should be reported, to be cleaned and protected as appropriate.
In cases where an artist voluntarily admits to AI assistance prior to e621 investigation, a shorter notice will be added, sharing the artist's own disclosure.
In both cases, the artist's newest posts will continue to be displayed on the wiki. The presence of a Bladerunner writeup or an artist disclosure does not automatically imply that the visible posts are AI-assisted, and future posts must be reevaluated based on their own merit.
↑ Q: My case was handled incorrectly. Who do I talk to next?
If you feel that a case or appeal was handled in error, you can escalate it; Bladerunning escalation is handled as a subset of Janitor escalation.
Procedural errors
In cases such as the following...
- detection methods being misapplied or unfounded
- submitted evidence requiring additional review
- incorrect deletions of new art from a previous suspect
... the line of escalation is as follows:
Acting Bladerunner → Case Leader (skip if unknown) → Bladerunner Lead (Lafcadio) → Janitor Lead (Strikerman) → Admin → Staff Lead (Rainbow Dash) → Site Lead (NotMeNotYou)
The "acting Bladerunner" is whoever is directly responsible for the contested action or post deletion, and the "case leader" is whoever headed the investigation against a specific artist, often indicated in the artist wiki writeup. Either of these may be an Admin; in that case, the Admin still defers to the Bladerunner Lead or Janitor Lead despite their lower rank.
Misconduct
In cases such as the following...
- publicly distributing image files private to Bladerunners, friends, paid subscribers, etc.
- personal abuse in interview or writeups
... the line of escalation is as follows:
Acting Bladerunner → Bladerunner Lead (Lafcadio) → Admin → Staff Lead (Rainbow Dash) → Site Lead (NotMeNotYou)
The "acting Bladerunner" is whoever is directly responsible for the contested action. This may be an Admin; in that case, the Admin still defers to the Bladerunner Lead despite their lower rank.