Can Nightshade and Glaze be used to hide illegal material?

Lola Soko
4 min read · Jun 4, 2024


Could tools designed to hinder AI training be exploited by criminals to conceal CSAM, terrorist content, and other illegal material? This question has been on my mind for some time, and recent discussions about these tools have made me wonder whether it's possible.

For context, certain tools have been developed to obstruct or manipulate AI training processes. Promoted as a way to stop artists' work from being scraped and used to train AI systems, these tools have gained popularity in art communities.

The existence of these tools is not really my concern, since I consider their utility fundamentally limited in scope and ability. What concerns me is the potential for their misuse by malicious actors. Specifically, I wonder whether these tools could be used to obscure illegal activity, and whether their use might eventually be subject to legal restrictions or bans.

The legality of these tools is already questionable (in my opinion, as someone who isn't a lawyer): distributing images manipulated by tools like Glaze or Nightshade could potentially fall under various cybercrime laws in multiple countries, because these tools effectively distribute malicious code intended to corrupt a dataset, akin to distributing malware.

My primary concern is that a government or law enforcement agency might assert that these tools facilitate the concealment and distribution of child sexual abuse material (CSAM), which could lead to a ban or restriction on their use.

A few months ago, a YouTube video was posted to the subreddit r/artisthate, a community focused on opposing AI and AI-generated content. The video, linked below, discusses Discord’s use of AI to scan messages for illegal material.

Additionally, here is an article detailing how this AI operates. Essentially, it scans messages and any images (and possibly videos) that a user sends. Messages that violate the rules are flagged and sent to Discord for human review, a safeguard in case the AI mistakenly marks an image of, say, a dog or some trees as illegal material.

While I couldn't determine the specific model used to scan these messages, it is likely CLIP or another model trained specifically to detect illegal material; models of that kind are unlikely to be available to the public.
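
To make the idea concrete, here is a minimal sketch of how a CLIP-style zero-shot classifier can flag images against a list of text categories. To be clear, this is not Discord's actual system: the model, labels, threshold, and filename below are placeholders I chose purely for illustration.

```python
# Hypothetical sketch of zero-shot image flagging with CLIP.
# NOT Discord's actual pipeline; labels, threshold, and filename are made up.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Benign placeholder categories; a real moderation model would use its own
# (non-public) taxonomy and training data.
labels = ["a photo of a dog", "a photo of trees", "a screenshot of text"]

image = Image.open("message_attachment.png")  # hypothetical attachment
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

best = probs.argmax().item()
if probs[best].item() > 0.9:  # arbitrary threshold, purely for the sketch
    print(f"Flag for human review: {labels[best]} ({probs[best].item():.2f})")
```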

Obviously, I will not attempt to test this system, as doing so would be illegal. However, I suspect that several individuals have already been apprehended due to this tool.

Discord takes a firm stance against child sexual abuse and exploitation. I know this from personal experience: I once reported a server where users were sharing CSAM and grooming children, and the entire server was taken down within minutes.

This raises the question: do tools designed to obstruct AI training compromise the ability of Discord and other websites to scan for CSAM and other illegal material? Can these tools be used to bypass these scans, potentially putting children at risk?

To investigate, we can test whether Nightshade effectively disrupts image captioning models such as BLIP and CLIP.
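
As a rough illustration of what such a test looks like, here is a minimal sketch that captions an original image and its Nightshade-processed copy with BLIP via the Hugging Face transformers library. The filenames are placeholders, and this is a reconstruction of the idea rather than the exact demos I used.

```python
# Caption an original image and its Nightshade-processed copy with BLIP to
# see whether the perturbation changes what the model describes.
# Filenames are placeholders.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

for path in ["original.png", "nightshaded.png"]:
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    print(path, "->", processor.decode(out[0], skip_special_tokens=True))
```

If Nightshade did what it claims, the two captions should diverge noticeably.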

Unsurprisingly, Nightshade doesn’t appear to work. Although I have not yet trained any LoRA or AI models using Nightshade-processed images, my tests with CLIP and various other captioning methods show that Nightshade fails to interfere with these processes.

Here is me running it through CLIP captioning on Hugging Face.
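
(Strictly speaking, CLIP doesn't generate captions itself; captioning demos built on it score an image against candidate text. Here's a rough sketch of that kind of similarity check, with placeholder filenames and a placeholder description:)

```python
# Compare how strongly CLIP associates the original and the Nightshade-
# processed image with the same true description. If Nightshade fooled CLIP,
# the shaded image's score should drop. Filenames and text are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

description = ["a drawing of a cat"]  # whatever the image actually depicts

for path in ["original.png", "nightshaded.png"]:
    image = Image.open(path).convert("RGB")
    inputs = processor(text=description, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        score = model(**inputs).logits_per_image.item()
    print(f"{path}: CLIP score {score:.2f}")
```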

Here is Copilot doing the same (GPT-4o was down when I tried this).

And finally, here is BLIP.

Yeah… I don't think Nightshade works.

While this might be perceived as an unfair, biased critique of Nightshade (and Glaze), I actually went in expecting these tools to demonstrate some level of effectiveness. It appears they do not…

Admittedly, this only tests whether captioning tools can recognize these images. I cannot say whether Nightshade would affect the training of models or LoRAs, or other applications. Against captioning, however, it does not seem to be effective… or to work at all.

On a positive note, it seems unlikely that Nightshade could be used to conceal illegal material. As for Glaze, I am currently uncertain about its efficacy, but I suspect it also won't do a whole lot.

Overall, these tools seem rather ineffective; they work more like a placebo. But at least that means they cannot be used by criminals.
