Our forensic analysis produces evidence that your IP was used to train AI models. Built to hold up in court.
In June 2025, two federal judges sided with AI companies. Neither ruling found that training was legal. Both found that plaintiffs hadn't proven their specific works were in the data.
One judge noted that plaintiffs with better evidence will often win. That's what we're building.
No single approach produces courtroom-grade evidence on its own. We layer four methods to build the strongest possible case.
We prompt AI models to reproduce your works, document every output, and score the similarity. A reproducible catalog of infringing generations.
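As a minimal sketch of the mechanics, assuming a hypothetical query_model() wrapper around the target model's public completion API (the function names and record fields here are illustrative, not our exact pipeline), a probe-and-score pass could look like this:

```python
# Sketch of a generation-and-scoring pass, assuming a hypothetical
# query_model() wrapper around the target model's public API.
from difflib import SequenceMatcher
from dataclasses import dataclass, asdict
import datetime
import json

@dataclass
class Generation:
    work_id: str          # identifier of the protected work
    prompt: str           # exact prompt sent to the model
    output: str           # raw model output
    similarity: float     # 0..1 similarity against the original text
    timestamp: str        # when the generation was captured

def similarity_score(original: str, generated: str) -> float:
    # Character-level ratio; a fuller pipeline would add n-gram and
    # semantic metrics, but this is enough to rank candidates.
    return SequenceMatcher(None, original, generated).ratio()

def probe_work(work_id: str, original_text: str, query_model) -> Generation:
    # Seed the model with the opening of the work and ask it to continue.
    prompt = original_text[:200]
    output = query_model(prompt)  # hypothetical API wrapper
    return Generation(
        work_id=work_id,
        prompt=prompt,
        output=output,
        similarity=similarity_score(original_text[200:], output),
        timestamp=datetime.datetime.utcnow().isoformat() + "Z",
    )

def catalog(generations) -> str:
    # Serialize every probe so the run can be re-executed and verified.
    return json.dumps([asdict(g) for g in generations], indent=2)
```

Recording the exact prompt, raw output, score, and timestamp for every probe is what makes the catalog reproducible rather than anecdotal.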
Works on any model's public API with no internal access required. We measure whether a model reconstructs your content with unusual fidelity, then quantify the probability it was trained on your work.
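For illustration only, one way to turn "unusual fidelity" into a number is to calibrate the target work's reconstruction score against control works the model could not have trained on (for example, works published after its training cutoff). The names and values below are assumptions, not findings:

```python
# Sketch of calibrating reconstruction fidelity against control works,
# assuming fidelity scores in [0, 1] have already been collected for the
# target work and for controls the model could not have seen.
import statistics

def memorization_signal(target_score: float, control_scores: list[float]) -> dict:
    mean = statistics.mean(control_scores)
    stdev = statistics.pstdev(control_scores) or 1e-9
    z = (target_score - mean) / stdev
    # Empirical percentile: fraction of controls the target outperforms.
    percentile = sum(s < target_score for s in control_scores) / len(control_scores)
    return {
        "target_fidelity": target_score,
        "control_mean": mean,
        "z_score": z,
        "empirical_percentile": percentile,
    }

# Illustrative numbers: a target fidelity of 0.91 against controls clustered
# near 0.35 yields a large z-score -- unusual fidelity, reported alongside
# the full control distribution rather than as a bare probability.
print(memorization_signal(0.91, [0.31, 0.35, 0.38, 0.33, 0.36, 0.40, 0.29]))
```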
We compare your assets to model outputs across multiple perceptual and semantic dimensions, measuring not just visual similarity but how closely the model represents your specific works versus similar concepts.
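As a simplified example of a multi-dimensional image comparison, assuming the open-source imagehash library for perceptual structure and a hypothetical embed() function standing in for whatever vision embedding model the pipeline uses:

```python
# Sketch: compare a protected image to a model output on two axes --
# a perceptual hash for near-duplicate structure and a semantic embedding
# for conceptual closeness. embed() is a hypothetical stand-in.
from PIL import Image
import imagehash
import numpy as np

def perceptual_distance(path_a: str, path_b: str) -> int:
    # Hamming distance between perceptual hashes; small values indicate
    # near-identical composition, not just a shared subject.
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

def semantic_similarity(path_a: str, path_b: str, embed) -> float:
    # Cosine similarity between embeddings (embed() is assumed to return
    # a 1-D numpy vector for an image path).
    a, b = embed(path_a), embed(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compare(asset: str, output: str, embed) -> dict:
    return {
        "perceptual_hamming": perceptual_distance(asset, output),
        "semantic_cosine": semantic_similarity(asset, output, embed),
    }
```

Scoring both axes is what separates "the model drew a similar concept" from "the model reproduced your specific work."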
For open-source models, we recover training data directly from model weights. This is the most legally potent form of evidence available.
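As an illustration only, a verbatim-continuation check against an open-weights checkpoint could look like the following, assuming the Hugging Face transformers library; the model name, prefix length, and match threshold are placeholders:

```python
# Sketch of a verbatim-extraction check against an open-weights model,
# assuming a locally hosted Hugging Face checkpoint. Names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

def verbatim_continuation(model_name: str, prefix: str, continuation: str) -> bool:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prefix, return_tensors="pt")
    # Greedy decoding: memorized sequences tend to surface without sampling.
    out = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    # A verbatim match on held-back text is strong evidence the passage
    # was in the training data, not a coincidence of style.
    return generated.strip().startswith(continuation.strip()[:100])
```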
Forensic integrity matters more than marketing claims. This is how we position our findings.
"We can prove with absolute certainty that your specific image was in the training set." No methodology can make this claim today with zero margin of error on large-scale foundation models.
The strongest available evidence, using multiple independent methodologies, with documented confidence intervals and reproducible results. More than what exists today, and more than what plaintiffs recently brought to court.
A targeted scan of your highest-priority assets against major AI models. Delivered as a forensic report with full methodology.