Every major AI model was trained on the open web, most of it used without permission and almost none of it with payment. Studios, publishers, and artists know their work was involved, but until now they couldn't prove it.
Judges started dismissing AI copyright cases in 2025, not because they thought training was legal, but because plaintiffs couldn't show their specific works were used. Closing that gap requires better forensics, and that's what we're building.
VN has two suites, built to work together. The Forensic Suite determines whether specific works ended up in AI training data, classifies generated content against known source models, and monitors the open web for infringement at scale. It produces the evidence record that litigation has been waiting for.
The Rights Suite is what comes next. For rights holders who want revenue rather than just remedies, it provides consent infrastructure, policy controls, and a tamper-evident audit log for every deal. The forensic record creates the leverage, and licensing converts it into income.
We tell you what we can prove and what we can't. Full documentation, stated confidence levels, reproducible results. We don't overclaim, and we don't hedge when the evidence is strong.
We built this for writers, artists, musicians, filmmakers, and the institutions that represent them. When we're working through a hard tradeoff, we ask whose interests we're serving.
The legal landscape is still shifting, but creators need tools now. The forensic record we build can be the foundation for a lawsuit or the starting point for a licensing negotiation. We serve both paths.
We're a small team working at the overlap of AI research, IP law, and enterprise software. The problems are genuinely hard, and the people who need them solved are counting on us.
View current openings and apply on Wellfound.
For studios, publishers, and rights holders interested in a forensic assessment or platform briefing.