How we protect your product data, brand guidelines, and content strategy

Richard Knudsen
CTO & Co-Founder
3 min read
Jan 19, 2026
Enterprise teams are increasingly comfortable using AI tools for content generation. The conversation, however, rarely extends to what happens to the data they upload.
Most AI vendors talk about outputs: speed, quality, scale. Fewer talk about inputs: where client data is stored, who can access it, how it's protected. For brands that have spent years building their identity, their product catalogues, and their tone of voice, this gap in the conversation should be uncomfortable.
At Newtone, we decided early that this wasn't acceptable for the clients we serve.
The reality for most AI startups
SOC 2 is an independent security audit, increasingly regarded as the gold standard for technology companies handling client data. It examines where data is stored, who can access it, how it's protected, and what happens if something goes wrong. Crucially, auditors don't take the company's word for it: they verify controls, review documentation, and test systems over a sustained period.
Achieving compliance requires significant investment and takes months of preparation. For most startups racing to ship features and find product-market fit, it's a rational decision to defer.
But this creates a gap. Enterprise buyers, particularly in brand-sensitive sectors like fashion, luxury, and travel, are often evaluating AI tools without a clear way to verify how their data will be handled. They're left relying on vendor promises rather than independent evidence.
Why we made a different choice
Newtone works with enterprise clients in fashion, luxury, beauty, travel, and automotive. These are sectors where brand integrity isn't a nice-to-have; it's everything.
When our clients share their product data, their brand guidelines, their content strategies with us, they're entrusting us with material that is competitively sensitive. A luxury house's tone of voice. A retailer's entire product catalogue. A travel brand's seasonal campaign briefs. This information, in the wrong hands, would be damaging.
The brands we serve have reputations built over decades. They cannot afford to work with vendors who treat security as an afterthought, or worse, as a problem to be addressed once something goes wrong.
So we invested early. We went through the audit process, had our systems independently examined, and achieved SOC 2 Type II compliance.
What this means in practice
For those unfamiliar with the standard, SOC 2 Type II is a rigorous examination of how a company handles sensitive data. Unlike basic security questionnaires or self-assessments, it's an independent audit conducted by qualified third parties who spend months scrutinizing systems, processes, and controls.
The framework focuses on security: verifying that systems are genuinely protected against unauthorised access. Auditors examine how we manage employee credentials, control data access, monitor for threats, respond to incidents, and maintain these protections consistently over time.
The distinction matters. Many companies have security policies documented. SOC 2 Type II requires proving those policies actually work, day to day, under real conditions. Auditors review access logs, test authentication systems, examine incident response procedures, and verify that what's written matches what happens in practice.
It's the difference between saying you lock the door and demonstrating, with evidence, that the door stays locked, that only authorised people have keys, and that you notice when someone tries the handle.
For our clients, this means their brand guidelines, product catalogues, and content strategies are protected by independently verified controls, not vendor promises.
Looking ahead
SOC 2 is a foundation, not a finish line. As AI tools become more embedded in enterprise workflows, the questions around data governance will only become more pressing. How is client data used in model training? How are outputs quality-controlled? How is sensitive information segregated?
These are the questions we're already working on. SOC 2 compliance demonstrates that we take the fundamentals seriously. What comes next is ensuring that as AI capabilities evolve, our governance keeps pace.
For those evaluating AI content partners
If you're assessing AI tools for your organisation, we'd encourage you to ask your vendors directly: how do you protect our data, and can you prove it?
Our SOC 2 Type II report is available on request. If you'd like to discuss your security requirements in more detail, we're happy to have that conversation.