E-commerce

How we protect your product data, brand guidelines and content strategy

Richard Knudsen

CTO & Co-Founder

3 min read

Jan 19, 2026

[Image: woman leaning on wall covering her face]

Enterprise teams are increasingly comfortable using AI tools for content generation. The conversation, however, rarely extends to what happens to the data they upload.

Most AI vendors talk about outputs: speed, quality, scale. Fewer talk about inputs: where client data is stored, who can access it, how it's protected. For brands that have spent years building their identity, their product catalogues, and their tone of voice, this gap in the conversation should be uncomfortable.

At Newtone, we decided early that this wasn't acceptable for the clients we serve.

The reality for most AI startups

SOC 2 is an independent security audit, increasingly regarded as the gold standard for technology companies handling client data. It examines where data is stored, who can access it, how it's protected, and what happens if something goes wrong. Crucially, auditors don't take the company's word for it: they verify controls, review documentation, and test systems over a sustained period.

Achieving this certification is expensive (typically €85,000 to €170,000 for a company of fewer than 50 people) and takes months of preparation. For most startups racing to ship features and find product-market fit, it's a rational decision to defer.

But this creates a gap. Enterprise buyers, particularly in brand-sensitive sectors like fashion, luxury, and travel, are often evaluating AI tools without a clear way to verify how their data will be handled. They're left relying on vendor promises rather than independent evidence.

Why we made a different choice

Newtone works with enterprise clients in fashion, luxury, beauty, travel, and automotive. These are sectors where brand integrity isn't a nice-to-have; it's everything.

When our clients share their product data, brand guidelines, and content strategies with us, they're entrusting us with material that is competitively sensitive: a luxury house's tone of voice, a retailer's entire product catalogue, a travel brand's seasonal campaign briefs. In the wrong hands, this information would be damaging.

The brands we serve have reputations built over decades. They cannot afford to work with vendors who treat security as an afterthought, or worse, as a problem to be addressed once something goes wrong.

So we invested early. We went through the audit process, had our systems independently examined, and earned SOC 2 Type II certification.

What this means in practice

For those unfamiliar with the certification, here's what SOC 2 Type II actually verifies:

Security: Our systems are protected against unauthorised access. This covers everything from how we manage employee credentials to how we monitor for threats.

Availability: Our platform is designed to be reliable and accessible when you need it, with appropriate safeguards against downtime.

Confidentiality: Client data is handled with appropriate restrictions. Information you share with us stays protected.

The audit doesn't just check whether we have policies written down; it examines whether those policies actually work, over a sustained observation period. It's the difference between saying you lock the door and proving that you do.

Looking ahead

SOC 2 is a foundation, not a finish line. As AI tools become more embedded in enterprise workflows, the questions around data governance will only become more pressing. How is client data used in model training? How are outputs quality-controlled? How is sensitive information segregated?

These are the questions we're already working on. SOC 2 certification demonstrates that we take the fundamentals seriously. What comes next is ensuring that as AI capabilities evolve, our governance keeps pace.

For those evaluating AI content partners

If you're assessing AI tools for your organisation, we'd encourage you to ask your vendors directly: how do you protect our data, and can you prove it?

Our SOC 2 Type II report is available on request. If you'd like to discuss your security requirements in more detail, we're happy to have that conversation.

See how AI infrastructure is a growth engine

Get a Demo