Security & Compliance

Your Data Is Safe: How SOC 2 Compliance Protects INS and Our Customers

WB
January 2026
8 min read

As INS integrates AI into our operations, we understand that trust is paramount. You need assurance that your confidential information, whether it's pricing data, technical specifications, or project details, remains protected. Here's why you can be confident in our AI-powered workflows.

The adoption of AI tools like Claude (from Anthropic) and OpenAI's platforms has raised legitimate questions among our customers and partners: Where does my data go? Will it be used to train AI models? Could a competitor somehow access my confidential information? These concerns are valid, and we want to address them directly with verifiable facts.

INS Is Pursuing SOC 2 Compliance

Industrial Networking Solutions is currently undergoing a SOC 2 audit. This means our internal systems, processes, and data handling practices are being independently evaluated to verify they meet rigorous security standards. When you work with INS, you're protected both by our commitment to enterprise-grade security and by AI partners who are already SOC 2 compliant.

Our pursuit of SOC 2 certification demonstrates our commitment to protecting your data at every step, from the moment you share information with us through every system that processes it.

Understanding SOC 2 Type II Compliance

SOC 2 (System and Organization Controls 2) is an auditing framework developed by the American Institute of Certified Public Accountants (AICPA) that evaluates how service providers manage customer data. A SOC 2 Type II certification is particularly rigorous because it assesses not just whether controls exist, but whether they operate effectively over an extended period, typically 6-12 months.[1]

Both Anthropic and OpenAI have achieved SOC 2 Type II certification, which means independent third-party auditors have verified their security controls across the five trust service criteria: security, availability, processing integrity, confidentiality, and privacy.

The Bottom Line

When we use Claude or OpenAI's API for business operations, your data is processed by systems that have been independently audited and certified to meet rigorous security standards. This isn't a marketing claim; it's a verified fact documented in compliance reports.

Anthropic's Security Certifications

Anthropic, the company behind Claude, has established a comprehensive security and compliance program. According to Anthropic's Trust Center, the company holds multiple certifications:[2]

SOC 2 Type I and Type II
ISO 27001 (information security management)
ISO/IEC 42001 (AI management systems)
HIPAA compliance support via Business Associate Agreements

The SOC 3 summary report is publicly accessible via Anthropic's Trust Portal, and the detailed SOC 2 report is available under NDA for enterprise customers who require it for their own compliance documentation.[2]

OpenAI's Security Framework

OpenAI maintains equally robust security certifications. According to their Security & Privacy documentation and Trust Portal, OpenAI's compliance includes:[3][4]

SOC 2 Type II
GDPR and CCPA compliance, with Data Processing Agreements available
HIPAA Business Associate Agreements for eligible customers
Regional data residency options

OpenAI's infrastructure includes AES-256 encryption for data at rest, TLS 1.2+ encryption for data in transit, and strict access controls with 24/7/365 security team coverage.[4]
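The TLS 1.2+ guarantee is something client-side code can enforce as well, rather than relying on server defaults. As a minimal sketch using Python's standard ssl module (generic client configuration, not specific to either provider's SDK):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# mirroring the "TLS 1.2+ in transit" guarantee described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate validation stays on by default (CERT_REQUIRED), so a
# downgraded or unauthenticated connection is rejected outright.
assert context.verify_mode == ssl.CERT_REQUIRED
```

Any connection opened with this context will fail the handshake if the peer cannot negotiate TLS 1.2 or newer, so data is never sent in plain text.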

Your Data Is NOT Used for Training

This is perhaps the most important point for our customers: When INS uses commercial API access to Claude or OpenAI, your data is explicitly excluded from model training.

Anthropic states clearly: "Commercial users (Team and Enterprise plans, API, 3rd-party platforms, and Claude Gov) maintain existing policies: Anthropic does not train generative models using code or prompts sent to Claude Code under commercial terms."[5]

OpenAI similarly confirms: "By default, OpenAI does not use data from ChatGPT Enterprise, ChatGPT Business, ChatGPT Edu, ChatGPT for Healthcare, ChatGPT for Teachers, or the API platform, including inputs or outputs, for training or improving their models."[4]

How This Protects INS and Our Customers

When you share technical specifications, pricing information, or project details with INS, and we use AI tools to assist with quotes, technical recommendations, or order processing, here's what happens to that data:

Data Protection Guarantees

Encrypted in Transit and at Rest

All data is encrypted using industry-standard AES-256 encryption at rest and TLS 1.2+ in transit. Your information is never transmitted in plain text.

Not Used for Model Training

Data submitted through commercial API access is explicitly excluded from training future AI models. Your proprietary information remains yours.

Strict Access Controls

Both providers implement role-based access controls, ensuring only authorized personnel can access systems, and all access is logged and auditable.
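To illustrate the pattern described above, here is a minimal role-based access control sketch with an audit trail. The roles and permissions are hypothetical examples, not either provider's actual access model:

```python
# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "support": {"read_tickets"},
    "engineer": {"read_tickets", "read_logs"},
    "admin": {"read_tickets", "read_logs", "manage_keys"},
}

# Every access attempt is recorded, allowed or not, so it is auditable.
audit_log = []

def authorize(role: str, action: str) -> bool:
    """Allow the action only if the role grants it, logging the attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append((role, action, allowed))
    return allowed

assert authorize("support", "manage_keys") is False  # denied, but logged
assert authorize("admin", "manage_keys") is True
```

The key property is that denial and approval are both logged: auditors can later verify not just who accessed what, but who tried to.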

Data Residency Options

OpenAI offers data residency in multiple regions including the US, Europe, UK, and more. This helps organizations meet regional compliance requirements.[4]

Independent Verification

Security controls are not self-reported; they are verified by independent third-party auditors who assess actual operating effectiveness over time.

Zero Data Retention Options

Enterprise customers can access Zero Data Retention (ZDR) agreements where inputs and outputs are not stored beyond immediate processing.[5]

Side-by-Side Comparison

The following table summarizes the key security certifications and data protection policies across INS and our AI partners, demonstrating the complete chain of trust protecting your data:

Security Feature                       | INS         | Anthropic (Claude) | OpenAI
SOC 2 Certified                        | In Audit    | Yes                | Yes
ISO 27001 Certified                    | -           | Yes                | Yes
API Data Excluded from Training        | N/A         | Yes                | Yes
Encryption at Rest (AES-256)           | Yes         | Yes                | Yes
Encryption in Transit (TLS 1.2+)       | Yes         | Yes                | Yes
HIPAA BAA Available                    | -           | Yes                | Yes
GDPR DPA Available                     | -           | Yes                | Yes
Zero Data Retention Option             | -           | Yes                | Yes
AI-Specific Certification (ISO 42001)  | -           | Yes                | -
Independent Third-Party Audit          | In Progress | Yes                | Yes

Addressing Common Concerns

"Could my competitor see information I share with INS?"

No. Data processed through commercial API access is isolated to that specific request. There is no mechanism by which one customer's data could be exposed to another customer. The AI models process your query, generate a response, and (under commercial terms) do not retain or learn from that interaction.

"Will my pricing data be used to train AI that helps my competitors?"

Absolutely not. As documented above, both Anthropic and OpenAI explicitly exclude commercial API data from model training. The AI that responds to queries from INS (or anyone else using commercial terms) does not incorporate your data into its knowledge base.

"How do I know these claims are true?"

These aren't just claims; they're verified by independent auditors. SOC 2 Type II certification requires that an independent Certified Public Accountant (CPA) firm audit the organization's controls over an extended period. The resulting reports document specific controls, testing procedures, and results. Enterprise customers can request these reports directly from both providers.

INS's Commitment: SOC 2 Audit in Progress

INS doesn't just rely on our AI providers' security controls. We are currently undergoing a SOC 2 audit, meaning our internal systems and data handling practices are being independently evaluated against rigorous security standards. We do not share customer data with third parties for marketing purposes, and we follow industry best practices for data minimization, using only the information necessary to fulfill your requests. Our pursuit of SOC 2 certification demonstrates our organizational commitment to security, not just our choice of technology partners.

What This Means for Your Business

When you work with INS, you can be confident that:

Your data is encrypted in transit and at rest
Your information is never used to train AI models
Security controls are verified by independent third-party auditors
INS itself is undergoing a SOC 2 audit to meet the same standard

AI is transforming how we serve customers, enabling faster quotes, better technical recommendations, and more efficient operations. But this transformation doesn't require sacrificing data security. The SOC 2 Type II certifications held by both Anthropic and OpenAI provide independent verification that your data is protected by enterprise-grade security controls.

Our Promise

INS is committed to leveraging AI responsibly. As an organization actively pursuing SOC 2 certification, we're building security into our operations from the ground up. We've chosen AI partners who share our commitment to security and privacy, and we're working to create a complete chain of trust verified by independent auditors. Your trust is the foundation of our business, and protecting your data is non-negotiable.

Questions?

If you have specific questions about how INS handles data in AI-assisted workflows, or if you need documentation for your own compliance requirements, please reach out to your INS representative. We're happy to provide additional details about our data handling practices and the security certifications of our technology partners.