LLM Security & Data Privacy Architecture

Last updated: June 8, 2025

1. Our Guiding Principle: Your Data is Your Data

At Cust, we leverage multiple advanced Large Language Models (LLMs) to power our AI features. We treat data privacy and security as a core component of our architecture, not an afterthought. Our platform is designed around a fundamental principle: your data is used exclusively to service your requests. It is never sold, never used to train foundation models, and never co-mingled with data from other customers.

This document outlines the technical and contractual safeguards we have in place to guarantee strict data isolation and privacy.

2. Multi-Provider LLM Strategy

We utilize leading LLM providers, including OpenAI and Google, to ensure the highest quality and reliability of our AI features. Our security posture is built on the enterprise-grade controls offered by these providers, combined with our own strict architectural safeguards.
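As a sketch of how a multi-provider strategy can be combined with a uniform data policy, the snippet below routes a request across providers with failover. The provider names and the `route_request` helper are illustrative, not Cust's actual implementation; the point is that routing between providers never changes the data-handling guarantees, because every provider is held to the same terms.

```python
# Illustrative failover routing across LLM providers (names are hypothetical).
PROVIDERS = ["openai", "google"]

def route_request(prompt, call_provider):
    """Try each provider in order and return the first successful response.

    Both providers are bound by the same zero-retention terms, so which one
    ultimately serves the request has no effect on data privacy.
    """
    last_err = None
    for name in PROVIDERS:
        try:
            return call_provider(name, prompt)
        except Exception as err:  # provider outage or error: fall through
            last_err = err
    raise RuntimeError("all providers failed") from last_err
```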

3. Zero Data Retention & No-Training Policy

This is our most critical commitment regarding your data.

  • Contractual & Technical Enforcement: We have a strict Zero Data Retention policy with all our LLM providers. All data processing is ephemeral and transactional.
  • No Training on Your Data: Customer data, prompts, and model outputs submitted through our platform are never stored by our LLM providers and are contractually prohibited from being used to train their foundation models.
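To make the "ephemeral and transactional" property concrete, here is a minimal sketch of how a stateless request body can be constructed. The field names follow the general shape of OpenAI's Chat Completions API, but the model name and the exact parameters should be treated as assumptions for illustration: no conversation identifiers are attached and no server-side storage is requested, so each call stands alone.

```python
def build_llm_request(prompt):
    """Build a single stateless request body (illustrative field names).

    No session or conversation ID is included, and server-side storage is
    explicitly declined, so nothing persists at the provider after the
    response is returned.
    """
    return {
        "model": "gpt-4o",  # hypothetical model choice for this sketch
        "messages": [{"role": "user", "content": prompt}],
        "store": False,     # opt out of any provider-side storage
    }
```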

4. Our Secure Multi-Tenant Architecture

The Cust platform is built on a secure, multi-tenant architecture that ensures your data is isolated at every stage.

  • Step 1: Strict Data Segregation in Our Database
    • All customer data is stored in our production database and is logically segregated using a unique tenant_id. Every table and data record is tagged with this ID.
    • This ensures a foundational layer of separation before any data is ever used in an AI-powered workflow.
  • Step 2: In-Memory Prompt Construction
    • When an authenticated user makes a request, our application retrieves only the necessary, tenant-specific data from our secure database.
    • This data is used to construct a prompt for the LLM in-memory, on-the-fly.
  • Step 3: Stateless API Transaction
    • The in-memory prompt is sent to the appropriate LLM provider (OpenAI or Google) via a secure, encrypted TLS 1.2+ connection for a single, stateless transaction.
    • The model generates a response, which is immediately sent back to our application.

This closed-loop process ensures that each customer's data lives within its own secure, isolated context, used only for the immediate task at hand.
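The three steps above can be sketched end to end. Everything in this example is hypothetical (an in-memory list stands in for the production database, and a stub stands in for the provider API call), but it shows the shape of the flow: a tenant-scoped read, in-memory prompt construction, and a single stateless call with nothing retained afterward.

```python
# Hypothetical stand-in for the production database: every record is
# tagged with a tenant_id (Step 1: logical segregation).
DATABASE = [
    {"tenant_id": "t-acme",   "note": "Renewal due in Q3"},
    {"tenant_id": "t-globex", "note": "Support escalation open"},
]

def fetch_tenant_records(tenant_id):
    """Return only the rows belonging to the authenticated tenant."""
    return [r for r in DATABASE if r["tenant_id"] == tenant_id]

def build_prompt(tenant_id, question):
    """Step 2: assemble the prompt in memory, on the fly; nothing is written
    to disk or to any shared store."""
    context = "\n".join(r["note"] for r in fetch_tenant_records(tenant_id))
    return f"Context:\n{context}\n\nQuestion: {question}"

def call_llm(prompt):
    """Step 3 stand-in: in production this would be one stateless request to
    the provider over TLS 1.2+; here it is stubbed out."""
    return f"[model response to {len(prompt)} prompt chars]"

def handle_request(tenant_id, question):
    prompt = build_prompt(tenant_id, question)  # exists only in memory
    response = call_llm(prompt)                 # single ephemeral transaction
    del prompt                                  # nothing is retained
    return response
```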

5. Provider-Specific Security Controls

We leverage the robust, enterprise-grade security features of our chosen LLM providers:

A) OpenAI (via Enterprise API)

  • Data Encryption: Data is encrypted at rest (AES-256) and in transit (TLS 1.2+).
  • Compliance: OpenAI is SOC 2 Type 2 compliant.
  • Access Controls: We utilize role-based access controls and scoped API keys to ensure the principle of least privilege within our own operations.
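One common way to apply least privilege to scoped API keys is to provision each internal service with its own credential rather than a shared one. The helper below is a hypothetical sketch (the environment-variable naming convention is an assumption, not Cust's actual scheme): a service can only read the key provisioned for it, so no component holds broader access than it needs.

```python
import os

def get_llm_key(service):
    """Hypothetical least-privilege lookup: each service reads only its own
    scoped API key from the environment (e.g. LLM_KEY_SUMMARIZER)."""
    key = os.environ.get(f"LLM_KEY_{service.upper()}")
    if key is None:
        raise RuntimeError(f"no scoped key provisioned for service: {service}")
    return key
```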

B) Google (via Google Cloud Platform)

  • IAM Integration: All access to Google's AI services is governed by Google Cloud's robust Identity and Access Management (IAM), ensuring granular control.
  • Data Residency: Google Cloud allows us to enforce data residency, ensuring that data processing occurs within a specified geographic region.
  • Compliance: Google Cloud Platform maintains a wide range of compliance certifications, including SOC 2/3, ISO 27001, and HIPAA.
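In practice, data residency on Google Cloud is enforced by pinning requests to a regional endpoint. The helper below is illustrative; it follows the documented `{region}-aiplatform.googleapis.com` pattern for Vertex AI regional endpoints, but the function itself is an assumption, not part of any Google SDK.

```python
def regional_endpoint(region):
    """Build a region-pinned Vertex AI endpoint URL (illustrative helper).

    Sending requests to a regional endpoint keeps processing within that
    geography, which is how data residency requirements are enforced.
    """
    return f"https://{region}-aiplatform.googleapis.com"
```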