Introduction
Polaris AI is based on Polaris Pro and enables the secure deployment of AI and Machine Learning (ML) workloads within a Trusted Execution Environment (TEE) by encrypting all data in transit and isolating sensitive information from the underlying infrastructure. With Polaris AI, model weights are encrypted and securely stored so that they are accessible only within the TEE.
Overview
Fr0ntierX’s Polaris AI Secure Container utilizes Confidential Virtual Machines (CVMs) and Confidential GPUs based on the Nvidia Hopper architecture to isolate AI and Machine Learning (ML) models within a fully encrypted environment. Confidential computing offers full memory encryption with minimal overhead, shielding data from both the cloud provider and internal IT resources. With the Polaris Secure Container Series, sensitive information remains encrypted at all stages: at rest, in transit, and in use.
Polaris AI encrypts HTTP requests to protect against exposure risks. The encryption process uses a public key that is provisioned on the client’s infrastructure and managed within the TEE by the Polaris Secure Proxy. Because encryption is handled transparently within the TEE, no workload changes are required.
All responses are automatically encrypted with the public key provided in the user’s request and are securely decrypted by the Polaris SDK. This encryption and decryption can take place in either a server or browser environment.
Polaris AI securely encrypts and decrypts the model weights and configuration using a permanent key that is accessible only within the TEE. Access is restricted through an attestation policy that verifies workload integrity and can, for example, block SSH access or limit decryption to pre-approved software versions. Both encryption and decryption are handled by the Polaris SDK for seamless data protection.
Key Benefits
- Data Encryption: Security at all stages – at rest, in transit, and in use
- Complete Isolation: Workloads shielded from cloud providers and internal IT resources
- Transparent Encryption: All requests and responses are automatically encrypted and decrypted
- No Modifications Required: No changes to the AI models or inference server necessary
- Encrypted Data Storage: Securely store encrypted model weights
- TEE-Based Decryption: Secure data decryption within a Trusted Execution Environment
- Optional Software Version Pinning: Only allow pre-approved software versions to decrypt data