LOCAL AI
Air-gapped intelligence. No cloud required.
Full Brain functionality without internet connectivity. For defense, pharmaceutical, critical infrastructure, and classified facilities. Data sovereignty by design, not by promise.
THE LOCAL STACK
How every AI function runs on your hardware.
Simple queries
Local LLM
Read tags, check alarms, quick lookups. Fast, free, private.
Complex generation
Cloud if enabled (optional, BYOK)
Large system generation escalates to the best available provider, but only if you allow it.
Orchestrator
100+ tools · 5-tier safety · multi-provider routing
Vision
Brain Vision
OpenCV + custom CV pipelines. Fully air-gapped.
Language
Local LLM endpoint
Ollama, vLLM, LM Studio, or any OpenAI-compatible server.
AI co-processor
NVIDIA Jetson Orin Nano Super
40 TOPS AI inference at the edge.
Compute module
CompuLab SBC-IOT-iMX8Plus
Brain OS runtime. Industrial-grade SBC.
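To make the Language layer concrete: from the client side, any OpenAI-compatible server looks the same. A minimal sketch, assuming a local Ollama instance on its default port; the URL and model tag are illustrative placeholders, not Brain configuration.

```python
# Minimal sketch: the Language layer is just an OpenAI-compatible chat client.
# The endpoint URL and model tag below are illustrative (a local Ollama server),
# not Brain's actual configuration.
import requests

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # Ollama's OpenAI-compatible API

def ask_local(prompt: str) -> str:
    """Send one chat completion request to the on-cabinet model server."""
    resp = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": "llama3.3",  # any model the local server has pulled
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask_local("Summarize the active alarms on mixer line 2."))
```

Point the same call at vLLM, LM Studio, or a dedicated GPU box and nothing else changes. That is the point of the OpenAI-compatible contract.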
VISION PIPELINE
From bootstrap to autonomous in 8 hours. Zero ongoing cloud dependency.
Learning
API-assisted parameter optimization. Uses cloud vision if available. Can be fully local if not.
Validation
Cross-checks detections. Tunes thresholds. Ready for autonomous.
Autonomous
OpenCV + classical CV runs locally on Jetson. Zero API calls. Fully air-gapped.
Callout
The bootstrap phase is optional. You can skip it entirely and train fully locally. Choose your sovereignty level.
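What "classical CV, zero API calls" looks like in practice: the autonomous phase is ordinary on-device image processing. A minimal sketch of a threshold-and-contour detector of the kind that runs on the Jetson; the camera index, blur kernel, and area cutoff are illustrative, not the parameters Brain's learning phase actually produces.

```python
# Sketch of a classical CV detection loop running entirely on local hardware.
# Camera index, kernel size, and area threshold are illustrative placeholders.
import cv2

cap = cv2.VideoCapture(0)  # local camera, no network involved

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu threshold: the kind of parameter the learning phase would tune
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

    # Hand detections to downstream logic (counting, reject gates, alarms, ...)
    print(f"{len(detections)} objects in frame")

cap.release()
```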
NO TELEMETRY
Brain sends nothing home. Ever.
No usage analytics. No error reports. No model feedback. No phone-home checks. No “anonymous” usage data. Your cabinet is your cabinet. What happens on Brain stays on Brain.
What Brain does NOT send
- Usage statistics
- Error logs
- Performance metrics
- Model interactions
- PLC program contents
- Sensor data
- Customer identifiers
- License activation pings
Side note
Brain does check for updates if you enable it. You can disable it. You can run a private update server. Your choice.
BRING YOUR OWN KEYS
If you use cloud models, your keys stay on your cabinet.
Brain never proxies AI calls through our servers. Your Anthropic, OpenAI, or Azure key is stored encrypted on the cabinet. API calls go directly from your Brain to the provider. We don’t see them. We don’t log them. We don’t bill them.
How it works
On premise: your Brain Cabinet calls the Provider API (Anthropic · OpenAI · Azure) directly, and the response comes straight back to your cabinet.
Not this: Interkey servers are NEVER in the path.
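What "direct to the provider" means in practice, as a sketch: the cabinet reads its own key and calls the provider's public endpoint with nothing in between. The key file path is an illustrative placeholder for whatever encrypted-at-rest storage the cabinet uses; the URL and headers are Anthropic's standard Messages API.

```python
# Sketch of a direct BYOK call: cabinet -> api.anthropic.com, nothing in between.
# The key file path is an illustrative stand-in for the cabinet's encrypted store.
import requests

def load_api_key() -> str:
    # Brain stores the key encrypted on the cabinet; this sketch just reads a
    # local file as a placeholder for that mechanism.
    with open("/var/lib/brain/anthropic.key") as f:  # illustrative path
        return f.read().strip()

resp = requests.post(
    "https://api.anthropic.com/v1/messages",  # the provider's public endpoint
    headers={
        "x-api-key": load_api_key(),          # sent only to the provider
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Generate a pump sequencing routine."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["content"][0]["text"])
```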
SUPPORTED LOCAL MODELS
Run any OpenAI-compatible endpoint.
Ollama
Easiest local deployment. Llama 3.3 70B, Qwen 2.5, etc. Single binary install.
vLLM
Production-grade throughput. GPU-accelerated. OpenAI-compatible API.
LM Studio
GUI for local model management. Good for smaller deployments.
llama.cpp server
Lightweight, CPU-friendly. Runs on the Jetson itself for small models.
Self-hosted GPU server
Dedicated inference box. Highest performance.
Routing note
Brain routes by query complexity. Simple tool calls (read tags, check alarms) stay local. Complex system generation goes to the best available model (local or cloud, if you allow it).
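A sketch of that routing decision. The complexity heuristic and the allow_cloud flag are illustrative assumptions, not the orchestrator's actual logic:

```python
# Sketch of complexity-based routing: simple tool calls never leave the cabinet,
# and cloud is only considered when explicitly allowed.
from dataclasses import dataclass

@dataclass
class Route:
    target: str   # "local" or "cloud"
    reason: str

SIMPLE_INTENTS = {"read_tag", "check_alarm", "lookup"}  # illustrative set

def route(intent: str, prompt_tokens: int, allow_cloud: bool) -> Route:
    if intent in SIMPLE_INTENTS or prompt_tokens < 2_000:
        return Route("local", "simple tool call stays on the cabinet")
    if allow_cloud:
        return Route("cloud", "large generation, cloud explicitly enabled (BYOK)")
    return Route("local", "cloud disabled: best local model handles it")

print(route("read_tag", 40, allow_cloud=False))
print(route("generate_system", 18_000, allow_cloud=False))
print(route("generate_system", 18_000, allow_cloud=True))
```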
WHO BUYS LOCAL AI
Regulated industries. Classified environments. Sovereign operators.
Defense
Classified facilities. ITAR compliance. TEMPEST environments. Brain runs with no network, no telemetry, no cloud dependency.
Pharmaceutical
21 CFR Part 11. GMP validation. Data integrity. Brain's audit trail and air-gap operation support qualification for regulated production.
Critical Infrastructure
Water treatment. Power grid. Chemical processing. Brain operates isolated from IT networks by design.
Sovereign Industrial
Nation-state manufacturing. Local data laws. Export controls. Run Brain entirely within your borders.
COMPLIANCE
Data sovereignty is not marketing. It’s architecture.
- All data stays on the cabinet by default
- No hidden network calls (audit with Wireshark and we'll help you; see the self-check sketch after this list)
- Open-source Brain Vision pipeline available for inspection
- On-premise update servers supported
- Air-gap mode is a runtime flag, not a product tier
- Full source code escrow available for Brain Enterprise
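A quick self-check to run alongside a Wireshark capture, sketched here on the assumption that psutil is available on the box; it is not part of Brain. On an air-gapped cabinet, the only output should be the "clean" line.

```python
# List every established connection whose remote address is public.
# Plant-network and loopback traffic is expected; anything external is not.
import ipaddress
import psutil  # assumption: installed for the audit, not shipped with Brain

def external_connections():
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        remote = ipaddress.ip_address(conn.raddr.ip)
        if remote.is_private or remote.is_loopback or remote.is_link_local:
            continue  # local and plant-network traffic is fine
        yield conn

leaks = list(external_connections())
for c in leaks:
    print(f"pid={c.pid} {c.laddr.ip}:{c.laddr.port} -> {c.raddr.ip}:{c.raddr.port}")
print("clean: no external connections" if not leaks else f"{len(leaks)} external connections found")
```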