
Rockwell + NVIDIA Nemotron: the first edge-AI industrial Copilot

Nemotron-Nano-9B-v2 on HMI panels and air-gapped servers. Microsoft Azure on the cloud path. Vendor-billed AI. Here is what is new and what is still locked.

By Interkey · 9 min read

Disclosure: Interkey makes a competing product (Interkey Brain). We try to represent every other vendor by their own published material — links to source documents are inline throughout. If we have gotten anything wrong, email editorial@interkey.com.

Rockwell Automation made the most technically interesting AI announcement of any major incumbent in this cycle. At Automation Fair 2025 in November, Rockwell unveiled the integration of NVIDIA Nemotron-Nano-9B-v2 — a domain-specific small language model fine-tuned on FactoryTalk data — into FactoryTalk Design Studio Copilot. The headline: it runs on edge appliances, including air-gapped deployments.

That last bit is the news. Until this announcement, every major-vendor industrial AI product was cloud-bound. Rockwell just broke that constraint, and they did it with a model purpose-built for control engineering rather than a generic frontier LLM. We make a competing product, and we are going to give Rockwell credit where it is due before getting to the differences.

Context: Rockwell catches up, then leaps

FactoryTalk Design Studio Copilot launched first as an Azure OpenAI-backed assistant — natural language for ladder logic, code explanations, troubleshooting. Useful, but exactly the same shape as Siemens’ first-gen Industrial Copilot and a half-dozen startups: cloud-only, vendor-billed, frontier model behind it.

The Nemotron integration changes the architecture. Per the joint Rockwell + NVIDIA announcement and Manufacturing Tomorrow’s coverage, Rockwell took the open-source Nemotron-Nano-9B-v2 model, used NVIDIA NeMo to fine-tune it on FactoryTalk’s engineering corpus, and packaged it to run on:

  • HMI panels
  • Industrial appliances
  • Desktop IDEs
  • On-prem servers and private cloud
  • Air-gapped environments

The cloud Copilot still exists, still runs on Microsoft Azure OpenAI, and still serves customers who want frontier-quality responses without managing infrastructure. The Nemotron path is for facilities that cannot or will not send process data to the cloud.

What Rockwell actually shipped

  • FactoryTalk Design Studio Copilot — generative AI assistant integrated into the SaaS design tool. Code generation, ladder logic, code explanation, troubleshooting. Backed by Microsoft Azure OpenAI on the cloud path.
  • Edge variant running NVIDIA Nemotron-Nano-9B-v2, fine-tuned on FactoryTalk-specific data via NVIDIA NeMo. Deploys on HMI panels, appliances, desktop, and servers. Supports air-gapped deployment.
  • Domain-specific reasoning. Per the press release, the fine-tuning targets “improved reasoning, predictability, and responsiveness” on FactoryTalk-shaped tasks specifically — not general-purpose engineering.
  • Microsoft + NVIDIA + Rockwell triangle. Microsoft owns the cloud path (Azure OpenAI), NVIDIA owns the model and the fine-tuning toolchain (Nemotron + NeMo), Rockwell owns the FactoryTalk integration and the domain data. None of them does this alone.

Why a 9B domain SLM matters

For most of 2024 and 2025, the implicit assumption in industrial AI was that you needed a frontier model — Claude, GPT-4 or GPT-5, Gemini — to get useful task execution. That meant cloud, latency, per-token cost, and the privacy story. A 9B-parameter small language model fine-tuned on the right corpus can match or beat a frontier model on a narrow, well-defined task surface, and it runs on a single GPU at the edge.

For FactoryTalk Copilot’s task surface — generate a routine, explain a tag, troubleshoot a stopped sequence — that is exactly the right tradeoff. Rockwell’s bet is that fine-tuned 9B is good enough on FactoryTalk-shaped questions, and the win on latency, cost, and sovereignty is more valuable than the marginal quality difference of a frontier model.

We think they are right. This is the model architecture industrial AI converges to: a domain SLM at the edge for routine tasks, with an optional cloud frontier model for the rare hard problem. Brain ships the same architecture (your local Ollama / vLLM model handles the everyday queries; cloud is optional and operator-toggleable). It is good to see another vendor commit to it.
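The edge-default, cloud-optional split can be sketched as a simple router. This is an illustration only — the endpoint names, the toggle, and the "hard query" heuristic are all hypothetical, not Brain's or Rockwell's actual routing logic:

```python
from dataclasses import dataclass

@dataclass
class ModelEndpoint:
    name: str
    base_url: str  # any OpenAI-compatible endpoint (Ollama, vLLM, a cloud API)

# Hypothetical defaults: a local 9B-class SLM plus an optional cloud model.
EDGE = ModelEndpoint("nemotron-nano-9b", "http://localhost:11434/v1")
CLOUD = ModelEndpoint("frontier-model", "https://api.example.com/v1")

def route(query: str, cloud_enabled: bool) -> ModelEndpoint:
    """Send everyday queries to the edge SLM; escalate only when the
    operator has enabled the cloud path AND the task looks hard."""
    looks_hard = len(query) > 2000 or "multi-vendor" in query.lower()
    return CLOUD if (cloud_enabled and looks_hard) else EDGE
```

The point of the sketch: the edge model is the default, and the cloud path is an opt-in escalation, not the other way around.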

What is still locked

Four things to be aware of before you assume Rockwell + Nemotron is the universal answer:

  • Vendor-billed AI on the cloud path. The Azure path is sold by Rockwell, billed by Rockwell. If you want to BYOK with your own Anthropic or Azure contract, you are not the target customer for this configuration. The edge path with Nemotron is air-gapped, which makes the BYOK question moot for that deployment — but you also do not get to swap the model. You get Nemotron, fine-tuned by Rockwell, on Rockwell’s schedule.
  • The model is fixed per deployment path. Cloud = Azure OpenAI. Edge = Nemotron-Nano-9B. There is no public-facing way to point Copilot at, say, Claude Sonnet 4.6 or a local Llama 3.3 70B. For most customers that is fine. For customers with strong opinions about which model they trust with their process data, it is a constraint.
  • Plugin and extensibility story is not public. Rockwell has not published a public SDK or MCP server interface for FactoryTalk Copilot. The tool surface is what Rockwell ships. If your facility has a unique recipe management process or a custom batch sequence, you adapt to Rockwell rather than Rockwell adapting to you. (This may change; we will update if it does.)
  • Hardware is still Rockwell. The cabinet, the PLC, the I/O — same architecture as before, with AI bolted on. That is not bad, but it is different from a vertically integrated approach where the AI, the safety model, and the hardware are designed together.

Implications for buyers

  • If you are a North American Rockwell shop, the edge Nemotron path is a meaningful upgrade. You get on-prem AI with no major architectural change. Run it on a Rockwell HMI or appliance and call it a quarter.
  • If you specifically need air-gapped operation in pharma or defense, Rockwell now has a viable answer. So does Brain. Compare on extensibility, sovereignty over the model choice, and the published safety model — these are real differences.
  • If you operate a mixed-vendor estate, Rockwell does not solve the cross-vendor problem any more than Siemens does. Each AI is locked to its own engineering tool and its own PLC family. Brain Gateway (interkey.com/gateway) is the most direct attempt at a cross-vendor answer; it ships in 2027.
  • If you want to choose your own LLM provider per deployment (some Anthropic, some local, some Azure with your contract), Rockwell does not support that posture. Brain does.

How Brain compares

Direct comparison

Brain and Rockwell+Nemotron converge on the right architecture — edge SLM as the default, cloud as optional — and diverge on three things.

Model choice. Brain is BYOK across providers: Anthropic Claude, OpenAI, Azure OpenAI, Google, Ollama, vLLM, llama.cpp, or any OpenAI-compatible endpoint. You pick, per cabinet. Rockwell’s SLM is fixed to Nemotron and updated on Rockwell’s cadence.
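"Per cabinet" means provider choice is a deployment-level setting, not a product-level one. A minimal sketch of what that looks like — the keys, model names, and schema here are illustrative assumptions, not Brain's actual configuration format:

```python
# Hypothetical per-cabinet BYOK configuration. Each cabinet points at its
# own provider; nothing forces a single model across the estate.
CABINETS = {
    "line-1": {"provider": "anthropic", "model": "claude-sonnet-4.6",
               "api_key_env": "LINE1_ANTHROPIC_KEY"},
    "line-2": {"provider": "ollama", "model": "llama3.3:70b",
               "base_url": "http://localhost:11434/v1"},
    "line-3": {"provider": "azure-openai", "model": "gpt-4o",
               "base_url": "https://yourco.openai.azure.com"},
}

def endpoint_for(cabinet: str) -> dict:
    """Resolve a cabinet to its own provider config -- no global model lock."""
    return CABINETS[cabinet]
```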

Vertical integration. Brain ships the cabinet, the I/O, the HMI, the AI agent, and the vision system as one system designed by one team. The hardware watchdog model (every I/O board has a 2-second hardware watchdog) and the 5-tier safety authorization were designed together with the agent, not bolted on. Rockwell + Nemotron sits on top of the existing FactoryTalk + Logix architecture.

Forever license vs SaaS. Brain Core is €2,500. Brain Pro is €5,000. One-time purchase, lifetime firmware updates. FactoryTalk Design Studio Copilot is SaaS-billed. Over a 10-year asset life, the math on a €5,000 forever license versus Rockwell’s subscription stack is, to be blunt, not close.

Plugin SDK. Brain ships a Python plugin SDK and an MCP server. You extend the agent yourself. We are not aware of an equivalent public SDK from Rockwell.
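To make the extensibility point concrete: a plugin surface, at minimum, lets a facility register its own tools for the agent to call. The sketch below shows that general pattern only — the decorator, registry, and function names are hypothetical, not the actual Brain SDK API (and Rockwell publishes no equivalent at all):

```python
# Minimal tool-registration pattern: a site engineer exposes local logic
# (recipes, batch sequences) to the agent without waiting on the vendor.
TOOLS = {}

def tool(name: str):
    """Register a function under a name the agent can call it by."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("lookup_recipe")
def lookup_recipe(recipe_id: str) -> dict:
    # Site-specific logic lives here -- the facility extends the agent,
    # rather than adapting its process to the vendor's fixed tool surface.
    return {"id": recipe_id, "steps": ["charge", "mix", "transfer"]}

result = TOOLS["lookup_recipe"]("R-100")
```

An MCP server is the same idea one level up: the tool registry is exposed over a standard protocol so any compatible agent can discover and call it.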

Verdict

Rockwell + NVIDIA shipped the most technically thoughtful incumbent AI of the cycle. The Nemotron integration is genuinely good engineering, and the edge / air-gapped capability is a real deployment story for regulated industries.

For Rockwell shops who want on-prem AI without changing platform, it is a strong upgrade. For buyers who want to choose their own model, extend the agent themselves, or operate across multiple vendor ecosystems, the architecture is still vendor-bound — and the integrated, sovereignty-flexible alternative now exists.


Sources: Rockwell + NVIDIA Nemotron press release; Manufacturing Tomorrow coverage; Packaging World on FactoryTalk Copilot; RealPars on FactoryTalk Design Studio.

See how Brain compares

Honest comparison vs. Siemens, Rockwell, Beckhoff, CODESYS, Schneider.