
Architecting Enterprise AI: The Lattice Platform (Part 1)

Sean Miller
Tags: blog, series, architecture, enterprise ai, platform engineering, lattice

Lattice abstract representation of an Enterprise AI Platform

Disclaimer: This series is a personal, educational reference architecture. All diagrams, opinions, and frameworks are my own and are not affiliated with, sponsored by, or representative of my employer. I’m publishing this on my own time and without using any confidential information.

© 2026 Sean Miller. All rights reserved.


When we talk about AI, the discourse usually gravitates toward model selection, prompt engineering, and integration patterns. These are important considerations, but they say little about how to achieve repeatable, safe execution at scale.

Organizations tend to think of AI as a feature or tool that can simply be plugged into a product or workflow. For example, you might use AI for data understanding and insights generation. Or you might use AI for a custom chatbot with which your partner teams can interact.

Most are building AI capabilities packaged as bespoke implementations. A product team spins up its own orchestration logic, its own retrieval pipeline, and its own safety guardrails. The result is fifty different “AI-powered features” with an equal number of failure modes. Executives begin to balk at the lack of governance, until a TPM headcount is funded to reconcile, document, and put a strategy around all of the org’s implementations.

Lattice: A Reference Architecture for Enterprise AI Platforms

In this post, we’ll review Lattice, a reference architecture for enterprise AI platforms. Lattice is a shared capability layer with consistent observability and execution semantics. We’ll define the core problem, ungoverned AI sprawl, and introduce the Five Planes model that addresses it. From there, we’ll walk through how this layered architecture enables teams to move fast without creating chaos, and I’ll preview what the rest of the series will explore in depth.

The Problem: AI Without a Platform

Imagine what happens when a product team decides to add an AI feature today.

```mermaid
---
title: The Problem — Ungoverned AI Sprawl
---
flowchart LR
    subgraph teams["Product Teams"]
        T1["Team A"]
        T2["Team B"]
        T3["Team C"]
        T4["Team ...N"]
    end
    subgraph sprawl["Redundant, Ungoverned Solutions"]
        direction TB
        subgraph a["Team A's Stack"]
            A1["Orchestration"]
            A2["RAG Pipeline"]
            A3["Safety Filters"]
            A4["Logging"]
        end
        subgraph b["Team B's Stack"]
            B1["Orchestration"]
            B2["RAG Pipeline"]
            B3["Safety Filters"]
            B4["Logging"]
        end
        subgraph c["Team C's Stack"]
            C1["Orchestration"]
            C2["RAG Pipeline"]
            C3["Safety Filters"]
            C4["Logging"]
        end
        subgraph d["Team N's Stack"]
            D1["..."]
        end
    end
    subgraph models["Model APIs"]
        M1[("Copilot")]
        M2[("Claude")]
        M3[("Gemini")]
        M4[("ChatGPT")]
    end
    T1 --> a
    T2 --> b
    T3 --> c
    T4 --> d
    a --> M1
    a --> M2
    b --> M2
    b --> M3
    c --> M1
    c --> M3
    d --> M1
    d --> M2
    d --> M3
    %% Problem callouts
    P1[/"[X] No shared guardrails"/]
    P2[/"[X] Inconsistent audit trails"/]
    P3[/"[X] Duplicated effort"/]
    P4[/"[X] Security blind spots"/]
    sprawl ~~~ P1
    sprawl ~~~ P2
    sprawl ~~~ P3
    sprawl ~~~ P4
    %% Styling
    style teams fill:#E8F0FE,stroke:#4285F4,stroke-width:2px
    style sprawl fill:#FFEBEE,stroke:#EA4335,stroke-width:3px
    style models fill:#E8F5E9,stroke:#34A853,stroke-width:2px
    style a fill:#FFCDD2,stroke:#C62828
    style b fill:#FFCDD2,stroke:#C62828
    style c fill:#FFCDD2,stroke:#C62828
    style d fill:#FFCDD2,stroke:#C62828
    style P1 fill:#FFF3E0,stroke:#FB8C00,color:#E65100
    style P2 fill:#FFF3E0,stroke:#FB8C00,color:#E65100
    style P3 fill:#FFF3E0,stroke:#FB8C00,color:#E65100
    style P4 fill:#FFF3E0,stroke:#FB8C00,color:#E65100
```

Figure 1: The typical pattern. Every team rebuilds the same infrastructure with different implementations.

This approach works fine for a proof of concept. It breaks down at scale for three reasons.

First, there’s no unified governance. Each implementation makes its own decisions about what the model can access, which tools it can call, and when (or whether) humans need to intervene. What’s acceptable to one team may be a compliance violation for another, and no one has visibility across the portfolio.

Second, there’s no consistent auditability. When something goes wrong, there’s no standard trail showing what context the model consumed, what tools it invoked, or why it produced a specific output. Debugging becomes archaeology, and explaining decisions to auditors becomes creative writing.
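
As a sketch of what a consistent trail could look like, here is a hypothetical audit record capturing the context, tool calls, and output of one request. All field names, tool names, and values are illustrative, not part of any real platform's schema:

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One standardized record per AI request: what went in, what ran, what came out."""
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    context_sources: list = field(default_factory=list)  # documents the model consumed
    tool_calls: list = field(default_factory=list)       # tools invoked, with arguments
    model: str = ""
    output_summary: str = ""

# Hypothetical example: a support-triage request that looked up a CRM case.
record = AuditRecord(
    context_sources=["kb://refund-policy#v3"],
    tool_calls=[{"tool": "crm.lookup_case", "args": {"case_id": "C-1042"}}],
    model="example-model",
    output_summary="Recommended refund per policy v3",
)
print(json.dumps(asdict(record), indent=2))
```

With one record shape enforced platform-wide, "what did the model see and do?" becomes a query rather than an investigation.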

Third, there’s no shared learning. When one team improves their retrieval pipeline or discovers a better safety check, those improvements don’t propagate easily (or at all) across the org. Every team is solving the same problems in isolation, accumulating tech debt at an alarming rate.

The cumulative effect is AI sprawl: a growing portfolio of fragile, inconsistent implementations that become expensive to maintain, impossible to audit, and quickly abandoned for next month’s new set of features. AI adoption then stalls because even though the technology works fine, the organizational structure to support it never materialized.

The Solution: A Shared Capability Layer

Instead of each team building their own AI stack, product teams consume AI through a governed platform that handles orchestration, tool access, retrieval, and observability on their behalf.

Lattice model

Figure 2: The Lattice model. Products consume AI through a shared runtime governed by centralized policy.

```mermaid
graph TB
    subgraph "Experience Plane"
        A1[Product A]
        A2[Product B]
        A3[Product C]
    end
    subgraph "Runtime Plane"
        GW[AI Gateway]
        OE[Orchestration Engine]
        TG[Tool Gateway]
        CB[Context Builder]
        MG[Model Gateway]
    end
    subgraph "Control Plane"
        PE[Policy Engine]
        WR[Workflow Registry]
        TR[Tool Registry]
    end
    A1 --> GW
    A2 --> GW
    A3 --> GW
    GW --> OE
    OE --> TG
    OE --> CB
    OE --> MG
    OE --> PE
    TG --> TR
    OE --> WR
    style A1 fill:#4285F4,stroke:#333,stroke-width:2px,color:white
    style A2 fill:#4285F4,stroke:#333,stroke-width:2px,color:white
    style A3 fill:#4285F4,stroke:#333,stroke-width:2px,color:white
    style GW fill:#34A853,stroke:#333,stroke-width:2px,color:white
    style OE fill:#34A853,stroke:#333,stroke-width:2px,color:white
    style TG fill:#34A853,stroke:#333,stroke-width:2px,color:white
    style CB fill:#34A853,stroke:#333,stroke-width:2px,color:white
    style MG fill:#34A853,stroke:#333,stroke-width:2px,color:white
    style PE fill:#FBBC04,stroke:#333,stroke-width:2px
    style WR fill:#FBBC04,stroke:#333,stroke-width:2px
    style TR fill:#FBBC04,stroke:#333,stroke-width:2px
```

Figure 3: High-level view of the Lattice Model.

The governance layer enables velocity by handling safety, auditability, and tool access in a common interface. Product teams can ship features without reinventing infrastructure in a piecemeal fashion. They focus on their domain expertise while the platform handles the cross-cutting concerns that would otherwise consume months of effort.

The Five Planes

Lattice organizes responsibilities into five distinct layers, each with a clear mandate. These layers define the organizational structure of an AI platform.

| Plane | Responsibility | Key Question It Answers |
| --- | --- | --- |
| Experience | Where humans consume AI | “How do users interact with AI capabilities?” |
| Runtime | Where AI executes | “How does an intent become an output?” |
| Control | Where rules live | “What is allowed, and who decides?” |
| Data | Where facts live | “What information can AI access?” |
| Ingestion | Where raw becomes AI-ready | “How does operational data become retrievable knowledge?” |
```mermaid
---
title: Lattice Platform — Plane Interactions
---
flowchart TB
    subgraph EP["Experience Plane"]
        direction LR
        EX["Workbenches · Consoles · Portals · APIs"]
    end
    subgraph CP["Control Plane"]
        direction LR
        CT["Identity · Policy · Registries · Eval Gates"]
    end
    subgraph RP["Runtime Plane"]
        direction LR
        RT["AI Gateway → Orchestration → Tools → Models"]
    end
    subgraph DP["Data Plane"]
        direction LR
        DT["Systems of Record · Search · Vectors · Knowledge"]
    end
    subgraph IP["Ingestion Plane"]
        direction LR
        IG["Ingest → Parse → Detect PII → Embed → Index"]
    end
    subgraph MP["Model Runtime"]
        direction LR
        MR["Approved Models · Specialized Models"]
    end
    %% Primary request flow
    EP ==> RP
    %% Runtime dependencies
    RP <-.->|"auth · policy · config"| CP
    RP <-->|"context · tool data"| DP
    RP <-->|"inference"| MP
    %% Ingestion writes to Data
    IP ==>|"indexes · embeddings"| DP
    %% Styling - modern gradient feel
    style EP fill:#4A90D9,stroke:#2E5C8A,stroke-width:2px,color:#fff
    style RP fill:#50B86E,stroke:#357A49,stroke-width:2px,color:#fff
    style CP fill:#F5A623,stroke:#B37A1A,stroke-width:2px,color:#333
    style DP fill:#D0585E,stroke:#8E3B3F,stroke-width:2px,color:#fff
    style IP fill:#9B59B6,stroke:#6C3483,stroke-width:2px,color:#fff
    style MP fill:#34495E,stroke:#1C2833,stroke-width:2px,color:#fff
```

Figure 4: The Five Planes. Each layer has distinct responsibilities and communicates through defined interfaces.

Experience Plane

The Experience Plane embeds AI into the products and workflows people already use. AI surfaces feel like better versions of the products people already know, complete with clear affordances and feedback loops.

Operations dashboards with AI-powered case triage, customer portals with intelligent status explanations, and developer tools with semantic code search and runbook assistance all fall within this plane.

Runtime Plane

The Runtime Plane is the execution fabric that turns intent into output. Orchestration, tool calls, model inference, and human-in-the-loop approvals happen under strict latency and reliability constraints. It’s the factory floor of the platform.

The plane consists of five core components working in concert. The AI Gateway serves as the single entry point for all AI requests, handling authentication, routing, and response normalization. The Orchestration Engine executes workflows with proper state management, treating agentic behavior as a set of governed workflows. The Tool Gateway provides controlled access to enterprise systems. If AI is going to take action, it happens through this chokepoint. The Context Builder handles retrieval, redaction, and citation assembly, ensuring models see only what they’re supposed to see. The Model Gateway manages routing, structured outputs, and cost control across whatever model providers you’re using.
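
To make the flow concrete, here is a minimal sketch of one request passing through these components. Every class name, method, and token value is hypothetical, stand-ins for components the post describes rather than a real API:

```python
# Hypothetical sketch of one request flowing through the Runtime Plane.

class ContextBuilder:
    def build(self, query: str) -> dict:
        # Retrieval, redaction, and citation assembly would happen here; stubbed.
        return {"query": query, "passages": ["[redacted] refund window is 30 days"]}

class ModelGateway:
    def infer(self, context: dict) -> str:
        # Routing, structured outputs, and cost controls would live here; stubbed.
        return f"Answer grounded in {len(context['passages'])} passage(s)."

class OrchestrationEngine:
    def __init__(self):
        self.context_builder = ContextBuilder()
        self.model_gateway = ModelGateway()

    def run(self, query: str) -> str:
        context = self.context_builder.build(query)  # governed retrieval
        return self.model_gateway.infer(context)     # governed inference

class AIGateway:
    """Single entry point: authenticate, then hand off to orchestration."""
    def __init__(self, engine: OrchestrationEngine):
        self.engine = engine

    def handle(self, token: str, query: str) -> str:
        if token != "valid-token":  # stand-in for real authentication
            raise PermissionError("unauthenticated request")
        return self.engine.run(query)

gateway = AIGateway(OrchestrationEngine())
print(gateway.handle("valid-token", "What is the refund window?"))
```

The point of the sketch is the shape, not the stubs: product code only ever talks to the gateway, so authentication, retrieval, and inference policy cannot be bypassed by calling a model directly.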

Control Plane

The Control Plane is the governing layer that decides what is allowed, what is approved, and how change happens. This is how teams move fast without creating chaos: a clear, centralized source of truth about what the platform permits.
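
One way to picture that centralized source of truth is a policy lookup that every tool call must pass through. The workflow names, tool names, and rule shape below are invented for illustration; the one design choice worth noting is default-deny for anything not explicitly registered:

```python
# Illustrative Control Plane check: may this workflow call this tool, and does
# it need a human approval gate? All entries are made up.
POLICIES = {
    ("support-triage", "crm.read_case"):        {"allowed": True,  "human_approval": False},
    ("support-triage", "crm.issue_refund"):     {"allowed": True,  "human_approval": True},
    ("support-triage", "billing.delete_account"): {"allowed": False, "human_approval": False},
}

def check_policy(workflow: str, tool: str) -> dict:
    """Central decision point: anything unregistered is denied by default."""
    return POLICIES.get((workflow, tool), {"allowed": False, "human_approval": False})

assert check_policy("support-triage", "crm.read_case")["allowed"]
assert check_policy("support-triage", "crm.issue_refund")["human_approval"]
assert not check_policy("support-triage", "unknown.tool")["allowed"]  # default deny
```

Because the rules live in one registry rather than fifty codebases, tightening a policy is a data change that takes effect everywhere at once.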

Data Plane

The Data Plane serves as the source-of-truth and serving substrate for facts, documents, and signals. It’s organized so AI systems can retrieve what they need without leaking what they shouldn’t. This is a critical requirement when you’re dealing with sensitive enterprise data.

This includes the Vector Index for semantic search over embedded content, a Knowledge Store for curated facts and policies that should inform AI outputs, a Document Store for raw and processed documents, and Session State management for conversation history and run artifacts. The separation is important: each store has different access patterns, retention policies, and sensitivity levels.
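
A small sketch can show why that separation pays off operationally: when each store declares its own access pattern, retention, and sensitivity, cross-cutting jobs like a compliance purge become simple queries over configuration. All values below are illustrative:

```python
# Illustrative per-store configuration for the Data Plane. Numbers and
# sensitivity labels are made up for the example.
STORES = {
    "vector_index":    {"access": "similarity search", "retention_days": 365,  "sensitivity": "internal"},
    "knowledge_store": {"access": "key lookup",        "retention_days": None, "sensitivity": "internal"},
    "document_store":  {"access": "blob fetch",        "retention_days": 2555, "sensitivity": "confidential"},
    "session_state":   {"access": "per-conversation",  "retention_days": 30,   "sensitivity": "restricted"},
}

def stores_needing_purge(max_days: int) -> list:
    """Stores whose retention window falls at or below a compliance threshold."""
    return sorted(
        name for name, cfg in STORES.items()
        if cfg["retention_days"] is not None and cfg["retention_days"] <= max_days
    )

print(stores_needing_purge(90))  # → ['session_state']
```

If all four stores were one undifferentiated database, that purge sweep would have no clean boundary to act on.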

Ingestion Plane

The Ingestion Plane is the factory that converts raw operational data into AI-ready artifacts. This layer handles document processing, embedding generation, and data quality enforcement. It’s the unglamorous work that determines whether retrieval actually works at runtime.

The components here include Document Processing (OCR, parsing, chunking of various document formats), an Embedding Pipeline for vector generation that feeds semantic search, and Data Quality enforcement for validation, deduplication, and freshness tracking. Done well, this plane is invisible. Done poorly, it’s the source of every hallucination and stale-data bug your users encounter.
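
As a toy illustration of those steps, here is a chunk → dedupe → embed pipeline. The fixed-size chunking, hash-based deduplication, and length-based "embedding" are deliberate simplifications; a real pipeline would split on document structure and call an embedding model:

```python
import hashlib

def chunk(text: str, size: int) -> list:
    """Naive fixed-size chunking; real pipelines split on document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def dedupe(chunks: list) -> list:
    """Drop exact duplicates by content hash, preserving order."""
    seen, out = set(), []
    for c in chunks:
        h = hashlib.sha256(c.encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(c)
    return out

def embed(chunks: list) -> list:
    """Placeholder embedding: a real pipeline would call an embedding model."""
    return [(c, [float(len(c)), float(sum(map(ord, c)) % 97)]) for c in chunks]

# Repeated boilerplate is exactly the kind of input that bloats an index.
doc = "Refunds are issued within 30 days. " * 3
index = embed(dedupe(chunk(doc, 35)))
print(f"{len(index)} unique chunk(s) indexed")  # → 1 unique chunk(s) indexed
```

Even this toy version shows the quality lever: deduplication at ingest time means retrieval at runtime never wastes a context slot on a repeated passage.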

Why This Structure Matters

The Five Planes model solves three critical problems that plague enterprise AI initiatives.

Separation of Concerns. Product teams own the Experience Plane. They know their users, their workflows, their domain. Platform teams own Runtime and Control. They know how to build reliable, secure infrastructure. Data teams own Ingestion and Data. They know the sources, the quality issues, the lineage requirements. Everyone knows their lane, and more importantly, everyone knows who to call when something breaks.

Consistent Governance. Every AI request flows through the same gateway, the same policy engine, the same audit trail. Compliance is baked into the platform. When regulators ask how decisions were made, you have one answer that applies across the entire portfolio, not fifty different stories to piece together.

Swappable Components. Need to change model providers? Swap the Model Gateway adapter. Need stricter retrieval for a new regulation? Update the Context Builder configuration. Need to add human review for a sensitive workflow? Update the policy rules. The architecture absorbs change without requiring every product team to update their code. This is the difference between a platform and a pile of implementations. The platform evolves as a unit.
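
The swappable-adapter idea can be sketched in a few lines. The provider names and the `complete` signature are invented for the example; the point is that product code depends only on the gateway interface, so changing providers is a one-line wiring change:

```python
from typing import Protocol

class ModelAdapter(Protocol):
    """Interface every provider adapter implements (illustrative)."""
    def complete(self, prompt: str) -> str: ...

class ProviderAAdapter:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class ProviderBAdapter:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

class ModelGateway:
    """Product teams call this class only; the adapter behind it is config."""
    def __init__(self, adapter: ModelAdapter):
        self.adapter = adapter

    def complete(self, prompt: str) -> str:
        return self.adapter.complete(prompt)

# Swapping providers touches one line, not every product team's code:
gateway = ModelGateway(ProviderAAdapter())
gateway = ModelGateway(ProviderBAdapter())
print(gateway.complete("Summarize this case"))  # → [provider-b] Summarize this case
```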

Where This Structure Breaks Down

Lattice is a solid foundational architecture that can power enterprise AI service offerings, but it’s not the best fit for every organization or use case. The full five-plane separation carries real platform overhead, and smaller teams may find it heavyweight, which is why the series roadmap includes Lattice-Lite, a lighter approach for small orgs.

What’s Next

In the next post, “The AI Gateway: Front Door to Governed AI,” we’ll dive deep into the Runtime Plane’s entry point. We’ll explore how it authenticates requests, enforces policy, routes to workflows, and normalizes responses. The Gateway is where every AI interaction begins, and getting it right determines whether your platform feels coherent or chaotic.


Series Roadmap

This series will explore each component of the Lattice architecture in depth:

  1. Introduction to Lattice (this post) — The Five Planes overview
  2. The AI Gateway — Front door and policy enforcement
  3. The Orchestration Engine — Workflows, not agents
  4. The Tool Gateway — Governed access to enterprise systems
  5. The Context Builder — Retrieval, redaction, and grounding
  6. The Model Gateway — Routing, cost control, and structured outputs
  7. The Control Plane — Policy, registries, and change management
  8. The Data Plane — Indexes, stores, and session state
  9. The Ingestion Plane — Document processing and embeddings
  10. MCP Integration — Standardized interoperability
  11. Preventing Hallucinations — Architectural approaches to grounding
  12. Lattice-Lite — A lighter approach for small orgs
  13. Putting It Together — End-to-end request lifecycle

This series documents architectural patterns for enterprise AI platforms. Diagrams and frameworks are provided for educational purposes.
