If you've read the platform overview, you know what the C9800 is and which model fits your deployment. This article goes deeper — into the internals that determine how the controller actually works. Understanding the software architecture, the role of each process, and how packets flow through the dataplane isn't just academic. It's the foundation you need to troubleshoot intelligently, size your deployment correctly, and understand why certain features behave the way they do.
The C9800's architecture is fundamentally different from AireOS. It's not a monolithic system with a single memory space and a single fault domain. It's a multiprocess, data-model-driven platform built on IOS-XE (internally codenamed Polaris), designed from the ground up for elasticity, resiliency, and programmability. As Arena, Crippa, Darchis, and Katgeri explain in Understanding and Troubleshooting Cisco Catalyst 9800 Series Wireless Controllers (Cisco Press, 2023), the word "elastic" was present in the earliest codenames of the platform — and it captures the architectural philosophy perfectly.
IOS-XE Software Architecture
IOS-XE is built on BinOS, a Cisco-modified Linux operating system (the 16.x and 17.x trains of IOS-XE are internally codenamed Polaris). The legacy IOS (the "classic" IOS that ran on older routers and switches) was a monolithic operating system — a single process running in a single memory space with a single fault domain. A crash anywhere brought down everything.
IOS-XE breaks away from that model entirely. It adopts a multiprocess, modular architecture that separates the operating system (binOS) from the network tasks. The core routing and interface configuration is managed by a process called IOSd, but more specific tasks — like wireless session management — are separated into dedicated processes with their own memory and fault domains.
The key processes in the IOS-XE architecture on the C9800 are:
| Process | Function | Instances |
|---|---|---|
| IOSd | Core IOS-XE routing process — IP forwarding, routing protocols, interface configuration. Contains the IOSd Blob (legacy data structures) and IOSd subsystem modules. | 1 |
| WNCd | Wireless Network Control daemon — the heart of the wireless control plane. Manages AP sessions, client sessions, CAPWAP tunnels, and contains the SANET and SISF libraries. | 1–8 (varies by platform) |
| WNCMgrd | WNCd Manager — oversees all WNCd instances, handles CAPWAP discovery, load-balances AP assignments across WNCd processes, and centralizes CLI output. | 1 |
| Mobilityd | Mobility daemon — handles all inter-controller mobility communications, maintains the PMK cache for roaming clients, and tracks client presence across WLCs in the mobility group. | 1 |
| RRMgrd | Radio Resource Manager daemon — runs the RF grouping, TPC, DCA, and FRA algorithms. Communicates with RRM-client processes inside each WNCd. | 1 |
| Rogued | Rogue management daemon — handles rogue AP detection, WIPS (Wireless Intrusion Prevention System), RLDP (Rogue Location Discovery Protocol), and client classification. | 1 |
| NMSPd | Handles communications with CMX (Connected Mobile Experiences) and Cisco DNA Spaces for location services. | 1 |
| Wstatsd | Processes Netflow (FNF) packets and maintains the AVC (Application Visibility and Control) statistics database. | 1 |
| DBM | Database Manager — handles the configuration database used by the WebUI. | 1 |
| ODM | Manages the CWDB (Centralized Wireless operational Database) — the operational data store. | 1 |
| PubD | Centralizes information for model-driven programmatic access (NETCONF/RESTCONF/telemetry). | 1 |
This separation has practical consequences you'll see in production. If a WNCd process encounters a fatal error, the underlying IOS-XE platform stays up. IOSd continues routing. The other WNCd processes continue serving their APs and clients. The failed WNCd restarts, and its APs reconnect — without a full controller reboot. Compare that to AireOS, where a crash in the wireless process meant the entire controller went down.
The databases in IOS-XE are based on a versioned data model. Each process has its own database to which it has exclusive, contention-free access. These per-process databases are duplicated into a central, consolidated database used for manageability and high availability replication. This means that session management (WNCd doing its job) doesn't interfere with manageability (the WebUI querying config), and vice versa. This data-model-driven approach is also what makes ISSU (In-Service Software Upgrade) possible — the versioned databases allow the standby controller to synchronize state without locking the active controller's operations.
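The replication model described above — each process writing to its own database without contention, with a consolidated copy maintained separately for manageability and HA — can be illustrated with a toy sketch. The class names and the versioning scheme here are invented for illustration; the actual IOS-XE database implementation is not public.

```python
import itertools

class ProcessDB:
    """Toy per-process store: the owning process writes with exclusive,
    contention-free access, stamping each record with a version."""
    def __init__(self, name):
        self.name = name
        self._version = itertools.count(1)
        self.records = {}                      # key -> (version, value)

    def write(self, key, value):
        self.records[key] = (next(self._version), value)

class CentralDB:
    """Consolidated copy used for manageability and HA replication.
    It is only ever replicated into -- it never locks the writer."""
    def __init__(self):
        self.snapshot = {}
        self._seen_version = {}

    def replicate(self, db):
        # Copy only records newer than what we already hold, so the
        # owning process keeps serving sessions while we catch up.
        for key, (ver, value) in db.records.items():
            if self._seen_version.get((db.name, key), 0) < ver:
                self.snapshot[(db.name, key)] = value
                self._seen_version[(db.name, key)] = ver

wncd0 = ProcessDB("WNCd-0")
wncd0.write("client:aaaa.bbbb.cccc", "RUN")
central = CentralDB()
central.replicate(wncd0)
```

The point of the sketch is the decoupling: WNCd updates its own store synchronously during session handling, while the consolidated view lags behind by one replication pass — which is acceptable for a WebUI query or a standby sync, but would not be acceptable inside the client onboarding path.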
WNCd: The Heart of the Wireless Control Plane
If there's one process you need to understand on the C9800, it's WNCd — the Wireless Network Control daemon. Every AP and client session on the controller is managed by a WNCd instance. The number of WNCd processes running on a given platform is statically determined by the hardware — there's no dynamic spawning:
| Platform | WNCd Instances | Max APs | Max Clients |
|---|---|---|---|
| C9800-L | 1 | 250 (500 w/ Perf License) | 5,000 (10,000 w/ Perf License) |
| C9800-40 | 5 | 2,000 | 32,000 |
| C9800-80 | 8 | 6,000 | 64,000 |
| C9800-CL | 1–8 (scales with VM footprint) | 1,000–6,000 | Up to 64,000 |
| EWC on AP | 1 | 100 | 2,000 |
| EWC on Switch | 1 | 200 | 4,000 |
Each WNCd process handles a specific set of access points and all the clients connected to those APs. This is the key design principle: everything a WNCd needs to service a client session — AP management, CAPWAP control, 802.1X authentication, DHCP snooping, policy enforcement — is contained within that single process. This minimizes interprocess communication (IPC) and keeps latency low for time-sensitive operations like client authentication and roaming.
What's Inside a WNCd Process
WNCd is not a simple daemon — it's a large process containing multiple integrated libraries:
SANET (Session Aware Networking): This is the session management library, originally from the IOS-XE switching platform (where it runs inside the SMD process for wired ports). On the C9800, a modified copy of SANET lives inside each WNCd process. SANET handles 802.1X authentication, web authentication, MAC filtering, RADIUS communication, policy application (ACLs, VLAN assignment, QoS), URL redirect, and Change of Authorization (CoA). If you see "SANET" in your radioactive trace logs, you're looking at the authentication and policy engine.
SISF (Switch Integrated Security Features): Also integrated within WNCd, SISF originated on Catalyst switches as a framework for IPv6 control traffic processing. On the C9800, SISF handles IPv6 NDP inspection, ARP and DHCP snooping, device tracking, IP address gleaning, DAD (Duplicate Address Detection) proxy, and DHCP relay. When you see the client transition to the "IP Learn" state in troubleshooting logs, that's SISF doing its job inside WNCd.
Mobility Agent: A mobility library within WNCd handles client roaming. When a client moves to an AP managed by a different WNCd process (or a different controller), the mobility agent negotiates the handover. This means roaming doesn't require a round-trip to IOSd or any other centralized process — it's handled locally within the WNCd instances, which is critical for keeping roaming latency low.
RRM Client: Each WNCd contains an RRM client that communicates with the central RRMgrd process. The RRM client handles probe handling and AP-facing RF capabilities for the APs it manages.
WNCMgrd: The Traffic Controller
The WNCMgrd (WNCd Manager) process oversees all WNCd instances. Its responsibilities are critical to understand:
CAPWAP Discovery: WNCMgrd is the process that handles all CAPWAP discovery requests from access points. When an AP sends a CAPWAP Discovery Request, WNCMgrd receives it, processes it, and assigns the AP to a specific WNCd instance based on load-balancing rules.
AP Load Balancing: WNCMgrd distributes APs across WNCd instances to keep the load balanced. The assignment happens at AP join time and persists for the life of the AP session.
CLI Centralization: When you run a show wireless command, you want to see all APs and clients — not just the ones on WNCd-0. WNCMgrd has read-only access to the Centralized Wireless operational Database (CWDB) and consolidates information from all WNCd processes to present a unified view.
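The load-balancing behavior can be sketched as a toy model: assign each AP at join time to the least-loaded WNCd instance, and keep that assignment sticky for the life of the AP session. The real WNCMgrd algorithm is more involved (it is aware of site tags, for example, so APs at the same site land on the same WNCd), but the sticky least-loaded idea captures the essentials.

```python
class ApBalancer:
    """Toy model of WNCMgrd's AP distribution. Class and method names
    are invented for illustration."""
    def __init__(self, num_wncd):
        self.load = [0] * num_wncd       # AP count per WNCd instance
        self.affinity = {}               # AP name -> WNCd index

    def on_ap_join(self, ap_name):
        if ap_name in self.affinity:     # a rejoining AP keeps its instance
            return self.affinity[ap_name]
        idx = self.load.index(min(self.load))   # least-loaded wins ties by index
        self.load[idx] += 1
        self.affinity[ap_name] = idx
        return idx

bal = ApBalancer(num_wncd=5)             # e.g. a C9800-40
assignments = [bal.on_ap_join(f"AP-{i}") for i in range(10)]
```

After ten joins the load is even across all five instances, and re-running `on_ap_join("AP-0")` returns the same index it got the first time — the stickiness that lets you predict which WNCd's traces to read for a given AP.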
The Wireless Client State Machine
Every wireless client on the C9800 is tracked by a finite state machine inside the WNCd process that manages its AP. Understanding these states is essential for troubleshooting — when a client is stuck, the state it's in tells you exactly where in the onboarding process the failure occurred.
The onboarding of a client is considered a Control Plane activity. Traffic is analyzed and tracked by the Control Plane until the client reaches the final forwarding state (called RUN), at which point traffic is forwarded by the dataplane without further Control Plane involvement.
Here's the lifecycle of an 802.1X client joining a centrally switched WLAN, showing the internal handoffs between libraries within WNCd:
| Step | State | WNCd Component | What Happens |
|---|---|---|---|
| 1 | S_CO_INIT → S_CO_ASSOCIATING | Client Orchestrator | Client sends 802.11 association request (via CAPWAP data tunnel). WNCd receives a socket event. The CAPWAP library parses the payload and invokes the 802.11 handler. |
| 2 | S_DOT11_INIT → S_DOT11_ASSOCIATED | Dot11 Library | The 802.11 library validates the association request against the BSSID operational table, sends an association response back to the client via CAPWAP. |
| 3 | S_CO_L2_AUTH_IN_PROGRESS | Client Orchestrator → SANET | The orchestrator sends an "add mobile" payload to the AP (CAPWAP control) to create the client entry on the AP. Then it verifies the L2 authentication policy and triggers 802.1X via the SANET library. |
| 4 | S_AUTHIF states | SANET (Authentication Interface) | SANET handles the full 802.1X exchange — RADIUS communication, EAP relay, policy retrieval. On success, policies (ACLs, VLAN, QoS) are stored within SANET. An IPC call updates Mobilityd's PMK cache. |
| 5 | S_INITPMK → S_PTKINITDONE | Client Key Library | The four-way key exchange generates the PTK, GTK, and IGTK. Transient keys are sent to the AP via "add mobile" inside CAPWAP control. |
| 6 | S_CO_MOBILITY_DISCOVERY_IN_PROGRESS | Mobility Agent | If controllers are in a mobility group, the mobility interface announces the client and checks if it exists on another WLC (detecting a roam vs. new association). |
| 7 | S_CO_DPATH_PLUMB_IN_PROGRESS | Client Orchestrator → SISF | Transition to "IP Learn" state. SISF plumbs a client entry into the dataplane to learn the client's MAC and punt DHCP packets to WNCd for processing. |
| 8 | S_CO_IPLEARN_IN_PROGRESS | SISF (IP Learn Library) | SISF snoops DHCP or ARP to learn the client's IP. When the IP is identified, an event is returned to the client orchestrator. |
| 9 | S_CO_RUN | Client Orchestrator | The orchestrator drives the final policy bind — QoS, ACL, VLAN, and IP are plumbed into the dataplane. A final "add mobile" with QoS attributes is sent to the AP. The client is now in RUN state and traffic flows through the dataplane. |
A client is deleted from the state machine based on timers: idle timeout, session timeout, or state timeout (if the client is stuck in a transition state for too long). For example, a client associating to a web authentication SSID that never completes the portal login is deleted after 5 minutes by default — this prevents resource exhaustion from clients that passively associate but never finish onboarding.
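The timer-driven deletion logic can be sketched in a few lines. This is a deliberately flattened model — the real machine runs parallel sub-machines (Dot11, AUTHIF, key exchange) and many more states — but it shows the rule that matters for troubleshooting: a client stuck short of RUN is evicted by the state timeout, while a client in RUN is governed by the idle and session timers instead.

```python
import time

# Flattened onboarding sequence from the table above (illustrative only).
ONBOARDING = ["ASSOCIATING", "L2_AUTH", "MOBILITY_DISCOVERY", "IPLEARN", "RUN"]

class ClientSession:
    def __init__(self, mac, state_timeout=300.0):   # 5-minute state timeout
        self.mac = mac
        self.state = "INIT"
        self.entered = time.monotonic()
        self.state_timeout = state_timeout

    def advance(self, new_state):
        assert new_state in ONBOARDING
        self.state = new_state
        self.entered = time.monotonic()             # timer restarts per state

    def expired(self, now=None):
        """A client parked in a transition state too long gets deleted;
        RUN-state clients are handled by idle/session timers instead."""
        if self.state == "RUN":
            return False
        now = time.monotonic() if now is None else now
        return (now - self.entered) > self.state_timeout

c = ClientSession("aaaa.bbbb.cccc")
c.advance("L2_AUTH")
```

A client that associates to a web-auth SSID and never logs in sits in a pre-RUN state until `expired()` trips — exactly the 5-minute cleanup the text describes.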
These states are directly visible in radioactive traces, the C9800's most powerful troubleshooting tool. You can trace a single client's MAC address through every state transition, every library handoff, and every RADIUS exchange without generating output for every other client on the network. This is covered in depth in C9800 Debug Commands Reference.
Dataplane Architecture: How Packets Actually Flow
The control plane (WNCd, IOSd, and the other processes) handles client onboarding, policy decisions, and AP management. But once a client reaches RUN state, its traffic is forwarded by the dataplane — and the dataplane implementation is the biggest architectural difference between the C9800 platform variants.
All C9800 platforms use the same dataplane code — the Cisco Packet Processor (CPP), also known internally as Doppler. The CPP is the same dataplane that runs on the ASR 1000 series routers. It's extremely flexible because it can be implemented entirely in software (as a process running on x86 CPU cores) or accelerated by a hardware ASIC (the Quantum Flow Processor, or QFP). The code is identical across platforms — only the execution environment differs at compilation time.
| Platform | Dataplane Implementation | Max Throughput | DTLS Crypto Acceleration |
|---|---|---|---|
| C9800-40 | Hardware — single QFP ASIC (64 cores, 4 threads/core) | 40 Gbps (1500-byte packets, unidirectional) | Dedicated crypto chip |
| C9800-80 | Hardware — dual QFP ASICs | 80 Gbps (1500-byte packets, unidirectional) | Dedicated crypto chip |
| C9800-L | Software — dedicated x86 CPU cores | 5 Gbps (10 Gbps w/ Perf License) | Software (CPU) |
| C9800-CL | Software — virtual NIC (vCPU cores) | 2–3 Gbps (doubled w/ SR-IOV) | Software (vCPU) |
| EWC on AP | AP OS provides dataplane | AP-limited | N/A |
| EWC on Switch | Switch FED / UADP ASIC | Switch-limited | Switch hardware |
The Life of a Packet on the C9800-40/-80
Understanding the packet path through the hardware appliances helps you troubleshoot performance issues and understand where features like ACLs, AVC, QoS, and Encrypted Traffic Analytics are applied. Here's the path, block by block:
Block A — Ingress: Packets arrive on the data interfaces through the SFP+ transceivers and 10G PHY. The ingress ASIC (the NP5c, also referred to as the Memory Controller or Memory Exchange chip) handles buffering and initial packet sorting. It determines whether the packet needs to go to the control plane (destined to a WLC IP address, or a punted packet like a DHCP request during client IP Learn) or to the dataplane for fast forwarding.
Block B — QFP Dataplane: For data plane traffic (typically a wireless client's encapsulated frame), the packet is handed to the QFP via an internal bus called Astro. The QFP is a 64-core programmable packet processing chip with four hardware threads per core and built-in priority queues and feature arrays. ACLs, AVC (Application Visibility and Control), Netflow, QoS marking, and CAPWAP encapsulation/decapsulation are all implemented directly in the QFP's packet processing pipeline at hardware speed.
Block C — CPU Complex: Control plane packets are sent to the x86 CPU complex. From there, they enter the Linux kernel through the LSMPI (Linux Shared Memory Pool Interface) or LFTS (Linux Forwarding Transport Service). These kernel libraries identify whether the packet belongs to IOSd (a routing protocol update, for example) or should be handed to the TCP/IP stack and then dispatched to the appropriate WNCd process (a CAPWAP control packet from an AP, for example).
Block D — Crypto Chip: If CAPWAP data encryption (DTLS) is enabled, encrypted packets take a detour through the dedicated crypto chip for hardware-accelerated decryption before being processed by the QFP. This is critical for the C9800-40 and C9800-80 — at 40–80 Gbps throughput, no CPU could handle DTLS decryption in software. The crypto chip is what makes data DTLS viable at scale on the hardware appliances.
An important throughput caveat from the book: the published 40 Gbps and 80 Gbps numbers represent aggregate throughput (not 40 Gbps in each direction — it's 40 Gbps in one direction or 20 Gbps simultaneously in both directions for the C9800-40). These numbers are measured with 1500-byte packets, which favor the dataplane: the per-packet processing cost is roughly fixed, so larger payloads carry more bits per unit of work. With smaller packets (64-byte, for example), throughput drops dramatically because the QFP spends its cycles on per-packet overhead rather than payload. Enabling heavy ACLs or AVC can also reduce effective throughput. In practice, you'll almost always hit the AP or client count ceiling before you hit the throughput ceiling — wireless is a shared medium, clients send in bursts, and real-world throughput is limited by the RF environment long before the controller dataplane becomes the bottleneck.
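The packet-size effect is simple arithmetic: if the dataplane's real limit is packets per second rather than bits per second, throughput scales linearly with packet size. The packets-per-second budget below is a hypothetical figure chosen only so that 1500-byte packets land at 40 Gbps; Cisco does not publish the QFP's pps budget.

```python
def effective_gbps(pps_budget, packet_bytes):
    """Throughput in Gbps when the limiting resource is packets/second."""
    return pps_budget * packet_bytes * 8 / 1e9

# Hypothetical budget: whatever rate yields 40 Gbps at 1500-byte packets.
pps_budget = 40e9 / (1500 * 8)                 # ~3.3 million packets/second

full_size = effective_gbps(pps_budget, 1500)   # ~40 Gbps at full-size packets
tiny = effective_gbps(pps_budget, 64)          # under 2 Gbps at 64-byte packets
```

The same pps budget that delivers 40 Gbps with 1500-byte frames delivers well under 2 Gbps with 64-byte frames — a factor-of-23 swing with no change to the hardware, which is why datasheet throughput numbers always state the packet size.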
The Virtual Dataplane (C9800-CL and C9800-L)
The C9800-CL runs the same CPP/Doppler dataplane code, but executes it as a software process on virtual CPU cores instead of on dedicated QFP hardware. This yields a maximum throughput of roughly 2–3 Gbps under typical conditions. With SR-IOV (Single Root I/O Virtualization) compatible NICs and drivers, the VM bypasses the hypervisor's virtual switch and can dedicate more CPU cores to the dataplane task, roughly doubling throughput. Separate "high throughput" VM images are available for SR-IOV deployments.
The C9800-L similarly uses dedicated x86 CPU cores for its software dataplane (it does not have a QFP chip), capping throughput at 5 Gbps (or 10 Gbps with the Performance License).
Both platforms are excellent choices for FlexConnect deployments where client data traffic is switched locally at the AP and the controller only handles the CAPWAP control plane. In FlexConnect mode, the controller's dataplane throughput is largely irrelevant because client traffic never traverses it.
EWC: Borrowing Someone Else's Dataplane
The Embedded Wireless Controller doesn't have its own dataplane at all. When running on a Catalyst 9000 switch (EWC-SW), the C9800 control plane runs natively on the switch CPU but relies on the switch's FED (Forwarding Engine Driver) and UADP ASIC for data forwarding. When running on a Catalyst 9100 AP (EWC-AP), the controller code runs in a container and the AP OS provides the dataplane — which is why EWC-AP is limited to a single SVI and FlexConnect mode only.
How CAPWAP Data Traffic Flows Through the System
Putting the control plane and dataplane together, here's what happens when a wireless client in Local Mode sends a packet to the internet — after the client has already reached RUN state:
| Step | Location | What Happens |
|---|---|---|
| 1 | Client → AP | Client sends an 802.11 frame. The AP encrypts/decrypts at Layer 2 (WPA2/WPA3), encapsulates the frame in a CAPWAP data tunnel (UDP 5247), and sends it to the controller. |
| 2 | Ingress ASIC (Block A) | The packet arrives on an SFP+ port. The ingress ASIC identifies it as a CAPWAP data packet and routes it to the QFP dataplane (not the CPU). |
| 3 | Crypto Chip (Block D) | If DTLS data encryption is enabled, the packet detours through the crypto chip for decryption. |
| 4 | QFP Dataplane (Block B) | The QFP decapsulates CAPWAP, applies ACLs, performs AVC deep packet inspection (if configured), applies QoS marking, generates Netflow records, and forwards the original client frame onto the wired network via the appropriate VLAN. |
| 5 | Egress → Wired Network | The decapsulated client frame exits the controller on the data port toward the upstream switch/router. |
Return traffic follows the reverse path: the wired frame arrives on the controller's data port, the QFP encapsulates it in CAPWAP, optionally encrypts it via the crypto chip, and sends it to the AP via UDP 5247. The AP decapsulates and transmits the 802.11 frame to the client.
The critical insight: in Local Mode (centralized switching), all client data traffic traverses the controller's dataplane. This is why the QFP matters for Local Mode deployments and why the throughput ceiling matters for high-density environments. In FlexConnect mode, the AP handles data switching locally — only CAPWAP control traffic (UDP 5246) traverses the controller, which is a fraction of the bandwidth.
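The first-pass sort — control traffic punted to the CPU complex, data traffic kept in the QFP fast path — hinges on the two CAPWAP UDP ports named above. A toy classifier makes the split concrete; the function and return labels are invented for illustration, but the port numbers and the Local Mode vs. FlexConnect behavior are as described in the text.

```python
CAPWAP_CONTROL = 5246   # terminates on the control plane (WNCMgrd/WNCd)
CAPWAP_DATA = 5247      # handled by the QFP dataplane in Local Mode

def classify(dst_port, mode="local"):
    """Toy first-pass sort of an inbound UDP packet from an AP."""
    if dst_port == CAPWAP_CONTROL:
        return "punt-to-cpu"                # AP join, config, keepalives
    if dst_port == CAPWAP_DATA:
        # Local Mode: decapsulate and forward in the dataplane.
        # FlexConnect: client data is switched at the AP and should
        # never arrive on this port at all.
        return "qfp-fastpath" if mode == "local" else "unexpected"
    return "not-capwap"
```

This is also why a firewall between APs and the controller needs both UDP 5246 and 5247 open for Local Mode, but effectively only 5246 carries meaningful volume in a FlexConnect design.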
A Note on ARP Handling
One architectural detail worth calling out: the C9800 does not enable ARP proxy by default (unlike what you might expect from AireOS experience). ARP proxy is a configurable feature in the policy profiles starting with IOS-XE 17.3.
The default behavior is ARP unicast conversion: broadcast ARP messages destined to wireless clients are transformed into unicast messages for the specific target MAC address. This saves airtime (the ARP isn't blasted to every AP on every WLAN) and is more efficient because the destination client also learns the source MAC from the ARP request. In the case of two wireless clients ARPing for each other, this saves an entire ARP request-reply cycle.
Enabling ARP proxy goes further — the WLC replies to ARP requests on behalf of known wireless clients without bothering the client at all. This is useful when a device on the network is constantly ARPing for wireless clients (creating unnecessary airtime consumption and battery drain), but it's not the default because it adds complexity and can cause issues in certain edge cases.
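The default unicast-conversion behavior amounts to a MAC rewrite against the device-tracking table that SISF maintains. The sketch below is a toy model — the field names and the `deliver_arp` helper are invented, and the real implementation operates on actual frames inside the dataplane — but the decision logic matches the text: rewrite broadcasts aimed at known wireless clients, flood everything else unchanged.

```python
def deliver_arp(frame, device_table):
    """Toy model of ARP unicast conversion (the C9800 default).
    device_table maps target IP -> client MAC, as learned by SISF."""
    if frame["dst_mac"] == "ffff.ffff.ffff":          # broadcast ARP
        target_mac = device_table.get(frame["target_ip"])
        if target_mac:
            # Known wireless client: send a unicast copy to that one
            # client instead of blasting every AP on every WLAN.
            return {**frame, "dst_mac": target_mac}
    return frame                                       # unknown target: flood as-is

table = {"10.0.0.25": "aaaa.bbbb.cccc"}
req = {"dst_mac": "ffff.ffff.ffff", "target_ip": "10.0.0.25"}
```

ARP proxy would go one step further and answer the request locally without delivering anything to the client; the default conversion still involves the client, which is why the client also learns the requester's MAC for free.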
Verifying the Architecture: Key Show Commands
Once you understand the architecture, these commands let you verify what's running under the hood:
C9800# show platform software process list | include wnc
23456 6456 S 48472 3.5 1.2 WNCd
23457 6456 S 47896 3.4 1.2 WNCd
23458 6456 S 47544 3.3 1.1 WNCd
23459 6456 S 47280 3.2 1.1 WNCd
23460 6456 S 47120 3.1 1.0 WNCd

This shows the running WNCd instances. On a C9800-40, you'll see five; on a C9800-80, eight.
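If you collect this output over SSH for automation, counting instances is a one-liner worth scripting. The sample below embeds the output shown above; column layout varies by release, so matching on the process name alone is the robust approach.

```python
import re

SHOW_OUTPUT = """\
23456 6456 S 48472 3.5 1.2 WNCd
23457 6456 S 47896 3.4 1.2 WNCd
23458 6456 S 47544 3.3 1.1 WNCd
23459 6456 S 47280 3.2 1.1 WNCd
23460 6456 S 47120 3.1 1.0 WNCd"""

def count_wncd(output):
    """Count WNCd instances in `show platform software process list`
    output, matching the process name rather than column positions."""
    return sum(1 for line in output.splitlines()
               if re.search(r"\bWNCd\b", line))
```

Five instances means this capture came from a C9800-40 (or an equivalently sized C9800-CL footprint).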
C9800# show wireless loadbalance ap affinity wncd 0
Number of APs : 12
AP Name Slots AP Model Radio MAC State
-----------------------------------------------------------------------------------------------
AP-Floor1-01 3 C9130AXI-B a4b2.39xx.xxxx Registered
AP-Floor1-02 3 C9130AXI-B a4b2.39xx.xxxx Registered
...

This shows which APs are assigned to a specific WNCd instance. When troubleshooting, knowing which WNCd manages a particular AP tells you which process's logs and traces to examine.
C9800# show platform hardware chassis active qfp feature wireless datapath summary
Memory Summary:
Total memory: 4294967296
Used memory: 1073741824
Free memory: 3221225472
CAPWAP Tunnel Summary:
Total tunnels: 90
Control tunnels: 45
Data tunnels: 45

This confirms the QFP dataplane is active and shows CAPWAP tunnel counts — one control tunnel and one data tunnel per AP.
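The tunnel arithmetic is a quick sanity check worth encoding: in Local Mode, every joined AP accounts for exactly one control tunnel (UDP 5246) and one data tunnel (UDP 5247). The helper name is invented for illustration.

```python
def expected_tunnels(ap_count):
    """Expected CAPWAP tunnel counts for ap_count joined Local Mode APs:
    one control and one data tunnel each."""
    return {"control": ap_count, "data": ap_count, "total": 2 * ap_count}
```

The sample output above (90 total, 45 control, 45 data) is consistent with 45 joined APs; a mismatch between control and data tunnel counts would itself be a troubleshooting clue, e.g. APs stuck before the data tunnel is established.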
C9800# show wireless client mac-address xxxx.xxxx.xxxx detail
...
Client State : Associated
...
Policy Type : WPA2
Encryption Cipher : CCMP (AES)
Session Timeout : 1800
...
Client ACL : redirect-acl
VLAN : 100
...
WNCd Instance : 2

The WNCd Instance field tells you exactly which process is managing this client's session. If you need to trace this client's lifecycle, you know which WNCd's logs to examine.
Key Takeaways
The C9800's architecture is built on three foundational principles: process isolation for resiliency, data-model-driven databases for programmability and ISSU, and a unified dataplane codebase (CPP/Doppler) that can be hardware-accelerated or executed entirely in software depending on the platform.
WNCd is the center of the wireless architecture. Each WNCd instance is a self-contained session manager with integrated SANET (authentication/policy), SISF (IP learning/tracking), and mobility agent libraries. The number of WNCd processes determines the controller's scale ceiling — five on the C9800-40, eight on the C9800-80, and one-to-eight on the C9800-CL depending on VM size.
The QFP is the defining hardware differentiator. On the C9800-40 and C9800-80, the Quantum Flow Processor handles CAPWAP encapsulation/decapsulation, ACLs, AVC, QoS, and Netflow at hardware speed. The C9800-L and C9800-CL run the same dataplane code in software, which works well for FlexConnect deployments but caps throughput at 5–10 Gbps (C9800-L) or 2–3 Gbps (C9800-CL).
The client state machine is your troubleshooting map. Every client transition — from association through 802.1X, through IP Learn, to RUN state — is tracked by specific libraries within WNCd and visible in radioactive traces. Knowing which state a client is stuck in tells you exactly which component is failing and where to look.
The next article in this series covers the configuration model that sits on top of this architecture — the Tags, Profiles, and Policies framework that replaced AireOS's flat configuration model: The C9800 Configuration Model: Tags, Profiles, and Policies Explained.