Every few years, a platform comes along that promises to simplify enterprise networking at a fundamental level. SD-Access was Cisco's answer to the question of how you do consistent segmentation, identity-based policy, and automated provisioning across a campus at scale. The pitch is compelling. The architecture is technically sound. And in 2026, after several years of real-world deployments, there's enough practitioner experience to give you a more grounded answer than the datasheets will.
The honest answer is: it depends. Not in a hand-wavy way, but in a specific, checklist-able way. SD-Access can absolutely deliver on its promises, but it requires a level of operational prerequisite that many teams underestimate, and brownfield campuses carry design debt that the fabric overlay does not magically erase. This article walks through what SD-Access actually delivers, what brownfield reality looks like, and how to decide whether it makes sense for your environment right now.
What Teams Hope SD-Access Will Solve
Before getting into the hard parts, it's worth being honest about what drives the interest. Teams that come to SD-Access are usually dealing with at least one of these problems:
- VLAN sprawl and IP addressing debt. Campus networks that have grown organically accumulate VLANs, subnets, and ACLs that nobody wants to clean up. SD-Access promises to flatten the policy model: segmentation via Virtual Networks and Scalable Group Tags (SGTs), not hundreds of VLANs.
- Inconsistent access policy. When policy lives in switch ACLs spread across dozens of closets, changes are slow, error-prone, and hard to verify. Centralised policy in ISE + Catalyst Center looks appealing.
- Guest and contractor segmentation complexity. Keeping IoT devices, contractors, and employees on logically isolated paths without endless VLAN gymnastics is a real operational pain.
- Manual provisioning overhead. New switch ports should just work when a device connects. The promise of identity-driven automation (the endpoint shows up, ISE evaluates it, policy is applied) resonates with teams that spend too much time manually tweaking port configs.
These are real problems. SD-Access addresses all of them. The question is whether you're in a position to actually get there.
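The identity-driven flow in the last bullet can be sketched as pseudocode: an endpoint connects, a policy engine (ISE, in the real architecture) classifies it, and the returned authorization carries the VN and SGT rather than a per-port VLAN config. Everything below is illustrative — the profile names, group names, and tag values are stand-ins, not real ISE policy objects.

```python
# Hypothetical sketch of identity-driven port authorization.
# Profiles, VN names, and SGT values are illustrative only.

from dataclasses import dataclass

@dataclass
class Authorization:
    virtual_network: str  # macro-segmentation (VRF-backed VN)
    sgt: int              # micro-segmentation (Scalable Group Tag)

# Illustrative profiling rules -- a real deployment defines these in ISE.
PROFILE_POLICY = {
    "corp-laptop": Authorization("EMPLOYEES", sgt=10),
    "contractor":  Authorization("GUESTS",    sgt=20),
    "ip-camera":   Authorization("IOT",       sgt=30),
}

def authorize(profile: str) -> Authorization:
    """Return the authorization for a profiled endpoint; unknown
    devices land in a quarantine group rather than a default VLAN."""
    return PROFILE_POLICY.get(profile, Authorization("QUARANTINE", sgt=999))

print(authorize("ip-camera"))       # placed in the IOT VN with SGT 30
print(authorize("mystery-device"))  # unknown -> quarantined, not defaulted open
```

The design point the sketch makes is the one the bullet makes: the port itself carries no endpoint-specific configuration, so "new device class" means a new policy entry, not a closet visit.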
The Promised Benefits: What SD-Access Actually Delivers
Let's be fair to the architecture. When SD-Access is deployed well in a reasonably clean environment, it does deliver meaningfully:
| Capability | What you get | What it replaces |
|---|---|---|
| Macro-segmentation via Virtual Networks (VNs) | VRF-isolated traffic paths between user groups (employees, guests, IoT) without L2 VLAN separation | VLAN + ACL sprawl, dedicated firewall hairpin for every segment pair |
| Micro-segmentation via SGTs | Policy between user/device groups within a VN, enforced in the fabric via SGACL; follows the user as they roam | Per-port ACLs, 802.1X-per-interface config, VLAN-based policy |
| Anycast gateway | Every Edge Node hosts the same gateway IP and MAC; no HSRP/VRRP needed; sub-second failover | HSRP/VRRP on distribution switches, STP dependency for failover |
| Seamless endpoint mobility | LISP separates endpoint identity (IP) from location (RLOC); endpoints keep their IP as they move across the fabric | Client tracking via STP/ARP, layer 2 domain extension for mobility |
| Automated provisioning | Day-0/1/2 templates in Catalyst Center push consistent config to fabric nodes; new switch onboarding is repeatable | Manual CLI configuration per closet switch, per-port ACL edits |
| Centralised policy management | SGT policy matrix in ISE; change once, enforce everywhere in the fabric | Distributed ACL management across dozens of switches |
The segmentation story in particular is genuinely better at scale than VLAN-based policy. Once you have VNs and SGTs working, adding a new user class or device category is a policy change in ISE: not a VLAN, not a new SVI, not a firewall rule coordination call.
Brownfield Reality: What the Deployment Guides Don't Say
Here's where the conversation usually shifts. The vast majority of enterprise campuses are not greenfield. They are buildings full of Catalyst switches of mixed generations, VLANs that carry tribal knowledge nobody wants to document, and routing designs that reflect decisions made by people who left the company years ago. Dropping a fabric overlay onto that is not a migration; it is a redesign that has to coexist with the original.
Legacy Switches and Hardware Eligibility
Not every switch in your campus can be an SD-Access fabric node. Catalyst 3850s, older 3650s, and anything not on the supported platform list for Catalyst Center fabric provisioning will either need replacement or will stay as "non-fabric" legacy segments that connect to the fabric via an External Border Node. That border handoff (the Layer 2 Border Handoff in Cisco's terminology) is functional, but it means those endpoints are still in traditional VLANs and don't benefit from fabric segmentation. You end up running two models simultaneously, which is fine as a migration phase but can persist indefinitely in environments where capital budget is constrained.
Before a brownfield SD-Access project gets past the whiteboard, someone needs to produce a complete platform inventory and check every device against the supported hardware matrix in the current Catalyst Center release notes. This is not glamorous work, but skipping it means discovering ineligible switches during deployment, when the schedule pressure is highest.
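That inventory check is mechanical enough to script. A minimal sketch, assuming you can export the campus switch list (from Catalyst Center, your CMDB, or a discovery tool) and have transcribed the supported-platform matrix from the current release notes; the hostnames and platform strings below are illustrative.

```python
# Screen a switch inventory against the supported fabric platform list.
# The platform set must be transcribed from the current Catalyst Center
# release notes -- this set is an illustrative assumption, not the matrix.

SUPPORTED_FABRIC_PLATFORMS = {"C9200", "C9300", "C9400", "C9500", "C9600"}

inventory = [
    {"hostname": "bldg1-idf2-sw1", "platform": "C9300"},
    {"hostname": "bldg3-idf1-sw4", "platform": "WS-C3850"},  # not fabric-capable
    {"hostname": "bldg3-idf1-sw5", "platform": "C9500"},
]

def ineligible(devices):
    """Return devices that cannot be provisioned as fabric nodes."""
    return [d for d in devices if d["platform"] not in SUPPORTED_FABRIC_PLATFORMS]

for device in ineligible(inventory):
    print(f'{device["hostname"]}: {device["platform"]} needs replacement '
          "or a non-fabric handoff plan")
```

The output of a screen like this is the input to the budget conversation: every flagged device is either a hardware line item or a commitment to an extended coexistence phase.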
The L2/L3 Handoff Problem
SD-Access uses IS-IS as the underlay routing protocol, and Catalyst Center provisions the underlay automatically during fabric node onboarding. In a clean deployment, that's straightforward. In a brownfield network, you're likely running OSPF or EIGRP in the distribution and core layers, and those routing processes need to coexist with or be replaced by the IS-IS underlay. The fabric border nodes translate between the two worlds, but the underlay IP addressing scheme must be planned, not discovered, before provisioning begins.
A common failure pattern: teams bring up the fabric, Catalyst Center provisions the IS-IS underlay correctly on fabric nodes, but existing OSPF adjacencies on adjacent non-fabric distribution switches create routing asymmetry that only surfaces under load or when a link fails. The troubleshooting is difficult because the failure is at the boundary between two routing models, and tooling on each side sees a healthy network.
VLAN and IP Address Debt
SD-Access endpoints are placed into fabric IP address pools (host pools) that map to VNs and SGTs. Each pool is a subnet served by the anycast gateway on the Edge Nodes. In a brownfield environment, your existing endpoints already have IP addresses in existing subnets. There are two paths: re-IP (painful, requiring change windows and application validation), or use the Layer 2 Border Handoff to preserve the existing subnets while gradually migrating traffic to the fabric (less painful short-term, but it creates a hybrid that requires ongoing care).
A specific gotcha that catches teams: DHCP scopes must align precisely with the fabric's address pool definitions. If a DHCP server hands out addresses from a scope that doesn't match the pool subnet the Edge Node is configured with, clients get addresses the fabric can't properly route. This is not a corner case; it's a common early-deployment issue in brownfield environments with legacy DHCP servers whose configuration has accumulated over years.
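The DHCP alignment check described above can be automated with the standard-library `ipaddress` module. The subnets and scope names below are illustrative; in practice you'd export scopes from your DHCP servers and pool definitions from Catalyst Center.

```python
# Flag DHCP scopes that do not sit inside a fabric address pool.
# Subnets here are illustrative stand-ins for real exports.

import ipaddress

fabric_pools = {
    "EMPLOYEES-POOL": ipaddress.ip_network("10.20.0.0/20"),
    "IOT-POOL":       ipaddress.ip_network("10.30.0.0/22"),
}

dhcp_scopes = [
    ("scope-emp", ipaddress.ip_network("10.20.0.0/20")),  # aligned with a pool
    ("scope-iot", ipaddress.ip_network("10.30.0.0/21")),  # wider than the pool!
]

def misaligned(scopes, pools):
    """Return scope names that are not contained in any fabric pool."""
    return [name for name, scope in scopes
            if not any(scope.subnet_of(pool) for pool in pools.values())]

print(misaligned(dhcp_scopes, fabric_pools))  # ['scope-iot']
```

The second scope is the classic failure: the DHCP server can lease addresses outside the subnet the Edge Node serves, and those clients come up with addresses the fabric can't route.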
The Phased Migration Grind
Cisco's recommended brownfield approach is phased: build the fabric alongside the legacy network, use Border Nodes for reachability between them, migrate buildings or floors gradually, and decommission legacy segments as you go. This is sensible. It is also slow and operationally demanding to sustain.
The practical challenge is that during the coexistence phase, your team is operating two networks simultaneously. Troubleshooting requires knowing which model a given endpoint is in. Change management gets complicated because a change to the fabric border config can affect both fabric and non-fabric segments. Teams without the staffing or tooling to manage the coexistence phase well often find the migration stalls at 60-70% complete: you carry most of the cost of SD-Access without the policy consistency that justified the project.
The Dependency Stack: Everything That Has to Work First
SD-Access is not a product you deploy in isolation. It is an operating model that requires a stack of integrated components to function. Understanding the dependency chain is essential to realistic planning.
| Component | Role in SD-Access | What breaks if it's not right |
|---|---|---|
| Catalyst Center | Orchestration, provisioning, Day-2 assurance. All fabric configuration flows through it. | Without Catalyst Center, there is no supported SD-Access fabric deployment path. CLI-only fabric configuration is not a supported production approach. |
| Cisco ISE | Identity, SGT assignment, SGACL policy enforcement, 802.1X/MAB, pxGrid integration with Catalyst Center | Without ISE, you have no micro-segmentation, no dynamic policy, and no identity-based endpoint placement. VN mapping falls back to static SSID/port-based assignment only. |
| PKI / Certificate infrastructure | ISE nodes require valid certificates for EAP authentication, for pxGrid integration with Catalyst Center, and for inter-node communication; fabric nodes use certificates for secure management communication with Catalyst Center. | Certificate expiry or misconfiguration causes authentication failures, often silently showing as "timeout" in client logs rather than explicit certificate errors. This is the single most underestimated operational burden in SD-Access deployments. |
| Underlay network | IS-IS routed underlay connecting all fabric nodes. Catalyst Center provisions this automatically for supported hardware. | MTU mismatches (VXLAN adds overhead; end-to-end MTU must support 1600+ bytes or fragmentation occurs) break fabric reachability in ways that are hard to diagnose. |
| Supported hardware | Catalyst 9200/9300/9400/9500/9600 series switches; not all IOS-XE versions support all SD-Access features | Fabric provisioning fails or produces incomplete configs on unsupported hardware. Certain fabric features are IOS-XE release-specific. |
| DNS and NTP | Catalyst Center and ISE both depend on accurate NTP; certificate validation is time-sensitive | NTP drift causes certificate validation failures. DNS is required for ISE node registration and Catalyst Center cluster communication. |
Two of these deserve extra emphasis because they are consistently underestimated.
ISE readiness is not just about licensing. ISE needs to be deployed resiliently (at minimum an HA pair in production, typically a distributed deployment at campus scale), integrated with your Active Directory or LDAP, loaded with profiling policies for your device types, and running a tested 802.1X policy before the fabric migration begins. Teams that try to stand up ISE and migrate the campus simultaneously are combining two large, complex change programmes. The resulting failure modes are difficult to isolate because both systems are in flux at the same time.
Certificate lifecycle is a continuous operational requirement, not a one-time setup. ISE uses certificates for EAP-TLS authentication, for its Admin portal, and for pxGrid communication with Catalyst Center. When those certificates expire (and in large deployments, they expire at different times on different nodes), authentication silently fails until someone manually renews them. Organisations without a mature PKI practice, or without monitoring for certificate expiry, will experience this. It's not a question of if, it's when.
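The monitoring half of that problem is straightforward to sketch. A minimal expiry monitor, assuming you can collect `notAfter` dates from your ISE nodes and Catalyst Center (via their admin UIs or an export script); the node names, certificate roles, and dates below are illustrative.

```python
# Flag certificates expiring within a warning window, soonest first.
# The inventory is an illustrative stand-in for a real export.

from datetime import datetime, timedelta

cert_inventory = {
    "ise-pan-1 / EAP":    datetime(2026, 3, 1),
    "ise-psn-2 / pxGrid": datetime(2027, 9, 15),
}

def expiring(certs, now, warn_days=90):
    """Return (name, expiry) pairs inside the warning window."""
    horizon = now + timedelta(days=warn_days)
    hits = [(name, exp) for name, exp in certs.items() if exp <= horizon]
    return sorted(hits, key=lambda item: item[1])

now = datetime(2026, 1, 10)
for name, exp in expiring(cert_inventory, now):
    print(f"{name}: expires {exp:%Y-%m-%d} ({(exp - now).days} days left)")
```

The hard part isn't the code; it's keeping the inventory complete as nodes are added and certificates are reissued, which is why this belongs in the project plan rather than in someone's personal scripts folder.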
ISE: The Part That Actually Takes the Longest
It's tempting to think of SD-Access as primarily a switching and overlay project. The switching work is actually the manageable part. What takes the most time in practice is policy design in ISE.
SGT policy is an NxN relationship matrix. If you have 20 defined SGTs, you have up to 400 SGT pair relationships to consider for your SGACL policy. Most environments don't need 400 distinct rules (many pairs can default to permit or deny), but you do need to decide for each relevant pair, and those decisions require input from security, application teams, and operations. That conversation, in most organisations, takes months.
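The NxN arithmetic is worth making concrete, because it's the size of the conversation you're scheduling, not the size of the config. A short sketch with illustrative group names:

```python
# Enumerate the SGT source/destination pairs that need a decision.

from itertools import product

sgts = [f"SGT-{i}" for i in range(20)]
pairs = list(product(sgts, sgts))
print(len(pairs))  # 400 pairs to consider for 20 SGTs

# Most pairs collapse to a default; only the exceptions need explicit
# SGACLs. E.g. default-deny with a short, illustrative allow-list:
explicit_rules = {("SGT-0", "SGT-5"), ("SGT-3", "SGT-7")}
defaulted = len(pairs) - len(explicit_rules)
print(defaulted)  # 398 pairs covered by the default policy
```

The practical consequence: the taxonomy (how many SGTs, and what each one means) drives the policy workload quadratically, which is a strong argument for starting with a small, coarse set of groups and splitting them only when a real policy difference demands it.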
A practical sequencing that works: deploy ISE in Monitor mode first. Every endpoint authenticates, ISE assigns an SGT, but no policy is enforced. This phase shows you what you're actually dealing with: what device types are on your network, whether your profiling policies correctly identify them, and whether your 802.1X authentication rates are where they need to be. Only after Monitor mode validates your policy coverage should you move to Low Impact mode (permit traffic while logging policy violations) and then eventually to Closed mode (enforce and block).
Skipping or shortening the Monitor mode phase is a common reason SD-Access projects produce unexpected downtime during cutover. You discover that 15% of your devices are failing 802.1X when you enforce (often because of certificate issues, RADIUS timeout config, or profiling misclassification) and those devices go dark.
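The Monitor-mode analysis described above amounts to summarising authentication results by device category to see what would go dark under enforcement. A sketch with illustrative records standing in for real RADIUS logs:

```python
# Per-profile 802.1X/MAB failure rates from Monitor-mode data.
# The log records are illustrative stand-ins, not a real export format.

from collections import Counter

auth_log = [
    {"profile": "corp-laptop", "result": "pass"},
    {"profile": "corp-laptop", "result": "pass"},
    {"profile": "ip-phone",    "result": "fail"},  # e.g. expired device cert
    {"profile": "ip-phone",    "result": "pass"},
    {"profile": "printer",     "result": "fail"},  # e.g. misprofiled, no MAB entry
]

def failure_rates(log):
    """Per-profile failure fraction, worst first."""
    totals, fails = Counter(), Counter()
    for rec in log:
        totals[rec["profile"]] += 1
        if rec["result"] == "fail":
            fails[rec["profile"]] += 1
    rates = {p: fails[p] / totals[p] for p in totals}
    return dict(sorted(rates.items(), key=lambda kv: -kv[1]))

print(failure_rates(auth_log))
# printers and phones surface as the risk *before* Closed mode is enabled
```

The point of Monitor mode is that this table exists before enforcement day, so "15% of devices fail" is a remediation backlog rather than an outage.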
Common Failure Patterns
These are the patterns that come up repeatedly in brownfield SD-Access deployments:
- Treating it as a switching project. Teams scope SD-Access as a hardware refresh with a new management plane, then discover that the identity and policy design work dwarfs the network engineering work. Budget and timeline are set for "replacing switches," not "redesigning access policy."
- Combining ISE and fabric deployment simultaneously. Both are large changes; combining them makes failure isolation nearly impossible. Always get ISE stable and validated in Monitor mode before beginning fabric migration.
- MTU oversight. VXLAN encapsulation adds overhead. The end-to-end path must support a minimum MTU of approximately 1600 bytes. In brownfield environments with older WAN links, service provider handoffs, or legacy firewall policies that cap MTU, this causes intermittent fabric connectivity failures that are maddeningly hard to trace to a single root cause.
- Certificate neglect post-deployment. ISE certificate expiry is a known, trackable event. Teams that don't instrument certificate expiry monitoring will experience it as an unexpected authentication outage, typically 1-3 years after deployment when the certificates from the initial build expire.
- Policy design by network team without security input. SGT policy encodes access decisions that security teams own. When network engineers design the SGT taxonomy without security input, the resulting policy either over-permits (defeating the segmentation purpose) or under-permits (causing application breakage during enforcement cutover).
- Stalled coexistence phase. The brownfield-to-fabric migration stalls mid-way when the team runs out of momentum or budget. The result is a permanent partial deployment (fabric in some buildings, legacy in others) that doesn't deliver the policy consistency benefit but does add management complexity.
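The MTU arithmetic behind the "approximately 1600 bytes" figure in the list above is simple: VXLAN encapsulation wraps the original Ethernet frame in outer Ethernet, IP, UDP, and VXLAN headers, roughly 50 bytes in total for IPv4. A sketch of the calculation (note that whether the outer Ethernet header counts toward "MTU" depends on which interface counter you're configuring, which is exactly why the guidance carries headroom):

```python
# VXLAN encapsulation overhead for an IPv4 underlay.

OUTER_ETHERNET = 14  # outer MAC header
OUTER_IP       = 20  # outer IPv4 header (no options)
OUTER_UDP      = 8
VXLAN_HEADER   = 8

def required_underlay_mtu(payload_mtu: int = 1500) -> int:
    """Minimum underlay size to carry an encapsulated frame intact."""
    return payload_mtu + OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER

print(required_underlay_mtu())  # 1550 -- hence the ~1600 guidance with headroom
```

Anything in the path capped at 1500 silently fragments or drops encapsulated traffic, which is why the symptom is "some flows sometimes fail" rather than a clean outage.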
Decision Checklist: Are You Ready for SD-Access?
This isn't a binary "go/no-go"; it's a readiness profile. The more boxes you can check honestly, the better your outcomes will be.
| Question | If yes | If no |
|---|---|---|
| Is your hardware predominantly Catalyst 9000 series or equivalent SD-Access capable platforms? | Lower replacement cost; full fabric feature support | Budget for hardware refresh or plan for extended coexistence with legacy segments |
| Do you have (or are you willing to invest in) Catalyst Center and ISE licensing and infrastructure? | Foundation is present | This is not optional; no Catalyst Center = no supported SD-Access |
| Is ISE already deployed, or is there a realistic plan to stand it up and stabilise it before the fabric migration begins? | Strong prerequisite satisfied | Plan for 6–12 months of ISE work before fabric deployment starts |
| Does your organisation have a PKI practice or a vendor managing certificate lifecycle? | Certificate risk is managed | Build a certificate monitoring and renewal process as part of the project, not an afterthought |
| Have you done a full platform inventory and confirmed end-to-end MTU ≥ 1600 bytes across all paths? | Underlay is viable | Identify MTU bottlenecks before provisioning; finding them after the fact is painful |
| Do you have security team buy-in and commitment to participate in SGT taxonomy and policy design? | Policy design is feasible | Without security participation, SGT policy will be incomplete or will default to over-permission |
| Does your team have the bandwidth to run a parallel coexistence phase without it indefinitely stalling? | Migration has a realistic path to completion | Consider whether a phased approach is sustainable, or whether a more aggressive cut-and-migrate strategy fits better |
| Is your DHCP infrastructure well-documented, and can it be aligned with the fabric's address pool definitions? | Reduces early-deployment connectivity issues | Spend time on DHCP discovery and cleanup before provisioning Edge Nodes |
Final Recommendation by Environment Type
SD-Access is not the right answer for every campus, and it's worth being direct about that.
| Environment type | SD-Access recommendation | Rationale |
|---|---|---|
| Greenfield campus or major refresh (new hardware, clean subnets) | Strong yes: design for it from the start | The brownfield migration cost disappears. You get the full architecture benefit without the coexistence complexity. |
| Large enterprise with significant IoT, guest, contractor segmentation requirements and ≥ 500 access ports | Yes, but plan 18–24 months and invest in ISE first | Scale justifies the investment; segmentation problem is real and VLAN-based alternatives get worse as the network grows |
| Mid-size campus (200–500 access ports), mostly homogeneous hardware, reasonable VLAN discipline | Conditional: evaluate operational capacity honestly | The architecture works at this scale, but the operational overhead of Catalyst Center + ISE is significant relative to the network size. If your team is lean, weigh that carefully. |
| Campus with high proportion of legacy hardware (pre-9000 series) and no hardware refresh budget | Not yet: focus on ISE and identity foundation first | Hardware ineligibility will force a prolonged coexistence phase. Get ISE deployed and identity-based policy working first; the fabric deployment can follow when hardware cycles align. |
| Small campus (< 100 access ports), single building, limited IT staff | No: wrong tool for the scale | The operational model doesn't match the environment. VLAN-based segmentation with 802.1X and ISE delivers 80% of the benefit at a fraction of the complexity. |
| Campus where SD-WAN or cloud-first connectivity is the primary initiative | Sequence SD-Access after the WAN layer is stable | Combining SD-Access and SD-WAN migrations simultaneously creates compounding complexity. Finish the WAN layer first; campus fabric can follow. |
The version of this conversation that goes wrong is when SD-Access gets positioned as a tactical solution to an urgent problem ("we need better segmentation now") and the prerequisites don't get the investment they need. The architecture is sound; the operational model is demanding. Teams that understand the difference between "can deploy" and "can operate confidently for years" go into the project with the right expectations and the right budget.
Key Takeaways
SD-Access is a genuine architectural improvement over VLAN-based campus policy, but only at the right scale and with the right prerequisites in place.
- Brownfield campuses carry design debt that the fabric overlay doesn't erase; hardware eligibility, underlay MTU, DHCP alignment, and legacy VLAN migration are all real work that has to be planned and executed, not assumed away.
- The dependency stack (Catalyst Center, ISE, PKI, IS-IS underlay, supported hardware) is not optional and not trivial; ISE readiness and certificate lifecycle management in particular are consistently underestimated and are responsible for a disproportionate share of post-deployment operational incidents.
- If you want to evaluate SD-Access honestly, start by assessing your ISE maturity and your team's bandwidth to sustain a parallel coexistence phase during migration; those two factors, more than the technology itself, will determine whether the project succeeds.
- The teams that do this well treat SD-Access as what it actually is: an operating model change that happens to require new network infrastructure, not the other way around.