When a user calls the help desk and says "the network is slow," someone on your team starts digging. You pull up Catalyst Center, check client health, scan the device 360 view for the access switch they're on, review wireless RF metrics if they're on Wi-Fi, and see... green. Everything is green. The switch is healthy, the uplink is healthy, the WLC shows acceptable SNR and a clean client association. And yet the user can't load Salesforce. Now what?
This is the gap that the Catalyst Center and ThousandEyes integration is designed to close. Catalyst Center is very good at telling you the state of your infrastructure: the devices you own, the interfaces you manage, the clients connected to your fabric. ThousandEyes is very good at telling you what happens to a packet after it leaves your administrative domain: ISP routing, internet paths, CDN performance, SaaS endpoint reachability. Neither tool alone gives you the full picture. Together, they let you stop guessing and start proving.
This article is not a feature tour. It is an operations article about the specific problem this combination solves, how the troubleshooting workflow actually changes, and where the gaps and deployment realities are. If you are evaluating whether this integration is worth the operational investment, this is what you need to know.
The Classic Blame Game
Any network operations team that supports SaaS-heavy environments knows the pattern. A business unit complains that an application is slow or intermittently failing. You check your infrastructure: devices are up, interfaces are clean, no drops, no errors. You check the application team: their servers look healthy. You check the ISP: they show no outage on their portal. Nobody can point to anything broken, and the user is still having a problem.
The blame cycle then goes something like this. Network says it is not their infrastructure. App says the app is fine and it must be the network. ISP says there are no incidents on their network. Cloud provider's status page is green. Meanwhile, the user's experience is degraded, nobody has concrete evidence of where the fault actually is, and the incident drags on longer than it needs to.
The root cause of this dynamic is not incompetence; it is a visibility boundary problem. Every team can see their own domain clearly. Nobody can see the full path between the user and the application. That path often crosses at least three administrative domains: your campus or branch network, one or more ISP networks, and the cloud or SaaS provider's infrastructure. Traditional monitoring tools stop at the edge of what you own.
| Segment of the path | Who owns it | Visible to traditional NMS? | Visible to Catalyst Center? | Visible to ThousandEyes? |
|---|---|---|---|---|
| Access switch / AP | You | Yes | Yes | Partial (agent on switch sees LAN-side) |
| Distribution / core | You | Yes | Yes | Yes (if agent placed there) |
| Branch WAN edge / SD-WAN | You | Varies | Partial (with SD-WAN integration) | Yes |
| ISP / transit network | Carrier | No | No | Yes (BGP path, hops, loss) |
| Internet backbone / CDN | Third party | No | No | Yes |
| SaaS / cloud endpoint | Vendor | No | No | Yes (synthetic tests, response time, availability) |
The problem with trying to diagnose a user experience issue without internet-path visibility is that you cannot tell whether the issue is in your domain or outside it. You can rule things out, but you cannot prove fault location. ThousandEyes shifts that from "we think it is not us" to "we can show you exactly where the degradation starts."
What Catalyst Center Sees Well
Catalyst Center (formerly DNA Center) is purpose-built for managing and assuring Cisco campus and branch infrastructure. For that scope, it is genuinely powerful. If you are on a Catalyst 9000 switching fabric, Catalyst 9800 wireless, and ISE for policy, Catalyst Center gives you a level of operational context that raw SNMP polling and syslog analysis cannot match.
| Capability | What Catalyst Center gives you |
|---|---|
| Device health | CPU, memory, interface utilization, hardware fault events. Per device, with historical trends and AI-driven anomaly baselines |
| Client health | RSSI, SNR, roaming events, DHCP and AAA timers, onboarding flow analysis, client 360 timeline |
| Fabric assurance | SD-Access fabric node health, control plane (LISP) status, underlay/overlay correlation, fabric edge to border path status |
| Network topology | Auto-discovered topology with health overlays; you can click through to any node and see its neighbors |
| AI-driven insights | Proactive issue detection based on learned baselines (not just threshold alerts). For example, detecting onboarding failure spikes before ticket volume rises |
| Path trace | Trace the actual switched/routed path from a client to a destination, showing the specific interfaces and devices involved |
| Sensor testing | Dedicated network assurance sensors (physical or virtual) that test onboarding flows, RADIUS, DNS, gateway reachability, and application URLs from specific locations in your network |
The key phrase in that table is "from specific locations in your network." Catalyst Center's path trace and sensor tests operate within your infrastructure. Path trace gives you a hop-by-hop view from client to the edge of your domain. Sensor tests validate the onboarding experience and can reach URLs, but what happens between your DNS resolver's response and the application's CDN endpoint is not something Catalyst Center can tell you directly.
Catalyst Center's blind spot is not a deficiency in the product. It is a structural limitation: it can only instrument and measure what it manages. Once a packet leaves your organization's network boundary, Catalyst Center cannot follow it.
What ThousandEyes Sees Well
ThousandEyes is an internet and application path intelligence platform. Its core function is running synthetic tests from a network of vantage points: Enterprise Agents (deployed on your infrastructure), Cloud Agents (deployed in Cisco-managed cloud POPs globally), and Endpoint Agents (installed on user devices). Tests run continuously on configurable intervals and generate a full dataset: round-trip latency, packet loss, path trace with hop-by-hop loss and latency, BGP routing visibility, and application-layer metrics for HTTP, DNS, FTP, and others.
| Capability | What ThousandEyes gives you |
|---|---|
| Internet path visibility | Hop-by-hop path trace from your agent to the destination, with loss and latency at each hop, including inside ISP and transit networks you do not own |
| BGP route monitoring | BGP prefix visibility from hundreds of collector vantage points worldwide; detects prefix hijacks, route leaks, and unexpected path changes |
| SaaS monitoring | Pre-built tests for Webex, Microsoft 365, Salesforce, Zoom, and others. Measures availability, page load, response time from multiple Cloud Agent locations |
| Cloud path visibility | Tests from Enterprise Agents to AWS, Azure, GCP endpoints, with visibility into whether degradation is on-path to the cloud or inside the cloud provider's network |
| Endpoint Agent data | User-perspective measurements from Windows and macOS clients. Includes local network metrics (gateway loss, DNS resolver timing) plus path to application |
| Waterfall and page load analysis | HTTP test type captures full browser-equivalent page load, showing object-level load times. Useful for isolating whether an application is slow due to a specific resource (CDN, API, asset) |
| Alert correlation | Timeline-based correlation of path changes, BGP events, and performance degradation. Can show that an ISP route flap preceded an application slowdown by 45 seconds |
ThousandEyes does not instrument your devices the way Catalyst Center does. It does not know that the CPU on your distribution switch is high, or that a specific client failed RADIUS authentication. What it knows is whether a packet from your campus (or from any of its Cloud Agents globally) can reach a destination, what path it takes, and whether that path is performing normally. Those two data sets are complementary, not overlapping.
One thing worth noting: ThousandEyes is only as useful as the tests you configure. The platform gives you the infrastructure to run tests, but you have to decide what to test, at what interval, from which agents, and you have to set up alerts and dashboards. A deployment with no configured tests generates no operational value. This is relevant when thinking about adoption cost, which we will come back to.
Why Combining Them Changes the Troubleshooting Workflow
The individual value of each tool is well understood by most operators who have used them. The integration value is more specific: it is about reducing context-switching time during an active incident.
When a user calls with a connectivity or performance problem, the typical workflow without integration goes something like this. You open Catalyst Center, pull up the client's health view, check device health, maybe run a path trace. You see the infrastructure is fine. You then open a browser tab to the ThousandEyes portal, navigate to the relevant test (assuming you configured one), and correlate the timeline manually with what you just saw in Catalyst Center. If the timestamps don't obviously align, you have to reason about the relationship between the two data sets in your head.
With the integration, ThousandEyes data surfaces inside Catalyst Center's Assurance section. You can start in Catalyst Center for the client and device health context, then see ThousandEyes test results for the relevant applications alongside that data, without switching portals. The operational benefit is not magical: you are still looking at the same data. But reducing the context switch cuts the cognitive overhead during troubleshooting. You see device health degradation and internet path degradation on the same timeline, which makes the correlation question much easier to answer.
The most important shift is philosophical. The question changes from "is the network down?" to "where exactly is the problem, and whose domain is it in?" That is a much more useful question, and it is one you can now answer with evidence rather than inference.
| Troubleshooting question | Catalyst Center alone | ThousandEyes alone | Combined |
|---|---|---|---|
| Is the client associated and healthy? | Yes | No | Yes (CC) |
| Is the access switch / AP healthy? | Yes | No | Yes (CC) |
| Is the path through your network clean? | Yes (path trace) | Partial (agent-to-agent) | Yes (CC) |
| Is the ISP path clean? | No | Yes | Yes (TE) |
| Is the SaaS endpoint reachable and performing normally? | No | Yes | Yes (TE) |
| Did a BGP route change correlate with the outage? | No | Yes | Yes (TE) |
| Is this affecting all users or just this one? | Yes (client health scope) | Yes (multi-agent scope) | Yes (both) |
| Can I prove to the ISP that the problem is in their network? | No | Yes (path trace data) | Yes (TE) |
Example Operational Scenarios
Scenario 1: Salesforce is slow on Monday morning
Without ThousandEyes: The sales team's Salesforce instance is slow. Network checks out green in Catalyst Center. Client health is normal. You contact Salesforce support and open a case. Salesforce support says their systems look healthy. Incident sits unresolved for several hours while people argue about who owns the problem. Eventually performance recovers on its own, and nobody knows why.
With ThousandEyes: ThousandEyes has been running a synthetic HTTP test to Salesforce from your branch agents every two minutes. You pull up the test timeline and see that round-trip latency spiked from 38ms to 310ms at 8:07 AM, with the hop-by-hop path trace showing the latency appearing at an ISP transit hop two hops past your WAN edge. BGP monitoring shows a route change for the Salesforce prefix at 8:05 AM. You take a screenshot of the path trace with the latency highlighted at the ISP hop, attach it to the support case, and escalate to your carrier. You are no longer arguing about whose problem it is; you have evidence of where the degradation is occurring. Resolution time drops from hours to minutes (at least from your side: you know it is not your network).
Scenario 2: Branch users can't join Webex calls
Without ThousandEyes: Three branches are reporting Webex audio quality problems during calls. Your NOC checks the WAN links and sees utilization is within normal range. Catalyst Center shows no device issues at the branch. You ask branch users to restart their clients. Problem persists.
With ThousandEyes: Your Enterprise Agents on the branch access switches have been running a Webex test every minute. You see that all three affected branches (and only those three) are experiencing packet loss of 3-7% to the Webex media nodes, while a fourth branch on a different ISP circuit is unaffected. The path traces for all three affected branches share a common upstream hop where the loss is introduced. You now know this is an ISP issue affecting a specific circuit or POP, not a Webex platform problem and not your branch infrastructure. You escalate to your carrier with specific evidence. Nobody needs to spend time investigating the unaffected branch, because you have already scoped the problem.
Scenario 3: Intermittent DNS failures in a specific building
Without ThousandEyes: Users in one building report intermittent application failures. Client health in Catalyst Center is mostly clean, with some occasional onboarding events. Nothing conclusive.
With ThousandEyes: Your Enterprise Agent on the access switch in that building is running DNS tests to your internal resolvers and to public DNS. You notice the internal DNS resolver test is showing occasional 800ms+ response times (normally sub-5ms). That narrows the problem immediately: something is wrong with the DNS resolver path from that VLAN, not with the internet or applications. You look at the path trace to the resolver and find a specific hop where latency balloons occasionally. Catalyst Center confirms no device health issues, so you dig into the QoS policy on the uplink and find that DNS traffic is being deprioritized by a misconfigured DSCP policy. Found and fixed in under an hour, because ThousandEyes gave you the specific symptom (DNS latency from that agent) to start from.
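To make the fix concrete: the misconfiguration in a case like this is typically a class-map that sweeps DNS into a scavenger or low-priority class. The snippet below is a hypothetical corrected marking policy, not the configuration from any real incident; the ACL, class, and policy names and the AF21 marking are invented for illustration, and where you attach the policy depends on your existing QoS design.

```
! Hypothetical fix: classify DNS explicitly and mark it AF21
! instead of letting it fall into a scavenger class.
! All names and values below are placeholders for this sketch.
ip access-list extended ACL-DNS
 permit udp any any eq domain
 permit tcp any any eq domain
!
class-map match-any CM-DNS
 match access-group name ACL-DNS
!
policy-map PM-ACCESS-IN
 class CM-DNS
  set dscp af21
!
interface range GigabitEthernet1/0/1 - 48
 service-policy input PM-ACCESS-IN
```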
Deployment Prerequisites and Realities
Before planning a deployment, there are several things you need to have in place. The integration is not plug-and-play on arbitrary hardware or licensing.
Licensing
ThousandEyes Enterprise Agent deployment on Catalyst 9000 switches requires Cisco DNA Advantage or Premier licensing on each switch. DNA Advantage includes one ThousandEyes Enterprise Agent license per switch (consumed as 22 unit-months of ThousandEyes units). If you are on DNA Essentials, you will need to either upgrade or purchase ThousandEyes licenses separately. This is a non-trivial cost consideration for large deployments: if you have 500 access switches and want agents on all of them, that is 500 DNA Advantage licenses. Many environments will already have DNA Advantage for the broader assurance features; in that case, the ThousandEyes inclusion makes the marginal cost of agents essentially zero.
Hardware and Software Requirements
Enterprise Agents run as Docker containers using the app-hosting framework built into the Catalyst 9300 and 9400 series platforms. The AppGigabitEthernet port must be configured as a trunk (not access mode) to preserve VLAN tagging for the agent's network interface. Minimum IOS-XE versions are 17.3.3 for the Catalyst 9300 and 17.5.1 for the Catalyst 9400. Check the ThousandEyes support matrix for your specific platform and release before planning rollout: not all hardware variants are supported, and version requirements can be platform-specific.
```
! Minimum config for the AppGig interface (check your platform docs)
interface AppGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,99
```

The agent container needs a routable IP address on each VLAN it will test from, with outbound HTTPS to the ThousandEyes cloud endpoints. If your campus has restrictive egress filtering, you need to allow the ThousandEyes agent communication IP ranges before deployment: agents that cannot phone home to the ThousandEyes cloud will fail to register and will not generate test data.
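For orientation, this is roughly what the agent's app-hosting configuration looks like on the switch once deployed. Treat it as a sketch: the appid, addressing, VLAN, and token below are placeholders, and in the Catalyst Center workflow the equivalent configuration is generated for you rather than typed by hand.

```
! Illustrative app-hosting config for a ThousandEyes Enterprise Agent.
! appid, IP addressing, VLAN, and token are placeholders for this sketch.
app-hosting appid thousandeyes
 app-vnic AppGigabitEthernet trunk
  vlan 10 guest-interface 0
   guest-ipaddress 10.10.10.50 netmask 255.255.255.0
 app-default-gateway 10.10.10.1 guest-interface 0
 name-server0 10.10.10.53
 app-resource docker
  prepend-pkg-opts
  run-opts 1 "-e TEAGENT_ACCOUNT_TOKEN=<account-group-token>"
 start
```

The `vlan 10 guest-interface 0` line is what ties the agent's measurement perspective to a specific user VLAN, which matters again in the placement discussion below.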
The Deployment Workflow
Catalyst Center handles the agent push through its App Hosting for Switches workflow. You upload the ThousandEyes Docker TAR file (downloaded from the ThousandEyes portal), run a readiness check per switch that validates HTTPS reachability, authentication, and VLAN configuration, then deploy to your target switches. The integration with the ThousandEyes portal is established through Assurance > Health in Catalyst Center, where a setup banner walks you through OAuth authentication to your ThousandEyes organization.
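Once the push completes, it is worth verifying agent state from the switch CLI as well as from the portal. A quick check with the standard IOS-XE app-hosting show commands (using the same placeholder appid as the sketch above):

```
! Confirm the container is deployed and RUNNING
show app-hosting list
show app-hosting detail appid thousandeyes

! Resource usage for the container, useful when a readiness check fails
show app-hosting utilization appid thousandeyes
```

If the container shows RUNNING but the agent never appears in the ThousandEyes portal, the usual culprit is blocked outbound HTTPS from the agent's VLAN, which is exactly the egress-filtering point raised above.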
Once agents are running and registered in ThousandEyes, you configure tests in the ThousandEyes portal (not in Catalyst Center). As of early 2026, you can deploy agents from Catalyst Center, and Catalyst Center can surface ThousandEyes data, but test creation and management still lives in the ThousandEyes portal. This is worth knowing before you promise your NOC a single-pane-of-glass experience: the operational workflow involves both interfaces.
Where to Place Agents
This is an operational decision that significantly affects what value you get. An agent on an access switch can measure from that switch's perspective on a specific VLAN, which approximates the user experience for clients on that VLAN. To get representative coverage, you should deploy agents on user-facing VLANs at access switches, plus agents on distribution or core switches (on the management VLAN) for an infrastructure-level perspective.
A reasonable starting point for a campus environment: one agent per access-layer switch on your primary user VLAN, and one agent per distribution switch on the management VLAN. Then configure tests for your top five SaaS applications, your internal DNS resolvers, your WAN gateway, and any cloud workloads your users depend on. That gives you baseline coverage before you optimize placement based on incident patterns.
Gaps and Caveats
This is not a marketing pitch, so here is an honest inventory of what this integration does not do, and where you should set expectations carefully.
Test management is still two portals. Catalyst Center can deploy agents and surface data, but configuring what to test (test types, target URLs, alerting thresholds, dashboard views) requires the ThousandEyes portal. If your team is already comfortable with multiple observability tools, this is not a big deal, but if you were hoping for genuine single-pane management of both, that is not where the product is today (at least as of early 2026).
ThousandEyes is only as useful as your test configuration. Agents doing nothing generate nothing. If you deploy 300 agents and configure zero tests, you have spent time and hardware resources to collect no data. The operational investment in defining good tests, setting appropriate alert thresholds, and building useful dashboards is significant. Budget for this before deployment.
Agent-on-switch is not the same as agent-on-endpoint. A ThousandEyes Enterprise Agent on a Catalyst 9300 measures the network from the switch, not from the user's laptop. If a specific user machine has a VPN client, a security agent, or a browser configuration that affects its application performance, the switch-based agent will not see that. ThousandEyes Endpoint Agents (installed on user machines) fill this gap, but they are a separate deployment and may require separate licensing depending on your agreement.
Catalyst Center's ThousandEyes data view is a subset of what is in the ThousandEyes portal. The Catalyst Center integration surfaces ThousandEyes data in the Assurance section, but not every visualization and test type will appear with the same depth as in the dedicated ThousandEyes UI. Complex investigations, especially BGP analysis or multi-agent path correlation, will likely drive you to the ThousandEyes portal anyway.
Not every release behavior is identical. Cisco is actively developing both products, and specific feature availability by Catalyst Center version or ThousandEyes deployment type may vary. Always verify current feature support against the release notes for your specific versions before committing to an architecture that depends on a specific capability.
AI-assisted correlation is still maturing. Catalyst Center's AI assistant and anomaly detection are improving rapidly (the 2026 releases added natural-language querying across the Catalyst portfolio), but the integration between Catalyst Center AI insights and ThousandEyes data is not yet at a point where the system will automatically correlate an AI-detected anomaly in your fabric with a BGP event in ThousandEyes and surface both in a single alert. Manual timeline correlation is still the primary workflow for cross-domain incident analysis.
Who Should Adopt This and Who Should Probably Wait
This integration is not universally valuable. The answer to "should we deploy ThousandEyes on our Catalyst switches?" depends heavily on your environment and your operational pain points.
Good candidates for adoption now
- Enterprise campus environments with significant SaaS dependency (Microsoft 365, Salesforce, Webex, Zoom) where "is it the network?" is a recurring incident question and the team regularly struggles to prove network innocence to application owners or business units.
- Organizations that already hold DNA Advantage licensing. The ThousandEyes agent license is included, so the marginal cost of deploying agents is primarily operational (deployment time, test configuration, monitoring setup).
- Multi-site or branch-heavy environments where ISP diversity means the path to cloud applications varies significantly by site. ThousandEyes is particularly valuable here because it makes branch-by-branch path quality differences immediately visible.
- NOC teams that have SLA obligations with external providers (ISP, cloud, SaaS) and need to produce evidence of provider-side failures to support escalations and credits. ThousandEyes generates shareable path traces and performance snapshots that hold up in those conversations.
Situations where it can probably wait
- Smaller environments (under 50 switches) with primarily on-premises applications where most of the user-to-application path stays inside your WAN. If your users are hitting internal data center workloads over MPLS and not touching the public internet for critical applications, ThousandEyes' core value proposition does not apply with the same force.
- Teams on DNA Essentials without a current path to Advantage. The additional licensing cost changes the ROI calculation significantly.
- Operations teams that do not yet have the bandwidth to configure and maintain ThousandEyes tests and dashboards. Deploying the agents and then leaving the platform unconfigured is a waste of the infrastructure and the license. If your team is already stretched, this is an investment that requires staffing capacity to return value.
- Environments running IOS-XE versions below the minimums (17.3.3 on 9300, 17.5.1 on 9400). Get to a supported release first, and if you are that far behind on software, there are likely more urgent operational concerns to address before adding an observability layer.
Key Takeaways
- The core problem this solves is visibility across administrative boundaries. Catalyst Center is authoritative inside your domain; ThousandEyes is authoritative outside it. The integration puts both data sets in reach during a single troubleshooting session.
- The operational benefit is faster fault isolation and better evidence. The most important outcome is not "we found the problem faster"; it is "we proved the problem was not in our network, and we had the data to show the ISP or SaaS vendor exactly where the fault was."
- Deployment has real prerequisites. DNA Advantage licensing, supported hardware platforms, minimum IOS-XE versions, AppGig VLAN trunk configuration, and outbound HTTPS egress from agents are all required before agents register and generate data.
- Test management is still split between platforms. Agent deployment through Catalyst Center is streamlined, but test creation, alerting, and advanced visualization live in the ThousandEyes portal. This is not a true single pane of glass yet.
- ThousandEyes value is proportional to test configuration effort. Plan time to define tests, set thresholds, build dashboards, and train your NOC on how to read the data. Agents with no tests produce no signal.
- If you are already on DNA Advantage and your team regularly fields "is it the network?" escalations from SaaS-dependent business units, this integration is worth deploying. The license cost is already included, and the operational lift of configuring good tests pays back quickly in reduced MTTR and cleaner escalation evidence.