Stateful failover is the option that turns a Cisco ASA active/standby pair into something users actually do not notice when one unit fails. Without it, basic failover preserves the IPs and continues to forward new traffic - but every existing TCP session disconnects, every UDP flow's NAT mapping disappears, every VPN tunnel re-authenticates. Stateful failover replicates the connection table, the xlate table, ARP, and (with explicit configuration) routing and VPN session state to the standby in real time, so when the standby promotes itself the in-flight sessions keep flowing. This article walks through the configuration, the show output that confirms sync is healthy, and the interface-tracking knobs that control which interface failures actually trigger a failover. Built on the PingLabz ASA reference lab on ASAv 9.23(1) with a documented lab constraint.
If you are still configuring the base failover pair, start at active/standby failover configuration. This article assumes the pair is already Active / Standby Ready and adds stateful sync on top.
What State Actually Replicates
| Item | Replicated by default? | Notes |
|---|---|---|
| TCP connection table | Yes | Existing flows survive failover. Ongoing transfers do not even hiccup. |
| UDP "connection" table | Yes | Stateless protocol but the ASA tracks pseudo-flows. Replicated. |
| NAT xlate table | Yes | Static, dynamic, and PAT mappings are all on the wire. |
| ARP table | Yes | Avoids ARP relearn delay after promotion. |
| HTTP session state (failover replication http) | No - requires explicit config | Without this, HTTP connections are not replicated at all, so established HTTP sessions drop at failover. The ASA skips them by default because they are typically short-lived. |
| VPN IKE/IPsec SA | Yes for IKEv2 | Stateful failover replicates IKEv2 SA state and the rekey timers. Tunnels survive failover with at most a brief blip. |
| Routing table (dynamic) | No | Standby learns routes itself via OSPF/EIGRP/BGP if peering is configured. Static routes are config-replicated. |
| SIP / H.323 / SCCP signaling | Yes | Voice gateways do not lose call state. |
| Multicast routing state | No | Standby relearns IGMP / PIM after promotion. |
The two practical implications: HTTP-heavy environments need failover replication http set explicitly; multicast environments need to expect a brief disruption to multicast streams during a failover. Everything else "just works" once stateful sync is enabled.
Failover Link vs Stateful Link
Two roles, possibly one physical link:
| Role | What it carries | Configured with |
|---|---|---|
| Failover LAN interface | Hellos, configuration replication, election traffic. Small bandwidth. | failover lan interface |
| Stateful failover link | Connection / xlate / ARP / VPN state replication. Up to several hundred Mbps on a busy ASA. | failover link |
Best practice: use two separate physical interfaces, one for each role. Why: the stateful link can saturate during heavy traffic; if it shares with the LAN interface, a high state-rep volume can cause failover hellos to drop, triggering a spurious failover. On a budget, you can combine both on a single physical interface (just point both failover lan interface and failover link at the same name), accepting the risk.
Enabling Stateful Failover
The configuration adds two lines to the active/standby failover baseline. On the primary:
! Dedicated stateful link on its own physical interface
interface GigabitEthernet0/4
description STATE Failover Interface
no shutdown
no nameif
no security-level
no ip address
! Stateful link configuration
failover link STATE-LINK GigabitEthernet0/4
failover interface ip STATE-LINK 169.254.99.5 255.255.255.252 standby 169.254.99.6
! Optional but recommended for HTTP-heavy environments
failover replication http
And the same on the secondary. Once both sides have the stateful link configured and failover is enabled, sync starts immediately. There is no "start sync" command - the moment the failover link is up and authenticated, the active begins streaming state to the standby.
If you only have one spare physical interface and need to combine failover + stateful on it:
! Single physical interface for both roles
failover lan interface FAIL-LINK GigabitEthernet0/3
failover link FAIL-LINK GigabitEthernet0/3
failover interface ip FAIL-LINK 169.254.99.1 255.255.255.252 standby 169.254.99.2
Both configuration commands point at the same interface. Functional, but watch the interface utilization closely during the first month of operation.
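One way to watch that utilization - a sketch using the standard ASA show interface rate counters (the numbers shown are illustrative, not from the lab):

```
ASA-PERIM# show interface GigabitEthernet0/3 | include rate
  1 minute input rate 5 pkts/sec,  1240 bytes/sec
  1 minute output rate 5 pkts/sec,  1240 bytes/sec
  5 minute input rate 4 pkts/sec,  1100 bytes/sec
  5 minute output rate 4 pkts/sec,  1100 bytes/sec
```

If the 5-minute rates creep toward a meaningful fraction of the link speed, split the roles onto separate physical interfaces before a busy-hour failover tests the shared link for you.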
Verifying Stateful Sync: show failover
The show failover output gains a stateful-sync section once the link is up:
ASA-PERIM# show failover
Failover On
Failover unit Primary
Failover LAN Interface: FAIL-LINK GigabitEthernet0/3 (up)
... [as before] ...
Stateful Failover Logical Update Statistics
Link : STATE-LINK GigabitEthernet0/4 (up)
Stateful Obj xmit xerr rcv rerr
General 13456 0 13234 0
sys cmd 234 0 234 0
up time 0 0 0 0
RPC services 0 0 0 0
TCP conn 2345 0 2300 0
UDP conn 890 0 875 0
ARP tbl 12 0 12 0
Xlate Timeout 3 0 3 0
IPv6 ND tbl 0 0 0 0
VPN IKEv2 SA 1 0 1 0
VPN IKEv2 P2 2 0 2 0
SIP Session 0 0 0 0
ICMP session 0 0 0 0
Route Session 0 0 0 0
Logical Update Queue Information
Cur Max Total
Recv Q: 0 1 13234
Xmit Q: 0 1 13456
Three things to verify:
- Link : STATE-LINK ... (up) - the stateful link itself is alive.
- The xmit and rcv columns are non-zero on at least General, TCP conn, and UDP conn if you have traffic flowing. If xmit is climbing on the active but rcv is not climbing on the standby, the standby is not receiving the updates - check the link.
- The xerr and rerr columns should be at or near zero. Climbing error counters mean the link is unstable or one side is not keeping up.
The Recv Q / Xmit Q queue depths should usually be 0 with occasional transient blips. Sustained Cur values above 100 indicate the stateful link is back-pressured - usually CPU on the standby unit cannot drain the queue fast enough.
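When the queues stay back-pressured, check the standby's CPU from its console - a sketch, with illustrative numbers:

```
ASA-PERIM-SEC/stby# show cpu usage
CPU utilization for 5 seconds = 85%; 1 minute: 80%; 5 minutes: 78%
```

A standby pinned near 100% while the active sits comfortable usually means the standby is the smaller unit or is doing extra work (captures, debugs) - clear that before trusting the pair to fail over cleanly.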
show failover statistics
ASA-PERIM# show failover statistics
tx:1234567
rx:1234500
Bandwidth: 12 kbps in / 12 kbps out
Frame loss rate: 0.0% / 0.0%
Useful for capacity planning. If the stateful link is averaging anything close to its physical bandwidth, you have a sizing issue - upgrade the link before the next failover event. Most ASAs use a few Mbps of stateful sync; busy datacenter perimeters can push 100+ Mbps. A 1 GbE failover link is the safe default; 10 GbE is appropriate for SNAT-heavy environments with millions of conns.
Interface Tracking and Monitoring
Stateful sync only handles the data plane. Failover triggering - i.e. "when do we switch" - is governed by interface tracking. By default the ASA monitors every nameif'd data interface and triggers failover if any monitored interface goes down. To customize:
! Monitor only the interfaces that matter
monitor-interface inside
monitor-interface outside
no monitor-interface dmz
That config monitors inside and outside; if dmz drops, failover does not trigger. Use this when an interface should not cause a failover - for example, a backup management interface that is allowed to flap.
The failover interface-policy command tightens the trigger logic:
! Failover when ANY single monitored interface goes down (default)
failover interface-policy 1
! Failover only when AT LEAST 2 monitored interfaces go down
failover interface-policy 2
! Failover only when MORE THAN 50% of monitored interfaces go down
failover interface-policy 50%
Most production deployments leave the default at 1 - any interface drop is a problem - but datacenter pairs with many monitored interfaces sometimes raise it to 2 to avoid a single-link bounce causing a failover.
show monitor-interface: Per-Interface Health
ASA-PERIM# show monitor-interface
This host: Primary - Active
Interface inside (10.10.0.254): Normal (Monitored)
Interface outside (203.0.113.2): Normal (Monitored)
Interface dmz (192.168.50.1): Normal (Waiting)
Other host: Secondary - Standby Ready
Interface inside (10.10.0.253): Normal (Monitored)
Interface outside (203.0.113.3): Normal (Monitored)
Interface dmz (192.168.50.2): Normal (Waiting)
States to know:
- Normal (Monitored) - link up, layer-3 reachability between the two units' active and standby IPs working. Green.
- Normal (Waiting) - link up but the unit cannot reach the peer's IP on this interface. Means the standby unit's data interface is wired but the peer's standby IP is not responding to ARP. Common transitional state right after promotion.
- Failed - link down, or peer unreachable for longer than the holdtime. Triggers failover if the interface-policy threshold is met.
- No Link - physical layer 1 down. The interface lost its cable or the peer port shut down.
- Testing - the unit is actively probing to determine state. Brief.
- Unmonitored - no monitor-interface applied. The interface state has no effect on failover.
If an interface stays at Waiting for more than a minute, the configured standby IP is wrong or unreachable. Compare what the active unit thinks the standby IP is against what is actually configured on the standby.
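The standby IP lives on the same ip address line as the active IP, so the comparison is one show running-config interface per unit. A sketch using this article's inside addressing (the interface name GigabitEthernet0/0 is assumed for illustration):

```
! On either unit - configuration replicates, so both should show this line
interface GigabitEthernet0/0
 nameif inside
 ip address 10.10.0.254 255.255.255.0 standby 10.10.0.253
```

Because configuration replicates, the line should be identical on both units; operationally the standby answers on the standby address. If the line differs between units, config replication itself is broken - fix that before chasing the Waiting state.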
Manual Switchover
For maintenance or testing, force a switchover:
ASA-PERIM/active# no failover active
The active unit transitions to Standby Ready, the peer transitions to Active. Existing TCP and UDP sessions survive the transition because of stateful sync. To fail back:
ASA-PERIM-SEC/active# no failover active
Note that active/standby failover has no automatic preemption: when the former active recovers, it stays Standby Ready until you force a switch (failover active on it, or no failover active on the current active). Only active/active failover offers preemption, via the preempt command under a failover group. If you need "primary should always be Active when both are healthy" semantics in active/standby, the failover active step has to be run (or scripted) manually.
Lab Constraint: Standby-Side Verification
The PingLabz ASA reference lab confirmed a constraint worth flagging for any reader trying to reproduce the show output: stateful sync is fully visible from the active unit (show failover, show failover statistics), which counts the state objects pushed to the standby. The standby's own tables (show conn, show xlate) cannot be inspected from the active - you have to console into the standby to see them. In production this is a one-step problem (jump onto the standby's mgmt interface). In the lab environment we used, the secondary unit's PyATS path was not reliable on second-boot ASAv instances, so "did the standby actually receive the state I sent" was verified by inference: the active's xmit counters climbed, and a brief simulated failover (no failover active on the active to demote it) left sessions flowing without disruption, which confirms the standby had the state. For a production runbook, always verify both sides by consoling into each.
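For that production runbook step, a minimal check from the standby console - the counts shown are illustrative, but they should roughly track the active unit's:

```
ASA-PERIM-SEC/stby# show conn count
2295 in use, 2410 most used
ASA-PERIM-SEC/stby# show xlate count
870 in use, 901 most used
```

Exact parity is not expected (short-lived flows churn faster than sync), but a standby showing near-zero counts while the active carries thousands of connections means state is not arriving.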
Key Takeaways
Stateful failover is the upgrade that makes ASA failover transparent to users: connections, NAT translations, ARP entries, and VPN SAs all replicate to the standby in real time. The configuration is a few extra commands beyond the base active/standby failover setup: failover link <name> <interface> plus failover interface ip for the stateful link, and failover replication http for HTTP-heavy environments. Verify with the Stateful Failover Logical Update Statistics block in show failover, watching the xmit/rcv columns and any non-zero error counters. Tune which interfaces trigger failover with monitor-interface and failover interface-policy. Once running, manual switchovers via no failover active are non-disruptive thanks to the stateful sync. The full Cisco ASA reference cluster covers the rest, including common outage scenarios for the failure modes that need diagnosis.