LDP and MPLS Label Distribution

LDP (Label Distribution Protocol) distributes MPLS labels across the network. Discovery, session establishment, distribution and retention modes, LDP-IGP sync, and Cisco IOS XE configuration.

LDP (Label Distribution Protocol) is what makes MPLS work for IP forwarding. It is the protocol routers use to tell each other what label to use for each prefix. Every LSR (Label Switching Router) in an LDP-driven MPLS network runs an LDP session with each direct neighbor, exchanges label-to-prefix mappings, and builds the forwarding state that lets MPLS packets traverse the network.

This article walks through how LDP discovers neighbors, how sessions form, label distribution modes (DU vs DoD), label retention modes, the LDP-IGP synchronization story, and the Cisco IOS XE configuration. If you are configuring MPLS for the first time, troubleshooting an LDP session, or wondering why show mpls ldp bindings shows what it shows, this is the reference.

What LDP Does

For each IP prefix in the IGP, LDP assigns a local label and tells neighbors "for prefix X, use label Y to send to me." The receiving routers store this mapping in the LDP binding table. The forwarding-table builder (CEF on Cisco) uses the bindings to create the actual MPLS forwarding state.

Two key terms:

  • FEC (Forwarding Equivalence Class): a group of packets treated identically. For LDP-driven MPLS, each IP prefix is its own FEC.
  • LSP (Label Switched Path): the sequence of LSRs and labels a packet traverses. Built up implicitly as each LSR maps its locally-assigned labels to the next-hop's labels.

LDP is "IGP-driven" - it does not pick paths itself. The IGP (OSPF, IS-IS, EIGRP) decides which neighbor is the next-hop for each prefix. LDP simply assigns labels along the IGP-computed paths.
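As a toy model (hypothetical routers and labels, not real LDP messages), the way an LSP emerges from per-hop bindings can be sketched: each LSR advertises "for prefix X, use label Y to reach me," and chaining those bindings along the IGP-chosen path yields the label sequence of the LSP.

```python
# Toy model: labels each downstream router advertised for 10.10.10.0/24
# (values are illustrative; label 3 is the real reserved implicit-null,
# which the egress advertises to request penultimate-hop popping).
advertised = {"R2": 18, "R3": 19, "R4": 3}

igp_path = ["R1", "R2", "R3", "R4"]  # path chosen by the IGP, not by LDP


def lsp_labels(path, advertised):
    """Label used at each hop: every LSR forwards with the label its
    downstream (next-hop) neighbor advertised for the prefix."""
    return [advertised[next_hop] for next_hop in path[1:]]


print(lsp_labels(igp_path, advertised))  # [18, 19, 3]
```

The point of the sketch: no single protocol message describes the whole LSP; it exists only as the concatenation of independent, per-hop label bindings along the IGP path.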

Neighbor Discovery

LDP has two phases for finding neighbors:

  Phase                   Mechanism                           Transport
  Discovery               Hello packets announcing presence   UDP/646 multicast (224.0.0.2 / FF02::2)
  Session establishment   TCP session for label exchange      TCP/646 unicast

Each LSR sends LDP Hellos out every MPLS-enabled interface every 5 seconds (default). When two LSRs hear each other's Hellos, the one with the higher transport address initiates a TCP connection to port 646 on the other. That TCP session carries the label mappings and all ongoing updates.

The LDP router-id is by default the highest loopback IP on the box, mirroring how OSPF picks its router-id. All LDP sessions for a given LSR use the same router-id, and each pair of neighbors shares a single TCP connection over which all label exchange takes place.
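The active/passive rule from RFC 5036 can be sketched in a few lines (addresses are illustrative): the peer with the numerically higher transport address opens the TCP connection to port 646 on the other.

```python
import ipaddress


def ldp_session_roles(addr_a: str, addr_b: str) -> tuple:
    """Return (active, passive) peers: per RFC 5036, the LSR with the
    numerically higher transport address initiates the TCP connection
    to port 646 on the other."""
    a = int(ipaddress.ip_address(addr_a))
    b = int(ipaddress.ip_address(addr_b))
    return (addr_a, addr_b) if a > b else (addr_b, addr_a)


# 2.2.2.2 > 1.1.1.1, so 2.2.2.2 dials out to 1.1.1.1:646
print(ldp_session_roles("2.2.2.2", "1.1.1.1"))  # ('2.2.2.2', '1.1.1.1')
```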

Label Distribution Modes

LDP supports two modes for advertising labels:

  Mode                          Behavior                                                                   Default on Cisco
  Downstream Unsolicited (DU)   Each LSR proactively advertises labels for its prefixes to all neighbors   Yes
  Downstream-on-Demand (DoD)    LSRs only advertise labels in response to a neighbor's Label Request       No (rare in practice)

DU is what you see in production. Every LSR floods label mappings, receivers store them, and the forwarding table populates rapidly. DoD was designed mainly for ATM-based MPLS (effectively obsolete today), where per-link label space was scarce, so labels were advertised only when a neighbor explicitly requested them.

Label Retention Modes

  Mode                     Behavior
  Liberal Retention        Keep all label bindings from all neighbors, even ones not on the IGP best path
  Conservative Retention   Only keep bindings from the IGP next-hop neighbor for each prefix

Liberal retention is the Cisco default. The benefit: when the IGP fails over to a different next-hop, MPLS forwarding state is already populated for the new path - no waiting for label exchange. The cost: more memory used for label bindings.

For typical service-provider deployments, liberal retention is correct. The memory cost is negligible; the convergence benefit is significant.
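The failover difference can be sketched with a toy binding table (hypothetical neighbors and label values):

```python
# Remote bindings learned for 10.10.10.0/24, one per LDP neighbor.
bindings = {
    "2.2.2.2": 18,  # current IGP next-hop
    "3.3.3.3": 19,  # alternate path, not currently best
}


def outgoing_label(bindings, igp_next_hop, retention="liberal"):
    """Liberal retention keeps every neighbor's binding; conservative
    keeps only the binding from the current best-path neighbor
    (2.2.2.2 in this toy example)."""
    if retention == "conservative":
        kept = {nh: lbl for nh, lbl in bindings.items() if nh == "2.2.2.2"}
    else:
        kept = dict(bindings)  # liberal: everything stays in the table
    return kept.get(igp_next_hop)


# IGP fails over to 3.3.3.3:
print(outgoing_label(bindings, "3.3.3.3"))                  # 19 - label already there
print(outgoing_label(bindings, "3.3.3.3", "conservative"))  # None - must wait for LDP
```

With liberal retention the new path's label is sitting in the table before the failure ever happens, which is exactly the convergence benefit the text describes.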

LDP-IGP Synchronization

One of the trickiest production issues: an interface comes up, the IGP starts using it immediately, but LDP takes a few seconds to form a session and exchange labels. During that gap the IGP forwards packets via the new interface while MPLS has no label for them, so traffic is either dropped or forwarded as plain IP - and unlabeled forwarding breaks any LSP-dependent service such as L3VPN.

LDP-IGP synchronization solves this. With sync enabled, the IGP cost on an interface is held at maximum (effectively infinite) until LDP has formed a session and exchanged labels with the neighbor. Once LDP is ready, the IGP cost reverts to normal and traffic shifts to the new path.
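The metric behavior amounts to a simple rule. This sketch is illustrative, not IOS XE internals; 65535 is the real OSPF maximum interface metric used to repel traffic.

```python
MAX_METRIC = 65535  # OSPF max interface metric: link usable, but avoided


def ospf_cost(configured_cost: int, ldp_sync_achieved: bool) -> int:
    """With LDP-IGP sync, the interface is advertised at max metric
    until LDP has a session and exchanged labels with the neighbor."""
    return configured_cost if ldp_sync_achieved else MAX_METRIC


print(ospf_cost(10, ldp_sync_achieved=False))  # 65535 - link up, LDP not ready
print(ospf_cost(10, ldp_sync_achieved=True))   # 10    - labels exchanged
```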

Configuration:

! On Cisco IOS XE, LDP-IGP sync is enabled under the IGP process
router ospf 1
 mpls ldp sync             ! Enable for all OSPF interfaces in this process

! Per-interface: opt out where sync is not wanted
interface GigabitEthernet0/0/0
 no mpls ldp igp sync

Verify with show mpls ldp igp sync.

Always enable LDP-IGP sync in production MPLS deployments. The cost is zero; the protection against transient drops is real.

Targeted LDP

Standard LDP discovers neighbors via Hellos on directly-connected interfaces. Targeted LDP establishes sessions with non-adjacent LSRs (e.g. for L2VPN pseudowires that span multiple hops).

! Configure a targeted session to a remote LSR
mpls ldp neighbor 10.10.10.10 targeted

Used primarily for L2VPN services (VPWS / pseudowires) where the two endpoints of a pseudowire need direct LDP signaling regardless of how many hops separate them.

Configuration on Cisco IOS XE

Minimum LDP configuration:

! Global MPLS configuration
mpls ldp router-id Loopback0 force    ! Use loopback as LDP router-id
!
router ospf 1
 mpls ldp sync                        ! LDP-IGP sync for all OSPF interfaces
!
! Per-interface
interface GigabitEthernet0/0/0
 mpls ip                              ! Enable MPLS forwarding and LDP
 mpls label protocol ldp              ! Use LDP (the default on IOS XE; TDP is legacy)

That is it: two lines per interface and a handful of global commands. Once enabled, LDP discovers neighbors automatically and starts exchanging labels.

For the comprehensive walkthrough, see Cisco MPLS Configuration on IOS XE.

Verification

! LDP neighbors
Router# show mpls ldp neighbor
 Peer LDP Ident: 2.2.2.2:0; Local LDP Ident 1.1.1.1:0
        TCP connection: 2.2.2.2.13456 - 1.1.1.1.646
        State: Oper; Msgs sent/rcvd: 1234/1234; Downstream
        Up time: 02:30:15
        LDP discovery sources:
          GigabitEthernet0/0/0, Src IP addr: 10.0.12.2

! Label bindings
Router# show mpls ldp bindings
  lib entry: 10.10.10.0/24, rev 100
        local binding:  label: 17
        remote binding: lsr: 2.2.2.2:0, label: 18
        remote binding: lsr: 3.3.3.3:0, label: 19

! LDP-IGP sync state
Router# show mpls ldp igp sync
GigabitEthernet0/0/0:
        LDP configured; SYNC enabled
        SYNC status: sync achieved; peer reachable

The bindings table shows local and remote labels. For prefix 10.10.10.0/24, this LSR uses label 17 locally; if it forwards to 2.2.2.2, it pushes label 18; if it forwards to 3.3.3.3, it pushes label 19. CEF picks the right label based on which next-hop the IGP chose.
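A toy model of that selection, using the labels from the sample output above (the dict layout is purely illustrative, not how CEF stores state):

```python
# Label bindings for one prefix, as in the show output: local label 17,
# remote labels 18 (via 2.2.2.2) and 19 (via 3.3.3.3).
lib = {
    "10.10.10.0/24": {"local": 17, "remote": {"2.2.2.2": 18, "3.3.3.3": 19}},
}


def lfib_entry(prefix: str, igp_next_hop: str) -> dict:
    """Forwarding entry: swap the incoming (local) label for the label
    advertised by whichever neighbor the IGP chose as next-hop."""
    entry = lib[prefix]
    return {"in_label": entry["local"],
            "out_label": entry["remote"][igp_next_hop]}


print(lfib_entry("10.10.10.0/24", "2.2.2.2"))  # {'in_label': 17, 'out_label': 18}
```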

Authentication

LDP supports MD5 authentication for the TCP session:

mpls ldp neighbor 2.2.2.2 password Cisco123!

Both ends must agree on the password. Authentication prevents an attacker from injecting fake LDP messages that could manipulate the label table. Always enable in production.
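Conceptually this is TCP MD5 per RFC 2385: each end computes an MD5 digest over the segment with the shared key appended, so mismatched passwords produce mismatched digests and the segment is discarded. A simplified sketch (real TCP MD5 also zeroes the checksum field and covers the pseudo-header exactly as the RFC specifies):

```python
import hashlib


def tcp_md5_digest(pseudo_header: bytes, tcp_header: bytes,
                   payload: bytes, password: bytes) -> bytes:
    """Simplified RFC 2385-style digest: MD5 over the segment with the
    shared key appended at the end (checksum-zeroing omitted here)."""
    return hashlib.md5(pseudo_header + tcp_header + payload + password).digest()


same = tcp_md5_digest(b"ph", b"hdr", b"label mapping", b"Cisco123!")
diff = tcp_md5_digest(b"ph", b"hdr", b"label mapping", b"wrong-password")
print(same != diff)  # True - a password mismatch breaks the session
```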

Troubleshooting

Common LDP failure modes:

  Symptom                                           Likely cause
  No LDP neighbor on directly-connected interface   mpls ip missing on one or both sides; authentication mismatch
  Session bouncing                                  Underlying link instability; MTU mismatch breaking the LDP TCP session
  No labels for some prefixes                       Prefix not in the IGP; or an LDP label filter suppressing the advertisement
  Traffic forwarded unlabeled                       LDP not enabled on the egress interface; or PHP (penultimate-hop pop) behaving as designed
  Drops on link bring-up                            LDP-IGP sync not configured; IGP advertises the link before LDP has labels

Universal first command: show mpls ldp neighbor. State should be Oper. If not, work backwards through MPLS interface config, IP reachability, and authentication.

LDP vs Segment Routing

LDP is being phased out in modern service-provider designs in favor of Segment Routing (SR-MPLS). SR uses the same MPLS data plane but distributes labels via the IGP itself - no separate LDP protocol. This eliminates the LDP-IGP synchronization problem entirely (because the IGP IS the label distribution).

  Aspect                   LDP                              SR-MPLS
  Label distribution       Separate LDP protocol            IGP extensions (OSPF/IS-IS)
  State per prefix         Stored in LDP binding table      Derived from IGP-advertised segment IDs (SIDs)
  Sync issues              Needs LDP-IGP sync               None (the IGP is the distribution)
  Operational complexity   Higher (separate protocol)       Lower (one less protocol)
  Fast reroute             Possible with extensions         Native via TI-LFA
For the full LDP-vs-SR comparison and migration story, see the MPLS pillar's segment routing section.

Summary

LDP distributes labels across the MPLS network so each LSR knows which label to use for each prefix. It runs over UDP/646 for discovery and TCP/646 for session establishment. The dominant production mode is Downstream Unsolicited with Liberal Retention; LDP-IGP synchronization is mandatory to prevent transient drops on link bring-up.

Configuration is minimal (a few commands per LSR), and once running LDP populates the forwarding state automatically. Modern designs are migrating to Segment Routing, which eliminates LDP entirely, but LDP remains the dominant label distribution protocol in production MPLS networks today. Bookmark this article alongside the MPLS cluster pillar and the MPLS labels article.


© 2025 Ping Labz. All rights reserved.