Standard GRE is point-to-point: one tunnel, one neighbor, one underlay destination. That works for two sites. For 50 branches and 2 hubs, point-to-point GRE means 50 manually configured tunnels per hub, plus another 1,225 tunnels if you want full mesh between branches. Nobody does that. The answer is multipoint GRE (mGRE), which lets a single tunnel interface serve many remote endpoints, plus NHRP (Next Hop Resolution Protocol) to dynamically map them, plus IPsec to encrypt them. Stack those three together and you have DMVPN, the Cisco standard for hub-and-spoke and dynamic spoke-to-spoke overlays.
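The tunnel-count arithmetic is worth making explicit: hub-and-spoke needs one tunnel per spoke per hub, while a full spoke mesh needs n(n-1)/2 unique pairs. A quick sketch:

```python
def p2p_tunnel_count(spokes: int, hubs: int) -> dict:
    """Count point-to-point GRE tunnels needed without mGRE.

    Hub-and-spoke: every spoke gets one tunnel per hub.
    Full mesh between spokes: n * (n - 1) / 2 unique pairs.
    """
    hub_and_spoke = spokes * hubs            # 50 tunnels per hub, 2 hubs
    spoke_mesh = spokes * (spokes - 1) // 2  # every unique spoke pair
    return {"hub_and_spoke": hub_and_spoke, "spoke_mesh": spoke_mesh}

print(p2p_tunnel_count(50, 2))
# {'hub_and_spoke': 100, 'spoke_mesh': 1225}
```

With mGRE, each router needs exactly one tunnel interface regardless of peer count, which is the whole point.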
This article introduces mGRE and DMVPN: the overall architecture, the three DMVPN phases, and a baseline Phase 3 hub-and-spoke configuration. It is part of the PingLabz GRE Tunnels cluster.
What Multipoint GRE Adds
Plain GRE has one tunnel destination configured under the interface:

```
interface Tunnel0
 tunnel source 198.51.100.1
 tunnel destination 203.0.113.1   ! <- single destination
 tunnel mode gre ip
```

mGRE removes the destination:

```
interface Tunnel0
 tunnel source 198.51.100.1
 tunnel mode gre multipoint       ! <- no destination, multipoint mode
```

The tunnel can now have many endpoints. The router needs some way to know, for each inner-IP destination, which underlay IP to use as the outer-header destination. That mapping is what NHRP provides.
NHRP: The Address Resolution for mGRE
NHRP (Next Hop Resolution Protocol, RFC 2332) is essentially ARP for tunnel networks. It answers the question "I have an inner-overlay IP I want to send to. What underlay IP should I use as the outer-tunnel destination?" In a DMVPN, every spoke registers its overlay-to-underlay mapping with the hub (the NHS, Next Hop Server). When a spoke wants to talk to another spoke, it asks the hub for that spoke's underlay IP, then builds a direct GRE tunnel.
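Conceptually, the NHS is just a keyed cache of overlay-to-underlay mappings that spokes populate and query. A toy Python sketch of that idea (illustrative only; real NHRP per RFC 2332 adds holding times, authentication, and the redirect/shortcut machinery covered below):

```python
# Toy model of the mapping state a DMVPN hub (NHS) maintains.
class NextHopServer:
    def __init__(self):
        self.cache = {}  # overlay (tunnel) IP -> underlay (NBMA) IP

    def register(self, overlay_ip, underlay_ip):
        """Spoke -> hub: NHRP Registration Request."""
        self.cache[overlay_ip] = underlay_ip

    def resolve(self, overlay_ip):
        """Spoke -> hub: NHRP Resolution Request."""
        return self.cache.get(overlay_ip)

nhs = NextHopServer()
nhs.register("10.0.0.10", "203.0.113.10")  # spoke 10 registers
nhs.register("10.0.0.20", "203.0.113.20")  # spoke 20 registers

# Spoke 10 asks: what underlay address reaches overlay 10.0.0.20?
print(nhs.resolve("10.0.0.20"))
# 203.0.113.20
```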
The relevant NHRP commands on a spoke pointing to a hub:

```
interface Tunnel0
 ip address 10.0.0.10 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1               ! Hub's overlay IP (NHS)
 ip nhrp map 10.0.0.1 198.51.100.1  ! Static map: hub overlay -> hub underlay
 ip nhrp map multicast 198.51.100.1 ! Send multicast to hub
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel key 12345                   ! Same on all members
 tunnel protection ipsec profile IPSEC-GRE
```

The static map tells the spoke how to reach the hub initially. Once the spoke registers, the hub knows the spoke's underlay IP and can hand it out to other spokes on demand. The tunnel key sets the GRE Key field; it must match across all DMVPN members and serves as a tunnel identifier when one router has multiple mGRE tunnels.
DMVPN Phases
DMVPN has evolved through three phases, each changing how spoke-to-spoke traffic is handled. Modern deployments default to Phase 3.
| Phase | Spoke-to-spoke? | How it works | Use today |
|---|---|---|---|
| Phase 1 | No - all traffic via hub | Spokes have static map only to hub; hub re-encapsulates spoke-to-spoke traffic | Simple sites where hub-only is acceptable; hub bandwidth is sized for all traffic |
| Phase 2 | Yes - dynamic spoke-to-spoke | Spokes resolve other spokes' underlay IPs via NHRP and build direct tunnels; routing protocol must preserve the spoke as next-hop | Largely superseded by Phase 3; still in use in older deployments |
| Phase 3 | Yes - dynamic spoke-to-spoke with NHRP shortcut | Initial packets go via hub; NHRP shortcut sends a redirect; spokes build direct tunnel and reroute | Modern default for new DMVPN designs |
Phase 1 is conceptually simplest. The hub does all the work. Spoke-to-spoke traffic is just two hub-traversal hops. It is fine for designs where the hub has plenty of bandwidth and CPU, but it is wasteful for high-volume direct traffic.
Phase 2 added direct spoke-to-spoke tunnels but had operational warts: the routing protocol had to preserve the original spoke next-hop (which broke summarization at the hub), and spoke-to-spoke resolution was triggered by data-plane traffic toward a remote spoke. It worked, but the routing design and the NHRP behavior were tightly coupled.
Phase 3 is the modern answer. The hub-and-spoke control-plane stays clean (the hub can summarize routes), but when a spoke sends traffic to another spoke via the hub, the hub returns an NHRP redirect telling the spoke "actually, you can reach that destination directly at this underlay IP." The spoke builds the direct tunnel, future packets bypass the hub, and the design scales cleanly.
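The Phase 3 sequence can be sketched as a toy forwarding loop: the first packet hairpins through the hub and earns a redirect, later packets go direct. This is illustrative pseudologic only; real routers implement it in CEF and NHRP state, not per-packet code:

```python
# Toy walk-through of the Phase 3 packet path.
def forward(dst_overlay, shortcuts, hub_cache, log):
    if dst_overlay in shortcuts:          # shortcut already installed
        log.append(("direct", shortcuts[dst_overlay]))
        return
    log.append(("via-hub", dst_overlay))  # first packet relays through hub
    if dst_overlay in hub_cache:          # hub's redirect: "go direct here"
        shortcuts[dst_overlay] = hub_cache[dst_overlay]

hub_cache = {"10.0.0.20": "203.0.113.20"}  # hub knows spoke 20's underlay IP
shortcuts, log = {}, []
forward("10.0.0.20", shortcuts, hub_cache, log)  # packet 1: via hub + redirect
forward("10.0.0.20", shortcuts, hub_cache, log)  # packet 2: direct
print(log)
# [('via-hub', '10.0.0.20'), ('direct', '203.0.113.20')]
```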
Hub Configuration (DMVPN Phase 3)
```
! ---- Hub: HQ1 ----
crypto ikev2 keyring KR-DMVPN
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key local DMVPN_KEY
  pre-shared-key remote DMVPN_KEY
!
crypto ikev2 profile IKEV2-DMVPN
 match identity remote address 0.0.0.0
 authentication local pre-share
 authentication remote pre-share
 keyring local KR-DMVPN
!
crypto ipsec transform-set TS esp-aes 256 esp-sha256-hmac
 mode transport
!
crypto ipsec profile IPSEC-DMVPN
 set transform-set TS
 set ikev2-profile IKEV2-DMVPN
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 no ip split-horizon eigrp 100    ! EIGRP only; see note below
 ip nhrp authentication NHRPSEC
 ip nhrp network-id 1
 ip nhrp redirect                 ! Phase 3: hub sends redirects
 ip nhrp map multicast dynamic
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel key 12345
 tunnel protection ipsec profile IPSEC-DMVPN
!
router eigrp 100
 network 10.0.0.0 0.0.0.255
 no auto-summary
```

Key hub-side commands:

- `ip nhrp redirect` is the Phase 3 enabler. The hub watches for traffic flowing through it that could go spoke-to-spoke directly and sends an NHRP redirect to the source spoke.
- `ip nhrp map multicast dynamic` lets the hub replicate multicast (routing-protocol hellos, broadcasts) to all registered spokes without static maps.
- `no ip split-horizon eigrp 100`: on a multipoint interface, EIGRP would normally not advertise routes back out the interface they arrived on (split horizon). Disable that on the hub so it can re-advertise spoke routes to other spokes. (The equivalent OSPF concern is the interface network type, not split horizon.)
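The `ip mtu 1400` / `ip tcp adjust-mss 1360` pair deserves a sanity check. The arithmetic below uses approximate overhead figures (ESP overhead varies with cipher, padding, and transport vs tunnel mode), which is why 1400 is the conventional safe value rather than an exact computation:

```python
# Why the tunnel config sets "ip mtu 1400" and "ip tcp adjust-mss 1360".
UNDERLAY_MTU = 1500
OUTER_IP = 20       # outer IPv4 header added by GRE encapsulation
GRE_HDR = 8         # GRE base header (4) + Key field (4, from "tunnel key")
ESP_OVERHEAD = 56   # rough worst case: SPI/seq + IV + padding + ICV (varies)

overhead = OUTER_IP + GRE_HDR + ESP_OVERHEAD
print(UNDERLAY_MTU - overhead)  # ~1416 payload bytes survive, so 1400 is safe

TUNNEL_MTU = 1400
print(TUNNEL_MTU - 20 - 20)     # 1360: MSS = tunnel MTU - inner IP - TCP
```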
Spoke Configuration (Phase 3)
```
! ---- Spoke: Branch ----
crypto ikev2 ... (same as hub)
crypto ipsec ... (same as hub)
!
interface Tunnel0
 ip address 10.0.0.10 255.255.255.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 ip nhrp authentication NHRPSEC
 ip nhrp network-id 1
 ip nhrp shortcut                   ! Phase 3: spoke acts on redirects
 ip nhrp nhs 10.0.0.1               ! Hub overlay IP
 ip nhrp map 10.0.0.1 198.51.100.1  ! Static map to hub
 ip nhrp map multicast 198.51.100.1
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel key 12345
 tunnel protection ipsec profile IPSEC-DMVPN
!
router eigrp 100
 network 10.0.0.0 0.0.0.255
 network 192.168.10.0 0.0.0.255     ! Branch LAN
 no auto-summary
```

The spoke-side `ip nhrp shortcut` command is the Phase 3 counterpart to the hub's `ip nhrp redirect`: when the spoke receives a redirect, it sends an NHRP resolution request for the destination, learns the target spoke's underlay IP, builds the direct tunnel, and installs a next-hop override in its CEF table so future packets bypass the hub.
Verifying DMVPN
```
HUB# show dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        T1 - Route Installed, T2 - Nexthop-override
        C - CTS Capable

Type:Hub, NHRP Peers:2,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 203.0.113.10        10.0.0.10    UP   00:14:23     D
     1 203.0.113.20        10.0.0.20    UP   00:13:01     D

HUB# show ip nhrp
10.0.0.10/32 via 10.0.0.10
   Tunnel0 created 00:14:23, expire 01:45:36
   Type: dynamic, Flags: registered nhop
   NBMA address: 203.0.113.10
```

On the spoke, after some spoke-to-spoke traffic flows, you should see a dynamic NHRP entry for the other spoke:
```
SPOKE10# show dmvpn
Type:Spoke, NHRP Peers:2,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 198.51.100.1        10.0.0.1     UP   00:14:23     S
     1 203.0.113.20        10.0.0.20    UP   00:00:34     D
```

The "S" entry is the static hub mapping. The "D" entry is the dynamic spoke-to-spoke tunnel built after the hub redirect.
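If you want to machine-check this state, here is a rough sketch of scraping the peer table from `show dmvpn` text. The column layout varies across IOS releases, so a production tool should prefer structured output (pyATS/Genie parsers, NETCONF) over regex scraping:

```python
import re

# Matches peer rows like: " 1 203.0.113.20  10.0.0.20  UP 00:00:34 D"
PEER_RE = re.compile(
    r"^\s*\d+\s+(?P<nbma>\d+\.\d+\.\d+\.\d+)\s+(?P<tunnel>\d+\.\d+\.\d+\.\d+)"
    r"\s+(?P<state>\S+)\s+(?P<updown>\S+)\s+(?P<attrb>\S+)"
)

def parse_dmvpn(output: str):
    """Return one dict per peer row found in show dmvpn output."""
    return [m.groupdict() for m in map(PEER_RE.match, output.splitlines()) if m]

sample = """
 1 198.51.100.1 10.0.0.1 UP 00:14:23 S
 1 203.0.113.20 10.0.0.20 UP 00:00:34 D
"""
peers = parse_dmvpn(sample)
print(peers[1]["attrb"])
# D  (a dynamically built spoke-to-spoke tunnel)
```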
Design Considerations
- Hub redundancy. A single hub is a single point of failure. Production DMVPN designs use two hubs, each spoke registers with both, and the routing protocol picks the active path. The hubs themselves do not need to be aware of each other beyond running the same DMVPN profile.
- Routing protocol choice. EIGRP and OSPF both work over DMVPN. EIGRP scales better in mesh designs because it does not have the LSA-flood problem; OSPF works fine for hub-and-spoke up to a few hundred spokes if you tune network types correctly. iBGP is also common, especially if the WAN already runs BGP.
- Underlay path control. If the hub's underlay address is itself learned through the tunnel, the tunnel flaps: recursive routing is a real risk whenever both the underlay and the DMVPN overlay run dynamic routing. Pin the underlay route to the hub's tunnel-source IP with a static route on each spoke.
- NHRP authentication. The `ip nhrp authentication` string is plaintext on the wire. It is not a security boundary; treat it as a sanity check that keeps spokes from registering against the wrong DMVPN. The real authentication is the IPsec PSK (or PKI certificates, for production).
- Spoke onboarding. A new spoke needs only its static hub map plus its assigned overlay IP. Many shops build a Jinja2 template and push spoke configs via Ansible / CSPC for at-scale onboarding.
- NAT behind spokes. If a spoke is behind NAT, NAT-T (UDP 4500) must be permitted by the NAT box. NHRP also negotiates around NAT, but some scenarios (CGNAT, double-NAT) cause problems and you may need spoke-to-hub-only without spoke-to-spoke for those.
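The spoke-onboarding templating idea can be sketched with only the Python standard library; real shops typically reach for Jinja2 + Ansible, and the field names here are invented for illustration:

```python
from string import Template

# Per-spoke variables fill a fixed skeleton; everything else is identical
# across all spokes, which is what makes DMVPN onboarding template-friendly.
SPOKE_TMPL = Template("""\
interface Tunnel0
 ip address $overlay_ip 255.255.255.0
 ip nhrp network-id 1
 ip nhrp shortcut
 ip nhrp nhs 10.0.0.1
 ip nhrp map 10.0.0.1 $hub_underlay
 ip nhrp map multicast $hub_underlay
 tunnel source $wan_if
 tunnel mode gre multipoint
 tunnel key 12345
""")

cfg = SPOKE_TMPL.substitute(
    overlay_ip="10.0.0.30",
    hub_underlay="198.51.100.1",
    wan_if="GigabitEthernet1",
)
print(cfg)
```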
Alternatives in 2026
DMVPN remains widely deployed but it is no longer the only answer for multi-site overlays. The competitive options:
- SD-WAN platforms. Cisco Catalyst SD-WAN (Viptela), Fortinet Secure SD-WAN, Palo Alto Prisma SD-WAN, VMware VeloCloud, and similar all build encrypted overlays at scale, with centralized policy and orchestration. SD-WAN is the modern default for new enterprise WAN; DMVPN is increasingly the legacy / Cisco-traditional path.
- WireGuard meshes. Tools like Tailscale, NetBird, and Headscale build WireGuard meshes with central coordination. Smaller-scale, simpler to operate, no Cisco required.
- Cloud SD-WAN. AWS Cloud WAN, Azure Virtual WAN, and Google Network Connectivity Center are taking over the cloud-on-ramp portion of what DMVPN used to do.
For a Cisco shop with existing DMVPN expertise and limited willingness to introduce new platforms, DMVPN Phase 3 over GRE-IPsec remains a perfectly good answer. For greenfield enterprises, the question worth asking is: do you want to operate the underlying mGRE/NHRP/IPsec stack yourself, or do you want a managed-overlay platform to do it for you?
Summary
mGRE plus NHRP plus IPsec is the building-block trio for DMVPN. Phase 3 is the modern default, with the hub sending NHRP redirects and the spokes acting on them to build dynamic direct tunnels. The configuration is more involved than point-to-point GRE but the operational payoff is enormous: one hub-side template plus one spoke-side template scales to hundreds of sites. The gotchas (recursive routing, OSPF network types, NHRP authentication, hub redundancy) are well-documented and predictable.
This article is an introduction. DMVPN has its own cluster-worth of depth: phase migrations, OSPF over DMVPN tuning, the per-tunnel QoS story, FlexVPN as a successor architecture, and the operational habits that keep a 200-site DMVPN running. For the GRE foundation that DMVPN builds on, work back to the PingLabz GRE pillar.