Additional SDA stuff (not on blueprint)
2.1.x Node types
General information on “SDA node types”:
- Each site requires at least one Control-Plane Node and one Border Node
- There are three different node types which provide different services for the SD-Access fabric
- The three possible node types are:
- Fabric Edge Node: A fabric device that connects wired endpoints to the SDA fabric. Responsible for identifying and authenticating endpoints. Registers EID information with the Control-Plane Node. Provides an Anycast L3 Gateway for connected endpoints (the same gateway for a given VN on all Edge Nodes). Performs encap/decap of data traffic from/to all endpoints.
- Fabric Border Node: A fabric device that connects external L3 networks to the SDA fabric (e.g. core). The Fabric Border Node is sub-divided into three possible types:
- Internal Border: Connects to “known” IP subnets outside of the fabric, e.g. shared services (DHCP, …). Fabric prefixes (EID spaces, aggregated) get advertised to the external domain and external prefixes are imported into the fabric domain EXCEPT the default route.
- External Border: Connects to “unknown” IP subnets outside of the fabric, e.g. Internet/Cloud/… Acts as the Gateway of Last Resort for all unknown prefixes. Fabric prefixes (EID spaces, aggregated) get advertised to the external domain but NO prefixes are imported into the fabric domain.
- Anywhere Border: Combination of Internal and External Border. Fabric prefixes (EID spaces, aggregated) get advertised to the external domain and external prefixes are imported into the fabric domain EXCEPT the default route. Acts as the Gateway of Last Resort for all unknown prefixes.
- Control-Plane Node: Used for the SDA control-plane. Acts as LISP MS/MR. Up to 6 Control-Plane Nodes are supported in a wired-only environment.
- Reasons to separate the Border Node (Internal/External vs. Anywhere):
- When using a single Anywhere Border, traffic hairpinning can occur
- When using separate Border Nodes the traffic is much more streamlined and much easier to troubleshoot
- Possible Node combinations:
- Each device can either act as a single node type (Distributed Design) or can serve multiple purposes
- Control-Plane + Border can be combined (known as Co-Located Design)
- Border + Edge can be combined
- When Control-Plane and Border Node are used in a Distributed Design, an iBGP peering between them is used to exchange and redistribute routes
- Control-Plane + Border + Edge combined on one device is known as Fabric in a Box
- High Availability:
- Redundant Border Nodes and Control Plane nodes operate in an active-active fashion
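- Verification sketch for the node roles above (assumption: IOS-XE CLI; device names are illustrative):
  ! On a Control-Plane Node: LISP sessions from Edge/Border Nodes plus registered sites/EIDs
  CP# show lisp session
  CP# show lisp site summary
  ! In a Distributed Design: the iBGP peering between Border and Control-Plane Node
  Border# show bgp vpnv4 unicast all summary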
2.1.x IP Pools
General information on “SDA IP pools”:
- SDA requires several IP pools to be configured
- Required IP pools:
- Border Pool: Mandatory. For leveraging Border automation.
- Client Pool: Mandatory. For onboarding wired endpoints.
- AP Pool: Optional (required when wireless is deployed). For attaching APs to the SDA fabric.
- Wireless Pool: Optional. For wireless endpoints.
- Wired and wireless endpoints can use the same IP pool
- The defined gateway address of an IP pool is the Anycast Gateway of the overlay network
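- Illustrative sketch of the Anycast Gateway SVI that an IP pool turns into on every Edge Node (DNAC generates this automatically; all names/values below are made up):
  interface Vlan1021
   description Client Pool gateway (same IP on every Edge Node)
   vrf forwarding CAMPUS_VN
   ip address 10.10.10.1 255.255.255.0
   ip helper-address 10.99.99.10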
2.1.x Transit Types
General information on “SDA Transit Types”:
- Three possible Transit Types:
- IP-based Transit: Leverages a traditional IP-based network, which requires remapping of VRFs and SGTs between sites. Typically used when connecting to Shared Services (WLC, DNS, DHCP, …).
- SD-Access Transit: Native SD-Access fabric (LISP, VXLAN, CTS) with a domain-wide Control Plane node for inter-site communication.
- SD-WAN Transit: Used for securely interconnecting different locations over different underlay types (Internet, MPLS, …). SGTs get preserved end-to-end. VNs are mapped to VPNs.
- Transit Use Cases:
- IP-based Transit: Service Insertion (eg. Firewall), Internet Handoff, P2P IPSEC Encryption, PBR, WAN Accelerators, Traffic Engineering.
- SD-Access Transit: Fully automated site-to-site connection and seamless policy propagation (Full VXLAN connectivity from Edge-to-Edge). Typically used for sites in the same metro area, campus or even building.
- SD-WAN Transit: Almost fully automated site-to-site connection and seamless policy propagation (SGT tags carried over the SD-WAN VPNs). Typically used for geographically distant sites.
- Transit Control/Data Plane:
- IP-based Transit:
- Control-Plane: LISP within SDA. IGP/BGP between sites.
- Data-Plane: VXLAN + SGT within SDA and SXP with ISE between sites.
- SD-Access Transit:
- Control-Plane: LISP within SDA and between sites.
- Data-Plane: VXLAN + SGT within SDA and between sites.
- SD-WAN Transit:
- Control-Plane: LISP within SDA site. OMP over SD-WAN.
- Data-Plane: VXLAN + SGT within SDA. IPSEC + SGT over SD-WAN.
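- Since an IP-based Transit cannot carry SGTs inline, SXP (with ISE, see Data-Plane above) propagates the IP-to-SGT bindings. A minimal SXP sketch (assumption: IOS-XE; peer address and password are illustrative):
  cts sxp enable
  cts sxp default password SxpPass123
  ! Listen for IP-to-SGT bindings from the SXP speaker (e.g. ISE)
  cts sxp connection peer 10.99.99.20 password default mode local listener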
2.1.x DHCP within SDA
General information on “SDA DHCP”:
- Endpoint sends DHCP Request which gets intercepted by the Fabric Edge Node
- Fabric Edge Node inserts DHCP option 82 (RLOC Loopback address and VN ID)
- Fabric Edge Node sends the DHCP Request to the Border Node
- Border Node sends the DHCP Request to the DHCP Server through the Fusion Router
- The DHCP Reply returns to the Border Node, which punts it to the CPU. The punt works because the Border Node has a loopback interface with the same IP address, in the same VN, as the Edge Node Anycast Gateway to which the client’s original DHCP Request was sent (sketched below).
- The Border Node extracts the Option 82 parameters (RLOC + VN ID)
- The Border Node will encapsulate the DHCP Reply in VXLAN with destination RLOC and VN ID (LISP Instance ID) extracted from the DHCP Option 82 parameters
- DHCP server must be Option 82 compliant (= DHCP server needs to preserve the option)
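- Illustrative sketch of the punt mechanism described above (DNAC provisions this automatically; names/values are made up):
  ! Border Node: loopback mirroring the Edge Node Anycast Gateway 10.10.10.1
  ! in the same VN, so the returning DHCP Reply gets punted to the CPU
  interface Loopback1021
   vrf forwarding CAMPUS_VN
   ip address 10.10.10.1 255.255.255.255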
2.1.x Troubleshooting
General information on “SDA Troubleshooting”:
- To check the connection from the Fabric Edge Node to a connected Endpoint, VRF-based ping (from the VRF of the VN) to the Endpoint can be used
- To check the intra-VN connection between two Edge Nodes, a ping from outside the Fabric must be used since the SVI on each Edge Node has the same IP address (Anycast L3 Gateway)
- To check VN-specific configuration, the VRF names are the same as the VN names defined within DNAC
- When using multiple Control-Plane Nodes, each Edge Node sends its LISP MAP-REGISTER/MAP-REQUEST messages to every CP Node; the CP Nodes do not synchronize with each other (see the verification sketch below)
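- Verification sketch for the points above (assumption: IOS-XE; VRF name, instance-ID and addresses are illustrative):
  ! Ping a connected endpoint from within the VN's VRF
  Edge# ping vrf CAMPUS_VN 10.10.10.20
  ! VRF names match the VN names defined within DNAC
  Edge# show vrf
  ! Local EID database and map-cache of a LISP instance
  Edge# show lisp instance-id 4099 ipv4 database
  Edge# show lisp instance-id 4099 ipv4 map-cache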
2.1.x Multicast within SDA
General information on “SDA Multicast”:
- There are two possible ways of doing multicast within SDA:
- Headend Replication
- Native Multicast
- Function-wise the two modes are (almost) identical to how multicast traffic is handled within VXLAN
- Configuration of multicast requires a separate IP pool per VN, which provides IP addresses for PIM, RPs and MSDP
- Non-RP fabric devices require just one IP address (PIM loopback interface)
- RP fabric devices require three IP addresses (PIM loopback interface, Anycast RP address, MSDP peering address)
- If the multicast source/receiver is located outside the fabric, it must be placed within the respective VRF
- Inter-VRF multicast requires Multicast VPN Extranet which is out of scope of the CCIE Lab
- Headend Replication:
- Multicast packets are forwarded in the overlay (“multicast over unicast” scenario)
- 1000 ASM groups are supported within the overlay (as of today)
- Doesn’t require multicast configuration in the underlay
- Configuration restrictions:
- Supported Multicast Modes (overlay): ASM, SSM (custom SSM range since v1.3.3)
- RP Placement (ASM, overlay): Inside/outside the fabric (outside RP since v1.3.3)
- Multicast source placement: Inside/outside the fabric
- RP redundancy (ASM, overlay): MSDP
- PIM-SM is configured/activated under the virtual LISP instance interfaces
- Control/Data Plane:
- Multicast Control Plane: Overlay
- Multicast Data Plane (Forwarding): Overlay
- Multicast packets are encapsulated in VXLAN and forwarded as unicast towards each Edge Node separately via the overlay
- The multicast RPT is effectively also the SPT, since everything is forwarded as unicast on the best available path (technically the path switches from RPT to SPT, but it stays the same)
- The source address of the outer IPv4 header (VXLAN header) is the Ingress Fabric Node's RLOC
- The destination address of the outer IPv4 header (VXLAN header) is the Egress Fabric Node's RLOC (not the multicast address!)
- Intermediate nodes don’t need to be multicast-enabled in order for Headend replication to work
- PIM Message/Multicast traffic flow:
- PIM Join/Register message flow behaves just like vanilla multicast. The multicast traffic is encapsulated into VXLAN and sent as unicast to each Edge Node separately.
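- Verification sketch for Headend Replication (assumption: IOS-XE; VRF/group values are illustrative) - the multicast state lives only in the overlay VRF:
  Edge# show ip mroute vrf CAMPUS_VN 239.1.1.1
  Edge# show ip pim vrf CAMPUS_VN rp mapping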
- Native Multicast:
- Multicast packets are forwarded in the underlay (“multicast over multicast” scenario)
- 1000 SSM groups are supported in the underlay (from 232.0.0.1 to 232.0.3.232)
- Each overlay multicast group is mapped to an underlay multicast group
- Requires multicast configuration in the underlay
- Configuration restrictions:
- Supported Multicast Modes (overlay): ASM, SSM (custom SSM range since v1.3.3)
- Supported Multicast Modes (underlay): SSM
- RP Placement (ASM, overlay): Inside/outside the fabric (outside RP since v1.3.3)
- Multicast source placement: Inside/outside the fabric
- RP redundancy (ASM, overlay): MSDP
- PIM-SM is configured/activated under the physical L3 links
- Control/Data Plane:
- Multicast Control Plane: Overlay
- Multicast Data Plane (Forwarding): Underlay
- Multicast packets are encapsulated in VXLAN and forwarded to Edge Nodes through the underlay multicast tree
- When multicast packets hit the Ingress Fabric Node, the original packet will be copied to the underlay SSM tree and encapsulated into VXLAN
- The Egress Fabric Node will decapsulate the VXLAN packet, put it back to the multicast tree in the respective VN and forward it directly to the receivers
- The source address of the outer IPv4 header (VXLAN header) is the Ingress Fabric Node's RLOC
- The destination address of the outer IPv4 header (VXLAN header) is the mapped underlay SSM multicast group
- Intermediate nodes need to be multicast-enabled in order for Native Multicast to work
- PIM Message/Multicast traffic flow:
- Receiver signals that it wants to receive multicast traffic
- Fabric Edge Node sends two PIM Join messages:
- (*,G) PIM Join message towards the RP in the overlay
- (S,G) PIM Join message towards the RP RLOC address in the underlay (S = RLOC of the RP)
- Source starts to send traffic to the specific multicast group
- Fabric Edge/Border Node does two things:
- A PIM Register message as well as the multicast traffic is sent to the RP in the overlay
- The multicast traffic is also forwarded natively in the underlay to the mapped SSM group
- RP builds the overlay (S,G) path towards the source
- The (S,G) path of the overlay provides enough information to replicate the traffic in the underlay:
- Pre-SPT Underlay: (RP-RLOC, SSM Group)
- Post-SPT Underlay: (Source-RLOC, SSM Group)
- Important: Although native multicast is used, the traffic is still VXLAN encapsulated!
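- Verification sketch for Native Multicast (assumption: IOS-XE; values are illustrative) - overlay state sits in the VRF, the mapped SSM tree in the global table:
  ! Overlay: per-VN multicast state
  Edge# show ip mroute vrf CAMPUS_VN 239.1.1.1
  ! Underlay: the mapped SSM group
  Edge# show ip mroute 232.0.0.1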
2.1.x Manual Underlay Multicast Configuration
General information on “SDA Manual Underlay Multicast Configuration”:
- CP Node Configuration:
- Configure ip multicast-routing globally
- Configure ip pim ssm default globally
- Configure ip pim rp-address x.x.x.x globally to the IP address value of Loopback60000
- Configure ip pim register-source Loopback0 globally
- Configure ip pim sparse-mode under the L3 links and Loopback0 and Loopback60000
- Edge Node Configuration:
- Configure ip multicast-routing globally
- Configure ip pim ssm default globally
- Configure ip pim rp-address x.x.x.x globally
- Configure ip pim register-source Loopback0 globally
- Configure ip pim sparse-mode under the L3 links and Loopback0
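- Put together, the Edge Node part results in a config along these lines (x.x.x.x = the IP of Loopback60000; interface names are illustrative; on the CP Node, ip pim sparse-mode additionally goes under Loopback60000):
  ip multicast-routing
  ip pim ssm default
  ip pim rp-address x.x.x.x
  ip pim register-source Loopback0
  !
  interface GigabitEthernet1/0/1
   ip pim sparse-mode
  interface Loopback0
   ip pim sparse-mode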
2.1.x ARP Suppression in SDA
General information on “SDA ARP suppression”:
- ARP within SDA is function-wise similar to ARP within BGP EVPN VXLAN (no flooding, but rather using the control-plane and unicast)
- LISP not only learns and stores the IP address of a host, but also its MAC address
- ARP Request/Reply packet flow within SDA:
- Step 1: End Host sends an ARP Request to the Fabric Edge Node
- Step 2: Fabric Edge Node sends a LISP MAP-REQUEST for the IP address included in the ARP Request to the Control-Plane Node
- Step 3: Control-Plane Node responds with a LISP MAP-REPLY containing the MAC address for the requested IP back to the Fabric Edge Node
- Step 4: Fabric Edge Node sends a LISP MAP-REQUEST for the previously learned MAC address to the Control-Plane Node
- Step 5: Control-Plane Node responds with a LISP MAP-REPLY containing the RLOC for the requested MAC address
- Step 6: Fabric Edge Node sends the ARP Request as a unicast VXLAN-encapsulated packet directly to the Fabric Edge Node behind which the target host sits
- Important: The whole process is repeated on the other Edge Node for the ARP Reply!
- When there is no hit at Step 3 (= the Control-Plane Node has no entry for the IP address), this procedure fails
- This can happen in case there are so-called “silent hosts” on the network which never speak by themselves but only when they are directly “called”
- To solve this issue, Layer 2 Flooding must be activated (see below)
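- Verification sketch for ARP suppression (assumption: IOS-XE; the L2 instance-ID is illustrative):
  ! Edge Node: locally learned endpoint IP/MAC bindings
  Edge# show device-tracking database
  ! Control-Plane Node: registered MAC-to-RLOC mappings of a L2 instance
  CP# show lisp instance-id 8188 ethernet server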
2.1.x BUM Traffic handling within SDA
General information on “SDA BUM Traffic”:
- Configured/Done under Provision -> Fabric -> [select fabric] -> Host Onboarding
- By default, BUM traffic is not flooded across the fabric, but only on the local Edge Node
- Layer 2 unicast traffic is handled by LISP, which stores not only IP-to-RLOC mappings but also MAC-to-RLOC mappings
- BUM traffic flooding can be enabled but requires native multicast (PIM ASM) in the underlay
- When enabled, the overlay subnet will be mapped to an underlay multicast group
- BUM traffic is encapsulated in VXLAN and sent with Source IP = Source RLOC and Destination IP = Underlay Multicast Group
- Function-wise Layer 2 Flooding is identical to how traffic flows in VXLAN Flood-and-Learn
- Important: Requires enabled Multicast Routing in the underlay!
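- Verification sketch for Layer 2 Flooding (assumption: IOS-XE; the mapped underlay flood group 239.0.17.1 is illustrative):
  ! Underlay ASM state for the mapped flood group
  Edge# show ip mroute 239.0.17.1
  Edge# show ip pim rp mapping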
2.1.x Troubleshooting underlay connectivity using API
General information on “SDA Troubleshooting underlay connectivity using API”:
- When a device is shown as UNREACHABLE under the Provision page, the GUI doesn’t show the exact reason
- The possible reasons include wrong SNMP credentials/version/…, wrong management IP address, device not reachable from DNAC, …
- To get the exact reason, the DNAC API can be used: /dna/intent/api/v1/network-device
2.1.x QoS within SDA
General information on “SDA QoS”:
- Known as “Application Policy” within DNAC
- Consists of three (3) key elements:
- Application Sets
- Application Policies
- Queueing Profiles
- Application Sets in detail:
- Logical groups of applications
- Approx. 1400 applications are pre-defined within DNAC
- Custom applications can be added; recognition can be based on server name, URL, incoming DSCP value, and/or IP/protocol/port
- Application Policies in detail:
- Used to put Application Sets in pre-defined groups for QoS handling
- “Business Relevant”:
- Represents high-priority traffic which directly contributes to organizational objectives
- Should be treated per the industry best-practice recommendations of RFC 4594
- “Default”:
- Represents neutral traffic which may or may not be business relevant
- Should be treated with Best Effort (DF) per RFC 2474/4594
- “Business Irrelevant”:
- Represents low-priority traffic which has no contribution towards organizational objectives
- Should be treated as Scavenger (CS1) per RFC 3662/4594
- Queueing Profiles in detail:
- Queueing Profiles define the QoS handling of the Application Policies
- The default Queueing Profile (CVD_QUEUEING_PROFILE) is based on the 12-Class QoS Strategy
- This is Cisco’s interpretation of the RFC 4594 recommendation
- Configuration of QoS:
- Optional: Define own Applications and Application Sets
- Optional: Define own Queueing Profiles
- Create Application Policy
- Re-group Application Sets if necessary
- Select Queueing Profile
- Select Site
- Deploy configuration
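- For reference: the deployed result is plain MQC on the devices. A minimal sketch of the kind of marking policy an Application Policy renders into (class/protocol/DSCP values are made up for illustration, not the actual CVD output):
  class-map match-any VOICE-EXAMPLE
   match protocol cisco-phone
  policy-map APP-POLICY-EXAMPLE
   class VOICE-EXAMPLE
    set dscp ef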