Cisco SD-Access deployment
2.1.b i Cisco DNA Center device discovery and device management
General information on “SDA Cisco DNA Center device discovery and device management”:
- Configured/Done under DNAC Dashboard -> Tools -> Discovery
- Used mainly for brownfield deployments but can also be used for greenfield
- Three possible device discovery methods:
- Using Cisco Discovery Protocol (CDP)
- Using an IP address range scan
- Using Link Layer Discovery Protocol (LLDP)
- General considerations:
- It’s recommended to use the device’s loopback IP address as the management IP address, especially when deploying SD-Access and Assurance
- When a device has no loopback, or multiple loopbacks and/or L3 interfaces, DNAC selects the management IP using the same logic OSPF uses for its router ID (highest loopback IP > highest regular interface IP > highest serial interface IP)
- Considerations when using CDP/LLDP:
- Usage of CDP/LLDP discovery requires a “seed device”
- A “seed device” is a switch/router from where DNAC starts the discovery process
- The CDP/LLDP level defines the maximum number of hops a new device can be away from the “seed device”
- Network device preparation:
- Each device to be discovered needs at least the following (see the CLI sketch at the end of this section):
- Configured SNMP credentials (minimum SNMPv2c read-only credentials)
- Configured SSH credentials and access
- The admin account that is used needs privileged EXEC mode (level 15)
- The enable password is optional if the configured user already lands in privilege level 15 after logging in
- “Device Controllability” feature:
- Enabled by default
- Used to push some basic configuration commands to newly discovered devices to assist with data collection and device tracking (see the illustrative commands at the end of this section)
- Device controllability is mandatory in order to use newly discovered devices for SD-Access
- If Device Controllability is disabled, Cisco DNA Center does not configure any of the credentials or features on devices during Discovery or at run-time.
- Device Controllability also applies at run-time: when you change relevant settings, Cisco DNA Center automatically pushes the updates to the affected devices immediately
- Inventory management:
- Device sync occurs every 6 hours by default
- DNAC tries to determine the device role automatically; the role can be used for image assignment in SWIM and defines how the device is placed in the SDA fabric topology view
- Device provisioning:
- Pushes the settings defined in the Design -> Network Settings section to the device
- Additional templates can be defined which can also be pushed (see the template sketch at the end of this section)
- DNAC preparation:
- Adding CLI credentials
- Adding SNMP credentials
- After Device Discovery:
- Device must be assigned to a site
- Device must be provisioned (defined Network Settings will be pushed)
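As a minimal sketch of the device-side preparation described above (the user name "dnacadmin", the community string "dnac-ro" and the domain name are made-up example values):

    ! Hypothetical example values - adjust to your environment
    username dnacadmin privilege 15 algorithm-type scrypt secret StrongPass123
    snmp-server community dnac-ro RO
    ! SSH requires a domain name and RSA keys
    ip domain name example.local
    crypto key generate rsa modulus 2048
    line vty 0 15
     login local
     transport input ssh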
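The exact commands Device Controllability pushes depend on platform and DNAC version; the following lines are only illustrative of the kind of settings involved (telemetry destinations pointing at DNAC plus device tracking). The address 198.51.100.10 stands in for the DNAC IP:

    ! Illustrative only - not the authoritative command set
    logging host 198.51.100.10
    snmp-server host 198.51.100.10 version 2c dnac-ro
    ! endpoint visibility via IOS-XE device tracking
    device-tracking policy IPDT_POLICY
     tracking enable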
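Provisioning templates are written in the DNAC Template Editor (Velocity or Jinja2). A hypothetical Velocity fragment, where $hostname and $vlanId are template variables bound at provisioning time:

    hostname $hostname
    vlan $vlanId
     name USER_VLAN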
2.1.b ii Add fabric node devices to an existing fabric
General information on “SDA Add fabric node devices to an existing fabric”:
- Configured/Done under Provision -> Fabric -> [select fabric] -> [select site]
- Before a device can be added to a fabric, it must have been discovered, provisioned and assigned to a site
- Once a device is assigned to a site, it can’t be reassigned
- Instead, the device must be deleted, re-discovered and re-provisioned
- The device role determines where the device is graphically positioned within the fabric site
- Within a fabric, devices can have the following states:
- Grey: Part of the site inventory but not part of the SDA fabric.
- Blue: Part of the SDA fabric. Device is configured as fabric node.
- When a device is added to an SDA fabric, additional configuration (LISP + VRF (VN) + VXLAN) is pushed to the device so that it can take part in the overlay fabric (see the simplified sketch below)
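A heavily simplified sketch of the kind of LISP/VRF configuration pushed to a fabric node; the VN name "CAMPUS", instance ID 4099, the shared key and the control plane address 10.0.0.1 are made-up examples, and the real generated configuration is considerably longer:

    vrf definition CAMPUS
     address-family ipv4
    router lisp
     instance-id 4099
      service ipv4
       eid-table vrf CAMPUS
       encapsulation vxlan
       itr map-resolver 10.0.0.1
       etr map-server 10.0.0.1 key example-key
       etr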
2.1.b iii Host onboarding (wired endpoints only)
General information on “SDA Host onboarding (wired endpoints only)":
- Configured/Done under Provision -> Fabric -> [select fabric] -> Host Onboarding
- Global Template:
- All settings defined under “Select Authentication template” and “Virtual Networks” are globally valid for the whole fabric
- This means that each port is configured to automatically recognize the end host, put it in the appropriate VN and assign the correct SGT
- Port Assignment:
- Individual ports can be overridden with static settings
- This is done under “Select Port Assignment” at the bottom of the page
- All fabric Edge Nodes are listed on the left side and one or several ports can be selected and configured individually
- Onboarding Process (when a host plugs in; see the port-configuration sketch after this list):
- Authentication (ISE does 802.1x)
- Closed Authentication: Based on 802.1x. ANY traffic prior to authentication is dropped, including DHCP, DNS, and ARP.
- Open Authentication: Based on 802.1x. ALLOWS limited access prior to authentication (e.g. DHCP, …).
- Easy Connect: Based on LDAP + MAB.
- No Authentication: No authentication at all.
- Authorization (ISE assigns VN and SGT)
- Host is connected and allowed to transfer data.
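For orientation, a simplified sketch of what a Closed Authentication edge port roughly looks like; the interface, host-mode details and the control policy-map name vary by DNAC version, and the full IBNS 2.0 policy is omitted:

    interface GigabitEthernet1/0/10
     switchport mode access
     ! "closed" drops all traffic until authentication succeeds;
     ! an Open Authentication port simply omits this command (open is the default)
     access-session closed
     access-session port-control auto
     dot1x pae authenticator
     mab
     ! policy-map name is illustrative
     service-policy type control subscriber PMAP_Dot1x_MAB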
2.1.b iv Fabric border handoff
Layer 2
General information on “SDA Fabric border handoff Layer 2”:
- Configured/Done under Provision -> Fabric -> [select fabric] -> Border Node
- Used to connect legacy L2 networks to the fabric
- Fabric Underlay preparation:
- L2 handoff requires VTP mode to be off or transparent on fabric underlay devices
- Legacy L2 switch configuration:
- The SVI gateway for the VLAN needs to be removed, since the switch will be L2-only from now on and the Border Node takes over all routing
- Configuration process for L2 handoff:
- Select Border Node
- Select Layer 2 Handoff
- Select VN
- Enter the VLAN ID of the “External VLAN” and select the “External Interface”
- Background process (what happens on the border node devices; see the sketch after this list):
- The VRF-aware Anycast Gateway loopback for the VN will be removed
- CTS role-based enforcement will be activated for the legacy L2 VLAN
- The legacy L2 VLAN will be created
- The selected “External Interface” will be configured as L2 trunk
- LISP creates a dynamic EID mapping for the legacy L2 subnet
- The legacy L2 VLAN SVI will be created (identical to VN SVIs on Edge Nodes)
- LISP creates an Ethernet instance ID for the legacy L2 VLAN
- DHCP snooping will be enabled for the legacy L2 VLAN
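A condensed sketch of the resulting border-node configuration; legacy VLAN 300, LISP instance ID 8190, the locator-set name and the interface are made-up example values (the anycast-gateway SVI is omitted):

    vlan 300
    interface GigabitEthernet1/0/1
     description L2 handoff to legacy switch
     switchport mode trunk
     switchport trunk allowed vlan 300
    ip dhcp snooping vlan 300
    router lisp
     instance-id 8190
      service ethernet
       eid-table vlan 300
       database-mapping mac locator-set example_rloc_set
       etr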
Layer 3
General information on “SDA Fabric border handoff Layer 3”:
- Configured/Done under Provision -> Fabric -> [select fabric] -> Border Node
- Used to connect the fabric site to networks outside of the fabric
- DNAC preparation:
- Configure Transit/Peer network
- Reserve IP address pool for Border Handoff (IP-Transit only)
- Configuration process for L3 IP-transit handoff:
- Select Border Node
- Select Layer 3 Handoff
- Configure the local BGP AS
- Select the Transit IP address pool
- Define the Border type:
- Default (nothing selected) = Internal Border
- “Default to all VN” = Anywhere Border
- “Default to all VN” + “Do not import External Routes” = External Border
- Select and Add the Transit/Peer network
- Under the added Transit/Peer network, select the interface(s) used for Border Handoff
- Select the VNs to be advertised to the Remote Peer
- Background process (what happens on the border node devices; see the border-side sketch at the end of this section):
- DNAC creates /30 SVIs with an IP address selected out of the Transit IP address pool (= 1 SVI per VN)
- DNAC creates an address-family-based BGP configuration for connecting to the neighbor router (= 1 AF per VN)
- DNAC creates VRF-aware iBGP peerings between all Border/Control Plane nodes
- The transit interface will be configured as trunk to allow multiple VNs to traverse
- To be configured on the peer device (remote router; see the fusion-device sketch at the end of this section):
- VRFs:
- For each SDA VN, a respective VRF must be created
- It’s recommended to copy and paste the VRF configuration from the border nodes (name, RD, …)
- Sub-interface for each VN/VRF:
- Under the physical interface connecting to the border node, a sub-interface must be created for each VN/VRF
- Each sub-interface must be configured with dot1q encapsulation, put in a VRF and be given an IP address
- The dot1q tag and IP address must be obtained from the border device
- BGP routing process:
- A BGP peering for each VN/VRF must be created
- The update-source must be set to the sub-interface
- Two or more Border Nodes:
- An iBGP peering for the default IPv4 AF must be created in-between all of them
- Important #1: Route-Leaking between VRFs can be achieved on the Fusion Device using Route-Targets (see the leaking sketch at the end of this section). allowas-in needs to be manually configured on the Border Nodes towards the Fusion Device (eBGP only)!
- Important #2: Route-Leaking between VRFs and global table can be achieved using import/export maps. Routes to be imported into a VRF must be in the global BGP table and can’t be in the global routing table only!
- Important #3: When the Fusion Router peers with two or more Border Nodes using eBGP, filtering is needed at the Border Nodes to make sure locally originated routes are not accepted back from the Fusion Router as those would be preferred because of the max weight for the Fusion Router!
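A simplified border-side sketch of the L3 IP-transit handoff for a single VN; VLAN 3001, the AS numbers, addresses and VN name "CAMPUS" are made-up, and DNAC generates one such SVI plus address-family per VN. The allowas-in line reflects Important #1:

    ! one /30 handoff SVI per VN, addressed from the transit pool
    interface Vlan3001
     vrf forwarding CAMPUS
     ip address 172.16.255.1 255.255.255.252
    router bgp 65001
     address-family ipv4 vrf CAMPUS
      redistribute lisp metric 10
      neighbor 172.16.255.2 remote-as 65002
      neighbor 172.16.255.2 update-source Vlan3001
      neighbor 172.16.255.2 activate
      ! accept prefixes carrying our own AS back from the fusion device
      ! (e.g. routes leaked between VNs)
      neighbor 172.16.255.2 allowas-in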
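A matching sketch for the fusion-device side; the VRF name, RD/RT values, the dot1q tag 3001 and the addressing mirror the border example above and are equally illustrative:

    vrf definition CAMPUS
     rd 1:4099
     address-family ipv4
      route-target export 1:4099
      route-target import 1:4099
    ! one sub-interface per VN/VRF under the physical link to the border
    interface GigabitEthernet0/0/0.3001
     encapsulation dot1Q 3001
     vrf forwarding CAMPUS
     ip address 172.16.255.2 255.255.255.252
    ! one BGP peering per VN/VRF
    router bgp 65002
     address-family ipv4 vrf CAMPUS
      neighbor 172.16.255.1 remote-as 65001
      neighbor 172.16.255.1 update-source GigabitEthernet0/0/0.3001
      neighbor 172.16.255.1 activate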
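And a sketch of route-target based leaking on the fusion device (Important #1), here giving the CAMPUS VRF access to a hypothetical SHARED services VRF; per Important #2, leaking to/from the global table would use import/export maps instead:

    vrf definition SHARED
     rd 1:5000
     address-family ipv4
      route-target export 1:5000
      route-target import 1:5000
      ! import CAMPUS routes into SHARED
      route-target import 1:4099
    vrf definition CAMPUS
     address-family ipv4
      ! import SHARED routes into CAMPUS
      route-target import 1:5000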