Friday, 20 January 2023

Exploring network device configuration deployment using Nornir and NetBox


Requirement: 

Automated network configuration deployment that is capable of using NetBox as host inventory


Assumptions: 
  • Knowledge of basic python for network automation
  • Existing network inventory records are already populated in NetBox (v3.4.2)
  • Existing python virtual environment (python3.9)
  • Existing Windows/Linux jumphost that runs the python virtual environment and has SSH access to the network devices

Goals:
  1. Fetch network devices inventory records from NetBox
  2. Filter the fetched network devices inventory records from NetBox
  3. Store the provided username and password for network device SSH access
  4. Categorize the network device based on platform record from NetBox
  5. Connect to a network device and collect the network device facts
  6. Connect to a network device and push network configuration

Task 1: Install the following python packages in your jumphost's python virtual environment
nornir==3.3.0
nornir-napalm==0.3.0
nornir-netbox==0.3.0
nornir-utils==0.2.0
nornir - a pure Python automation framework intended to be used directly from Python
nornir-napalm - Nornir plugin for interacting with devices using the NAPALM library
nornir-netbox - Nornir inventory plugin for NetBox
nornir-utils - collection of simple plugins for Nornir


Task 2: Configure the Nornir config file config.yaml 
---
inventory:
  plugin: NetBoxInventory2
  options:
    nb_url: '{Your netbox URL}'
    nb_token: '{Your netbox account token}'
    ssl_verify: False

Task 3: Configure the python script for goal #1

Fetch network devices inventory records from NetBox
from nornir import InitNornir
nr = InitNornir(config_file="config.yaml")
You can verify that the inventory records were pulled successfully by accessing the data inside the nr variable.
print(nr.inventory.hosts)
{'R1': Host: R1, 'R2': Host: R2, 'SW01': Host: SW01}
Goal #1 completed. 


Task 4: Configure the python script for goal #2

Filter the fetched network devices inventory records from NetBox
from nornir.core.filter import F
nr_filtered = nr.filter(
    F(
        platform = "my-router-platform",
        site__slug__contains = "my-site",
        status__value__contains = "active",
    )
)
You can verify that the inventory records were filtered successfully by accessing the data inside the nr_filtered variable. The output should only show the network devices that match all the filter parameters configured in NetBox - platform, site and status.
print(nr_filtered.inventory.hosts)
{'R1': Host: R1, 'R2': Host: R2}
Goal #2 completed.


Task 5: Configure the python script for goal #3

Store the provided username and password for network device SSH access.

There are two ways (that I currently know of) to store username and password credentials in the Nornir inventory: either in the inventory's global default parameters, or in the per-host parameters. For this task, we will store them as global default parameters, since we have a uniform access credential for all the network devices.

nr_filtered.inventory.defaults.username = '{Your username in clear text format}'
nr_filtered.inventory.defaults.password = '{Your password in clear text format}'
Goal #3 completed.
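As a side note, storing clear-text passwords directly in a script is risky. A minimal sketch of loading them from environment variables at run time instead (the NORNIR_USERNAME and NORNIR_PASSWORD variable names are my own choice, not part of Nornir):

```python
import os

# Hypothetical environment variable names - set these in the shell
# before running the script, e.g. `export NORNIR_USERNAME=myuser`.
username = os.environ.get("NORNIR_USERNAME", "admin")
password = os.environ.get("NORNIR_PASSWORD", "")

# These values would then replace the clear-text strings above:
# nr_filtered.inventory.defaults.username = username
# nr_filtered.inventory.defaults.password = password
```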


Task 6: Configure the python script for goal #4

Categorize the network device based on platform record from NetBox

This task is related to the nornir_napalm plugin we installed for accessing the network devices. As we are using the NAPALM library to access the network devices, it requires us to specify the platform name in a format supported by NAPALM so it can identify the network driver to use.

There are currently two ways I can think of to specify the platform parameter: either in the NetBox device object's platform field, or within the python code after filtering. I will do the latter, as my existing platform records are not in a NAPALM-supported format.

For this task, we will update the per-host inventory parameter nr_filtered.inventory.hosts['R1'].platform with a new value in NAPALM-supported format. You might wonder why I didn't update the global inventory defaults, which is technically possible. The reason is that each per-host inventory record was already populated with the value referenced from the NetBox device object's platform field, and updating only the global default will not override the per-host values. We will iterate, since we have multiple devices to update.
for hostname, host in nr_filtered.inventory.hosts.items():
    host.platform = "{Please update accordingly based on your platform type - ios|iosxr|junos|nxos_ssh}"
Goal #4 completed.
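If the NetBox platform slugs vary per device, a small slug-to-driver mapping could replace the single hard-coded value in the loop above. A sketch, where the slugs are hypothetical examples rather than values from my NetBox:

```python
# Hypothetical NetBox platform slug -> NAPALM driver name mapping.
PLATFORM_MAP = {
    "cisco-ios": "ios",
    "cisco-iosxr": "iosxr",
    "cisco-nxos": "nxos_ssh",
    "juniper-junos": "junos",
}

def to_napalm_platform(netbox_slug, default="ios"):
    """Translate a NetBox platform slug into a NAPALM driver name."""
    return PLATFORM_MAP.get(netbox_slug, default)

# The loop would then become:
# for hostname, host in nr_filtered.inventory.hosts.items():
#     host.platform = to_napalm_platform(str(host.platform))
```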


Task 7: Configure the python script for goal #5

Connect to a network device and collect the network device facts
from nornir_napalm.plugins.tasks import napalm_get

result = nr_filtered.run(
    napalm_get,
    getters=["get_facts"],
)
You can verify the result by either printing the result variable or accessing the data inside it.
result['R1'][0].result['get_facts']['serial_number']
If successful, you should then be able to see the actual serial number of the network device with hostname "R1". 

Goal #5 completed.


Task 8: Configure the python script for goal #6

Connect to a network device and push network configuration.

For this task, we will just do a simple network configuration of updating an interface description just to verify the functionality of our script.
from nornir_napalm.plugins.tasks import napalm_configure
from nornir_utils.plugins.functions import print_result

config_data = [
    "!",
    "interface gigabitethernet0/1",
    "  description Nornir automated interface description updating",
    "!",
]

result = nr_filtered.run(
    napalm_configure,
    configuration="\n".join(config_data),
    replace=False,
    dry_run=False,
)

print_result(result)
The result should show some output like this:
* R1 ** changed : True ***********************************************
vvvv napalm_configure ** changed : True vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO
interface gigabitethernet0/1
  description Nornir automated interface description updating
^^^^ END napalm_configure ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can then log in to the network device and verify that the interface description for GigabitEthernet0/1 was updated accordingly.

Goal #6 completed. 

Consolidated script below for reference:

config.yaml 
---
inventory:
  plugin: NetBoxInventory2
  options:
    nb_url: '{Your netbox URL}'
    nb_token: '{Your netbox account token}'
    ssl_verify: False
nornir_config_update.py 
from nornir import InitNornir
from nornir.core.filter import F
from nornir_napalm.plugins.tasks import napalm_configure
from nornir_utils.plugins.functions import print_result

nr = InitNornir(config_file="config.yaml")
nr_filtered = nr.filter(
    F(
        platform = "my-router-platform",
        site__slug__contains = "my-site",
        status__value__contains = "active",
    )
)

nr_filtered.inventory.defaults.username = '{Your username in clear text format}'
nr_filtered.inventory.defaults.password = '{Your password in clear text format}'

for hostname, host in nr_filtered.inventory.hosts.items():
    host.platform = "{Please update accordingly based on your platform type - ios|iosxr|junos|nxos_ssh}"

config_data = [
    "!",
    "interface gigabitethernet0/1",
    "  description Nornir automated interface description updating",
    "!",
]

result = nr_filtered.run(
    napalm_configure,
    configuration="\n".join(config_data),
    replace=False,
    dry_run=False,
)

print_result(result)

Monday, 16 July 2018


Cisco and Juniper Spanning Tree Interoperability Testing

This lab setup documents the spanning tree protocol behavior between two different vendors, namely Cisco and Juniper. My main target was to check whether Cisco's PVST protocol will work together with Juniper's VSTP protocol. Luckily it worked on this setup, as you can see in the spanning tree status screenshots.



Spanning Tree Status

Cisco SW1 (NXOS)


Cisco SW2 (NXOS)


Juniper SW (QFX)

Spanning Tree Configuration

Cisco SW1 (NXOS) - default


Cisco SW2 (NXOS)


Juniper SW (QFX)

Switch Software Version

Cisco (NXOS)


Juniper (QFX)


Saturday, 19 May 2018

Network Device Config Tool (Python)


As network engineers, we are often asked to extract the output of a set of commands from a set of network devices. Quite a simple task, but not if the number of network devices runs into the hundreds or even thousands. This would be easy if you have a monitoring system like SolarWinds that can automate it for you, but what if you don't have that kind of tool, or you want to do it your own way, automated?

Hopefully this blog post will help you learn or start your journey in network automation the way I started mine.

I have used the following tools to create and simulate this script in a lab environment prior to deploying it in production networks:

  • Eve-NG
  • Cisco IOL
  • CentOS
  • Netmiko

Eve-NG is the network device simulator engine here. This is just my preference; you may use any other network simulator that you know of or are comfortable with.

Cisco IOL is the network device image that we will use in this case.

CentOS is my preferred Linux distribution, which serves here as the server that executes the python script to log in to every network device via SSH and run the set of commands specified.

Netmiko is the python module that I have used here to perform the SSH login to the network devices.

Let's begin.

I have created the following files to complete the script:


Node Database - contains the list of network devices that will be accessed in the script

This file needs at least the following data per line:
  1. Device Hostname
  2. Device Management IP
  3. Device Type
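Assuming a comma-separated layout (the actual delimiter in your file may differ), one line of the node database could be parsed like this:

```python
def parse_node_line(line):
    """Split one node database line into hostname, management IP and device type."""
    hostname, mgmt_ip, device_type = [field.strip() for field in line.split(",")]
    return {"hostname": hostname, "ip": mgmt_ip, "device_type": device_type}

# Example line using netmiko's device type naming:
node = parse_node_line("R1, 192.168.1.1, cisco_ios")
```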

Login Credential - contains the common SSH login credentials of the network devices

This file needs at least the following data per line:
  1. Username
  2. Password
CLI Commands - contains the list of "Cisco" CLI commands

Config Tool - contains the main python script that will automate the SSH login and CLI command execution to the network devices

The files below are sample output logs generated from the actual script in the lab environment:


I will break down below how the python script does it.

  1. The "extract" function is created to read the data inside the supplied file name
  2. The "current_time" variable is declared, which will then be used to include the time stamp in the log file naming
  3. The "node_db_filename" variable is declared to hold the node database file name
  4. The "node_login_filename" variable is declared to hold the login credentials file name
  5. The "cli_commands_filename" variable is declared to hold the CLI commands file name
  6. The "node_db_raw" variable will contain the raw data of the node database file
  7. The "node_login_raw" variable will contain the raw data of the login credentials file
  8. The "cli_commands_raw" variable will contain the raw data of the CLI commands file
  9. The "node_db" ordereddict variable will then contain the parsed values of the node database data
  10. The "node_login" ordereddict variable will then contain the parsed values of the login credential data
  11. The "cli_commands" ordereddict variable will then contain the parsed values of the CLI commands data
  12. The "device_params" ordereddict variable will contain all the necessary network device information needed by the netmiko module to access the network devices supplied in the script
  13. The "netmiko_params" ordereddict variable will contain the information of a single network device, which will then be stored and consolidated in the "device_params" variable
  14. The succeeding loops will then iterate over all the information supplied in the "device_params" variable and log in via SSH using the netmiko module. Every CLI command specified in the script will then be issued and its output logged to a file with this name format - "[Hostname]_[IP]_[Timestamp].txt"
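The steps above can be sketched roughly in code. This is a simplified reconstruction, not the original script: the helper names are my own, while ConnectHandler, send_command and disconnect are the actual netmiko API:

```python
import time

def log_filename(hostname, ip, timestamp):
    """Build the log file name - "[Hostname]_[IP]_[Timestamp].txt"."""
    return "{}_{}_{}.txt".format(hostname, ip, timestamp)

def run_commands(hostname, netmiko_params, commands, timestamp):
    """SSH to one device via netmiko and log every command's output to a file."""
    # Imported here so the helper above stays usable without netmiko installed.
    from netmiko import ConnectHandler

    connection = ConnectHandler(**netmiko_params)
    with open(log_filename(hostname, netmiko_params["ip"], timestamp), "w") as log:
        for command in commands:
            log.write(connection.send_command(command) + "\n")
    connection.disconnect()

current_time = time.strftime("%Y%m%d-%H%M%S")

# Hypothetical usage for one node database entry:
# run_commands(
#     "R1",
#     {"device_type": "cisco_ios", "ip": "192.168.1.1",
#      "username": "admin", "password": "secret"},
#     ["show version"],
#     current_time,
# )
```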
This is the first version of this script, so please bear with me if you happen to see some errors along the way. I would be happy to hear any suggestions, questions or comments you may have.

Cheers!

Monday, 21 August 2017

Fortigate Firewall Integration to Cisco ISE 2.2 Tacacs+ Server with Active Directory Credential Authorization

This setup is useful if you have several Fortigate firewalls and want to manage access from a centralized TACACS+ server (ISE) instead of manually creating the accounts locally on the firewalls. The key to this setup is to enable the authorization and accprofile-override settings on the Fortigate firewall so that it receives and applies the authorization attributes from the TACACS+ server.

Lab Network Setup:

Credits to Eve-Ng

IP Addressing Scheme:

10.254.0.12 - Fortigate FW
10.254.0.13 - Cisco ISE


AD Credentials:


Admin account - kagarcia
Read-Only account - test123


Fortigate Configuration:

CLI:

config user tacacs+
edit "ise"
set server "10.254.0.13"
set key [shared-secret]
set authorization enable
next
end 
config user group
edit "Active_Directory"
set member "ise"
next
end
config system accprofile
edit "DenyAccess"
next
end
config system admin
edit "*"
set remote-auth enable
set accprofile "DenyAccess"
set vdom "root"
set wildcard enable
set remote-group "Active_Directory"
set accprofile-override enable
next
end 

Web-GUI:

Create new user group and assign the created tacacs+ server as remote server

Create new admin profile that serves as the default profile for AD logins, which is deny access

Fortigate available Admin Profiles 

Create new administrator "*" which will match any AD ID Username from the remote server 

Fortigate available Administrators accounts 

Cisco ISE Configuration:


Create new tacacs profile for administrator access authorization result

Create new tacacs profile for read-only access authorization result

Create additional authorization policy for admin & read-only access specific to the IP address of the firewall together with the created tacacs profile
Tacacs+ Authorization Result Summary Details



Fortigate logged in users


Cheers to my first blog! :)