// network automation portfolio

Network Automation

A growing collection of automation work built around real lab infrastructure. The focus is practical, repeatable tooling for configuration management, operational validation, and infrastructure-as-code, with Ansible as the primary automation engine.

Ansible 2.16.3 · Python 3.12.3 · 12 nodes · 4 live playbooks · WSL2 / Ubuntu
Connectivity Note

This lab uses telnet for Ansible connectivity, not SSH. In production environments, SSH is the required standard for device management and Ansible connectivity: it provides encrypted transport, protecting credentials and session data in transit. Telnet transmits everything in plaintext and has no place in a production network.

In this lab, Cisco IOL's SSH implementation is minimal and unstable under programmatic access — sessions hang, the SSH process randomly stops accepting connections, and stale VTY sessions accumulate under sequential Ansible runs. After exhaustive troubleshooting, EVE-NG's direct telnet console ports were chosen as the reliable workaround. SSH is still configured and functional on all 12 nodes and is used for manual access via Termius. This is a known IOL limitation that does not apply to CSR1000v, IOS-XE virtual machines, or physical hardware.
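
For reference, a minimal sketch of how this can be wired up, using the ansible.netcommon.telnet module pointed at EVE-NG's console ports. The addresses, port numbers, and prompt patterns are illustrative placeholders, not the lab's actual values, and console prompt handling usually needs tuning per image:

# inventory/hosts.yml (illustrative values only)
all:
  hosts:
    core-r1:
      ansible_host: 192.168.56.10   # the EVE-NG server, not the node itself
      ansible_port: 32769           # this node's telnet console port in EVE-NG

# task: run a show command over the console port
- name: Run a command via the EVE-NG console
  ansible.netcommon.telnet:
    port: "{{ ansible_port }}"
    user: ""                        # console ports present no login prompt
    send_newline: true              # wake the console; EVE-NG drops straight to the device CLI
    prompts:
      - "[>#]"                      # match the IOS exec/enable prompt
    command:
      - terminal length 0
      - show ip ospf neighbor
  register: result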

Automation Stack
Orchestration: Ansible 2.16.3 · cisco.ios collection · ansible.netcommon
Scripting: Python 3.12 · PyYAML · Regex / Socket
Environment: WSL2 / Ubuntu 24 · EVE-NG 6.2.0-4 · Git / GitHub
Projects
Core-Edge Ring Lab — Ansible
Multi-Area OSPF Topology Automation
End-to-end Ansible automation for a 12-node OSPF multi-area lab topology. Covers operational data collection, topology validation, dynamic inventory generation from live CDP neighbor data, configuration deployment, and running config backup — all driven from a structured inventory with per-node host variables.
Ansible · Python · CDP Discovery · OSPF Validation · Config Backup · Jinja2 · Cisco IOL
Status: Live · Repository: core-edge-ring
NetBox — Source of Truth Integration
Python and Ansible integration with a self-hosted NetBox instance as the IPAM and DCIM source of truth. Device records, prefix management, and IP assignment driven from NetBox data rather than static inventory files.
NetBox REST API · pynetbox · IPAM · DCIM
Status: Planned
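
Because this project is still planned, the following is only a rough sketch of one plausible wiring: the netbox.netbox collection's dynamic inventory plugin pulling device records straight from NetBox. The endpoint, token handling, and grouping choices are placeholders:

# netbox_inventory.yml (placeholder values; requires the netbox.netbox collection)
plugin: netbox.netbox.nb_inventory
api_endpoint: http://netbox.lab.example:8000   # placeholder NetBox URL
token: REPLACE_WITH_API_TOKEN                  # or export NETBOX_TOKEN instead
validate_certs: false
group_by:
  - device_roles                               # group hosts by NetBox device role
query_filters:
  - status: active                             # only pull active devices

Run with ansible-inventory -i netbox_inventory.yml --list, this would replace a static hosts file entirely.
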
Playbook Reference — Core-Edge Ring
File | Purpose | Operation | Scope
gather_facts.yml | Collects IOS version, interface state, and OSPF neighbor table from all nodes | Read | All 12 nodes
validate_ospf.yml | Verifies all expected OSPF adjacencies are in FULL state, pass/fail per node | Read | All 12 nodes
generate_host_vars.py | Discovers live CDP neighbors across all nodes and generates per-node host_vars YAML files with interface descriptions | Generate | All 12 nodes
deploy_descriptions.yml | Pushes interface descriptions from host_vars to all devices and saves to NVRAM | Write | All 12 nodes
backup_configs.yml | Pulls running configuration from all nodes and saves to timestamped files in the repo | Read | All 12 nodes
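
For a flavor of how these playbooks run over the console transport, here is a trimmed sketch in the spirit of backup_configs.yml. Prompts, paths, and task layout are simplified assumptions rather than the repo's verbatim contents:

# backup_configs.yml (simplified sketch)
- name: Back up running configs
  hosts: all
  gather_facts: false
  tasks:
    - name: Capture the running configuration over the console
      ansible.netcommon.telnet:
        port: "{{ ansible_port }}"
        user: ""
        send_newline: true
        prompts:
          - "[>#]"
        command:
          - terminal length 0
          - show running-config
      register: running

    - name: Write the output to a timestamped file in the repo
      ansible.builtin.copy:
        content: "{{ running.output | last }}"   # output of the last command sent
        dest: "backups/{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y%m%d-%H%M%S') }}.cfg"
      delegate_to: localhost
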
Design Approach

CDP-Driven Inventory

Rather than hardcoding interface connections, a Python script queries live CDP neighbor data from every node and generates host_vars files dynamically. If the topology changes, re-running the script updates the inventory automatically.
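
As illustration, a generated file might look like the following; the node name, interface keys, and exact schema are assumptions, since only the general shape (per-interface descriptions derived from CDP data) is fixed:

# host_vars/core-r1.yml (generated; schema and names are illustrative)
interfaces:
  Ethernet0/0:
    description: "to edge-r2 Eth0/1"   # built from the CDP neighbor entry
  Ethernet0/1:
    description: "to core-r3 Eth0/0"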

Structured Inventory

All 12 nodes are organized into meaningful groups — core, edge, and per-leg subgroups. Playbooks can target the full topology, a single tier, or a specific leg without modifying any code.
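
A sketch of that group layout, with hypothetical node names:

# inventory/hosts.yml (hypothetical node names)
all:
  children:
    core:
      hosts:
        core-r1:
        core-r2:
    edge:
      children:
        leg_a:
          hosts:
            edge-r3:
            edge-r4:
        leg_b:
          hosts:
            edge-r5:
            edge-r6:

A run such as ansible-playbook deploy_descriptions.yml --limit leg_a then targets a single leg without touching playbook code.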

Validation Before Deployment

OSPF validation runs as a standalone playbook, executed after any config change as a post-change health check: it confirms all expected adjacencies are still FULL before the change is considered successful.
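
The heart of such a check can be a single assert over the captured neighbor table. The regex match and the expected-neighbor variable below are assumptions about how this might be written, not the playbook's literal contents:

# after registering `show ip ospf neighbor` output as in the backup sketch
- name: Fail unless every expected adjacency is FULL
  ansible.builtin.assert:
    that:
      - ospf.output | last | regex_findall('FULL') | length >= expected_neighbors
    fail_msg: "{{ inventory_hostname }}: fewer FULL adjacencies than expected"
  vars:
    expected_neighbors: 2   # per-node value, e.g. set in host_vars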

Config as Code

All playbooks, host variables, and backup configurations are version-controlled in GitHub. Every change is committed with a meaningful message, creating a full audit trail of what changed, when, and why.