A growing collection of automation work built around real lab infrastructure. Focused on practical, repeatable tooling for configuration management, operational validation, and infrastructure-as-code — using Ansible as the primary automation engine.
Telnet is used for Ansible connectivity in this lab, not SSH. In production environments, SSH is the required standard for device management and Ansible connectivity: it provides encrypted transport, protecting credentials and session data in transit. Telnet transmits everything in plaintext and has no place in a production network.
In this lab, Cisco IOL's SSH implementation is minimal and unstable under programmatic access — sessions hang, the SSH process randomly stops accepting connections, and stale VTY sessions accumulate under sequential Ansible runs. After exhaustive troubleshooting, EVE-NG's direct telnet console ports were chosen as the reliable workaround. SSH is still configured and functional on all 12 nodes and is used for manual access via Termius. This is a known IOL limitation that does not apply to CSR1000v, IOS-XE virtual machines, or physical hardware.
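As a concrete illustration, the connection variables for the two modes might look like the group_vars sketch below. The addresses, ports, and plugin choice are assumptions for illustration, not values from this repo; the telnet path shown uses the `ansible.netcommon.telnet` module, one common way to script against raw console ports.

```yaml
# group_vars sketch -- all values assumed, not taken from this repo

# Production standard: SSH via Ansible's network_cli connection plugin
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios

# Lab workaround: EVE-NG exposes each node's console as a raw telnet
# port on the EVE-NG host itself, so tasks target that port (e.g. with
# the ansible.netcommon.telnet module) instead of SSH to the node.
# ansible_host: 10.0.0.50   # EVE-NG server address (assumed)
# ansible_port: 32769       # this node's console port (assumed)
```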
| File | Purpose | Operation | Scope |
|---|---|---|---|
| gather_facts.yml | Collects IOS version, interface state, and OSPF neighbor table from all nodes | Read | All 12 nodes |
| validate_ospf.yml | Verifies all expected OSPF adjacencies are in FULL state — pass/fail per node | Read | All 12 nodes |
| generate_host_vars.py | Discovers live CDP neighbors across all nodes and generates per-node host_vars YAML files with interface descriptions | Generate | All 12 nodes |
| deploy_descriptions.yml | Pushes interface descriptions from host_vars to all devices and saves to NVRAM | Write | All 12 nodes |
| backup_configs.yml | Pulls running configuration from all nodes and saves to timestamped files in the repo | Read | All 12 nodes |
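For a sense of what these plays look like, here is a minimal sketch of the backup pattern, assuming the `cisco.ios` collection and an SSH-style connection; the directory and filename scheme are illustrative, not the repo's actual values.

```yaml
# Sketch only: dir_path and the timestamp format are assumed.
- name: Back up running configs from all nodes
  hosts: all
  gather_facts: false
  tasks:
    - name: Save running-config to a timestamped file in the repo
      cisco.ios.ios_config:
        backup: true
        backup_options:
          dir_path: backups/
          filename: "{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y%m%d-%H%M%S') }}.cfg"
```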
Rather than hardcoding interface connections, a Python script queries live CDP neighbor data from every node and generates host_vars files dynamically. If the topology changes, re-running the script regenerates the host_vars to match, with no manual editing.
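The generated files are plain YAML. A hypothetical example of the output format (hostnames and interfaces are made up for illustration):

```yaml
# host_vars/R3.yml -- hypothetical output of generate_host_vars.py
interfaces:
  - name: Ethernet0/0
    description: "to R1 Eth0/1"   # derived from a live CDP neighbor entry
  - name: Ethernet0/1
    description: "to SW2 Eth0/3"
```

A playbook like deploy_descriptions.yml would then loop over this list and push each description to the matching interface.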
All 12 nodes are organized into meaningful groups — core, edge, and per-leg subgroups. Playbooks can target the full topology, a single tier, or a specific leg without modifying any code.
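In YAML inventory terms, that grouping might look like the sketch below; group and host names are illustrative only.

```yaml
# inventory/hosts.yml -- illustrative structure, names assumed
all:
  children:
    core:
      hosts:
        R1:
        R2:
    edge:
      children:
        leg_a:
          hosts:
            R3:
            R4:
        leg_b:
          hosts:
            R5:
            R6:
```

Scope is then chosen at run time, e.g. `ansible-playbook validate_ospf.yml --limit leg_a`.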
OSPF validation runs as a standalone playbook that can be executed after any config change as a post-change health check — confirming all expected adjacencies are still FULL before considering a change successful.
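The heart of such a check is compact. A minimal sketch, assuming an `ospf_neighbors` host_var listing each node's expected neighbor router IDs (the variable name and regex are assumptions, and the play is shown over a standard connection for simplicity):

```yaml
- name: Post-change OSPF health check (sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Collect the OSPF neighbor table
      cisco.ios.ios_command:
        commands: show ip ospf neighbor
      register: ospf

    - name: Fail if any expected neighbor is missing or not FULL
      ansible.builtin.assert:
        # Match "<neighbor-id> <pri> FULL" in the neighbor table output
        that: "ospf.stdout[0] is search(item ~ ' +\\d+ +FULL')"
        fail_msg: "{{ inventory_hostname }}: adjacency with {{ item }} is not FULL"
      loop: "{{ ospf_neighbors }}"
```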
All playbooks, host variables, and backup configurations are version-controlled in GitHub. Every change is committed with a meaningful message, creating a full audit trail of what changed, when, and why.