3 Commits

SHA1 Message Date
1c1b410ce0 docs: use uv instead of pip in README 2026-01-11 16:26:50 +00:00
1564cffd88 docs: add README for provisioning scripts 2026-01-11 16:17:34 +00:00
1052b2ca3d feat: add NetBox provisioning script for fabric topology
Create comprehensive script to populate NetBox with:
- Custom fields (ASN, MLAG, VRF, virtual IP)
- Organization (Site, Manufacturer, DeviceType, DeviceRole)
- Devices (2 spines, 8 leafs, 4 hosts)
- Interfaces (physical, loopbacks, LAGs, SVIs)
- Cables (spine-leaf, MLAG peer-links, host dual-homing)
- IP addresses (loopbacks, P2P links)
- VLANs and VRF with route targets
- Prefixes

Reference: arista-evpn-vxlan-clab topology
Relates to #5
2026-01-11 16:17:14 +00:00
2 changed files with 1056 additions and 0 deletions

scripts/README.md

@@ -0,0 +1,84 @@
# NetBox Provisioning for EVPN-VXLAN Fabric
This directory contains scripts to populate NetBox with the fabric topology defined in [arista-evpn-vxlan-clab](https://gitea.arnodo.fr/Damien/arista-evpn-vxlan-clab).
## Prerequisites
- NetBox 4.4.x running and accessible
- NetBox BGP Plugin v0.17.x installed (optional, for BGP sessions)
- Python 3.9+
- API token with write permissions
## Installation
```bash
uv add pynetbox
```
## Usage
```bash
export NETBOX_URL="http://netbox.example.com"
export NETBOX_TOKEN="your-api-token"
uv run python scripts/provision_fabric.py
```
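After a run, a quick pynetbox query can spot-check the result (a minimal sketch, assuming an otherwise empty NetBox and the same URL/token as above):

```python
import pynetbox

nb = pynetbox.api("http://netbox.example.com", token="your-api-token")

# The fabric defines 2 spines, 8 leafs, and 4 hosts at the evpn-lab site.
print(nb.dcim.devices.count(site="evpn-lab"))  # expected: 14
# 16 spine-leaf links + 4 MLAG peer-links + 8 host uplinks
print(nb.dcim.cables.count())                  # expected: 28
```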
## What Gets Created
### Custom Fields
| Object Type | Field | Description |
|-------------|-------|-------------|
| Device | `asn` | BGP ASN |
| Device | `mlag_domain_id` | MLAG domain identifier |
| Device | `mlag_peer_address` | MLAG peer IP |
| Device | `mlag_local_address` | MLAG local IP |
| Device | `mlag_virtual_mac` | Shared virtual MAC |
| Interface | `mlag_peer_link` | Marks peer-link interfaces |
| Interface | `mlag_id` | MLAG ID for host LAGs |
| VRF | `l3vni` | L3 VNI for EVPN |
| VRF | `vrf_vlan` | VLAN for L3 VNI SVI |
| IP Address | `virtual_ip` | Anycast/virtual IP flag |
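These custom fields are what downstream orchestration consumes. For example, they can be read back off the device and interface records (a sketch, assuming the objects created by this script):

```python
import pynetbox

nb = pynetbox.api("http://netbox.example.com", token="your-api-token")

leaf = nb.dcim.devices.get(name="leaf1")
print(leaf.custom_fields["asn"])             # 65001
print(leaf.custom_fields["mlag_domain_id"])  # MLAG1

# Peer-link interfaces carry the boolean mlag_peer_link flag:
peer_links = [
    intf for intf in nb.dcim.interfaces.filter(device="leaf1")
    if intf.custom_fields.get("mlag_peer_link")
]
print([intf.name for intf in peer_links])    # ['Ethernet10', 'Port-Channel10']
```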
### Organization
- **Site**: evpn-lab
- **Manufacturer**: Arista
- **Device Types**: cEOS-lab, Linux Server
- **Device Roles**: Spine, Leaf, Server
### Devices
| Device | Role | ASN | MLAG Domain |
|--------|------|-----|-------------|
| spine1, spine2 | Spine | 65000 | - |
| leaf1, leaf2 | Leaf | 65001 | MLAG1 |
| leaf3, leaf4 | Leaf | 65002 | MLAG2 |
| leaf5, leaf6 | Leaf | 65003 | MLAG3 |
| leaf7, leaf8 | Leaf | 65004 | MLAG4 |
| host1-4 | Server | - | - |
### Cabling
- Spine1/2 Ethernet1-8 → Leaf1-8 Ethernet11/12
- MLAG peer-links: Leaf pairs via Ethernet10
- Host dual-homing: eth1/eth2 to MLAG pairs
### IP Addressing
| Purpose | Prefix |
|---------|--------|
| Spine1-Leaf P2P | 10.0.1.0/24 |
| Spine2-Leaf P2P | 10.0.2.0/24 |
| MLAG iBGP P2P | 10.0.3.0/24 |
| MLAG Peer VLAN | 10.0.199.0/24 |
| Loopback0 (Router-ID) | 10.0.250.0/24 |
| Loopback1 (VTEP) | 10.0.255.0/24 |
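P2P links are numbered as consecutive /31s within each /24, in leaf order (leafN takes the Nth /31, spine side first). A sketch with the stdlib `ipaddress` module reproduces the plan:

```python
import ipaddress

# Spine1-Leaf P2P plan: leafN takes the Nth /31 of 10.0.1.0/24
for n, p2p in enumerate(ipaddress.ip_network("10.0.1.0/24").subnets(new_prefix=31), start=1):
    if n > 8:
        break
    spine_ip, leaf_ip = p2p[0], p2p[1]  # a /31 has exactly two usable addresses
    print(f"leaf{n}: spine1 {spine_ip}/31 <-> {leaf_ip}/31")
```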
## Idempotency
The script is idempotent: running it multiple times will not create duplicate objects.
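This comes from the get-or-create helper used throughout `provision_fabric.py`: every object is looked up by a unique key first and only created when missing.

```python
def get_or_create(endpoint, search_params: dict, create_params: dict):
    """Return (object, created); search_params must uniquely identify the object."""
    obj = endpoint.get(**search_params)
    if obj:
        return obj, False
    return endpoint.create({**search_params, **create_params}), True
```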
## Reference
- [NetBox Data Model Documentation](../docs/netbox-data-model.md)

scripts/provision_fabric.py

@@ -0,0 +1,972 @@
#!/usr/bin/env python3
"""
NetBox Provisioning Script for EVPN-VXLAN Fabric
This script populates NetBox with the complete fabric topology defined in
arista-evpn-vxlan-clab, including devices, interfaces, cables, IP addresses,
VLANs, and custom fields required for fabric orchestration.
Requirements:
pip install pynetbox
Usage:
export NETBOX_URL="http://netbox.example.com"
export NETBOX_TOKEN="your-api-token"
python scripts/provision_fabric.py
Reference:
https://gitea.arnodo.fr/Damien/fabric-orchestrator/src/branch/main/docs/netbox-data-model.md
https://gitea.arnodo.fr/Damien/arista-evpn-vxlan-clab
"""
import os
import sys
from typing import Any
try:
import pynetbox
except ImportError:
print("Error: pynetbox is required. Install with: pip install pynetbox")
sys.exit(1)
# =============================================================================
# Configuration
# =============================================================================
NETBOX_URL = os.environ.get("NETBOX_URL", "http://localhost:8000")
NETBOX_TOKEN = os.environ.get("NETBOX_TOKEN", "")
# Fabric configuration
SITE_NAME = "evpn-lab"
SITE_SLUG = "evpn-lab"
MANUFACTURER_NAME = "Arista"
MANUFACTURER_SLUG = "arista"
# Device type from community library
# https://github.com/netbox-community/devicetype-library/tree/master/device-types/Arista
DEVICE_TYPE_MODEL = "cEOS-lab"
DEVICE_TYPE_SLUG = "ceos-lab"
# =============================================================================
# Custom Fields Definition
# =============================================================================
CUSTOM_FIELDS = [
# Device custom fields
{
"content_types": ["dcim.device"],
"name": "asn",
"label": "ASN",
"type": "integer",
"required": False,
"description": "BGP Autonomous System Number assigned to this device",
},
{
"content_types": ["dcim.device"],
"name": "mlag_domain_id",
"label": "MLAG Domain ID",
"type": "text",
"required": False,
"description": "MLAG domain identifier",
},
{
"content_types": ["dcim.device"],
"name": "mlag_peer_address",
"label": "MLAG Peer Address",
"type": "text",
"required": False,
"description": "MLAG peer IP address",
},
{
"content_types": ["dcim.device"],
"name": "mlag_local_address",
"label": "MLAG Local Address",
"type": "text",
"required": False,
"description": "MLAG local IP address",
},
{
"content_types": ["dcim.device"],
"name": "mlag_virtual_mac",
"label": "MLAG Virtual MAC",
"type": "text",
"required": False,
"description": "Shared virtual-router MAC address",
},
# Interface custom fields
{
"content_types": ["dcim.interface"],
"name": "mlag_peer_link",
"label": "MLAG Peer Link",
"type": "boolean",
"required": False,
"description": "Marks interface as MLAG peer-link",
},
{
"content_types": ["dcim.interface"],
"name": "mlag_id",
"label": "MLAG ID",
"type": "integer",
"required": False,
"description": "MLAG port-channel ID for host-facing LAGs",
},
# VRF custom fields
{
"content_types": ["ipam.vrf"],
"name": "l3vni",
"label": "L3 VNI",
"type": "integer",
"required": False,
"description": "Layer 3 VNI for EVPN symmetric IRB",
},
{
"content_types": ["ipam.vrf"],
"name": "vrf_vlan",
"label": "VRF VLAN",
"type": "integer",
"required": False,
"description": "VLAN ID used for L3 VNI SVI",
},
# IP Address custom fields
{
"content_types": ["ipam.ipaddress"],
"name": "virtual_ip",
"label": "Virtual IP",
"type": "boolean",
"required": False,
"description": "Marks IP as anycast/virtual IP (shared across MLAG pair)",
},
]
# =============================================================================
# Device Definitions
# =============================================================================
DEVICE_ROLES = [
{"name": "Spine", "slug": "spine", "color": "ff5722"},
{"name": "Leaf", "slug": "leaf", "color": "4caf50"},
{"name": "Server", "slug": "server", "color": "2196f3"},
]
# Spine devices
SPINES = [
{"name": "spine1", "mgmt_ip": "172.16.0.1/24", "asn": 65000, "loopback0": "10.0.250.1/32"},
{"name": "spine2", "mgmt_ip": "172.16.0.2/24", "asn": 65000, "loopback0": "10.0.250.2/32"},
]
# Leaf devices with MLAG configuration
LEAFS = [
{
"name": "leaf1",
"mgmt_ip": "172.16.0.25/24",
"asn": 65001,
"loopback0": "10.0.250.11/32",
"loopback1": "10.0.255.11/32", # Shared VTEP with leaf2
"mlag": {
"domain_id": "MLAG1",
"local_address": "10.0.199.254",
"peer_address": "10.0.199.255",
"virtual_mac": "00:1c:73:00:00:01",
},
},
{
"name": "leaf2",
"mgmt_ip": "172.16.0.50/24",
"asn": 65001,
"loopback0": "10.0.250.12/32",
"loopback1": "10.0.255.11/32", # Shared VTEP with leaf1
"mlag": {
"domain_id": "MLAG1",
"local_address": "10.0.199.255",
"peer_address": "10.0.199.254",
"virtual_mac": "00:1c:73:00:00:01",
},
},
{
"name": "leaf3",
"mgmt_ip": "172.16.0.27/24",
"asn": 65002,
"loopback0": "10.0.250.13/32",
"loopback1": "10.0.255.12/32",
"mlag": {
"domain_id": "MLAG2",
"local_address": "10.0.199.252",
"peer_address": "10.0.199.253",
"virtual_mac": "00:1c:73:00:00:02",
},
},
{
"name": "leaf4",
"mgmt_ip": "172.16.0.28/24",
"asn": 65002,
"loopback0": "10.0.250.14/32",
"loopback1": "10.0.255.12/32",
"mlag": {
"domain_id": "MLAG2",
"local_address": "10.0.199.253",
"peer_address": "10.0.199.252",
"virtual_mac": "00:1c:73:00:00:02",
},
},
{
"name": "leaf5",
"mgmt_ip": "172.16.0.29/24",
"asn": 65003,
"loopback0": "10.0.250.15/32",
"loopback1": "10.0.255.13/32",
"mlag": {
"domain_id": "MLAG3",
"local_address": "10.0.199.250",
"peer_address": "10.0.199.251",
"virtual_mac": "00:1c:73:00:00:03",
},
},
{
"name": "leaf6",
"mgmt_ip": "172.16.0.30/24",
"asn": 65003,
"loopback0": "10.0.250.16/32",
"loopback1": "10.0.255.13/32",
"mlag": {
"domain_id": "MLAG3",
"local_address": "10.0.199.251",
"peer_address": "10.0.199.250",
"virtual_mac": "00:1c:73:00:00:03",
},
},
{
"name": "leaf7",
"mgmt_ip": "172.16.0.31/24",
"asn": 65004,
"loopback0": "10.0.250.17/32",
"loopback1": "10.0.255.14/32",
"mlag": {
"domain_id": "MLAG4",
"local_address": "10.0.199.248",
"peer_address": "10.0.199.249",
"virtual_mac": "00:1c:73:00:00:04",
},
},
{
"name": "leaf8",
"mgmt_ip": "172.16.0.32/24",
"asn": 65004,
"loopback0": "10.0.250.18/32",
"loopback1": "10.0.255.14/32",
"mlag": {
"domain_id": "MLAG4",
"local_address": "10.0.199.249",
"peer_address": "10.0.199.248",
"virtual_mac": "00:1c:73:00:00:04",
},
},
]
# Host devices
HOSTS = [
{"name": "host1", "mgmt_ip": "172.16.0.101/24", "vlan": 40, "ip": "10.40.40.101/24"},
{"name": "host2", "mgmt_ip": "172.16.0.102/24", "vlan": 34, "ip": "10.34.34.102/24"},
{"name": "host3", "mgmt_ip": "172.16.0.103/24", "vlan": 40, "ip": "10.40.40.103/24"},
{"name": "host4", "mgmt_ip": "172.16.0.104/24", "vlan": 78, "ip": "10.78.78.104/24"},
]
# =============================================================================
# Cabling Matrix
# =============================================================================
# Spine to Leaf connections: (spine_name, spine_intf, leaf_name, leaf_intf)
SPINE_LEAF_CABLES = [
# Spine1 connections
("spine1", "Ethernet1", "leaf1", "Ethernet11"),
("spine1", "Ethernet2", "leaf2", "Ethernet11"),
("spine1", "Ethernet3", "leaf3", "Ethernet11"),
("spine1", "Ethernet4", "leaf4", "Ethernet11"),
("spine1", "Ethernet5", "leaf5", "Ethernet11"),
("spine1", "Ethernet6", "leaf6", "Ethernet11"),
("spine1", "Ethernet7", "leaf7", "Ethernet11"),
("spine1", "Ethernet8", "leaf8", "Ethernet11"),
# Spine2 connections
("spine2", "Ethernet1", "leaf1", "Ethernet12"),
("spine2", "Ethernet2", "leaf2", "Ethernet12"),
("spine2", "Ethernet3", "leaf3", "Ethernet12"),
("spine2", "Ethernet4", "leaf4", "Ethernet12"),
("spine2", "Ethernet5", "leaf5", "Ethernet12"),
("spine2", "Ethernet6", "leaf6", "Ethernet12"),
("spine2", "Ethernet7", "leaf7", "Ethernet12"),
("spine2", "Ethernet8", "leaf8", "Ethernet12"),
]
# MLAG Peer-link cables: (leaf_a, intf_a, leaf_b, intf_b)
MLAG_PEER_CABLES = [
("leaf1", "Ethernet10", "leaf2", "Ethernet10"),
("leaf3", "Ethernet10", "leaf4", "Ethernet10"),
("leaf5", "Ethernet10", "leaf6", "Ethernet10"),
("leaf7", "Ethernet10", "leaf8", "Ethernet10"),
]
# Host dual-homing cables: (leaf_a, intf_a, leaf_b, intf_b, host)
HOST_CABLES = [
("leaf1", "Ethernet1", "leaf2", "Ethernet1", "host1"),
("leaf3", "Ethernet1", "leaf4", "Ethernet1", "host2"),
("leaf5", "Ethernet1", "leaf6", "Ethernet1", "host3"),
("leaf7", "Ethernet1", "leaf8", "Ethernet1", "host4"),
]
# =============================================================================
# IP Addressing for P2P Links
# =============================================================================
# Spine1 P2P addresses: leaf_name -> (spine_ip, leaf_ip)
SPINE1_P2P = {
"leaf1": ("10.0.1.0/31", "10.0.1.1/31"),
"leaf2": ("10.0.1.2/31", "10.0.1.3/31"),
"leaf3": ("10.0.1.4/31", "10.0.1.5/31"),
"leaf4": ("10.0.1.6/31", "10.0.1.7/31"),
"leaf5": ("10.0.1.8/31", "10.0.1.9/31"),
"leaf6": ("10.0.1.10/31", "10.0.1.11/31"),
"leaf7": ("10.0.1.12/31", "10.0.1.13/31"),
"leaf8": ("10.0.1.14/31", "10.0.1.15/31"),
}
SPINE2_P2P = {
"leaf1": ("10.0.2.0/31", "10.0.2.1/31"),
"leaf2": ("10.0.2.2/31", "10.0.2.3/31"),
"leaf3": ("10.0.2.4/31", "10.0.2.5/31"),
"leaf4": ("10.0.2.6/31", "10.0.2.7/31"),
"leaf5": ("10.0.2.8/31", "10.0.2.9/31"),
"leaf6": ("10.0.2.10/31", "10.0.2.11/31"),
"leaf7": ("10.0.2.12/31", "10.0.2.13/31"),
"leaf8": ("10.0.2.14/31", "10.0.2.15/31"),
}
# MLAG iBGP P2P addresses: (leaf_a, leaf_b) -> (leaf_a_ip, leaf_b_ip)
MLAG_IBGP_P2P = {
("leaf1", "leaf2"): ("10.0.3.0/31", "10.0.3.1/31"),
("leaf3", "leaf4"): ("10.0.3.2/31", "10.0.3.3/31"),
("leaf5", "leaf6"): ("10.0.3.4/31", "10.0.3.5/31"),
("leaf7", "leaf8"): ("10.0.3.6/31", "10.0.3.7/31"),
}
# =============================================================================
# VLANs and VRFs
# =============================================================================
VLAN_GROUP_NAME = "evpn-fabric"
VLANS = [
{"vid": 34, "name": "vlan34-l3", "description": "L3 VLAN for VRF gold"},
{"vid": 40, "name": "vlan40-l2", "description": "L2 VXLAN stretched VLAN"},
{"vid": 78, "name": "vlan78-l3", "description": "L3 VLAN for VRF gold"},
{"vid": 4090, "name": "mlag-peer", "description": "MLAG peer communication"},
{"vid": 4091, "name": "mlag-ibgp", "description": "iBGP between MLAG peers"},
]
VRFS = [
{
"name": "gold",
"rd": "1:100001",
"l3vni": 100001,
"vrf_vlan": 3000,
"import_targets": ["1:100001"],
"export_targets": ["1:100001"],
},
]
# =============================================================================
# Prefixes
# =============================================================================
PREFIXES = [
{"prefix": "10.0.1.0/24", "description": "Spine1-Leaf P2P"},
{"prefix": "10.0.2.0/24", "description": "Spine2-Leaf P2P"},
{"prefix": "10.0.3.0/24", "description": "MLAG iBGP P2P"},
{"prefix": "10.0.199.0/24", "description": "MLAG Peer VLAN 4090"},
{"prefix": "10.0.250.0/24", "description": "Loopback0 (Router-ID)"},
{"prefix": "10.0.255.0/24", "description": "Loopback1 (VTEP)"},
{"prefix": "10.34.34.0/24", "description": "VLAN 34 subnet", "vrf": "gold"},
{"prefix": "10.40.40.0/24", "description": "VLAN 40 subnet"},
{"prefix": "10.78.78.0/24", "description": "VLAN 78 subnet", "vrf": "gold"},
{"prefix": "172.16.0.0/24", "description": "Management network"},
]
# =============================================================================
# Helper Functions
# =============================================================================
def get_or_create(endpoint, search_params: dict, create_params: dict) -> tuple[Any, bool]:
    """Return (object, created); search_params must uniquely identify the
    object, since pynetbox's .get() raises ValueError on multiple matches."""
obj = endpoint.get(**search_params)
if obj:
return obj, False
obj = endpoint.create({**search_params, **create_params})
return obj, True
def log_result(action: str, obj_type: str, name: str, created: bool):
"""Log creation result."""
status = "Created" if created else "Exists"
print(f" [{status}] {obj_type}: {name}")
# =============================================================================
# Provisioning Functions
# =============================================================================
def create_custom_fields(nb: pynetbox.api):
"""Create custom fields for fabric orchestration."""
print("\n=== Creating Custom Fields ===")
for cf in CUSTOM_FIELDS:
existing = nb.extras.custom_fields.get(name=cf["name"])
if existing:
print(f" [Exists] Custom Field: {cf['name']}")
continue
nb.extras.custom_fields.create(cf)
print(f" [Created] Custom Field: {cf['name']}")
def create_organization(nb: pynetbox.api) -> dict:
"""Create site, manufacturer, device types, and roles."""
print("\n=== Creating Organization ===")
result = {}
# Site
site, created = get_or_create(
nb.dcim.sites,
{"slug": SITE_SLUG},
{"name": SITE_NAME, "status": "active"},
)
log_result("Site", "Site", SITE_NAME, created)
result["site"] = site
# Manufacturer
manufacturer, created = get_or_create(
nb.dcim.manufacturers,
{"slug": MANUFACTURER_SLUG},
{"name": MANUFACTURER_NAME},
)
log_result("Manufacturer", "Manufacturer", MANUFACTURER_NAME, created)
result["manufacturer"] = manufacturer
# Device Type for switches
device_type, created = get_or_create(
nb.dcim.device_types,
{"slug": DEVICE_TYPE_SLUG},
{
"model": DEVICE_TYPE_MODEL,
"manufacturer": manufacturer.id,
"u_height": 1,
},
)
log_result("DeviceType", "DeviceType", DEVICE_TYPE_MODEL, created)
result["device_type"] = device_type
# Device Type for servers/hosts
server_type, created = get_or_create(
nb.dcim.device_types,
{"slug": "linux-server"},
{
"model": "Linux Server",
"manufacturer": manufacturer.id,
"u_height": 1,
},
)
log_result("DeviceType", "DeviceType", "Linux Server", created)
result["server_type"] = server_type
# Device Roles
result["roles"] = {}
for role in DEVICE_ROLES:
role_obj, created = get_or_create(
nb.dcim.device_roles,
{"slug": role["slug"]},
{"name": role["name"], "color": role["color"]},
)
log_result("DeviceRole", "DeviceRole", role["name"], created)
result["roles"][role["slug"]] = role_obj
return result
def create_vlans(nb: pynetbox.api, site) -> dict:
"""Create VLAN group and VLANs."""
print("\n=== Creating VLANs ===")
result = {}
# VLAN Group
vlan_group, created = get_or_create(
nb.ipam.vlan_groups,
{"slug": VLAN_GROUP_NAME},
{"name": VLAN_GROUP_NAME, "scope_type": "dcim.site", "scope_id": site.id},
)
log_result("VLANGroup", "VLANGroup", VLAN_GROUP_NAME, created)
result["group"] = vlan_group
# VLANs
result["vlans"] = {}
for vlan in VLANS:
vlan_obj, created = get_or_create(
nb.ipam.vlans,
{"vid": vlan["vid"], "group_id": vlan_group.id},
{"name": vlan["name"], "description": vlan.get("description", "")},
)
log_result("VLAN", "VLAN", f"{vlan['vid']} ({vlan['name']})", created)
result["vlans"][vlan["vid"]] = vlan_obj
return result
def create_vrfs(nb: pynetbox.api) -> dict:
"""Create VRFs with route targets."""
print("\n=== Creating VRFs ===")
result = {}
for vrf_def in VRFS:
# Create route targets first
import_rts = []
export_rts = []
for rt in vrf_def.get("import_targets", []):
rt_obj, created = get_or_create(
nb.ipam.route_targets,
{"name": rt},
{"description": f"Import RT for {vrf_def['name']}"},
)
import_rts.append(rt_obj.id)
log_result("RouteTarget", "RouteTarget", rt, created)
for rt in vrf_def.get("export_targets", []):
rt_obj = nb.ipam.route_targets.get(name=rt)
if rt_obj:
export_rts.append(rt_obj.id)
# Create VRF
vrf, created = get_or_create(
nb.ipam.vrfs,
{"name": vrf_def["name"]},
{
"rd": vrf_def.get("rd"),
"import_targets": import_rts,
"export_targets": export_rts,
"custom_fields": {
"l3vni": vrf_def.get("l3vni"),
"vrf_vlan": vrf_def.get("vrf_vlan"),
},
},
)
log_result("VRF", "VRF", vrf_def["name"], created)
result[vrf_def["name"]] = vrf
return result
def create_prefixes(nb: pynetbox.api, vrfs: dict):
"""Create IP prefixes."""
print("\n=== Creating Prefixes ===")
for prefix_def in PREFIXES:
        vrf = vrfs.get(prefix_def.get("vrf"))
        vrf_id = vrf.id if vrf else None
prefix, created = get_or_create(
nb.ipam.prefixes,
{"prefix": prefix_def["prefix"], "vrf_id": vrf_id},
{"description": prefix_def.get("description", "")},
)
log_result("Prefix", "Prefix", prefix_def["prefix"], created)
def create_devices(nb: pynetbox.api, org: dict) -> dict:
"""Create all network devices."""
print("\n=== Creating Devices ===")
result = {}
# Create Spines
for spine in SPINES:
device, created = get_or_create(
nb.dcim.devices,
{"name": spine["name"]},
{
"device_type": org["device_type"].id,
"role": org["roles"]["spine"].id,
"site": org["site"].id,
"status": "active",
"custom_fields": {"asn": spine["asn"]},
},
)
log_result("Device", "Device", spine["name"], created)
result[spine["name"]] = {"device": device, "config": spine}
# Create Leafs
for leaf in LEAFS:
mlag = leaf.get("mlag", {})
custom_fields = {
"asn": leaf["asn"],
"mlag_domain_id": mlag.get("domain_id"),
"mlag_peer_address": mlag.get("peer_address"),
"mlag_local_address": mlag.get("local_address"),
"mlag_virtual_mac": mlag.get("virtual_mac"),
}
device, created = get_or_create(
nb.dcim.devices,
{"name": leaf["name"]},
{
"device_type": org["device_type"].id,
"role": org["roles"]["leaf"].id,
"site": org["site"].id,
"status": "active",
"custom_fields": custom_fields,
},
)
log_result("Device", "Device", leaf["name"], created)
result[leaf["name"]] = {"device": device, "config": leaf}
# Create Hosts
for host in HOSTS:
device, created = get_or_create(
nb.dcim.devices,
{"name": host["name"]},
{
"device_type": org["server_type"].id,
"role": org["roles"]["server"].id,
"site": org["site"].id,
"status": "active",
},
)
log_result("Device", "Device", host["name"], created)
result[host["name"]] = {"device": device, "config": host}
return result
def create_interfaces(nb: pynetbox.api, devices: dict) -> dict:
"""Create all device interfaces."""
print("\n=== Creating Interfaces ===")
result = {}
for device_name, device_data in devices.items():
device = device_data["device"]
config = device_data["config"]
result[device_name] = {}
# Determine interface requirements based on device type
if device_name.startswith("spine"):
# Spines: 8 Ethernet ports for leaf connections + Management + Loopback0
interfaces = [
("Management1", "virtual"),
("Loopback0", "virtual"),
]
for i in range(1, 9):
interfaces.append((f"Ethernet{i}", "1000base-t"))
elif device_name.startswith("leaf"):
# Leafs: Ethernet1 (host), Ethernet10 (peer-link), Ethernet11-12 (spines)
# + Management + Loopback0 + Loopback1 + Vxlan1
interfaces = [
("Management1", "virtual"),
("Loopback0", "virtual"),
("Loopback1", "virtual"),
("Vxlan1", "virtual"),
("Ethernet1", "1000base-t"), # Host-facing
("Ethernet10", "1000base-t"), # MLAG peer-link
("Ethernet11", "1000base-t"), # Spine1
("Ethernet12", "1000base-t"), # Spine2
("Port-Channel10", "lag"), # MLAG peer-link LAG
]
elif device_name.startswith("host"):
# Hosts: eth1, eth2 for bonding + bond0
interfaces = [
("eth0", "virtual"), # Management
("eth1", "1000base-t"),
("eth2", "1000base-t"),
("bond0", "lag"),
]
else:
continue
for intf_name, intf_type in interfaces:
intf, created = get_or_create(
nb.dcim.interfaces,
{"device_id": device.id, "name": intf_name},
{"type": intf_type},
)
# Set custom fields for MLAG peer-link
if intf_name in ("Ethernet10", "Port-Channel10") and device_name.startswith("leaf"):
if created or not intf.custom_fields.get("mlag_peer_link"):
intf.custom_fields = {"mlag_peer_link": True}
intf.save()
result[device_name][intf_name] = intf
if created:
log_result("Interface", "Interfaces", f"{device_name}", True)
return result
def create_ip_addresses(nb: pynetbox.api, devices: dict, interfaces: dict, vrfs: dict):
"""Create IP addresses and assign to interfaces."""
print("\n=== Creating IP Addresses ===")
# Loopback addresses for spines
for spine in SPINES:
device = devices[spine["name"]]["device"]
intf = interfaces[spine["name"]]["Loopback0"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": spine["loopback0"]},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": intf.id,
"description": f"{spine['name']} Router-ID",
},
)
log_result("IP", "IP Address", spine["loopback0"], created)
# Loopback addresses for leafs
for leaf in LEAFS:
device = devices[leaf["name"]]["device"]
# Loopback0
intf = interfaces[leaf["name"]]["Loopback0"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": leaf["loopback0"]},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": intf.id,
"description": f"{leaf['name']} Router-ID",
},
)
log_result("IP", "IP Address", leaf["loopback0"], created)
# Loopback1 (VTEP)
intf = interfaces[leaf["name"]]["Loopback1"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": leaf["loopback1"]},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": intf.id,
"description": f"{leaf['name']} VTEP",
},
)
log_result("IP", "IP Address", leaf["loopback1"], created)
# P2P addresses for Spine1-Leaf links
    for port, (leaf_name, (spine_ip, leaf_ip)) in enumerate(SPINE1_P2P.items(), start=1):
        # Spine side: spine1 EthernetN faces the Nth leaf
        spine_intf = interfaces["spine1"][f"Ethernet{port}"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": spine_ip},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": spine_intf.id,
"description": f"spine1 to {leaf_name}",
},
)
log_result("IP", "IP Address", spine_ip, created)
# Leaf side
leaf_intf = interfaces[leaf_name]["Ethernet11"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": leaf_ip},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": leaf_intf.id,
"description": f"{leaf_name} to spine1",
},
)
log_result("IP", "IP Address", leaf_ip, created)
# P2P addresses for Spine2-Leaf links
    for port, (leaf_name, (spine_ip, leaf_ip)) in enumerate(SPINE2_P2P.items(), start=1):
        spine_intf = interfaces["spine2"][f"Ethernet{port}"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": spine_ip},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": spine_intf.id,
"description": f"spine2 to {leaf_name}",
},
)
log_result("IP", "IP Address", spine_ip, created)
leaf_intf = interfaces[leaf_name]["Ethernet12"]
ip, created = get_or_create(
nb.ipam.ip_addresses,
{"address": leaf_ip},
{
"assigned_object_type": "dcim.interface",
"assigned_object_id": leaf_intf.id,
"description": f"{leaf_name} to spine2",
},
)
log_result("IP", "IP Address", leaf_ip, created)
def create_cables(nb: pynetbox.api, interfaces: dict):
"""Create cables between devices."""
print("\n=== Creating Cables ===")
# Spine-Leaf cables
for spine, spine_intf, leaf, leaf_intf in SPINE_LEAF_CABLES:
a_intf = interfaces.get(spine, {}).get(spine_intf)
b_intf = interfaces.get(leaf, {}).get(leaf_intf)
if not a_intf or not b_intf:
print(f" [Skip] Cable: {spine}:{spine_intf} <-> {leaf}:{leaf_intf} (interface not found)")
continue
        # Check if cable already exists (A-side terminations only, which is
        # sufficient because this script always creates cables from the same side)
existing = nb.dcim.cables.filter(termination_a_id=a_intf.id)
if list(existing):
print(f" [Exists] Cable: {spine}:{spine_intf} <-> {leaf}:{leaf_intf}")
continue
try:
cable = nb.dcim.cables.create(
{
"a_terminations": [{"object_type": "dcim.interface", "object_id": a_intf.id}],
"b_terminations": [{"object_type": "dcim.interface", "object_id": b_intf.id}],
"status": "connected",
"type": "cat6a",
}
)
print(f" [Created] Cable: {spine}:{spine_intf} <-> {leaf}:{leaf_intf}")
except Exception as e:
print(f" [Error] Cable: {spine}:{spine_intf} <-> {leaf}:{leaf_intf}: {e}")
# MLAG peer-link cables
for leaf_a, intf_a, leaf_b, intf_b in MLAG_PEER_CABLES:
a_intf = interfaces.get(leaf_a, {}).get(intf_a)
b_intf = interfaces.get(leaf_b, {}).get(intf_b)
if not a_intf or not b_intf:
print(f" [Skip] Cable: {leaf_a}:{intf_a} <-> {leaf_b}:{intf_b} (interface not found)")
continue
existing = nb.dcim.cables.filter(termination_a_id=a_intf.id)
if list(existing):
print(f" [Exists] Cable: {leaf_a}:{intf_a} <-> {leaf_b}:{intf_b}")
continue
try:
cable = nb.dcim.cables.create(
{
"a_terminations": [{"object_type": "dcim.interface", "object_id": a_intf.id}],
"b_terminations": [{"object_type": "dcim.interface", "object_id": b_intf.id}],
"status": "connected",
"type": "cat6a",
"label": "MLAG Peer-Link",
}
)
print(f" [Created] Cable: {leaf_a}:{intf_a} <-> {leaf_b}:{intf_b} (MLAG peer-link)")
except Exception as e:
print(f" [Error] Cable: {leaf_a}:{intf_a} <-> {leaf_b}:{intf_b}: {e}")
# Host dual-homing cables
for leaf_a, intf_a, leaf_b, intf_b, host in HOST_CABLES:
# Cable from leaf_a to host eth1
a_intf = interfaces.get(leaf_a, {}).get(intf_a)
host_eth1 = interfaces.get(host, {}).get("eth1")
if a_intf and host_eth1:
existing = nb.dcim.cables.filter(termination_a_id=a_intf.id)
if not list(existing):
try:
cable = nb.dcim.cables.create(
{
"a_terminations": [{"object_type": "dcim.interface", "object_id": a_intf.id}],
"b_terminations": [{"object_type": "dcim.interface", "object_id": host_eth1.id}],
"status": "connected",
"type": "cat6a",
}
)
print(f" [Created] Cable: {leaf_a}:{intf_a} <-> {host}:eth1")
except Exception as e:
print(f" [Error] Cable: {leaf_a}:{intf_a} <-> {host}:eth1: {e}")
else:
print(f" [Exists] Cable: {leaf_a}:{intf_a} <-> {host}:eth1")
# Cable from leaf_b to host eth2
b_intf = interfaces.get(leaf_b, {}).get(intf_b)
host_eth2 = interfaces.get(host, {}).get("eth2")
if b_intf and host_eth2:
existing = nb.dcim.cables.filter(termination_a_id=b_intf.id)
if not list(existing):
try:
cable = nb.dcim.cables.create(
{
"a_terminations": [{"object_type": "dcim.interface", "object_id": b_intf.id}],
"b_terminations": [{"object_type": "dcim.interface", "object_id": host_eth2.id}],
"status": "connected",
"type": "cat6a",
}
)
print(f" [Created] Cable: {leaf_b}:{intf_b} <-> {host}:eth2")
except Exception as e:
print(f" [Error] Cable: {leaf_b}:{intf_b} <-> {host}:eth2: {e}")
else:
print(f" [Exists] Cable: {leaf_b}:{intf_b} <-> {host}:eth2")
# =============================================================================
# Main
# =============================================================================
def main():
"""Main provisioning function."""
if not NETBOX_TOKEN:
print("Error: NETBOX_TOKEN environment variable is required")
sys.exit(1)
print(f"Connecting to NetBox at {NETBOX_URL}...")
nb = pynetbox.api(NETBOX_URL, token=NETBOX_TOKEN)
try:
# Verify connection
status = nb.status()
print(f"Connected to NetBox {status.get('netbox-version', 'unknown')}")
except Exception as e:
print(f"Error connecting to NetBox: {e}")
sys.exit(1)
# Run provisioning steps
create_custom_fields(nb)
org = create_organization(nb)
vlans = create_vlans(nb, org["site"])
vrfs = create_vrfs(nb)
create_prefixes(nb, vrfs)
devices = create_devices(nb, org)
interfaces = create_interfaces(nb, devices)
create_ip_addresses(nb, devices, interfaces, vrfs)
create_cables(nb, interfaces)
print("\n" + "=" * 50)
print("Provisioning complete!")
print("=" * 50)
if __name__ == "__main__":
main()