Compare commits


11 Commits

Author SHA1 Message Date
darnodo
a0316de507 Update .infrahub.yml 2025-11-15 19:27:25 +01:00
0341ff3ffa fix: Correct schema path from schemas to schema (singular) 2025-11-15 18:02:42 +00:00
1d56d5bb4e fix: Change datacenter_id variable type from String! to ID! 2025-11-15 18:02:10 +00:00
darnodo
dafbb351d7 Move .infrahub.yml at the root level 2025-11-15 15:58:53 +01:00
darnodo
bfc6ec6031 Chore: More .infrahub.yml at the root level 2025-11-15 15:58:29 +01:00
darnodo
f67805ead4 Update .infrahub.yml 2025-11-15 15:56:44 +01:00
darnodo
a7ff08e5ff Refactor Infrahub config and rename schema folder
Move schema files into a dedicated 'schemas' directory and update
.infrahub.yml to reference them along with adding sections for schemas,
generators, and queries.
2025-11-15 11:13:46 +01:00
darnodo
1df82d4f32 push generator 2025-11-14 18:31:31 +01:00
darnodo
edc40dede6 feat: update datacenter generator documentation to reflect accurate schema and object names 2025-11-12 18:38:54 +01:00
darnodo
2a99d48cdd fix: update icon for MLAG Interface to use standard ethernet icon 2025-11-12 16:29:53 +01:00
darnodo
6a98f8e689 feat: update menu placements for various network schemas 2025-11-12 16:23:57 +01:00
14 changed files with 1348 additions and 168 deletions

.infrahub.yml Normal file

@@ -0,0 +1,22 @@
# yaml-language-server: $schema=https://schema.infrahub.app/python-sdk/repository-config/latest.json
---
# Infrahub Repository Configuration
# Define where schemas are located
schemas:
- infrahub/schema/*.yml
# Generator definitions
generator_definitions:
- name: datacenter_generator
file_path: "infrahub/generators/datacenter_generator.py"
class_name: DatacenterGenerator
query: datacenter_query
targets: "Datacenters"
convert_query_response: true
parameters:
datacenter_id: "id"
# GraphQL queries
queries:
- name: datacenter_query
file_path: "infrahub/generators/datacenter_query.gql"


@@ -1,34 +1,37 @@
# Datacenter Generator - Visual Diagrams
**Schema-Accurate Version - Corrected to match actual Infrahub schemas**
---
## 1. Generator Concept - High Level
```mermaid
graph TB
subgraph Input["📥 INPUT"]
DC1[Datacenter: DC1<br/>---<br/>dc_id: 1<br/>bays: 2<br/>dci_enabled: false]
DC2[Datacenter: DC2<br/>---<br/>dc_id: 2<br/>bays: 2<br/>dci_enabled: false]
DCI_Config[DCI Configuration<br/>---<br/>Manual/External<br/>IP: 10.253.254.x]
DC1[InfraDatacenter: DC1<br/>---<br/>dc_id: 1<br/>number_of_bays: 2<br/>dci_enabled: false]
DC2[InfraDatacenter: DC2<br/>---<br/>dc_id: 2<br/>number_of_bays: 2<br/>dci_enabled: false]
DCI_Config[NetworkDCISwitch<br/>---<br/>Manual/External<br/>Loopback0: 10.253.0.1]
end
subgraph Logic["⚙️ DATACENTER GENERATOR"]
Gen1[DC1 Generator:<br/>Create 11 devices<br/>Border eth12: shutdown]
Gen2[DC2 Generator:<br/>Create 11 devices<br/>Border eth12: shutdown]
Gen1[DC1 Generator:<br/>Create 11 NetworkDevice objects<br/>Border eth12: shutdown]
Gen2[DC2 Generator:<br/>Create 11 NetworkDevice objects<br/>Border eth12: shutdown]
end
subgraph Output1["📤 DC1 OUTPUT"]
Dev1[11 Device Objects]
Int1[~50 Interface Objects]
IP1[~30 IP Addresses]
Dev1[11 NetworkDevice Objects]
Int1[~50 NetworkInterface Objects]
IP1[~30 IpamIPAddress Objects]
end
subgraph Output2["📤 DC2 OUTPUT"]
Dev2[11 Device Objects]
Int2[~50 Interface Objects]
IP2[~30 IP Addresses]
Dev2[11 NetworkDevice Objects]
Int2[~50 NetworkInterface Objects]
IP2[~30 IpamIPAddress Objects]
end
subgraph Manual["🔧 MANUAL DCI SETUP"]
DCIDevice[DCI Device<br/>Loopback: 10.253.0.1<br/>ASN: 65000]
DCIDevice[NetworkDCISwitch<br/>Loopback0: 10.253.0.1<br/>ASN: 65000]
UpdateDC1[Update DC1:<br/>dci_enabled: true<br/>dci_remote_dc_id: 2]
UpdateDC2[Update DC2:<br/>dci_enabled: true<br/>dci_remote_dc_id: 1]
end
@@ -56,13 +59,15 @@ graph TB
style DCIDevice fill:#ffccbc
```
---
## 2. Datacenter Hierarchy
```mermaid
graph TB
Org[Organization]
Site[Site: Paris-DC1<br/>Location: Paris, France]
DC[Datacenter: DC1<br/>dc_id: 1<br/>bays: 2<br/>spines: 3]
Org[OrganizationOrganization]
Site[LocationSite: Paris-DC1<br/>Location: Paris, France]
DC[InfraDatacenter: DC1<br/>dc_id: 1<br/>number_of_bays: 2<br/>spine_count: 3]
Org --> Site
Site --> DC
@@ -73,32 +78,32 @@ graph TB
DC --> BayLayer[Bay Layer]
DC --> Subnets[IP Subnets]
SpineLayer --> S1[spine1-DC1<br/>ASN: 65100]
SpineLayer --> S2[spine2-DC1<br/>ASN: 65100]
SpineLayer --> S3[spine3-DC1<br/>ASN: 65100]
SpineLayer --> S1[spine1-DC1<br/>NetworkDevice<br/>ASN: 65100]
SpineLayer --> S2[spine2-DC1<br/>NetworkDevice<br/>ASN: 65100]
SpineLayer --> S3[spine3-DC1<br/>NetworkDevice<br/>ASN: 65100]
LeafLayer --> LP1[Leaf Pair 1<br/>ASN: 65101]
LeafLayer --> LP2[Leaf Pair 2<br/>ASN: 65102]
LeafLayer --> LP1[NetworkMLAGDomain: Pair 1<br/>ASN: 65101]
LeafLayer --> LP2[NetworkMLAGDomain: Pair 2<br/>ASN: 65102]
LP1 --> L1[leaf1-DC1<br/>10.1.0.21]
LP1 --> L2[leaf2-DC1<br/>10.1.0.22]
LP2 --> L3[leaf3-DC1<br/>10.1.0.23]
LP2 --> L4[leaf4-DC1<br/>10.1.0.24]
LP1 --> L1[leaf1-DC1<br/>Loopback0: 10.1.0.21]
LP1 --> L2[leaf2-DC1<br/>Loopback0: 10.1.0.22]
LP2 --> L3[leaf3-DC1<br/>Loopback0: 10.1.0.23]
LP2 --> L4[leaf4-DC1<br/>Loopback0: 10.1.0.24]
BorderLayer --> BP[Border Pair<br/>ASN: 65103]
BorderLayer --> BP[NetworkMLAGDomain: Border<br/>ASN: 65103]
BP --> B1[borderleaf1-DC1]
BP --> B2[borderleaf2-DC1]
BayLayer --> Bay1[Bay 1]
BayLayer --> Bay2[Bay 2]
BayLayer --> Bay1[InfraBay 1]
BayLayer --> Bay2[InfraBay 2]
Bay1 --> A1[access1-DC1]
Bay2 --> A2[access2-DC1]
Subnets --> Sub1[10.1.0.0/24<br/>Loopback0]
Subnets --> Sub2[10.1.1.0/24<br/>Loopback1]
Subnets --> Sub3[10.1.10.0/24<br/>Spine-Leaf P2P]
Subnets --> Sub4[10.1.20.0/24<br/>Leaf-Access P2P]
Subnets --> Sub1[10.1.0.0/24<br/>IpamIPPrefix<br/>type: loopback]
Subnets --> Sub2[10.1.1.0/24<br/>IpamIPPrefix<br/>type: vtep]
Subnets --> Sub3[10.1.10.0/24<br/>IpamIPPrefix<br/>type: p2p]
Subnets --> Sub4[10.1.20.0/24<br/>IpamIPPrefix<br/>type: p2p]
style DC fill:#ffecb3
style LP1 fill:#b3e5fc
@@ -107,18 +112,20 @@ graph TB
style Bay2 fill:#c8e6c9
```
---
## 3. Bay-to-Leaf Assignment Logic
```mermaid
graph LR
subgraph Bays["🏢 Bays"]
subgraph Bays["🏢 InfraBay"]
Bay1[Bay 1<br/>access1-DC1]
Bay2[Bay 2<br/>access2-DC1]
Bay3[Bay 3<br/>access3-DC1]
Bay4[Bay 4<br/>access4-DC1]
end
subgraph LeafPairs["🔀 Leaf Pairs - MLAG"]
subgraph LeafPairs["🔀 NetworkMLAGDomain - Leaf Pairs"]
LP1[Leaf Pair 1<br/>leaf1-DC1 ↔ leaf2-DC1<br/>ASN: 65101]
LP2[Leaf Pair 2<br/>leaf3-DC1 ↔ leaf4-DC1<br/>ASN: 65102]
end
@@ -149,50 +156,52 @@ graph LR
style LP2 fill:#b3e5fc
```
---
## 4. Complete DC1 Physical Topology
```mermaid
graph TB
subgraph DCI["DCI LAYER - Inter-DC"]
DCID[DCI Switch<br/>10.253.0.1<br/>ASN: 65000]
DCID[NetworkDCISwitch<br/>Loopback0: 10.253.0.1<br/>ASN: 65000]
end
subgraph Border["BORDER LEAF - DCI Gateway"]
B1[borderleaf1-DC1<br/>10.1.0.31<br/>eth12: shutdown or active]
B2[borderleaf2-DC1<br/>10.1.0.32<br/>eth12: shutdown or active]
B1[borderleaf1-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.31<br/>eth12: shutdown or active]
B2[borderleaf2-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.32<br/>eth12: shutdown or active]
end
subgraph Spine["SPINE LAYER - L3 Core"]
S1[spine1-DC1<br/>10.1.0.11<br/>ASN: 65100]
S2[spine2-DC1<br/>10.1.0.12<br/>ASN: 65100]
S3[spine3-DC1<br/>10.1.0.13<br/>ASN: 65100]
S1[spine1-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.11<br/>ASN: 65100]
S2[spine2-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.12<br/>ASN: 65100]
S3[spine3-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.13<br/>ASN: 65100]
end
subgraph Leaf["LEAF LAYER - Aggregation + VXLAN"]
subgraph Pair1["MLAG Pair 1 - ASN: 65101"]
L1[leaf1-DC1<br/>10.1.0.21<br/>VTEP: 10.1.1.21]
L2[leaf2-DC1<br/>10.1.0.22<br/>VTEP: 10.1.1.21]
subgraph Pair1["NetworkMLAGDomain - Pair 1 - ASN: 65101"]
L1[leaf1-DC1<br/>Loopback0: 10.1.0.21<br/>VTEP: 10.1.1.21]
L2[leaf2-DC1<br/>Loopback0: 10.1.0.22<br/>VTEP: 10.1.1.21]
end
subgraph Pair2["MLAG Pair 2 - ASN: 65102"]
L3[leaf3-DC1<br/>10.1.0.23<br/>VTEP: 10.1.1.23]
L4[leaf4-DC1<br/>10.1.0.24<br/>VTEP: 10.1.1.23]
subgraph Pair2["NetworkMLAGDomain - Pair 2 - ASN: 65102"]
L3[leaf3-DC1<br/>Loopback0: 10.1.0.23<br/>VTEP: 10.1.1.23]
L4[leaf4-DC1<br/>Loopback0: 10.1.0.24<br/>VTEP: 10.1.1.23]
end
end
subgraph Access["ACCESS LAYER - Rack/Bay ToR"]
A1[access1-DC1<br/>Bay 1<br/>L2 Switch]
A2[access2-DC1<br/>Bay 2<br/>L2 Switch]
A1[access1-DC1<br/>NetworkDevice<br/>InfraBay 1<br/>L2 Switch]
A2[access2-DC1<br/>NetworkDevice<br/>InfraBay 2<br/>L2 Switch]
end
subgraph Hosts["HOST LAYER"]
H1[host1-DC1<br/>172.16.100.10]
H2[host2-DC1<br/>172.16.200.10]
H1[host1-DC1<br/>NetworkDevice<br/>172.16.100.10]
H2[host2-DC1<br/>NetworkDevice<br/>172.16.200.10]
end
DCID -.Optional DCI.- B1
DCID -.Optional DCI.- B2
DCID -.Optional NetworkDCIConnection.- B1
DCID -.Optional NetworkDCIConnection.- B2
S1 -.eBGP.- L1
S1 -.eBGP - NetworkBGPNeighbor.- L1
S1 -.eBGP.- L2
S1 -.eBGP.- L3
S1 -.eBGP.- L4
@@ -213,14 +222,14 @@ graph TB
S3 -.eBGP.- B1
S3 -.eBGP.- B2
L1 ---|MLAG| L2
L3 ---|MLAG| L4
B1 ---|MLAG| B2
L1 ---|MLAG - Vlan4094| L2
L3 ---|MLAG - Vlan4094| L4
B1 ---|MLAG - Vlan4094| B2
L1 -->|eth7| A1
L2 -->|eth7| A1
L3 -->|eth7| A2
L4 -->|eth7| A2
L1 -->|eth7 - NetworkInterface| A1
L2 -->|eth7 - NetworkInterface| A1
L3 -->|eth7 - NetworkInterface| A2
L4 -->|eth7 - NetworkInterface| A2
A1 -->|eth10| H1
A2 -->|eth10| H2
@@ -239,40 +248,42 @@ graph TB
style B2 fill:#f8bbd0
```
---
## 5. IP Address Generation Flow
```mermaid
graph TB
Start[Datacenter Object<br/>dc_id: 1, bays: 2]
Start[InfraDatacenter Object<br/>dc_id: 1, number_of_bays: 2]
Start --> GenSubnets[Generate Subnets<br/>from dc_id]
Start --> GenSubnets[Generate IpamIPPrefix<br/>from dc_id]
GenSubnets --> Sub1[10.1.0.0/24<br/>Loopback0]
GenSubnets --> Sub2[10.1.1.0/24<br/>Loopback1]
GenSubnets --> Sub3[10.1.10.0/24<br/>Spine-Leaf P2P]
GenSubnets --> Sub4[10.1.20.0/24<br/>Leaf-Access P2P]
GenSubnets --> Sub5[10.1.255.0/24<br/>MLAG Peer]
GenSubnets --> Sub6[10.255.0.0/24<br/>Management]
GenSubnets --> Sub1[10.1.0.0/24<br/>prefix_type: loopback<br/>Loopback0 addresses]
GenSubnets --> Sub2[10.1.1.0/24<br/>prefix_type: vtep<br/>Loopback1 VTEP addresses]
GenSubnets --> Sub3[10.1.10.0/24<br/>prefix_type: p2p<br/>Spine-Leaf links]
GenSubnets --> Sub4[10.1.20.0/24<br/>prefix_type: p2p<br/>Leaf-Access links]
GenSubnets --> Sub5[10.1.255.0/24<br/>prefix_type: mlag<br/>MLAG Peer links]
GenSubnets --> Sub6[10.255.0.0/24<br/>prefix_type: management<br/>Management IPs]
Sub1 --> AllocLo0[Allocate Loopback0<br/>to Spines & Leafs]
Sub2 --> AllocLo1[Allocate Loopback1<br/>to Leafs - shared in pairs]
Sub1 --> AllocLo0[Allocate IpamIPAddress<br/>Loopback0 to Spines & Leafs]
Sub2 --> AllocLo1[Allocate IpamIPAddress<br/>Loopback1 to Leafs - shared in pairs]
Sub3 --> AllocP2P1[Allocate /31s<br/>for Spine-Leaf links]
Sub4 --> AllocP2P2[Allocate /31s<br/>for Leaf-Access links]
Sub5 --> AllocMLAG[Allocate /30s<br/>for MLAG peers]
Sub6 --> AllocMgmt[Allocate Management IPs<br/>to all devices]
Sub5 --> AllocMLAG[Allocate /30s<br/>for MLAG peer links]
Sub6 --> AllocMgmt[Allocate Management IPs<br/>to all devices via template]
AllocLo0 --> Spine1IP[spine1: 10.1.0.11/32]
AllocLo0 --> Leaf1IP[leaf1: 10.1.0.21/32]
AllocLo1 --> Leaf1VTEP[leaf1-2: 10.1.1.21/32 - shared]
AllocLo1 --> Leaf1VTEP[leaf1-2 shared: 10.1.1.21/32]
AllocP2P1 --> Link1[spine1-leaf1: 10.1.10.0/31]
AllocP2P2 --> Link2[leaf1-access1: 10.1.20.0/31]
AllocMLAG --> MLAG1[leaf1-2 peer: 10.1.255.1-2/30]
AllocMLAG --> MLAG1[leaf1-2 peer: 10.1.255.1-2/30<br/>via Vlan4094]
AllocMgmt --> Mgmt1[spine1: 10.255.0.11]
AllocMgmt --> Mgmt1[spine1: 10.255.0.11<br/>computed from template]
style Start fill:#ffecb3
style Sub1 fill:#e1f5ff
@@ -283,41 +294,43 @@ graph TB
style Sub6 fill:#e1f5ff
```
---
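The /31 carving shown in the flow above can be sketched with Python's `ipaddress` module. This is an illustrative sketch of the allocation scheme, not the generator's actual allocation code:

```python
import ipaddress

def p2p_links(dc_id: int, count: int) -> list[str]:
    """Carve /31 point-to-point subnets out of the DC's 10.{dc_id}.10.0/24
    spine-leaf prefix, in allocation order."""
    prefix = ipaddress.ip_network(f"10.{dc_id}.10.0/24")
    subnets = prefix.subnets(new_prefix=31)
    return [str(next(subnets)) for _ in range(count)]

print(p2p_links(1, 3))  # ['10.1.10.0/31', '10.1.10.2/31', '10.1.10.4/31']
```

The same pattern applies to the leaf-access prefix (`10.{dc_id}.20.0/24`) and, with `new_prefix=30`, to the MLAG peer prefix.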
## 6. Device Generation Flow
```mermaid
graph TB
DC[Datacenter: DC1<br/>bays: 2]
DC[InfraDatacenter: DC1<br/>number_of_bays: 2]
DC --> CalcLeafs[Calculate:<br/>leaf_pairs = ceil - bays / 2 - = 1<br/>total_leafs = 1 * 2 = 2]
DC --> CalcLeafs[Calculate:<br/>leaf_pair_count = ceil number_of_bays / 2 = 1<br/>total_leaf_count = 1 * 2 = 2<br/>total_access_count = 2]
CalcLeafs --> GenSpines[Generate Spines<br/>count: 3 - fixed]
CalcLeafs --> GenLeafs[Generate Leafs<br/>count: 4 - computed]
CalcLeafs --> GenBorders[Generate Border Leafs<br/>count: 2 - fixed]
CalcLeafs --> GenAccess[Generate Access<br/>count: 2 - from bays]
CalcLeafs --> GenSpines[Generate Spines<br/>count: spine_count = 3]
CalcLeafs --> GenLeafs[Generate Leafs<br/>count: total_leaf_count = 4]
CalcLeafs --> GenBorders[Generate Border Leafs<br/>count: border_leaf_count = 2<br/>if has_border_leafs: true]
CalcLeafs --> GenAccess[Generate Access<br/>count: total_access_count = 2]
GenSpines --> S1[spine1-DC1]
GenSpines --> S2[spine2-DC1]
GenSpines --> S3[spine3-DC1]
GenSpines --> S1[spine1-DC1<br/>NetworkDevice<br/>role: spine]
GenSpines --> S2[spine2-DC1<br/>NetworkDevice<br/>role: spine]
GenSpines --> S3[spine3-DC1<br/>NetworkDevice<br/>role: spine]
GenLeafs --> LP1[Create MLAG Pair 1]
GenLeafs --> LP2[Create MLAG Pair 2]
GenLeafs --> LP1[Create NetworkMLAGDomain<br/>MLAG-leaf1-2-DC1]
GenLeafs --> LP2[Create NetworkMLAGDomain<br/>MLAG-leaf3-4-DC1]
LP1 --> L1[leaf1-DC1]
LP1 --> L2[leaf2-DC1]
LP2 --> L3[leaf3-DC1]
LP2 --> L4[leaf4-DC1]
LP1 --> L1[leaf1-DC1<br/>role: leaf<br/>mlag_side: left]
LP1 --> L2[leaf2-DC1<br/>role: leaf<br/>mlag_side: right]
LP2 --> L3[leaf3-DC1<br/>role: leaf<br/>mlag_side: left]
LP2 --> L4[leaf4-DC1<br/>role: leaf<br/>mlag_side: right]
GenBorders --> BP[Create Border Pair]
BP --> B1[borderleaf1-DC1]
BP --> B2[borderleaf2-DC1]
GenBorders --> BP[Create NetworkMLAGDomain<br/>MLAG-border-DC1]
BP --> B1[borderleaf1-DC1<br/>role: borderleaf<br/>mlag_side: left]
BP --> B2[borderleaf2-DC1<br/>role: borderleaf<br/>mlag_side: right]
GenAccess --> AssignBays[Assign Bays to Leaf Pairs]
GenAccess --> AssignBays[Assign InfraBay to Leaf Pairs]
AssignBays --> A1Config[Bay 1 → Leaf Pair 1<br/>access1-DC1]
AssignBays --> A2Config[Bay 2 → Leaf Pair 1<br/>access2-DC1]
A1Config --> A1[access1-DC1]
A2Config --> A2[access2-DC1]
A1Config --> A1[access1-DC1<br/>NetworkDevice<br/>role: access]
A2Config --> A2[access2-DC1<br/>NetworkDevice<br/>role: access]
style DC fill:#ffecb3
style LP1 fill:#b3e5fc
@@ -325,15 +338,17 @@ graph TB
style BP fill:#f8bbd0
```
---
## 7. Scaling Scenario - Adding Bay 3
```mermaid
graph LR
subgraph Current["📦 Current State - 2 Bays"]
CurDC[Datacenter: DC1<br/>bays: 2<br/>leaf_pairs: 1]
CurLP1[Leaf Pair 1<br/>leaf1-2]
CurBay1[Bay 1 → access1]
CurBay2[Bay 2 → access2]
CurDC[InfraDatacenter: DC1<br/>number_of_bays: 2<br/>leaf_pair_count: 1]
CurLP1[NetworkMLAGDomain: Pair 1<br/>leaf1-2]
CurBay1[InfraBay 1 → access1]
CurBay2[InfraBay 2 → access2]
CurDC --> CurLP1
CurLP1 --> CurBay1
@@ -345,22 +360,22 @@ graph LR
end
subgraph Generator["🔄 Generator Logic"]
Check[Check:<br/>bays=3 > pairs*2=2?<br/>YES → Need new pair]
CreatePair[Create Leaf Pair 2<br/>leaf3-DC1, leaf4-DC1]
CreateAccess[Create access3-DC1]
AssignBay[Assign Bay 3 → Pair 2]
AllocIPs[Allocate IPs]
CreateLinks[Create Interfaces]
ConfigBGP[Configure BGP<br/>ASN: 65102]
Check[Check:<br/>number_of_bays=3 > pairs*2=2?<br/>YES → Need new pair]
CreatePair[Create NetworkMLAGDomain<br/>Leaf Pair 2<br/>leaf3-DC1, leaf4-DC1]
CreateAccess[Create NetworkDevice<br/>access3-DC1<br/>role: access]
AssignBay[Assign InfraBay 3 → Pair 2]
AllocIPs[Allocate IpamIPAddress]
CreateLinks[Create NetworkInterface]
ConfigBGP[Configure NetworkBGPConfig<br/>ASN: 65102]
end
subgraph Result["✅ New State - 3 Bays"]
NewDC[Datacenter: DC1<br/>bays: 3<br/>leaf_pairs: 2]
NewLP1[Leaf Pair 1<br/>leaf1-2<br/>ASN: 65101]
NewLP2[Leaf Pair 2<br/>leaf3-4<br/>ASN: 65102]
NewBay1[Bay 1 → access1]
NewBay2[Bay 2 → access2]
NewBay3[Bay 3 → access3]
NewDC[InfraDatacenter: DC1<br/>number_of_bays: 3<br/>leaf_pair_count: 2]
NewLP1[NetworkMLAGDomain: Pair 1<br/>leaf1-2<br/>ASN: 65101]
NewLP2[NetworkMLAGDomain: Pair 2<br/>leaf3-4<br/>ASN: 65102]
NewBay1[InfraBay 1 → access1]
NewBay2[InfraBay 2 → access2]
NewBay3[InfraBay 3 → access3]
NewDC --> NewLP1
NewDC --> NewLP2
@@ -384,6 +399,8 @@ graph LR
style Result fill:#c8e6c9
```
---
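The generator's scaling check in the scenario above boils down to one comparison. A sketch of that idempotent check:

```python
import math

def pairs_to_add(number_of_bays: int, existing_pairs: int) -> int:
    """Number of new MLAG leaf pairs needed after a bay count change."""
    required = math.ceil(number_of_bays / 2)
    return max(0, required - existing_pairs)

print(pairs_to_add(3, 1))  # 1: bay 3 exceeds 1 pair * 2 bays, so create Pair 2
print(pairs_to_add(2, 1))  # 0: the existing topology already covers 2 bays
```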
## 8. Infrahub Generator Workflow
```mermaid
@@ -393,26 +410,27 @@ sequenceDiagram
participant Generator
participant Validator
participant GraphQL
participant Devices
participant Objects
User->>Infrahub: Create Datacenter Object<br/>(name, dc_id, bays, etc.)
User->>Infrahub: Create InfraDatacenter Object<br/>(name, dc_id, number_of_bays, etc.)
Infrahub->>Validator: Pre-Generation Validation
Validator-->>Infrahub: ✅ Valid Input
Infrahub->>Generator: Trigger Generator
Generator->>Generator: Compute Derived Values<br/>(leaf_pairs, subnets, etc.)
Generator->>Generator: Compute Derived Values<br/>(leaf_pair_count, total_leaf_count, etc.)
loop For Each Layer
Generator->>Devices: Create Spine Devices
Generator->>Devices: Create Leaf Devices
Generator->>Devices: Create Access Devices
Generator->>Objects: Create NetworkDevice - Spines
Generator->>Objects: Create NetworkDevice - Leafs
Generator->>Objects: Create NetworkDevice - Access
Generator->>Objects: Create NetworkMLAGDomain
end
loop For Each Device
Generator->>Devices: Create Interfaces
Generator->>Devices: Allocate IP Addresses
Generator->>Devices: Generate BGP Config
Generator->>Objects: Create NetworkInterface
Generator->>Objects: Allocate IpamIPAddress
Generator->>Objects: Generate NetworkBGPConfig
end
Generator->>Validator: Post-Generation Validation
@@ -421,15 +439,17 @@ sequenceDiagram
Generator->>GraphQL: Commit All Objects
GraphQL-->>Infrahub: Objects Created
Infrahub-->>User: ✅ Datacenter Generated<br/>27 devices, 150 interfaces, 90 IPs
Infrahub-->>User: ✅ Datacenter Generated<br/>11 devices, 50+ interfaces, 30+ IPs
```
---
## 9. Configuration Generation Flow
```mermaid
graph TB
subgraph Infrahub["📊 Infrahub - Source of Truth"]
DeviceDB[(Device Objects<br/>Interfaces<br/>IPs<br/>BGP Sessions)]
DeviceDB[(NetworkDevice Objects<br/>NetworkInterface<br/>IpamIPAddress<br/>NetworkBGPConfig)]
end
subgraph Generator["⚙️ Config Generator"]
@@ -469,70 +489,233 @@ graph TB
style Deployment fill:#ffccbc
```
---
## 10. Complete Data Model Relationships
```mermaid
erDiagram
ORGANIZATION ||--o{ SITE : contains
SITE ||--o{ DATACENTER : contains
SITE ||--o{ IP_PREFIX : manages
OrganizationOrganization ||--o{ LocationSite : "contains (Generic)"
LocationSite ||--o{ InfraDatacenter : "contains (Generic)"
LocationSite ||--o{ IpamIPPrefix : "manages (Attribute)"
DATACENTER ||--|{ SUBNET : generates
DATACENTER ||--|{ DEVICE : generates
DATACENTER ||--|{ MLAG_PAIR : generates
InfraDatacenter ||--|{ IpamIPPrefix : "generates (Generic)"
InfraDatacenter ||--|{ NetworkDevice : "generates (Component)"
InfraDatacenter ||--|{ NetworkMLAGDomain : "generates (Generic)"
InfraDatacenter ||--|{ InfraBay : "contains (Generic)"
DEVICE ||--|{ INTERFACE : has
INTERFACE ||--o| IP_ADDRESS : assigned
INTERFACE ||--o| BGP_SESSION : endpoint
NetworkDevice ||--|{ NetworkInterface : "has (Component)"
NetworkInterface ||--o| IpamIPAddress : "assigned (Generic)"
NetworkInterface ||--o| NetworkBGPNeighbor : "endpoint (Generic)"
DEVICE }|--|| ROLE : has
DEVICE }o--o| MLAG_PAIR : member_of
NetworkDevice }|--|| DeviceRole : "has (Attribute)"
NetworkDevice }o--o| NetworkMLAGDomain : "member_of (Attribute)"
MLAG_PAIR ||--|| DEVICE : primary
MLAG_PAIR ||--|| DEVICE : secondary
NetworkMLAGDomain ||--o{ NetworkDevice : "devices (Generic)"
NetworkMLAGDomain ||--|{ NetworkMLAGInterface : "interfaces (Component)"
BGP_SESSION }o--|| INTERFACE : local
BGP_SESSION }o--|| INTERFACE : remote
NetworkBGPConfig }o--|| NetworkDevice : "device (Parent)"
NetworkBGPConfig ||--|{ NetworkBGPPeerGroup : "peer_groups (Component)"
NetworkBGPConfig ||--|{ NetworkBGPNeighbor : "neighbors (Component)"
SUBNET ||--|{ IP_ADDRESS : contains
IP_PREFIX ||--|{ SUBNET : parent
IpamIPPrefix ||--|{ IpamIPAddress : "contains (Generic)"
IpamIPPrefix ||--o| IpamIPPrefix : "parent (Attribute)"
ORGANIZATION {
NetworkDCISwitch ||--|{ NetworkDCIConnection : "connections (Component)"
NetworkDCIConnection }o--|| NetworkDevice : "border_leaf (Attribute)"
OrganizationOrganization {
string name
number asn_base
}
SITE {
LocationSite {
string name
string location
string status
}
DATACENTER {
InfraDatacenter {
string name
int dc_id
int number_of_bays
int spine_count
string parent_subnet
number dc_id
number number_of_bays
number spine_count
number border_leaf_count
IPNetwork parent_subnet
number bgp_base_asn
boolean dci_enabled
number leaf_pair_count_COMPUTED
number total_leaf_count_COMPUTED
number total_access_count_COMPUTED
}
DEVICE {
NetworkDevice {
string hostname
string role
string mgmt_ip
IPHost management_ip_COMPUTED
string platform
number spine_id
number leaf_id
}
INTERFACE {
NetworkInterface {
string name
string type
int mtu
string interface_type
boolean enabled
number mtu
string switchport_mode
number channel_id
}
IP_ADDRESS {
string address
int prefix_length
IpamIPAddress {
IPHost address
string status
}
BGP_SESSION {
int local_asn
int remote_asn
NetworkBGPConfig {
number asn
IPHost router_id
number maximum_paths
number distance_external
number distance_internal
number ebgp_admin_distance
}
NetworkMLAGDomain {
string domain_id
IPHost peer_address
string status
}
NetworkDCISwitch {
string hostname
IPHost loopback0_ip
IPHost management_ip
}
NetworkDCIConnection {
string connection_name
string status
IPHost dci_ip
IPHost border_ip
IPNetwork subnet
}
```
---
## 11. DCI Architecture - NetworkDCISwitch & NetworkDCIConnection
```mermaid
graph TB
subgraph DCILayer["🌐 DCI Layer - Separate Schema"]
DCI[NetworkDCISwitch<br/>hostname: DCI<br/>loopback0_ip: 10.253.0.1/32<br/>management_ip: 10.255.0.253]
end
subgraph DC1Border["DC1 Border Leafs"]
B1_DC1[borderleaf1-DC1<br/>NetworkDevice<br/>eth12 interface]
B2_DC1[borderleaf2-DC1<br/>NetworkDevice<br/>eth12 interface]
end
subgraph DC2Border["DC2 Border Leafs"]
B1_DC2[borderleaf1-DC2<br/>NetworkDevice<br/>eth12 interface]
B2_DC2[borderleaf2-DC2<br/>NetworkDevice<br/>eth12 interface]
end
subgraph Connections["NetworkDCIConnection Objects"]
Conn1[DCI-to-borderleaf1-DC1<br/>dci_interface: Ethernet1<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.1/31<br/>border_ip: 10.254.0.0/31<br/>status: shutdown]
Conn2[DCI-to-borderleaf2-DC1<br/>dci_interface: Ethernet2<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.3/31<br/>border_ip: 10.254.0.2/31<br/>status: shutdown]
Conn3[DCI-to-borderleaf1-DC2<br/>dci_interface: Ethernet3<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.5/31<br/>border_ip: 10.254.0.4/31<br/>status: shutdown]
Conn4[DCI-to-borderleaf2-DC2<br/>dci_interface: Ethernet4<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.7/31<br/>border_ip: 10.254.0.6/31<br/>status: shutdown]
end
DCI --> Conn1
DCI --> Conn2
DCI --> Conn3
DCI --> Conn4
Conn1 -.-> B1_DC1
Conn2 -.-> B2_DC1
Conn3 -.-> B1_DC2
Conn4 -.-> B2_DC2
style DCI fill:#ffccbc
style Conn1 fill:#fff9c4
style Conn2 fill:#fff9c4
style Conn3 fill:#fff9c4
style Conn4 fill:#fff9c4
```
**Key DCI Concepts:**
- **NetworkDCISwitch**: Separate schema object (NOT NetworkDevice)
- **NetworkDCIConnection**: Tracks P2P links with dedicated attributes
- **status**: Default is `shutdown`, changes to `active` when `dci_enabled: true`
- **border_interface**: Always Ethernet12 on border leafs
- **Relationship**: `NetworkDCIConnection.border_leaf` (Attribute) → `NetworkDevice`
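
The /31 addressing in the diagram follows a simple pattern: successive /31s carved from a `10.254.0.0/24` pool, with the DCI side taking the high address and the border leaf the low one. A sketch (the pool itself is read off the diagram, not from the schema):

```python
import ipaddress

def dci_link_addresses(link_index: int) -> tuple[str, str]:
    """Return (dci_ip, border_ip) for the Nth NetworkDCIConnection (0-based)."""
    pool = ipaddress.ip_network("10.254.0.0/24")
    link = list(pool.subnets(new_prefix=31))[link_index]
    return f"{link[1]}/31", f"{link[0]}/31"

print(dci_link_addresses(0))  # ('10.254.0.1/31', '10.254.0.0/31')
```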
---
## 12. Key Schema Attributes Summary
### InfraDatacenter Attributes
```yaml
# User Input (Required)
name: string # "DC1"
dc_id: number # 1
number_of_bays: number # 2 (default)
parent_subnet: IPNetwork # "10.0.0.0/8"
# Configuration (With Defaults)
spine_count: number # 3 (default)
border_leaf_count: number # 2 (default)
bgp_base_asn: number # 65000 (default)
spine_asn: number # Auto: bgp_base_asn + (dc_id * 100)
mlag_domain_id: string # "MLAG" (default)
mtu: number # 9214 (default)
has_border_leafs: boolean # true (default)
dci_enabled: boolean # false (default)
dci_remote_dc_id: number # Optional
# Computed (Read-Only)
leaf_pair_count: number # ceil(number_of_bays / 2)
total_leaf_count: number # leaf_pair_count * 2
total_access_count: number # number_of_bays
```
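
The ASN scheme behind `spine_asn` and the per-pair ASNs is plain arithmetic. A sketch matching the formulas above and the ASNs shown in the diagrams (65100/65101/65102/65103 for DC1):

```python
def asn_plan(dc_id: int, leaf_pair_count: int, bgp_base_asn: int = 65000) -> dict:
    """Spines share one ASN per DC; each leaf pair and then the border
    pair take the following consecutive ASNs."""
    spine_asn = bgp_base_asn + dc_id * 100
    return {
        "spine_asn": spine_asn,
        "leaf_pair_asns": [spine_asn + n for n in range(1, leaf_pair_count + 1)],
        "border_asn": spine_asn + leaf_pair_count + 1,
    }

print(asn_plan(1, 2))
# {'spine_asn': 65100, 'leaf_pair_asns': [65101, 65102], 'border_asn': 65103}
```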
### NetworkDevice Attributes
```yaml
hostname: string # "spine1-DC1"
role: dropdown # spine, leaf, borderleaf, access, host
platform: string # "cEOS" (default)
management_ip_template: string # Template for IP generation
management_ip: IPHost # Computed from template
spine_id: number # For spines
leaf_id: number # For leafs
mlag_side: dropdown # left (odd), right (even)
```
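
A sketch of how `management_ip` could be computed from a role-based template. Only the spine case (`10.255.0.11`) appears in the generator code in this diff; the other role offsets are assumptions that mirror the Loopback0 numbering (`.2x` leafs, `.3x` border leafs):

```python
def management_ip(role: str, index: int) -> str:
    """Compute a device's management IP from its role and per-role index.
    The spine offset (10) comes from the generator; the other offsets
    are assumed here for illustration."""
    offsets = {"spine": 10, "leaf": 20, "borderleaf": 30, "access": 40}
    return f"10.255.0.{offsets[role] + index}"

print(management_ip("spine", 1))  # 10.255.0.11, as in the generator code
```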
### NetworkInterface Types
```yaml
interface_type: dropdown
- ethernet # Physical interfaces
- loopback # Loopback0, Loopback1
- port_channel # Note: underscore, not hyphen!
- vlan # SVIs
- vxlan # VXLAN tunnels
- management # Management interface
```
### IpamIPPrefix Types
```yaml
prefix_type: dropdown
- pool # General pool
- loopback # Loopback0 addresses
- vtep # Loopback1 VTEP addresses
- p2p # Point-to-point links
- management # Management network
- tenant # Tenant/VLAN networks
- mlag # MLAG peer links
```
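
Tying sections 5 and 12 together, the per-DC prefix scheme can be written as a small table keyed by `prefix_type` and sanity-checked for overlaps. A sketch:

```python
import ipaddress

def prefix_plan(dc_id: int) -> list[tuple[str, str]]:
    """(prefix, prefix_type) pairs for one datacenter, per the scheme above."""
    return [
        (f"10.{dc_id}.0.0/24", "loopback"),
        (f"10.{dc_id}.1.0/24", "vtep"),
        (f"10.{dc_id}.10.0/24", "p2p"),
        (f"10.{dc_id}.20.0/24", "p2p"),
        (f"10.{dc_id}.255.0/24", "mlag"),
        ("10.255.0.0/24", "management"),
    ]

# The six /24s for one DC must be disjoint.
nets = [ipaddress.ip_network(p) for p, _ in prefix_plan(1)]
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
```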


@@ -0,0 +1,905 @@
"""
Datacenter Generator for Infrahub
==================================
This generator creates a complete datacenter fabric topology including:
- Spine switches (Layer 3 core)
- Leaf switches (Aggregation with VXLAN)
- Border leaf switches (DCI gateway capable)
- Access switches (Rack ToR)
- MLAG domains for leaf and border pairs
- IP prefixes and addresses
- BGP configuration
- Interfaces with proper connectivity
Architecture:
- Spine-Leaf topology with MLAG leaf pairs
- eBGP underlay (spine ASN vs leaf pair ASNs)
- VXLAN/EVPN overlay on leafs
- Optional DCI connectivity via border leafs (eth12 shutdown by default)
"""
import math
from typing import Any, Dict, List
from infrahub_sdk.generator import InfrahubGenerator
class DatacenterGenerator(InfrahubGenerator):
"""
Generates complete datacenter fabric topology from InfraDatacenter object.
"""
async def generate(self, data: dict) -> None:
"""
Main generator entry point.
Args:
data: GraphQL query response with datacenter details
"""
# Get datacenter object from query response
dc = self.nodes[0]
self.log.info(f"🚀 Starting datacenter generation for: {dc.name.value}")
self.log.info(f" DC ID: {dc.dc_id.value}")
self.log.info(f" Number of bays: {dc.number_of_bays.value}")
self.log.info(f" Spine count: {dc.spine_count.value}")
# Step 1: Calculate derived values
self.log.info("📊 Calculating topology parameters...")
topology = self._calculate_topology(dc)
# Step 2: Create IP prefixes
self.log.info("🌐 Creating IP prefixes...")
await self._create_ip_prefixes(dc, topology)
# Step 3: Create spine switches
self.log.info("🔴 Creating spine switches...")
spines = await self._create_spines(dc, topology)
# Step 4: Create leaf switches and MLAG pairs
self.log.info("🔵 Creating leaf switches and MLAG pairs...")
leaf_pairs = await self._create_leaf_pairs(dc, topology)
# Step 5: Create border leaf switches (if enabled)
border_pair = None
if dc.has_border_leafs.value:
self.log.info("🟣 Creating border leaf switches...")
border_pair = await self._create_border_pair(dc, topology)
# Step 6: Create access switches
self.log.info("🟡 Creating access switches...")
access_switches = await self._create_access_switches(dc, topology, leaf_pairs)
# Step 7: Create spine-to-leaf interfaces and BGP sessions
self.log.info("🔗 Creating spine-to-leaf connectivity...")
await self._create_spine_leaf_connectivity(
dc, spines, leaf_pairs, border_pair, topology
)
# Step 8: Create leaf-to-access connectivity
self.log.info("🔗 Creating leaf-to-access connectivity...")
await self._create_leaf_access_connectivity(
dc, leaf_pairs, access_switches, topology
)
# Step 9: Update datacenter computed attributes
self.log.info("✍️ Updating datacenter computed attributes...")
await self._update_datacenter_computed_fields(dc, topology)
# Summary
total_devices = (
len(spines)
+ sum(len(pair["devices"]) for pair in leaf_pairs)
+ len(access_switches)
)
if border_pair:
total_devices += len(border_pair["devices"])
self.log.info("=" * 60)
self.log.info(f"✅ Datacenter '{dc.name.value}' generation complete!")
self.log.info(f" Total devices: {total_devices}")
self.log.info(f" - Spines: {len(spines)}")
self.log.info(
f" - Leaf pairs: {len(leaf_pairs)} ({sum(len(pair['devices']) for pair in leaf_pairs)} devices)"
)
if border_pair:
self.log.info(f" - Border leafs: {len(border_pair['devices'])}")
self.log.info(f" - Access switches: {len(access_switches)}")
self.log.info("=" * 60)
def _calculate_topology(self, dc) -> Dict[str, Any]:
"""
Calculate topology parameters based on datacenter configuration.
Returns:
Dictionary with calculated values:
- leaf_pair_count: Number of MLAG leaf pairs needed
- total_leaf_count: Total number of leaf switches
- total_access_count: Total number of access switches
- spine_asn: ASN for spine switches
- base_leaf_asn: Starting ASN for leaf pairs
"""
dc_id = dc.dc_id.value
number_of_bays = dc.number_of_bays.value
bgp_base_asn = dc.bgp_base_asn.value
# Calculate leaf pairs: ceil(bays / 2)
# Each pair serves 2 bays
leaf_pair_count = math.ceil(number_of_bays / 2)
total_leaf_count = leaf_pair_count * 2
total_access_count = number_of_bays
# Calculate ASNs
# Spine ASN: base + (dc_id * 100)
# Example: DC1 → 65000 + 100 = 65100
spine_asn = (
dc.spine_asn.value if dc.spine_asn.value else bgp_base_asn + (dc_id * 100)
)
# Leaf pair ASNs: spine_asn + pair_number
# Example: DC1 Pair 1 → 65100 + 1 = 65101
base_leaf_asn = spine_asn + 1
# Border leaf ASN: base_leaf_asn + leaf_pair_count
border_asn = base_leaf_asn + leaf_pair_count
topology = {
"leaf_pair_count": leaf_pair_count,
"total_leaf_count": total_leaf_count,
"total_access_count": total_access_count,
"spine_asn": spine_asn,
"base_leaf_asn": base_leaf_asn,
"border_asn": border_asn,
}
self.log.debug(f"Topology calculated: {topology}")
return topology
async def _create_ip_prefixes(self, dc, topology: Dict[str, Any]) -> None:
"""
Create IP prefixes for the datacenter.
Prefix allocation scheme:
- 10.{dc_id}.0.0/24 - Loopback0 (router IDs)
- 10.{dc_id}.1.0/24 - Loopback1 (VTEP addresses)
- 10.{dc_id}.10.0/24 - Spine-Leaf P2P links
- 10.{dc_id}.20.0/24 - Leaf-Access P2P links
- 10.{dc_id}.255.0/24 - MLAG peer links
- 10.255.0.0/24 - Management IPs
"""
dc_id = dc.dc_id.value
dc_name = dc.name.value
# TODO: Get or create namespace
# For now, assume default namespace
prefixes = [
{
"prefix": f"10.{dc_id}.0.0/24",
"description": f"Loopback0 addresses for {dc_name}",
"prefix_type": "loopback",
},
{
"prefix": f"10.{dc_id}.1.0/24",
"description": f"VTEP Loopback1 addresses for {dc_name}",
"prefix_type": "vtep",
},
{
"prefix": f"10.{dc_id}.10.0/24",
"description": f"Spine-Leaf P2P links for {dc_name}",
"prefix_type": "p2p",
},
{
"prefix": f"10.{dc_id}.20.0/24",
"description": f"Leaf-Access P2P links for {dc_name}",
"prefix_type": "p2p",
},
{
"prefix": f"10.{dc_id}.255.0/24",
"description": f"MLAG peer links for {dc_name}",
"prefix_type": "mlag",
},
]
for prefix_data in prefixes:
prefix_obj = await self.client.create(
kind="IpamIPPrefix",
data={
"prefix": prefix_data["prefix"],
"description": prefix_data["description"],
"prefix_type": prefix_data["prefix_type"],
"status": "active",
"datacenter": dc.id,
},
)
await prefix_obj.save(allow_upsert=True)
self.log.debug(
f" Created prefix: {prefix_data['prefix']} ({prefix_data['prefix_type']})"
)
async def _create_spines(self, dc, topology: Dict[str, Any]) -> List[Any]:
"""
Create spine switches.
Naming: spine{1..N}-{DC_NAME}
IPs: 10.{dc_id}.0.{10+spine_num}/32
"""
dc_id = dc.dc_id.value
dc_name = dc.name.value
spine_count = dc.spine_count.value
spine_asn = topology["spine_asn"]
spines = []
for spine_num in range(1, spine_count + 1):
hostname = f"spine{spine_num}-{dc_name}"
loopback0_ip = f"10.{dc_id}.0.{10 + spine_num}"
mgmt_ip = f"10.255.0.{10 + spine_num}"
# Create device
device = await self.client.create(
kind="NetworkDevice",
data={
"hostname": hostname,
"description": f"Spine switch {spine_num} in {dc_name}",
"role": "spine",
"platform": "cEOS",
"datacenter": dc.id,
"site": dc.site.node.id,
"status": "active",
"spine_id": spine_num,
"management_ip_template": mgmt_ip,
},
)
await device.save(allow_upsert=True)
# Create Loopback0 interface
lo0 = await self.client.create(
kind="NetworkInterface",
data={
"name": "Loopback0",
"description": f"Router ID for {hostname}",
"interface_type": "loopback",
"enabled": True,
"device": device.id,
"loopback_id": 0,
"loopback_purpose": "router_id",
},
)
await lo0.save(allow_upsert=True)
# Create IP address for Loopback0
lo0_ip = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{loopback0_ip}/32",
"description": f"Loopback0 for {hostname}",
"status": "active",
"interface": lo0.id,
},
)
await lo0_ip.save(allow_upsert=True)
# Create BGP configuration
bgp_config = await self.client.create(
kind="NetworkBGPConfig",
data={
"asn": spine_asn,
"router_id": loopback0_ip,
"maximum_paths": 64,
"distance_external": 20,
"distance_internal": 200,
"ebgp_admin_distance": 200,
"default_ipv4_unicast": False,
"device": device.id,
},
)
await bgp_config.save(allow_upsert=True)
spines.append(
{
"device": device,
"loopback0": lo0,
"loopback0_ip": loopback0_ip,
"bgp_config": bgp_config,
}
)
self.log.info(
f" ✅ Created spine: {hostname} (ASN: {spine_asn}, Loopback0: {loopback0_ip})"
)
return spines
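The spine naming and addressing scheme is deterministic; a minimal sketch (hypothetical helper, not part of the generator) of what `_create_spines` produces per spine:

```python
def spine_plan(dc_id: int, dc_name: str, spine_count: int) -> list[tuple[str, str, str]]:
    """(hostname, Loopback0, management IP) for each spine, per the scheme above."""
    return [
        (f"spine{n}-{dc_name}", f"10.{dc_id}.0.{10 + n}", f"10.255.0.{10 + n}")
        for n in range(1, spine_count + 1)
    ]
```

The `10 + n` offset keeps spine loopbacks in `.11`-`.19` of the Loopback0 /24, clear of the leaf range that starts at `.21`.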
async def _create_leaf_pairs(
self, dc, topology: Dict[str, Any]
) -> List[Dict[str, Any]]:
"""
Create leaf switch pairs with MLAG configuration.
Naming: leaf{1..N}-{DC_NAME}
IPs:
- Loopback0: 10.{dc_id}.0.{20+leaf_num}/32
- Loopback1: 10.{dc_id}.1.{20+pair_num}/32 (shared)
"""
dc_id = dc.dc_id.value
dc_name = dc.name.value
leaf_pair_count = topology["leaf_pair_count"]
base_leaf_asn = topology["base_leaf_asn"]
leaf_pairs = []
for pair_num in range(1, leaf_pair_count + 1):
pair_asn = base_leaf_asn + (pair_num - 1)
vtep_ip = f"10.{dc_id}.1.{20 + pair_num}"
# Create MLAG domain
mlag_domain = await self.client.create(
kind="NetworkMLAGDomain",
data={
"domain_id": f"MLAG-leaf{pair_num * 2 - 1}-{pair_num * 2}-{dc_name}",
"local_interface": "Vlan4094",
"peer_interface": "Vlan4094",
"peer_address": f"10.{dc_id}.255.{pair_num * 4 - 2}", # Will be set properly per device
"status": "active",
},
)
await mlag_domain.save(allow_upsert=True)
pair_devices = []
# Create 2 leafs in the pair (odd and even)
for side_num in range(2):
leaf_num = pair_num * 2 - 1 + side_num # 1, 2, 3, 4...
hostname = f"leaf{leaf_num}-{dc_name}"
loopback0_ip = f"10.{dc_id}.0.{20 + leaf_num}"
mgmt_ip = f"10.255.0.{20 + leaf_num}"
mlag_side = "left" if side_num == 0 else "right"
# Create device
device = await self.client.create(
kind="NetworkDevice",
data={
"hostname": hostname,
"description": f"Leaf switch {leaf_num} in {dc_name} (Pair {pair_num})",
"role": "leaf",
"platform": "cEOS",
"datacenter": dc.id,
"site": dc.site.node.id,
"status": "active",
"leaf_id": leaf_num,
"mlag_side": mlag_side,
"mlag_domain": mlag_domain.id,
"management_ip_template": mgmt_ip,
},
)
await device.save(allow_upsert=True)
# Create Loopback0
lo0 = await self.client.create(
kind="NetworkInterface",
data={
"name": "Loopback0",
"description": f"Router ID for {hostname}",
"interface_type": "loopback",
"enabled": True,
"device": device.id,
"loopback_id": 0,
"loopback_purpose": "router_id",
},
)
await lo0.save(allow_upsert=True)
lo0_ip = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{loopback0_ip}/32",
"description": f"Loopback0 for {hostname}",
"status": "active",
"interface": lo0.id,
},
)
await lo0_ip.save(allow_upsert=True)
# Create Loopback1 (VTEP) - shared IP
lo1 = await self.client.create(
kind="NetworkInterface",
data={
"name": "Loopback1",
"description": f"VTEP for {hostname} (shared with pair)",
"interface_type": "loopback",
"enabled": True,
"device": device.id,
"loopback_id": 1,
"loopback_purpose": "vtep",
},
)
await lo1.save(allow_upsert=True)
lo1_ip = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{vtep_ip}/32",
"description": f"VTEP shared for leaf pair {pair_num}",
"status": "active",
"interface": lo1.id,
},
)
await lo1_ip.save(allow_upsert=True)
# Create BGP configuration
bgp_config = await self.client.create(
kind="NetworkBGPConfig",
data={
"asn": pair_asn,
"router_id": loopback0_ip,
"maximum_paths": 64,
"distance_external": 20,
"distance_internal": 200,
"ebgp_admin_distance": 200,
"default_ipv4_unicast": False,
"device": device.id,
},
)
await bgp_config.save(allow_upsert=True)
# Create VXLAN interface
vxlan1 = await self.client.create(
kind="NetworkVXLANTunnel",
data={
"name": "Vxlan1",
"source_ip": vtep_ip,
"udp_port": 4789,
"device": device.id,
},
)
await vxlan1.save(allow_upsert=True)
# Create EVPN config
evpn = await self.client.create(
kind="NetworkEVPNConfig",
data={
"vni_auto": True,
"device": device.id,
},
)
await evpn.save(allow_upsert=True)
pair_devices.append(
{
"device": device,
"loopback0": lo0,
"loopback0_ip": loopback0_ip,
"loopback1": lo1,
"vtep_ip": vtep_ip,
"bgp_config": bgp_config,
}
)
self.log.info(
f" ✅ Created leaf: {hostname} (ASN: {pair_asn}, VTEP: {vtep_ip})"
)
leaf_pairs.append(
{
"pair_num": pair_num,
"asn": pair_asn,
"mlag_domain": mlag_domain,
"devices": pair_devices,
"vtep_ip": vtep_ip,
}
)
return leaf_pairs
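The per-pair arithmetic above (one ASN per MLAG pair, a shared anycast VTEP, odd/even member numbering) can be sketched standalone (function name assumed for illustration):

```python
def leaf_pair_plan(dc_id: int, dc_name: str, base_leaf_asn: int, pair_num: int):
    """ASN, shared VTEP, and (hostname, Loopback0) members for one MLAG leaf pair."""
    pair_asn = base_leaf_asn + (pair_num - 1)   # one ASN per pair, not per switch
    vtep_ip = f"10.{dc_id}.1.{20 + pair_num}"   # Loopback1, shared by both members
    members = []
    for side in range(2):
        leaf_num = pair_num * 2 - 1 + side      # pair 1 -> leafs 1,2; pair 2 -> 3,4 ...
        members.append((f"leaf{leaf_num}-{dc_name}", f"10.{dc_id}.0.{20 + leaf_num}"))
    return pair_asn, vtep_ip, members
```

Sharing the ASN and the VTEP address within a pair is what lets remote VTEPs treat the MLAG pair as a single EVPN endpoint.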
async def _create_border_pair(self, dc, topology: Dict[str, Any]) -> Dict[str, Any]:
"""
Create border leaf pair for DCI connectivity.
Naming: borderleaf{1,2}-{DC_NAME}
IPs: 10.{dc_id}.0.{30+num}/32
eth12: created but shut down unless dci_enabled is true
"""
dc_id = dc.dc_id.value
dc_name = dc.name.value
border_asn = topology["border_asn"]
border_count = dc.border_leaf_count.value
# Create MLAG domain
mlag_domain = await self.client.create(
kind="NetworkMLAGDomain",
data={
"domain_id": f"MLAG-border-{dc_name}",
"local_interface": "Vlan4094",
"peer_interface": "Vlan4094",
"peer_address": f"10.{dc_id}.255.254",
"status": "active",
},
)
await mlag_domain.save(allow_upsert=True)
border_devices = []
for border_num in range(1, border_count + 1):
hostname = f"borderleaf{border_num}-{dc_name}"
loopback0_ip = f"10.{dc_id}.0.{30 + border_num}"
mgmt_ip = f"10.255.0.{30 + border_num}"
mlag_side = "left" if border_num == 1 else "right"
# Create device
device = await self.client.create(
kind="NetworkDevice",
data={
"hostname": hostname,
"description": f"Border leaf {border_num} in {dc_name} (DCI capable)",
"role": "borderleaf",
"platform": "cEOS",
"datacenter": dc.id,
"site": dc.site.node.id,
"status": "active",
"mlag_side": mlag_side,
"mlag_domain": mlag_domain.id,
"management_ip_template": mgmt_ip,
},
)
await device.save(allow_upsert=True)
# Create Loopback0
lo0 = await self.client.create(
kind="NetworkInterface",
data={
"name": "Loopback0",
"description": f"Router ID for {hostname}",
"interface_type": "loopback",
"enabled": True,
"device": device.id,
"loopback_id": 0,
"loopback_purpose": "router_id",
},
)
await lo0.save(allow_upsert=True)
lo0_ip = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{loopback0_ip}/32",
"description": f"Loopback0 for {hostname}",
"status": "active",
"interface": lo0.id,
},
)
await lo0_ip.save(allow_upsert=True)
# Create eth12 for DCI (shutdown by default)
eth12 = await self.client.create(
kind="NetworkInterface",
data={
"name": "Ethernet12",
"description": "DCI interface to DCI switch (shutdown unless dci_enabled=true)",
"interface_type": "ethernet",
"enabled": bool(dc.dci_enabled.value),
"mtu": dc.mtu.value,
"device": device.id,
},
)
await eth12.save(allow_upsert=True)
# Create BGP configuration
bgp_config = await self.client.create(
kind="NetworkBGPConfig",
data={
"asn": border_asn,
"router_id": loopback0_ip,
"maximum_paths": 64,
"distance_external": 20,
"distance_internal": 200,
"ebgp_admin_distance": 200,
"default_ipv4_unicast": False,
"device": device.id,
},
)
await bgp_config.save(allow_upsert=True)
border_devices.append(
{
"device": device,
"loopback0": lo0,
"loopback0_ip": loopback0_ip,
"bgp_config": bgp_config,
"eth12": eth12,
}
)
self.log.info(
f" ✅ Created border leaf: {hostname} (ASN: {border_asn}, eth12: {'enabled' if dc.dci_enabled.value else 'shutdown'})"
)
return {
"asn": border_asn,
"mlag_domain": mlag_domain,
"devices": border_devices,
}
async def _create_access_switches(
self, dc, topology: Dict[str, Any], leaf_pairs: List[Dict[str, Any]]
) -> List[Any]:
"""
Create access switches and assign to leaf pairs.
Assignment: blocks of two consecutive bays per leaf pair (bays 1-2 → pair 1, bays 3-4 → pair 2, ...)
Naming: access{bay_num}-{DC_NAME}
"""
dc_name = dc.name.value
number_of_bays = dc.number_of_bays.value
access_switches = []
for bay_num in range(1, number_of_bays + 1):
hostname = f"access{bay_num}-{dc_name}"
mgmt_ip = f"10.255.0.{100 + bay_num}"
# Assign to leaf pair: two consecutive bays share one pair
assigned_pair_idx = (bay_num - 1) // 2
assigned_pair = leaf_pairs[assigned_pair_idx]
# Create device
device = await self.client.create(
kind="NetworkDevice",
data={
"hostname": hostname,
"description": f"Access switch for Bay {bay_num} in {dc_name}",
"role": "access",
"platform": "cEOS",
"datacenter": dc.id,
"site": dc.site.node.id,
"status": "active",
"management_ip_template": mgmt_ip,
},
)
await device.save(allow_upsert=True)
access_switches.append(
{
"device": device,
"bay_num": bay_num,
"assigned_pair": assigned_pair,
}
)
self.log.info(
f" ✅ Created access: {hostname} (Bay {bay_num} → Leaf Pair {assigned_pair['pair_num']})"
)
return access_switches
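The bay-to-pair mapping above reduces to one integer division; as a sketch (helper name is illustrative only):

```python
def bay_to_pair(bay_num: int) -> int:
    """1-based leaf-pair index for a bay: consecutive blocks of two bays per pair."""
    return (bay_num - 1) // 2 + 1
```

So bays 1-2 land on pair 1, bays 3-4 on pair 2, and so on, which is why `leaf_pair_count` must be at least `ceil(number_of_bays / 2)`.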
async def _create_spine_leaf_connectivity(
self,
dc,
spines: List[Any],
leaf_pairs: List[Dict[str, Any]],
border_pair: Dict[str, Any],
topology: Dict[str, Any],
) -> None:
"""
Create interfaces and BGP sessions between spines and leafs (including borders).
Each leaf connects to ALL spines.
Interface assignment:
- Spine side: Ethernet{1..total_leafs}, one port per leaf (borders included)
- Leaf side: Ethernet{1..spine_count}, one port per spine
"""
dc_id = dc.dc_id.value
# P2P IP allocator (starting from 10.{dc_id}.10.0/31)
p2p_counter = 0
# Collect all leaf devices (regular + border)
all_leaf_devices = []
for pair in leaf_pairs:
all_leaf_devices.extend(pair["devices"])
if border_pair:
all_leaf_devices.extend(border_pair["devices"])
# For each spine
for spine_idx, spine in enumerate(spines):
spine_device = spine["device"]
# Connect to each leaf
for leaf_idx, leaf in enumerate(all_leaf_devices):
leaf_device = leaf["device"]
# Allocate P2P subnet
spine_ip = f"10.{dc_id}.10.{p2p_counter * 2}"
leaf_ip = f"10.{dc_id}.10.{p2p_counter * 2 + 1}"
p2p_counter += 1
# Spine interface
spine_eth = f"Ethernet{leaf_idx + 1}"
spine_int = await self.client.create(
kind="NetworkInterface",
data={
"name": spine_eth,
"description": f"To {leaf_device.hostname.value}",
"interface_type": "ethernet",
"enabled": True,
"mtu": dc.mtu.value,
"device": spine_device.id,
},
)
await spine_int.save(allow_upsert=True)
spine_ip_obj = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{spine_ip}/31",
"description": f"{spine_device.hostname.value} to {leaf_device.hostname.value}",
"status": "active",
"interface": spine_int.id,
},
)
await spine_ip_obj.save(allow_upsert=True)
# Leaf interface
leaf_eth = f"Ethernet{spine_idx + 1}"
leaf_int = await self.client.create(
kind="NetworkInterface",
data={
"name": leaf_eth,
"description": f"To {spine_device.hostname.value}",
"interface_type": "ethernet",
"enabled": True,
"mtu": dc.mtu.value,
"device": leaf_device.id,
},
)
await leaf_int.save(allow_upsert=True)
leaf_ip_obj = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{leaf_ip}/31",
"description": f"{leaf_device.hostname.value} to {spine_device.hostname.value}",
"status": "active",
"interface": leaf_int.id,
},
)
await leaf_ip_obj.save(allow_upsert=True)
# Create BGP neighbor on spine
spine_bgp = spine["bgp_config"]
spine_neighbor = await self.client.create(
kind="NetworkBGPNeighbor",
data={
"neighbor_ip": leaf_ip,
"description": f"To {leaf_device.hostname.value}",
"enabled": True,
"peer_type": "ebgp",
"bgp_config": spine_bgp.id,
"local_interface": spine_int.id,
"remote_device": leaf_device.id,
},
)
await spine_neighbor.save(allow_upsert=True)
# Create BGP neighbor on leaf
leaf_bgp = leaf["bgp_config"]
leaf_neighbor = await self.client.create(
kind="NetworkBGPNeighbor",
data={
"neighbor_ip": spine_ip,
"description": f"To {spine_device.hostname.value}",
"enabled": True,
"peer_type": "ebgp",
"bgp_config": leaf_bgp.id,
"local_interface": leaf_int.id,
"remote_device": spine_device.id,
},
)
await leaf_neighbor.save(allow_upsert=True)
self.log.debug(f" Created {p2p_counter} spine-leaf P2P links")
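Each spine-leaf link consumes one /31 out of `10.{dc_id}.10.0/24`, so a single /24 caps the fabric at 128 such links. A sketch of the allocator (standalone, name assumed):

```python
def p2p_addresses(link_index: int, dc_id: int) -> tuple[str, str]:
    """n-th /31 from 10.{dc_id}.10.0/24: spine takes the even host, leaf the odd one."""
    assert link_index < 128, "a /24 only holds 128 /31 links"
    return (f"10.{dc_id}.10.{link_index * 2}", f"10.{dc_id}.10.{link_index * 2 + 1}")
```

With `spines * (leafs + borders)` links per fabric, the 128-link ceiling is worth checking before scaling the topology.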
async def _create_leaf_access_connectivity(
self,
dc,
leaf_pairs: List[Dict[str, Any]],
access_switches: List[Any],
topology: Dict[str, Any],
) -> None:
"""
Create dual-homed connectivity from access switches to leaf pairs.
Each access connects to both leafs in assigned pair:
- access eth1 → leaf_left eth7
- access eth2 → leaf_right eth7
"""
dc_id = dc.dc_id.value
# P2P IP allocator (starting from 10.{dc_id}.20.0/31)
p2p_counter = 0
for access in access_switches:
access_device = access["device"]
assigned_pair = access["assigned_pair"]
# Connect to both leafs in the pair
for link_num, leaf_data in enumerate(assigned_pair["devices"], start=1):
leaf_device = leaf_data["device"]
# Allocate P2P subnet
leaf_ip = f"10.{dc_id}.20.{p2p_counter * 2}"
access_ip = f"10.{dc_id}.20.{p2p_counter * 2 + 1}"
p2p_counter += 1
# Leaf interface: eth7 for the pair's first bay, eth8 for the second,
# so the two access switches sharing a leaf pair use distinct leaf ports
leaf_eth = f"Ethernet{7 + (access['bay_num'] - 1) % 2}"
leaf_int = await self.client.create(
kind="NetworkInterface",
data={
"name": leaf_eth,
"description": f"To {access_device.hostname.value}",
"interface_type": "ethernet",
"enabled": True,
"mtu": dc.mtu.value,
"device": leaf_device.id,
},
)
await leaf_int.save(allow_upsert=True)
leaf_ip_obj = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{leaf_ip}/31",
"description": f"{leaf_device.hostname.value} to {access_device.hostname.value}",
"status": "active",
"interface": leaf_int.id,
},
)
await leaf_ip_obj.save(allow_upsert=True)
# Access interface
access_eth = f"Ethernet{link_num}"
access_int = await self.client.create(
kind="NetworkInterface",
data={
"name": access_eth,
"description": f"To {leaf_device.hostname.value}",
"interface_type": "ethernet",
"enabled": True,
"mtu": dc.mtu.value,
"device": access_device.id,
},
)
await access_int.save(allow_upsert=True)
access_ip_obj = await self.client.create(
kind="IpamIPAddress",
data={
"address": f"{access_ip}/31",
"description": f"{access_device.hostname.value} to {leaf_device.hostname.value}",
"status": "active",
"interface": access_int.id,
},
)
await access_ip_obj.save(allow_upsert=True)
self.log.debug(f" Created {p2p_counter} leaf-access P2P links")
async def _update_datacenter_computed_fields(
self, dc, topology: Dict[str, Any]
) -> None:
"""
Update the datacenter object with computed values.
"""
# Update computed attributes
dc.leaf_pair_count.value = topology["leaf_pair_count"]
dc.total_leaf_count.value = topology["total_leaf_count"]
dc.total_access_count.value = topology["total_access_count"]
await dc.save()
self.log.debug(
f" Updated computed fields: leaf_pair_count={topology['leaf_pair_count']}, "
f"total_leaf_count={topology['total_leaf_count']}, "
f"total_access_count={topology['total_access_count']}"
)


@@ -0,0 +1,55 @@
# GraphQL query to fetch InfraDatacenter with all required attributes
# This query retrieves all necessary data for the datacenter generator
query DatacenterQuery($datacenter_id: ID!) {
InfraDatacenter(ids: [$datacenter_id]) {
edges {
node {
id
__typename
# Basic attributes
name { value }
dc_id { value }
description { value }
# Topology configuration
number_of_bays { value }
spine_count { value }
border_leaf_count { value }
# Network configuration
parent_subnet { value }
bgp_base_asn { value }
spine_asn { value }
mlag_domain_id { value }
mtu { value }
# DCI configuration
dci_enabled { value }
dci_remote_dc_id { value }
has_border_leafs { value }
# Status
status { value }
# Relationships
site {
node {
id
__typename
name { value }
organization {
node {
id
__typename
name { value }
asn_base { value }
}
}
}
}
}
}
}
}


@@ -26,6 +26,7 @@ nodes:
label: "IP Prefix"
icon: "mdi:ip-network-outline"
include_in_menu: true
+menu_placement: "IpamNamespace"
human_friendly_id: ["prefix__value"]
display_label: "prefix__value"
order_by:
@@ -114,6 +115,7 @@ nodes:
label: "IP Address"
icon: "mdi:ip"
include_in_menu: true
+menu_placement: "IpamNamespace"
human_friendly_id: ["address__value"]
display_label: "address__value"
order_by:


@@ -8,7 +8,6 @@ nodes:
label: "Datacenter"
icon: "mdi:server-network"
include_in_menu: true
-menu_placement: "LocationSite"
human_friendly_id: ["name__value"]
display_label: "name__value"
order_by:


@@ -8,6 +8,7 @@ nodes:
label: "Interface"
icon: "mdi:ethernet"
include_in_menu: true
+menu_placement: "NetworkDevice"
human_friendly_id: ["device__hostname__value", "name__value"]
display_label: "name__value"
order_by:


@@ -8,6 +8,7 @@ nodes:
label: "BGP Configuration"
icon: "mdi:routes"
include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["asn__value"]
display_label: "asn__value"
order_by:
@@ -100,7 +101,8 @@ nodes:
namespace: Network
label: "BGP Peer Group"
icon: "mdi:account-group"
-include_in_menu: false
+include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["name__value"]
display_label: "name__value"
order_by:
@@ -170,7 +172,8 @@ nodes:
namespace: Network
label: "BGP Neighbor"
icon: "mdi:account-network"
-include_in_menu: false
+include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["neighbor_ip__value"]
display_label: "neighbor_ip__value"
order_by:
@@ -235,7 +238,8 @@ nodes:
namespace: Network
label: "BGP Address Family"
icon: "mdi:family-tree"
-include_in_menu: false
+include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["afi__value", "safi__value"]
display_label: "afi__value"
attributes:


@@ -8,6 +8,7 @@ nodes:
label: "MLAG Domain"
icon: "mdi:link-variant"
include_in_menu: true
+menu_placement: "NetworkVLAN"
human_friendly_id: ["domain_id__value"]
display_label: "domain_id__value"
order_by:
@@ -65,8 +66,9 @@ nodes:
- name: MLAGInterface
namespace: Network
label: "MLAG Interface"
-icon: "mdi:ethernet-plus"
-include_in_menu: false
+icon: "mdi:ethernet"
+include_in_menu: true
+menu_placement: "NetworkVLAN"
human_friendly_id: ["mlag_domain__domain_id__value", "mlag_id__value"]
display_label: "mlag_id__value"
attributes:
@@ -115,6 +117,7 @@ nodes:
label: "VXLAN Tunnel"
icon: "mdi:tunnel"
include_in_menu: true
+menu_placement: "NetworkVLAN"
human_friendly_id: ["name__value"]
display_label: "name__value"
attributes:


@@ -8,6 +8,7 @@ nodes:
label: "Route Map"
icon: "mdi:map-marker-path"
include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["name__value"]
display_label: "name__value"
order_by:
@@ -56,6 +57,7 @@ nodes:
label: "Prefix List"
icon: "mdi:format-list-numbered"
include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["name__value"]
display_label: "name__value"
order_by:
@@ -124,7 +126,8 @@ nodes:
namespace: Network
label: "OSPF Configuration"
icon: "mdi:router"
-include_in_menu: false
+include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["device__hostname__value"]
display_label: "process_id__value"
attributes:
@@ -163,7 +166,8 @@ nodes:
namespace: Network
label: "OSPF Area"
icon: "mdi:circle-outline"
-include_in_menu: false
+include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["area_id__value"]
display_label: "area_id__value"
attributes:
@@ -204,7 +208,8 @@ nodes:
namespace: Network
label: "OSPF Interface"
icon: "mdi:ethernet"
-include_in_menu: false
+include_in_menu: true
+menu_placement: "NetworkVRF"
human_friendly_id: ["identifier__value"]
display_label: "network_type__value"
attributes:


@@ -8,6 +8,7 @@ nodes:
label: "DCI Interconnect Switch"
icon: "mdi:transit-connection-variant"
include_in_menu: true
+menu_placement: "InfraDatacenter"
human_friendly_id: ["hostname__value"]
display_label: "hostname__value"
order_by:
@@ -112,7 +113,7 @@ nodes:
label: "DCI Connection"
icon: "mdi:cable-data"
include_in_menu: true
-menu_placement: "NetworkDCISwitch"
+menu_placement: "InfraDatacenter"
human_friendly_id: ["dci_switch__hostname__value", "border_leaf__hostname__value"]
display_label: "connection_name__value"
order_by: