# Datacenter Generator - Visual Diagrams

Schema-accurate version - corrected to match the actual Infrahub schemas.
## 1. Generator Concept - High Level

```mermaid
graph TB
    subgraph Input["📥 INPUT"]
        DC1[InfraDatacenter: DC1<br/>---<br/>dc_id: 1<br/>number_of_bays: 2<br/>dci_enabled: false]
        DC2[InfraDatacenter: DC2<br/>---<br/>dc_id: 2<br/>number_of_bays: 2<br/>dci_enabled: false]
        DCI_Config[NetworkDCISwitch<br/>---<br/>Manual/External<br/>Loopback0: 10.253.0.1]
    end
    subgraph Logic["⚙️ DATACENTER GENERATOR"]
        Gen1[DC1 Generator:<br/>Create 11 NetworkDevice objects<br/>Border eth12: shutdown]
        Gen2[DC2 Generator:<br/>Create 11 NetworkDevice objects<br/>Border eth12: shutdown]
    end
    subgraph Output1["📤 DC1 OUTPUT"]
        Dev1[11 NetworkDevice Objects]
        Int1[~50 NetworkInterface Objects]
        IP1[~30 IpamIPAddress Objects]
    end
    subgraph Output2["📤 DC2 OUTPUT"]
        Dev2[11 NetworkDevice Objects]
        Int2[~50 NetworkInterface Objects]
        IP2[~30 IpamIPAddress Objects]
    end
    subgraph Manual["🔧 MANUAL DCI SETUP"]
        DCIDevice[NetworkDCISwitch<br/>Loopback0: 10.253.0.1<br/>ASN: 65000]
        UpdateDC1[Update DC1:<br/>dci_enabled: true<br/>dci_remote_dc_id: 2]
        UpdateDC2[Update DC2:<br/>dci_enabled: true<br/>dci_remote_dc_id: 1]
    end
    DC1 --> Gen1
    DC2 --> Gen2
    Gen1 --> Dev1
    Gen1 --> Int1
    Gen1 --> IP1
    Gen2 --> Dev2
    Gen2 --> Int2
    Gen2 --> IP2
    Output1 --> Manual
    Output2 --> Manual
    DCI_Config --> DCIDevice
    DCIDevice --> UpdateDC1
    DCIDevice --> UpdateDC2
    style DC1 fill:#e1f5ff
    style DC2 fill:#e1f5ff
    style Manual fill:#fff9c4
    style DCIDevice fill:#ffccbc
```
## 2. Datacenter Hierarchy

```mermaid
graph TB
    Org[OrganizationOrganization]
    Site[LocationSite: Paris-DC1<br/>Location: Paris, France]
    DC[InfraDatacenter: DC1<br/>dc_id: 1<br/>number_of_bays: 2<br/>spine_count: 3]
    Org --> Site
    Site --> DC
    DC --> SpineLayer[Spine Layer]
    DC --> LeafLayer[Leaf Layer]
    DC --> BorderLayer[Border Layer]
    DC --> BayLayer[Bay Layer]
    DC --> Subnets[IP Subnets]
    SpineLayer --> S1[spine1-DC1<br/>NetworkDevice<br/>ASN: 65100]
    SpineLayer --> S2[spine2-DC1<br/>NetworkDevice<br/>ASN: 65100]
    SpineLayer --> S3[spine3-DC1<br/>NetworkDevice<br/>ASN: 65100]
    LeafLayer --> LP1[NetworkMLAGDomain: Pair 1<br/>ASN: 65101]
    LeafLayer --> LP2[NetworkMLAGDomain: Pair 2<br/>ASN: 65102]
    LP1 --> L1[leaf1-DC1<br/>Loopback0: 10.1.0.21]
    LP1 --> L2[leaf2-DC1<br/>Loopback0: 10.1.0.22]
    LP2 --> L3[leaf3-DC1<br/>Loopback0: 10.1.0.23]
    LP2 --> L4[leaf4-DC1<br/>Loopback0: 10.1.0.24]
    BorderLayer --> BP[NetworkMLAGDomain: Border<br/>ASN: 65103]
    BP --> B1[borderleaf1-DC1]
    BP --> B2[borderleaf2-DC1]
    BayLayer --> Bay1[InfraBay 1]
    BayLayer --> Bay2[InfraBay 2]
    Bay1 --> A1[access1-DC1]
    Bay2 --> A2[access2-DC1]
    Subnets --> Sub1[10.1.0.0/24<br/>IpamIPPrefix<br/>type: loopback]
    Subnets --> Sub2[10.1.1.0/24<br/>IpamIPPrefix<br/>type: vtep]
    Subnets --> Sub3[10.1.10.0/24<br/>IpamIPPrefix<br/>type: p2p]
    Subnets --> Sub4[10.1.20.0/24<br/>IpamIPPrefix<br/>type: p2p]
    style DC fill:#ffecb3
    style LP1 fill:#b3e5fc
    style LP2 fill:#b3e5fc
    style Bay1 fill:#c8e6c9
    style Bay2 fill:#c8e6c9
```
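The ASN numbering shown in the hierarchy follows a simple pattern. The `spine_asn` formula comes from the attribute summary later in this document; the per-pair offset is an assumption read off the diagram values (65101, 65102, 65103), sketched here for illustration:

```python
def spine_asn(bgp_base_asn: int, dc_id: int) -> int:
    # All spines in a DC share one ASN, e.g. 65000 + 1 * 100 = 65100 for DC1.
    return bgp_base_asn + dc_id * 100


def pair_asn(bgp_base_asn: int, dc_id: int, pair_index: int) -> int:
    # Each MLAG pair offsets from the spine ASN: pair 1 -> 65101,
    # pair 2 -> 65102; the border pair continues the sequence (65103).
    return spine_asn(bgp_base_asn, dc_id) + pair_index
```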
## 3. Bay-to-Leaf Assignment Logic

```mermaid
graph LR
    subgraph Bays["🏢 InfraBay"]
        Bay1[Bay 1<br/>access1-DC1]
        Bay2[Bay 2<br/>access2-DC1]
        Bay3[Bay 3<br/>access3-DC1]
        Bay4[Bay 4<br/>access4-DC1]
    end
    subgraph LeafPairs["🔀 NetworkMLAGDomain - Leaf Pairs"]
        LP1[Leaf Pair 1<br/>leaf1-DC1 ↔ leaf2-DC1<br/>ASN: 65101]
        LP2[Leaf Pair 2<br/>leaf3-DC1 ↔ leaf4-DC1<br/>ASN: 65102]
    end
    subgraph Spines["⬆️ Spine Layer"]
        S1[spine1-DC1]
        S2[spine2-DC1]
        S3[spine3-DC1]
    end
    Bay1 -.2 uplinks.- LP1
    Bay2 -.2 uplinks.- LP1
    Bay3 -.2 uplinks.- LP2
    Bay4 -.2 uplinks.- LP2
    LP1 --> S1
    LP1 --> S2
    LP1 --> S3
    LP2 --> S1
    LP2 --> S2
    LP2 --> S3
    style Bay1 fill:#c8e6c9
    style Bay2 fill:#c8e6c9
    style Bay3 fill:#ffccbc
    style Bay4 fill:#ffccbc
    style LP1 fill:#b3e5fc
    style LP2 fill:#b3e5fc
```
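The bay-to-pair mapping above (two bays per MLAG leaf pair) reduces to one line of arithmetic; the helper name is illustrative, not part of the schema:

```python
def leaf_pair_for_bay(bay_id: int) -> int:
    # Two bays share one MLAG leaf pair: bays 1-2 -> pair 1,
    # bays 3-4 -> pair 2, matching the dotted uplinks in the diagram.
    return (bay_id - 1) // 2 + 1
```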
## 4. Complete DC1 Physical Topology

```mermaid
graph TB
    subgraph DCI["DCI LAYER - Inter-DC"]
        DCID[NetworkDCISwitch<br/>Loopback0: 10.253.0.1<br/>ASN: 65000]
    end
    subgraph Border["BORDER LEAF - DCI Gateway"]
        B1[borderleaf1-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.31<br/>eth12: shutdown or active]
        B2[borderleaf2-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.32<br/>eth12: shutdown or active]
    end
    subgraph Spine["SPINE LAYER - L3 Core"]
        S1[spine1-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.11<br/>ASN: 65100]
        S2[spine2-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.12<br/>ASN: 65100]
        S3[spine3-DC1<br/>NetworkDevice<br/>Loopback0: 10.1.0.13<br/>ASN: 65100]
    end
    subgraph Leaf["LEAF LAYER - Aggregation + VXLAN"]
        subgraph Pair1["NetworkMLAGDomain - Pair 1 - ASN: 65101"]
            L1[leaf1-DC1<br/>Loopback0: 10.1.0.21<br/>VTEP: 10.1.1.21]
            L2[leaf2-DC1<br/>Loopback0: 10.1.0.22<br/>VTEP: 10.1.1.21]
        end
        subgraph Pair2["NetworkMLAGDomain - Pair 2 - ASN: 65102"]
            L3[leaf3-DC1<br/>Loopback0: 10.1.0.23<br/>VTEP: 10.1.1.23]
            L4[leaf4-DC1<br/>Loopback0: 10.1.0.24<br/>VTEP: 10.1.1.23]
        end
    end
    subgraph Access["ACCESS LAYER - Rack/Bay ToR"]
        A1[access1-DC1<br/>NetworkDevice<br/>InfraBay 1<br/>L2 Switch]
        A2[access2-DC1<br/>NetworkDevice<br/>InfraBay 2<br/>L2 Switch]
    end
    subgraph Hosts["HOST LAYER"]
        H1[host1-DC1<br/>NetworkDevice<br/>172.16.100.10]
        H2[host2-DC1<br/>NetworkDevice<br/>172.16.200.10]
    end
    DCID -.Optional NetworkDCIConnection.- B1
    DCID -.Optional NetworkDCIConnection.- B2
    S1 -.eBGP - NetworkBGPNeighbor.- L1
    S1 -.eBGP.- L2
    S1 -.eBGP.- L3
    S1 -.eBGP.- L4
    S1 -.eBGP.- B1
    S1 -.eBGP.- B2
    S2 -.eBGP.- L1
    S2 -.eBGP.- L2
    S2 -.eBGP.- L3
    S2 -.eBGP.- L4
    S2 -.eBGP.- B1
    S2 -.eBGP.- B2
    S3 -.eBGP.- L1
    S3 -.eBGP.- L2
    S3 -.eBGP.- L3
    S3 -.eBGP.- L4
    S3 -.eBGP.- B1
    S3 -.eBGP.- B2
    L1 ---|MLAG - Vlan4094| L2
    L3 ---|MLAG - Vlan4094| L4
    B1 ---|MLAG - Vlan4094| B2
    L1 -->|eth7 - NetworkInterface| A1
    L2 -->|eth7 - NetworkInterface| A1
    L3 -->|eth7 - NetworkInterface| A2
    L4 -->|eth7 - NetworkInterface| A2
    A1 -->|eth10| H1
    A2 -->|eth10| H2
    style DCID fill:#ffccbc
    style S1 fill:#ffccbc
    style S2 fill:#ffccbc
    style S3 fill:#ffccbc
    style L1 fill:#b3e5fc
    style L2 fill:#b3e5fc
    style L3 fill:#b3e5fc
    style L4 fill:#b3e5fc
    style A1 fill:#c8e6c9
    style A2 fill:#c8e6c9
    style B1 fill:#f8bbd0
    style B2 fill:#f8bbd0
```
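The eBGP underlay drawn above is a full spine-to-leaf mesh: every spine peers with every leaf and border leaf. A minimal sketch of enumerating those sessions (device names are illustrative):

```python
from itertools import product


def ebgp_sessions(spines: list[str], leafs: list[str]) -> list[tuple[str, str]]:
    # One eBGP session per (spine, leaf) combination, matching the
    # dotted eBGP edges in the topology diagram.
    return list(product(spines, leafs))


sessions = ebgp_sessions(
    ["spine1-DC1", "spine2-DC1", "spine3-DC1"],
    ["leaf1-DC1", "leaf2-DC1", "leaf3-DC1", "leaf4-DC1",
     "borderleaf1-DC1", "borderleaf2-DC1"],
)
```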
## 5. IP Address Generation Flow

```mermaid
graph TB
    Start[InfraDatacenter Object<br/>dc_id: 1, number_of_bays: 2]
    Start --> GenSubnets[Generate IpamIPPrefix<br/>from dc_id]
    GenSubnets --> Sub1[10.1.0.0/24<br/>prefix_type: loopback<br/>Loopback0 addresses]
    GenSubnets --> Sub2[10.1.1.0/24<br/>prefix_type: vtep<br/>Loopback1 VTEP addresses]
    GenSubnets --> Sub3[10.1.10.0/24<br/>prefix_type: p2p<br/>Spine-Leaf links]
    GenSubnets --> Sub4[10.1.20.0/24<br/>prefix_type: p2p<br/>Leaf-Access links]
    GenSubnets --> Sub5[10.1.255.0/24<br/>prefix_type: mlag<br/>MLAG Peer links]
    GenSubnets --> Sub6[10.255.0.0/24<br/>prefix_type: management<br/>Management IPs]
    Sub1 --> AllocLo0[Allocate IpamIPAddress<br/>Loopback0 to Spines & Leafs]
    Sub2 --> AllocLo1[Allocate IpamIPAddress<br/>Loopback1 to Leafs - shared in pairs]
    Sub3 --> AllocP2P1[Allocate /31s<br/>for Spine-Leaf links]
    Sub4 --> AllocP2P2[Allocate /31s<br/>for Leaf-Access links]
    Sub5 --> AllocMLAG[Allocate /30s<br/>for MLAG peer links]
    Sub6 --> AllocMgmt[Allocate Management IPs<br/>to all devices via template]
    AllocLo0 --> Spine1IP[spine1: 10.1.0.11/32]
    AllocLo0 --> Leaf1IP[leaf1: 10.1.0.21/32]
    AllocLo1 --> Leaf1VTEP[leaf1-2 shared: 10.1.1.21/32]
    AllocP2P1 --> Link1[spine1-leaf1: 10.1.10.0/31]
    AllocP2P2 --> Link2[leaf1-access1: 10.1.20.0/31]
    AllocMLAG --> MLAG1[leaf1-2 peer: 10.1.255.1-2/30<br/>via Vlan4094]
    AllocMgmt --> Mgmt1[spine1: 10.255.0.11<br/>computed from template]
    style Start fill:#ffecb3
    style Sub1 fill:#e1f5ff
    style Sub2 fill:#e1f5ff
    style Sub3 fill:#e1f5ff
    style Sub4 fill:#e1f5ff
    style Sub5 fill:#e1f5ff
    style Sub6 fill:#e1f5ff
```
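The per-DC subnet plan above follows a `10.<dc_id>.x.0/24` layout, read directly off the diagram. A dependency-free sketch of deriving it (the helper names and the fixed loopback offsets, spines 11+, leafs 21+, border leafs 31+, are assumptions drawn from the example values, not schema definitions):

```python
import ipaddress


def dc_subnets(dc_id: int) -> dict:
    # Each DC derives its prefixes from dc_id; management is shared.
    return {
        "loopback":   ipaddress.ip_network(f"10.{dc_id}.0.0/24"),
        "vtep":       ipaddress.ip_network(f"10.{dc_id}.1.0/24"),
        "p2p_fabric": ipaddress.ip_network(f"10.{dc_id}.10.0/24"),
        "p2p_access": ipaddress.ip_network(f"10.{dc_id}.20.0/24"),
        "mlag":       ipaddress.ip_network(f"10.{dc_id}.255.0/24"),
        "management": ipaddress.ip_network("10.255.0.0/24"),
    }


def loopback0(dc_id: int, offset: int) -> str:
    # spine1 uses offset 11, leaf1 offset 21, borderleaf1 offset 31.
    return f"10.{dc_id}.0.{offset}/32"
```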
## 6. Device Generation Flow

```mermaid
graph TB
    DC[InfraDatacenter: DC1<br/>number_of_bays: 2]
    DC --> CalcLeafs["Calculate:<br/>leaf_pair_count = ceil(number_of_bays / 2)<br/>total_leaf_count = leaf_pair_count * 2<br/>total_access_count = number_of_bays"]
    CalcLeafs --> GenSpines[Generate Spines<br/>count: spine_count = 3]
    CalcLeafs --> GenLeafs[Generate Leafs<br/>count: total_leaf_count]
    CalcLeafs --> GenBorders[Generate Border Leafs<br/>count: border_leaf_count = 2<br/>if has_border_leafs: true]
    CalcLeafs --> GenAccess[Generate Access<br/>count: total_access_count = 2]
    GenSpines --> S1[spine1-DC1<br/>NetworkDevice<br/>role: spine]
    GenSpines --> S2[spine2-DC1<br/>NetworkDevice<br/>role: spine]
    GenSpines --> S3[spine3-DC1<br/>NetworkDevice<br/>role: spine]
    GenLeafs --> LP1[Create NetworkMLAGDomain<br/>MLAG-leaf1-2-DC1]
    GenLeafs --> LP2[Create NetworkMLAGDomain<br/>MLAG-leaf3-4-DC1]
    LP1 --> L1[leaf1-DC1<br/>role: leaf<br/>mlag_side: left]
    LP1 --> L2[leaf2-DC1<br/>role: leaf<br/>mlag_side: right]
    LP2 --> L3[leaf3-DC1<br/>role: leaf<br/>mlag_side: left]
    LP2 --> L4[leaf4-DC1<br/>role: leaf<br/>mlag_side: right]
    GenBorders --> BP[Create NetworkMLAGDomain<br/>MLAG-border-DC1]
    BP --> B1[borderleaf1-DC1<br/>role: borderleaf<br/>mlag_side: left]
    BP --> B2[borderleaf2-DC1<br/>role: borderleaf<br/>mlag_side: right]
    GenAccess --> AssignBays[Assign InfraBay to Leaf Pairs]
    AssignBays --> A1Config[Bay 1 → Leaf Pair 1<br/>access1-DC1]
    AssignBays --> A2Config[Bay 2 → Leaf Pair 1<br/>access2-DC1]
    A1Config --> A1[access1-DC1<br/>NetworkDevice<br/>role: access]
    A2Config --> A2[access2-DC1<br/>NetworkDevice<br/>role: access]
    style DC fill:#ffecb3
    style LP1 fill:#b3e5fc
    style LP2 fill:#b3e5fc
    style BP fill:#f8bbd0
```
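The derived-count step above uses the computed-attribute formulas from the schema attribute summary later in this document; a minimal sketch:

```python
import math


def derived_counts(number_of_bays: int) -> dict:
    # Computed attributes from the InfraDatacenter schema:
    # one MLAG leaf pair serves up to two bays.
    leaf_pair_count = math.ceil(number_of_bays / 2)
    return {
        "leaf_pair_count": leaf_pair_count,
        "total_leaf_count": leaf_pair_count * 2,
        "total_access_count": number_of_bays,
    }
```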
## 7. Scaling Scenario - Adding Bay 3

```mermaid
graph LR
    subgraph Current["📦 Current State - 2 Bays"]
        CurDC[InfraDatacenter: DC1<br/>number_of_bays: 2<br/>leaf_pair_count: 1]
        CurLP1[NetworkMLAGDomain: Pair 1<br/>leaf1-2]
        CurBay1[InfraBay 1 → access1]
        CurBay2[InfraBay 2 → access2]
        CurDC --> CurLP1
        CurLP1 --> CurBay1
        CurLP1 --> CurBay2
    end
    subgraph Action["⚙️ User Action"]
        AddBay[Add Bay 3<br/>number_of_bays: 2 → 3]
    end
    subgraph Generator["🔄 Generator Logic"]
        Check[Check:<br/>number_of_bays=3 > pairs*2=2?<br/>YES → Need new pair]
        CreatePair[Create NetworkMLAGDomain<br/>Leaf Pair 2<br/>leaf3-DC1, leaf4-DC1]
        CreateAccess[Create NetworkDevice<br/>access3-DC1<br/>role: access]
        AssignBay[Assign InfraBay 3 → Pair 2]
        AllocIPs[Allocate IpamIPAddress]
        CreateLinks[Create NetworkInterface]
        ConfigBGP[Configure NetworkBGPConfig<br/>ASN: 65102]
    end
    subgraph Result["✅ New State - 3 Bays"]
        NewDC[InfraDatacenter: DC1<br/>number_of_bays: 3<br/>leaf_pair_count: 2]
        NewLP1[NetworkMLAGDomain: Pair 1<br/>leaf1-2<br/>ASN: 65101]
        NewLP2[NetworkMLAGDomain: Pair 2<br/>leaf3-4<br/>ASN: 65102]
        NewBay1[InfraBay 1 → access1]
        NewBay2[InfraBay 2 → access2]
        NewBay3[InfraBay 3 → access3]
        NewDC --> NewLP1
        NewDC --> NewLP2
        NewLP1 --> NewBay1
        NewLP1 --> NewBay2
        NewLP2 --> NewBay3
    end
    Current --> AddBay
    AddBay --> Check
    Check --> CreatePair
    CreatePair --> CreateAccess
    CreateAccess --> AssignBay
    AssignBay --> AllocIPs
    AllocIPs --> CreateLinks
    CreateLinks --> ConfigBGP
    ConfigBGP --> Result
    style AddBay fill:#fff59d
    style Check fill:#ffccbc
    style Result fill:#c8e6c9
```
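The "need new pair?" check in the generator logic above boils down to comparing the required pair count against the existing one; the helper name is illustrative:

```python
import math


def pairs_to_add(number_of_bays: int, existing_pairs: int) -> int:
    # With two bays per leaf pair, does the new bay count exceed the
    # capacity of the existing pairs? 3 bays > 1 pair * 2 -> add 1 pair.
    needed = math.ceil(number_of_bays / 2)
    return max(0, needed - existing_pairs)
```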
## 8. Infrahub Generator Workflow

```mermaid
sequenceDiagram
    participant User
    participant Infrahub
    participant Generator
    participant Validator
    participant GraphQL
    participant Objects
    User->>Infrahub: Create InfraDatacenter Object<br/>(name, dc_id, number_of_bays, etc.)
    Infrahub->>Validator: Pre-Generation Validation
    Validator-->>Infrahub: ✅ Valid Input
    Infrahub->>Generator: Trigger Generator
    Generator->>Generator: Compute Derived Values<br/>(leaf_pair_count, total_leaf_count, etc.)
    loop For Each Layer
        Generator->>Objects: Create NetworkDevice - Spines
        Generator->>Objects: Create NetworkDevice - Leafs
        Generator->>Objects: Create NetworkDevice - Access
        Generator->>Objects: Create NetworkMLAGDomain
    end
    loop For Each Device
        Generator->>Objects: Create NetworkInterface
        Generator->>Objects: Allocate IpamIPAddress
        Generator->>Objects: Generate NetworkBGPConfig
    end
    Generator->>Validator: Post-Generation Validation
    Validator-->>Generator: ✅ All Objects Valid
    Generator->>GraphQL: Commit All Objects
    GraphQL-->>Infrahub: Objects Created
    Infrahub-->>User: ✅ Datacenter Generated<br/>11 devices, 50+ interfaces, 30+ IPs
```
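The sequence above can be condensed into one orchestration function. This is a plain-Python sketch of the control flow only; the real implementation runs inside Infrahub via its generator SDK and GraphQL client, and the callables here are placeholders:

```python
def run_generator(dc, validate, build, commit):
    """Control-flow sketch of the sequence diagram: validate the input,
    build all objects (devices, interfaces, IPs, BGP), re-validate,
    then commit everything in one pass. Illustrative only."""
    validate(dc)                 # pre-generation validation
    objects = build(dc)          # compute derived values, create objects
    validate(objects)            # post-generation validation
    return commit(objects)       # single commit via the API
```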
## 9. Configuration Generation Flow

```mermaid
graph TB
    subgraph Infrahub["📊 Infrahub - Source of Truth"]
        DeviceDB[(NetworkDevice Objects<br/>NetworkInterface<br/>IpamIPAddress<br/>NetworkBGPConfig)]
    end
    subgraph Generator["⚙️ Config Generator"]
        Templates[Jinja2 Templates<br/>by Device Role]
        RenderEngine[Template Renderer]
    end
    subgraph Output["📄 Generated Configs"]
        SpineConfig[spine1-DC1.cfg]
        LeafConfig[leaf1-DC1.cfg]
        AccessConfig[access1-DC1.cfg]
    end
    subgraph Deployment["🚀 Deployment"]
        Validation[Config Validation]
        Push[Push to Devices<br/>via eAPI/NETCONF]
    end
    DeviceDB -->|Query Device Data| RenderEngine
    Templates --> RenderEngine
    RenderEngine --> SpineConfig
    RenderEngine --> LeafConfig
    RenderEngine --> AccessConfig
    SpineConfig --> Validation
    LeafConfig --> Validation
    AccessConfig --> Validation
    Validation --> Push
    Push --> Devices[Physical Devices]
    style Infrahub fill:#e1f5ff
    style Generator fill:#fff9c4
    style Output fill:#c8e6c9
    style Deployment fill:#ffccbc
```
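The render step pairs queried device data with a role-specific template. The actual generator uses Jinja2; this dependency-free sketch uses the stdlib `string.Template` to show the same data flow, with illustrative template text:

```python
from string import Template

# Device data as it would come back from an Infrahub query (illustrative).
device = {"hostname": "spine1-DC1", "loopback0": "10.1.0.11/32"}

# A per-role config template (Jinja2 in the real pipeline).
tpl = Template(
    "hostname $hostname\n"
    "interface Loopback0\n"
    "   ip address $loopback0\n"
)

config = tpl.substitute(device)
```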
## 10. Complete Data Model Relationships

```mermaid
erDiagram
    OrganizationOrganization ||--o{ LocationSite : "contains (Generic)"
    LocationSite ||--o{ InfraDatacenter : "contains (Generic)"
    LocationSite ||--o{ IpamIPPrefix : "manages (Attribute)"
    InfraDatacenter ||--|{ IpamIPPrefix : "generates (Generic)"
    InfraDatacenter ||--|{ NetworkDevice : "generates (Component)"
    InfraDatacenter ||--|{ NetworkMLAGDomain : "generates (Generic)"
    InfraDatacenter ||--|{ InfraBay : "contains (Generic)"
    NetworkDevice ||--|{ NetworkInterface : "has (Component)"
    NetworkInterface ||--o| IpamIPAddress : "assigned (Generic)"
    NetworkInterface ||--o| NetworkBGPNeighbor : "endpoint (Generic)"
    NetworkDevice }|--|| DeviceRole : "has (Attribute)"
    NetworkDevice }o--o| NetworkMLAGDomain : "member_of (Attribute)"
    NetworkMLAGDomain ||--o{ NetworkDevice : "devices (Generic)"
    NetworkMLAGDomain ||--|{ NetworkMLAGInterface : "interfaces (Component)"
    NetworkBGPConfig }o--|| NetworkDevice : "device (Parent)"
    NetworkBGPConfig ||--|{ NetworkBGPPeerGroup : "peer_groups (Component)"
    NetworkBGPConfig ||--|{ NetworkBGPNeighbor : "neighbors (Component)"
    IpamIPPrefix ||--|{ IpamIPAddress : "contains (Generic)"
    IpamIPPrefix ||--o| IpamIPPrefix : "parent (Attribute)"
    NetworkDCISwitch ||--|{ NetworkDCIConnection : "connections (Component)"
    NetworkDCIConnection }o--|| NetworkDevice : "border_leaf (Attribute)"
    OrganizationOrganization {
        string name
        number asn_base
    }
    LocationSite {
        string name
        string location
        string status
    }
    InfraDatacenter {
        string name
        number dc_id
        number number_of_bays
        number spine_count
        number border_leaf_count
        IPNetwork parent_subnet
        number bgp_base_asn
        boolean dci_enabled
        number leaf_pair_count_COMPUTED
        number total_leaf_count_COMPUTED
        number total_access_count_COMPUTED
    }
    NetworkDevice {
        string hostname
        string role
        IPHost management_ip_COMPUTED
        string platform
        number spine_id
        number leaf_id
    }
    NetworkInterface {
        string name
        string interface_type
        boolean enabled
        number mtu
        string switchport_mode
        number channel_id
    }
    IpamIPAddress {
        IPHost address
        string status
    }
    NetworkBGPConfig {
        number asn
        IPHost router_id
        number maximum_paths
        number distance_external
        number distance_internal
        number ebgp_admin_distance
    }
    NetworkMLAGDomain {
        string domain_id
        IPHost peer_address
        string status
    }
    NetworkDCISwitch {
        string hostname
        IPHost loopback0_ip
        IPHost management_ip
    }
    NetworkDCIConnection {
        string connection_name
        string status
        IPHost dci_ip
        IPHost border_ip
        IPNetwork subnet
    }
```
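To make the ER diagram concrete, here is a sketch of two of its entities as plain dataclasses. Attribute names follow the diagram; the defaults (`enabled: true`, `mtu: 9214`) follow the attribute summary below. This is illustration only, not the Infrahub schema DSL:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class NetworkInterface:
    name: str
    interface_type: str       # ethernet, loopback, port_channel, vlan, ...
    enabled: bool = True
    mtu: int = 9214


@dataclass
class NetworkDevice:
    hostname: str
    role: str                 # spine, leaf, borderleaf, access, host
    interfaces: List[NetworkInterface] = field(default_factory=list)


spine = NetworkDevice("spine1-DC1", "spine")
spine.interfaces.append(NetworkInterface("Loopback0", "loopback"))
```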
## 11. DCI Architecture - NetworkDCISwitch & NetworkDCIConnection

```mermaid
graph TB
    subgraph DCILayer["🌐 DCI Layer - Separate Schema"]
        DCI[NetworkDCISwitch<br/>hostname: DCI<br/>loopback0_ip: 10.253.0.1/32<br/>management_ip: 10.255.0.253]
    end
    subgraph DC1Border["DC1 Border Leafs"]
        B1_DC1[borderleaf1-DC1<br/>NetworkDevice<br/>eth12 interface]
        B2_DC1[borderleaf2-DC1<br/>NetworkDevice<br/>eth12 interface]
    end
    subgraph DC2Border["DC2 Border Leafs"]
        B1_DC2[borderleaf1-DC2<br/>NetworkDevice<br/>eth12 interface]
        B2_DC2[borderleaf2-DC2<br/>NetworkDevice<br/>eth12 interface]
    end
    subgraph Connections["NetworkDCIConnection Objects"]
        Conn1[DCI-to-borderleaf1-DC1<br/>dci_interface: Ethernet1<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.1/31<br/>border_ip: 10.254.0.0/31<br/>status: shutdown]
        Conn2[DCI-to-borderleaf2-DC1<br/>dci_interface: Ethernet2<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.3/31<br/>border_ip: 10.254.0.2/31<br/>status: shutdown]
        Conn3[DCI-to-borderleaf1-DC2<br/>dci_interface: Ethernet3<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.5/31<br/>border_ip: 10.254.0.4/31<br/>status: shutdown]
        Conn4[DCI-to-borderleaf2-DC2<br/>dci_interface: Ethernet4<br/>border_interface: Ethernet12<br/>dci_ip: 10.254.0.7/31<br/>border_ip: 10.254.0.6/31<br/>status: shutdown]
    end
    DCI --> Conn1
    DCI --> Conn2
    DCI --> Conn3
    DCI --> Conn4
    Conn1 -.-> B1_DC1
    Conn2 -.-> B2_DC1
    Conn3 -.-> B1_DC2
    Conn4 -.-> B2_DC2
    style DCI fill:#ffccbc
    style Conn1 fill:#fff9c4
    style Conn2 fill:#fff9c4
    style Conn3 fill:#fff9c4
    style Conn4 fill:#fff9c4
```
Key DCI Concepts:

- `NetworkDCISwitch`: Separate schema object (NOT a `NetworkDevice`)
- `NetworkDCIConnection`: Tracks P2P links with dedicated attributes
- `status`: Defaults to `shutdown`; changes to `active` when `dci_enabled: true`
- `border_interface`: Always Ethernet12 on the border leafs
- Relationship: `NetworkDCIConnection` → `border_leaf` (Attribute) → `NetworkDevice`
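The /31 addressing in the connection objects above follows a simple sequence: connection N takes the Nth /31 out of 10.254.0.0, border side on the even address, DCI side on the odd one. A sketch of that numbering (the `/24` carve-out size is an assumption read off the listed values):

```python
import ipaddress


def dci_link(n: int, base: str = "10.254.0.0/24") -> tuple[str, str]:
    # Connection n (0-based) takes the nth /31; within each /31 the
    # border leaf gets the even address and the DCI switch the odd one.
    net = list(ipaddress.ip_network(base).subnets(new_prefix=31))[n]
    return f"{net[0]}/31", f"{net[1]}/31"  # (border_ip, dci_ip)
```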
## 12. Key Schema Attributes Summary

### InfraDatacenter Attributes

```yaml
# User Input (Required)
name: string              # "DC1"
dc_id: number             # 1
number_of_bays: number    # 2 (default)
parent_subnet: IPNetwork  # "10.0.0.0/8"

# Configuration (With Defaults)
spine_count: number       # 3 (default)
border_leaf_count: number # 2 (default)
bgp_base_asn: number      # 65000 (default)
spine_asn: number         # Auto: bgp_base_asn + (dc_id * 100)
mlag_domain_id: string    # "MLAG" (default)
mtu: number               # 9214 (default)
has_border_leafs: boolean # true (default)
dci_enabled: boolean      # false (default)
dci_remote_dc_id: number  # Optional

# Computed (Read-Only)
leaf_pair_count: number    # ceil(number_of_bays / 2)
total_leaf_count: number   # leaf_pair_count * 2
total_access_count: number # number_of_bays
```
### NetworkDevice Attributes

```yaml
hostname: string               # "spine1-DC1"
role: dropdown                 # spine, leaf, borderleaf, access, host
platform: string               # "cEOS" (default)
management_ip_template: string # Template for IP generation
management_ip: IPHost          # Computed from template
spine_id: number               # For spines
leaf_id: number                # For leafs
mlag_side: dropdown            # left (odd), right (even)
```
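The `mlag_side` convention above (odd-numbered devices on the left, even-numbered on the right) is a one-line parity check; the helper name is illustrative:

```python
def mlag_side(device_id: int) -> str:
    # Odd-numbered devices (leaf1, leaf3, ...) become the left MLAG
    # member; even-numbered devices (leaf2, leaf4, ...) the right.
    return "left" if device_id % 2 == 1 else "right"
```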
### NetworkInterface Types

```yaml
interface_type: dropdown
- ethernet     # Physical interfaces
- loopback     # Loopback0, Loopback1
- port_channel # Note: underscore, not hyphen!
- vlan         # SVIs
- vxlan        # VXLAN tunnels
- management   # Management interface
```
### IpamIPPrefix Types

```yaml
prefix_type: dropdown
- pool       # General pool
- loopback   # Loopback0 addresses
- vtep       # Loopback1 VTEP addresses
- p2p        # Point-to-point links
- management # Management network
- tenant     # Tenant/VLAN networks
- mlag       # MLAG peer links
```