Vijfpas inventory

This is an implementation-track document with factual inventory data: servers, storage, network devices, VLANs, and baseline services. Avoid architecture debates here; record what exists and what is configured.

1. Compute inventory

1.1 Nodes (Proxmox/Ceph candidates)

3x Oracle X3-2

  • iLOM
  • Dual Xeon 8-core
  • 16x 16GB ECC DDR3 (256GB RAM)
  • NICs
    • On-board: 4x 10GbE UTP (current Proxmox control-plane and workload LAG members)
    • Add-in: Intel Dual Port 82599 10GbE SFP+ (cephpub, cephclu)
  • HBA: LSI SAS9207-8i (IT firmware)
  • Disks
    • 2x 960GB SSD (ZFS mirror)
    • 5x 480GB SSD (Ceph OSD)
    • 1x 480GB SSD (unassigned; reserve for future host-local use)

2x Oracle X3-2L

  • iLOM
  • Dual Xeon 6-core
  • 16x 16GB ECC DDR3 (256GB RAM)
  • NICs
    • On-board: 4x 10GbE UTP (current Proxmox control-plane and workload LAG members)
    • Add-in: Intel Dual Port 82599 10GbE SFP+ (cephpub, cephclu)
  • HBA: LSI SAS9207-8i (IT firmware)
  • Disks
    • 3x 480GB Patriot SSD (host OS / generic VM storage pool)
    • 2x 960GB Kingston DC600M SSD (database VM storage pool)
    • 5x 480GB SSD (Ceph OSD)
    • 7x 480GB SSD (NAS/backup capacity)

Decision baseline: both listed X3-2L servers are in the primary Proxmox cluster. The off-cluster PBS host is a former spare X3-2L.

Current management hostnames and IPs

| Hostname (FQDN) | Node class | Mgmt IP |
|---|---|---|
| proxmox-a.nfr-mgmt.vijfpas.be | Oracle X3-2L | 10.0.20.2 |
| proxmox-b.nfr-mgmt.vijfpas.be | Oracle X3-2L | 10.0.20.3 |
| proxmox-c.nfr-mgmt.vijfpas.be | Oracle X3-2 | 10.0.20.4 |
| proxmox-d.nfr-mgmt.vijfpas.be | Oracle X3-2 | 10.0.20.5 |
| proxmox-e.nfr-mgmt.vijfpas.be | Oracle X3-2 | 10.0.20.6 |

1.2 Spare parts

  • 1x Oracle X3-2
    • Dual Xeon 8-core
    • 16x 8GB ECC DDR3 (128GB RAM)
    • On-board 4x 10GbE UTP
  • 1x Oracle X3-2L
    • Dual Xeon 6-core
    • 8x 8GB ECC DDR3 (64GB RAM)
    • On-board 4x 10GbE UTP
    • LSI SAS9207-8i (IT firmware)
  • 2x 960GB SSD
  • 4x 480GB SSD

1.3 Physical layout (Mermaid)

```mermaid
flowchart LR
  subgraph CLUSTER["Primary Proxmox-Ceph cluster"]
    X32A["X3-2 node 1"]
    X32B["X3-2 node 2"]
    X32C["X3-2 node 3"]
    X32L1["X3-2L node 1"]
    X32L2["X3-2L node 2"]
  end

  PBSSRV["X3-2L PBS host"]
  NAS["NAS backup storage"]

  X32A --- X32B
  X32B --- X32C
  X32C --- X32L1
  X32L1 --- X32L2
  PBSSRV --> NAS
```

2. Storage inventory

2.1 Ceph (raw)

  • 5 nodes x 5x 480GB SSD = 25 OSDs
  • Raw capacity ~= 12,000GB (decimal)
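As a sanity check, the raw figure above follows directly from the OSD layout. The usable figure below assumes 3x replication for illustration only; the inventory does not record the actual pool size/replica settings:

```python
# Sanity check: raw vs usable Ceph capacity for the OSD layout above.
# The replication factor of 3 is an assumption for illustration; the
# inventory does not state the configured pool size.
nodes = 5
osds_per_node = 5
osd_size_gb = 480  # decimal GB

raw_gb = nodes * osds_per_node * osd_size_gb
replication = 3  # assumed replica count
usable_gb = raw_gb / replication

print(raw_gb)     # 12000
print(usable_gb)  # 4000.0
```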

2.2 NAS/backup disks

  • The 7x 480GB SSDs listed per X3-2L node are currently earmarked for NAS/backup workloads

2.3 Local Proxmox ZFS pools (current)

| Node | Local ZFS pools observed | Notes |
|---|---|---|
| proxmox-a | rpool, dbpool, naspool | naspool is node-local to proxmox-a |
| proxmox-b | rpool, dbpool, bckpool | bckpool is node-local to proxmox-b |

2.4 Storage map (Mermaid)

```mermaid
flowchart LR
  CL[Cluster nodes] --> CEPH[(Ceph pools)]
  CL --> ZFS[Host ZFS mirrors]
  PBS[PBS host] --> BKS[(Backup datastore)]
  BKS --> NAS[(NAS replication target)]
```

3. Network inventory

3.1 Hardware

  • UniFi UDM Pro
  • UniFi USW Pro 24
  • 2x UniFi Aggregation (8-port SFP+)

3.2 VLANs and network objects

Configured in UniFi today:

| Controller name | VLAN ID | Subnet | Gateway | DHCP | Status |
|---|---|---|---|---|---|
| nfr-ilom | 10 | 10.0.10.0/24 | 10.0.10.1 | off | live |
| nfr-mgmt | 20 | 10.0.20.0/24 | 10.0.20.1 | on | live |
| nfr-corosync | 21 | 10.0.21.0/24 | 10.0.21.1 | off | live |
| prd-admin | 22 | 10.0.22.0/24 | 10.0.22.1 | on | live |
| dev-admin | 23 | 10.0.23.0/24 | 10.0.23.1 | on | live |
| prd-dmz | 30 | 10.0.30.0/24 | 10.0.30.1 | on | live |
| prd-svc | 31 | 10.0.31.0/24 | 10.0.31.1 | on | live |
| dev-core | 32 | 10.0.32.0/24 | 10.0.32.1 | on | live |
| dev-egress | 33 | 10.0.33.0/24 | 10.0.33.1 | on | live |
| prd-egress | 34 | 10.0.34.0/24 | 10.0.34.1 | on | live, currently unused by VMs |
| nfr-cephpub | 40 | 10.0.40.0/24 | 10.0.40.1 | off | live |
| nfr-cephclu | 41 | 10.0.41.0/24 | 10.0.41.1 | off | live |
| infra-admin | 42 | 10.0.42.0/24 | 10.0.42.1 | on | live |
| pfm-svc | 43 | 10.0.43.0/24 | 10.0.43.1 | off | live |
| pfm-core | 44 | 10.0.44.0/24 | 10.0.44.1 | off | live |
| pfm-egress | 45 | 10.0.45.0/24 | 10.0.45.1 | off | live |
| pfm-bck | 46 | 10.0.46.0/24 | 10.0.46.1 | off | live |
| dev-bck | 47 | 10.0.47.0/24 | 10.0.47.1 | off | live |
| acc-bck | 48 | 10.0.48.0/24 | 10.0.48.1 | off | live in UniFi; no current VM attached |
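The table follows a visible convention: each subnet's third octet equals its VLAN ID. A small sketch (rows transcribed from the table above) that re-validates the invariant whenever the table is edited:

```python
# Sketch: verify the convention visible in the VLAN table, namely that
# each VLAN's subnet third octet equals its VLAN ID.
# Rows are transcribed from the inventory table in section 3.2.
vlans = {
    "nfr-ilom": (10, "10.0.10.0/24"),
    "nfr-mgmt": (20, "10.0.20.0/24"),
    "nfr-corosync": (21, "10.0.21.0/24"),
    "prd-admin": (22, "10.0.22.0/24"),
    "dev-admin": (23, "10.0.23.0/24"),
    "prd-dmz": (30, "10.0.30.0/24"),
    "prd-svc": (31, "10.0.31.0/24"),
    "dev-core": (32, "10.0.32.0/24"),
    "dev-egress": (33, "10.0.33.0/24"),
    "prd-egress": (34, "10.0.34.0/24"),
    "nfr-cephpub": (40, "10.0.40.0/24"),
    "nfr-cephclu": (41, "10.0.41.0/24"),
    "infra-admin": (42, "10.0.42.0/24"),
    "pfm-svc": (43, "10.0.43.0/24"),
    "pfm-core": (44, "10.0.44.0/24"),
    "pfm-egress": (45, "10.0.45.0/24"),
    "pfm-bck": (46, "10.0.46.0/24"),
    "dev-bck": (47, "10.0.47.0/24"),
    "acc-bck": (48, "10.0.48.0/24"),
}

for name, (vlan_id, subnet) in vlans.items():
    third_octet = int(subnet.split(".")[2])
    assert third_octet == vlan_id, f"{name}: VLAN {vlan_id} vs subnet {subnet}"
print("all VLAN/subnet pairs consistent")
```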

Usage notes:

  • nfr-mgmt is substrate-only.
  • prd-admin and dev-admin are the current admin-source networks.
  • The live shared-platform service tiers use pfm-svc, pfm-core, pfm-egress, and pfm-bck.
  • Current Proxmox switch/uplink profiles:
    • pve-admin-trunk: native 20 plus tagged 21, 22, 23, 24, 42
    • pve-workload-trunk: tagged 25, 26, 27, 28, 30, 31, 32, 33, 34, 35, 36, 37, 43, 44, 45, 46, 47
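The two trunk profiles are intended to carry non-overlapping VLAN sets, so no VLAN is reachable over both uplink types. A minimal check, with the VLAN lists transcribed from the profiles above (native VLAN 20 counted as part of the admin trunk):

```python
# Sketch: confirm the two Proxmox trunk profiles carry disjoint VLAN
# sets. Lists are transcribed from the trunk profiles above; the
# admin trunk includes its native VLAN 20.
admin_trunk = {20, 21, 22, 23, 24, 42}
workload_trunk = {25, 26, 27, 28, 30, 31, 32, 33, 34, 35, 36, 37,
                  43, 44, 45, 46, 47}

overlap = admin_trunk & workload_trunk
assert not overlap, f"VLANs on both trunks: {sorted(overlap)}"
print("trunk VLAN sets are disjoint")
```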

3.3 Baseline network services

  • Routing and NAT/firewall
  • Port forwarding
  • DNS and DHCP
  • WireGuard VPN
  • Chrony (time sync)

3.4 Network graph (Mermaid)

```mermaid
flowchart TB
  INTERNET((Internet)) --> UDM["UniFi UDM Pro"]
  UDM --> USW["USW Pro 24"]
  USW --> ILOM["nfr-ilom VLAN 10 10.0.10.0/24"]
  USW --> MGMT["nfr-mgmt VLAN 20 10.0.20.0/24"]
  USW --> COROSYNC["nfr-corosync VLAN 21 10.0.21.0/24"]
  USW --> PRDADMIN["prd-admin VLAN 22 10.0.22.0/24"]
  USW --> DEVADMIN["dev-admin VLAN 23 10.0.23.0/24"]
  USW --> PRDDMZ["prd-dmz VLAN 30 10.0.30.0/24"]
  USW --> PRDSVC["prd-svc VLAN 31 10.0.31.0/24"]
  USW --> DEVCORE["dev-core VLAN 32 10.0.32.0/24"]
  USW --> DEVEGR["dev-egress VLAN 33 10.0.33.0/24"]
  USW --> PRDEGR["prd-egress VLAN 34 10.0.34.0/24"]

  USW --> AGG1["Aggregation switch #1 - cephpub"]
  USW --> AGG2["Aggregation switch #2 - cephclu"]
  AGG1 --> CEPHPUB["nfr-cephpub VLAN 40 10.0.40.0/24"]
  AGG2 --> CEPHCLU["nfr-cephclu VLAN 41 10.0.41.0/24"]
  USW --> INFADMIN["infra-admin VLAN 42 10.0.42.0/24"]
  USW --> PFMSVC["pfm-svc VLAN 43 10.0.43.0/24"]
  USW --> PFMCORE["pfm-core VLAN 44 10.0.44.0/24"]
  USW --> PFMEGR["pfm-egress VLAN 45 10.0.45.0/24"]
  USW --> PFMBCK["pfm-bck VLAN 46 10.0.46.0/24"]
  USW --> DEVBCK["dev-bck VLAN 47 10.0.47.0/24"]
  USW --> ACCBCK["acc-bck VLAN 48 10.0.48.0/24"]
```

4. Inventory actions

  1. Assign node names and record serial numbers.
  2. Record switch port mappings (NIC to switch/VLAN/trunk).
  3. Record DHCP pool ranges and static reservation ranges for all enabled environment tiers (dev-*, acc-*, prd-*) plus nfr-mgmt and infra-admin.
  4. Validate DHCP remains disabled for nfr-ilom, nfr-corosync, pfm-svc, pfm-core, pfm-egress, pfm-bck, dev-bck, acc-bck, nfr-cephpub, and nfr-cephclu.
  5. Confirm final use of unassigned 480GB SSD on X3-2 nodes.
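Action 4 can be re-checked mechanically against the VLAN table in section 3.2. A sketch, with the DHCP column transcribed from that table:

```python
# Sketch for action 4: check that DHCP is recorded as "off" for every
# network that must not serve DHCP. dhcp_state is transcribed from the
# VLAN table in section 3.2.
must_be_off = {
    "nfr-ilom", "nfr-corosync", "pfm-svc", "pfm-core", "pfm-egress",
    "pfm-bck", "dev-bck", "acc-bck", "nfr-cephpub", "nfr-cephclu",
}
dhcp_state = {
    "nfr-ilom": "off", "nfr-mgmt": "on", "nfr-corosync": "off",
    "prd-admin": "on", "dev-admin": "on", "prd-dmz": "on",
    "prd-svc": "on", "dev-core": "on", "dev-egress": "on",
    "prd-egress": "on", "nfr-cephpub": "off", "nfr-cephclu": "off",
    "infra-admin": "on", "pfm-svc": "off", "pfm-core": "off",
    "pfm-egress": "off", "pfm-bck": "off", "dev-bck": "off",
    "acc-bck": "off",
}

violations = [n for n in sorted(must_be_off) if dhcp_state.get(n) != "off"]
assert not violations, f"DHCP unexpectedly enabled on: {violations}"
print("DHCP disabled on all required networks")
```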