Keeping Open vSwitch Clean: Automating Port Cleanup with Systemd

Many sysadmins and homelab builders using Open vSwitch (OVS) have encountered this problem:

"My OVS bridge shows dozens of ports that don’t exist anymore."

This usually happens when virtual machines (VMs) start and stop, leaving stale port entries behind. Over time, these ghost ports clutter the bridge and may cause:

  • Confusing ovs-vsctl show output
  • Potential conflicts when port names are reused
  • Harder troubleshooting when debugging connectivity

What We Saw Before Cleanup

On our host, running ovs-vsctl show revealed many stale ports:


[root@worker ~]# ovs-vsctl show
58629941-e893-423a-87fb-b3fa574e86f5
    Bridge ovsbr0
        Port vnet37
            tag: 100
            Interface vnet37
                error: "could not open network device vnet37 (No such device)"
        Port vnet30
            tag: 10
            Interface vnet30
                error: "could not open network device vnet30 (No such device)"
        Port vnet31
            tag: 100
            Interface vnet31
                error: "could not open network device vnet31 (No such device)"
        Port vnet38
            tag: 10
            Interface vnet38
                error: "could not open network device vnet38 (No such device)"
        Port vnet1
            tag: 10
            Interface vnet1
        Port vnet0
            tag: 10
            Interface vnet0
        Port vnet22
            tag: 100
            Interface vnet22
                error: "could not open network device vnet22 (No such device)"
        Port vnet4
            tag: 100
            Interface vnet4
        Port vnet6
            tag: 100
            Interface vnet6
        Port vnet8
            tag: 100
            Interface vnet8
        Port vnet21
            tag: 10
            Interface vnet21
                error: "could not open network device vnet21 (No such device)"
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vnet3
            tag: 10
            Interface vnet3
        Port ens3f0
            Interface ens3f0
        Port vnet5
            tag: 10
            Interface vnet5
        Port vnet36
            tag: 10
            Interface vnet36
                error: "could not open network device vnet36 (No such device)"
        Port vmgmt0
            tag: 10
            Interface vmgmt0
                type: internal
        Port vnet2
            tag: 100
            Interface vnet2
        Port vnet7
            tag: 10
            Interface vnet7
            

These were leftovers from VMs that had been shut down. The virtual network devices no longer existed in the system, but OVS still remembered their port entries.

Why This Happens

When a VM is destroyed, its tap device is usually removed from the Linux network stack, but the matching port record stays behind in the OVS database. OVS does not delete a port just because its backing interface disappeared outside of its control, so the bridge gradually accumulates dangling entries.
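
You can spot these dangling entries by hand before automating anything. The first command below relies on the OVSDB Interface table's error column (the source of the "No such device" messages above) and on ovs-vsctl's --columns option, which recent releases support; swap in your own bridge name as needed:

# Interfaces whose backing network device OVS can no longer open
ovs-vsctl --columns=name,error list Interface

# The same per-port check we automate in the next section
for port in $(ovs-vsctl list-ports ovsbr0); do
    ip link show "$port" >/dev/null 2>&1 || echo "stale: $port"
done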

The Simple Solution: A Cleanup Script

We created this script: /usr/local/bin/ovs-cleanup.sh

#!/bin/bash
# Remove OVS ports whose backing network device no longer exists.
set -e

for port in $(ovs-vsctl list-ports ovsbr0); do
  # Keep the port if its interface is still present in the kernel.
  if ! ip link show "$port" >/dev/null 2>&1; then
    echo "Removing stale port: $port"
    ovs-vsctl del-port ovsbr0 "$port" || {
      echo "Failed to delete port: $port" >&2
      exit 1
    }
  fi
done

The script:

  • Lists all ports on the OVS bridge ovsbr0.
  • Checks whether each port still exists as a network interface on the host.
  • Deletes the port from OVS if the interface is gone.

Don't forget to make it executable: chmod +x /usr/local/bin/ovs-cleanup.sh
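
Once it is executable, a one-off manual run is a cheap sanity check before handing it to systemd. The grep line below is just one informal way to confirm no error entries remain; it is not part of the cleanup itself:

/usr/local/bin/ovs-cleanup.sh

# Count remaining "No such device" errors (0 once everything is clean)
ovs-vsctl show | grep -c "No such device"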

Automating Cleanup with systemd

Rather than run this manually every time the host boots, we wired it into systemd with a dedicated service:

/etc/systemd/system/ovs-cleanup.service:

[Unit]
Description=Cleanup stale OVS ports
After=ovs-vswitchd.service
Requires=ovs-vswitchd.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ovs-cleanup.sh

[Install]
WantedBy=multi-user.target

This ensures:

  • The cleanup runs only after OVS itself is up.
  • It executes once on boot.

We enabled it with:

systemctl enable --now ovs-cleanup.service
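
Depending on the systemd version, a daemon-reload may be needed before the new unit file is picked up. Either way, it is worth confirming that the oneshot actually ran and seeing which ports it removed; the script's output lands in the journal:

systemctl daemon-reload                # only if systemctl doesn't see the unit yet
systemctl status ovs-cleanup.service
journalctl -u ovs-cleanup.service      # shows the "Removing stale port: ..." lines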

Running on Stop (Optional)

We also configured the cleanup to run when the unit stops, by adding two directives to the [Service] section. This way, if the system shuts down or we stop the OVS services manually, stale ports are cleared on the way down as well:

[Service]
ExecStop=/usr/local/bin/ovs-cleanup.sh
RemainAfterExit=yes
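
Putting the pieces together, the complete unit might look like this; only RemainAfterExit and ExecStop are new compared to the unit shown earlier:

[Unit]
Description=Cleanup stale OVS ports
After=ovs-vswitchd.service
Requires=ovs-vswitchd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/ovs-cleanup.sh
ExecStop=/usr/local/bin/ovs-cleanup.sh

[Install]
WantedBy=multi-user.target

RemainAfterExit=yes is what makes ExecStop useful here: without it, a oneshot unit drops back to inactive as soon as ExecStart finishes, so there would be nothing for systemd to stop at shutdown and ExecStop would never fire.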

The Result After Cleanup

Once the script ran, ovs-vsctl show looked clean:

[root@worker ~]# ovs-vsctl show
58629941-e893-423a-87fb-b3fa574e86f5
    Bridge ovsbr0
        Port vnet1
            tag: 10
            Interface vnet1
        Port vnet0
            tag: 10
            Interface vnet0
        Port vnet4
            tag: 100
            Interface vnet4
        Port vnet6
            tag: 100
            Interface vnet6
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vnet3
            tag: 10
            Interface vnet3
        Port ens3f0
            Interface ens3f0
        Port vnet5
            tag: 10
            Interface vnet5
        Port vmgmt0
            tag: 10
            Interface vmgmt0
                type: internal
        Port vnet2
            tag: 100
            Interface vnet2

All the ghost ports were gone.

Systemd Service Restarts

The journal for ovs-cleanup.service records each run; both stopping and starting the unit trigger the cleanup:

Jul 07 08:44:59 worker systemd[1]: Stopped Cleanup stale OVS ports.
Jul 07 08:44:59 worker systemd[1]: Starting Cleanup stale OVS ports...
Jul 07 08:44:59 worker systemd[1]: Finished Cleanup stale OVS ports.
Jul 07 08:47:24 worker systemd[1]: Stopping Cleanup stale OVS ports...
Jul 07 08:47:24 worker systemd[1]: ovs-cleanup.service: Deactivated successfully.
Jul 07 08:47:24 worker systemd[1]: Stopped Cleanup stale OVS ports.
Jul 07 08:47:24 worker systemd[1]: Starting Cleanup stale OVS ports...
Jul 07 08:47:24 worker systemd[1]: Finished Cleanup stale OVS ports.
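
A plain restart triggers the same stop/start sequence seen above, so it doubles as an on-demand cleanup. Because the unit stays active (RemainAfterExit=yes), the stop phase runs the script via ExecStop and the start phase runs it again; that is harmless, since the script is idempotent:

systemctl restart ovs-cleanup.service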

Why This Matters

Keeping OVS clean ensures:

  • Faster OVS operations (fewer ports to iterate internally)
  • Simpler bridge state for troubleshooting
  • No conflicts when reusing VM names or ports
  • A tidier system overall

This small automation saves time and prevents subtle networking bugs, especially as networks grow larger.

Conclusion

If you're running OVS bridges on your hypervisors, consider adding this simple cleanup service. It's safe, lightweight, and keeps your networking neat.

Next, we'll tackle tuning nftables for multi-gigabit performance.