Notes to self

Solving QEMU bridge helper "no such device" for virbr0

If you are running Vagrant on Fedora 32, or tried to spin up a virtual machine in another way, you might have encountered an annoying complaint about the qemu-bridge-helper utility not finding virbr0.

When bringing a Vagrant machine up, the error looks like the following:

$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'fedora/30-cloud-base' version '30.20190425.0' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
...
Call to virDomainCreateWithFlags failed: internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=virbr0 --fd=33: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=failed to get mtu of bridge `virbr0': No such device

And indeed, virbr0 is nowhere to be found in the output of ip addr.
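We can also confirm the missing bridge by querying it directly instead of scanning the full ip addr output (a quick check with the standard iproute2 tool; the exact error wording may vary between versions):

```shell
# Check for the virbr0 bridge directly; if it is missing, `ip` exits
# non-zero and we print our own message.
if ! ip link show virbr0 >/dev/null 2>&1; then
  echo "virbr0 is missing"
fi
```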

What’s happening?

Vagrant with the libvirt plugin tries to use the virbr0 bridge provided by the default libvirtd configuration so that your virtual machines (here your Vagrant box) can share the network with your host system.

It runs and fails on:

/usr/libexec/qemu-bridge-helper --use-vnet --br=virbr0 --fd=33

Since we know that virbr0 should be created by libvirtd, we should check whether the service is running correctly:

$ sudo systemctl status libvirtd
● libvirtd.service - Virtualization daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Wed 2020-09-30 16:22:20 CEST; 3min 10s ago
TriggeredBy: ● libvirtd-admin.socket
             ● libvirtd.socket
             ● libvirtd-ro.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
    Process: 3672 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 3672 (code=exited, status=0/SUCCESS)

Sep 30 16:20:20 strzibny libvirtd[3672]: error from service: changeZoneOfInterface: COMMAND_FAILED: 'python-nftables' fai>
                                         JSON blob:
                                         {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"fami>
Sep 30 16:21:33 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:21:33 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:21:59 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:21:59 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:22:07 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:22:07 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:22:10 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:22:10 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:22:20 strzibny systemd[1]: libvirtd.service: Succeeded.

The daemon itself started and exited cleanly (it is socket-activated, so that is expected), but we can also see some errors in the log.

We can list all log messages for the libvirtd systemd unit from the system journal with the --unit flag:

$ sudo journalctl --unit=libvirtd
-- Reboot --
Sep 30 16:20:20 strzibny systemd[1]: Starting Virtualization daemon...
Sep 30 16:20:20 strzibny systemd[1]: Started Virtualization daemon.
Sep 30 16:20:20 strzibny libvirtd[3672]: libvirt version: 6.1.0, package: 4.fc32 (Fedora Project, 2020-06-02-17:50:10, )
Sep 30 16:20:20 strzibny libvirtd[3672]: hostname: strzibny
Sep 30 16:20:20 strzibny libvirtd[3672]: error from service: changeZoneOfInterface: COMMAND_FAILED: 'python-nftables' failed:
                                         JSON blob:
                                         {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "ine>
Sep 30 16:21:33 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:21:33 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:21:59 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:21:59 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:22:07 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:22:07 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
Sep 30 16:22:10 strzibny libvirtd[3672]: Failed to open file '/sys/class/net/tap0/operstate': No such file or directory
Sep 30 16:22:10 strzibny libvirtd[3672]: unable to read: /sys/class/net/tap0/operstate: No such file or directory
...

Very well, the same errors show up as expected.

We can easily spot the error from service: changeZoneOfInterface error. If you have been running Fedora systems for a while, you might be familiar with firewalld and its zones.

The second clue lies in the failing python-nftables executable. nftables is a packet filtering framework for Linux; a firewall is, in essence, a particular nftables configuration.

firewalld is the default Fedora firewall, and it can use nftables as its packet filtering backend. So this issue is definitely about firewall rules.

Let’s check our active zones:

$ firewall-cmd --get-active-zones
docker
  interfaces: docker0

We can see the docker0 interface, which serves a similar purpose to virbr0, just for Docker containers. Nothing else shows up, notably no libvirt zone with virbr0.

With this information, we can start googling around and find the upstream issue. The problem lies in Docker inserting its docker0 interface into the trusted zone of firewalld.
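Before deleting anything, we can confirm that docker0 really ended up in the trusted zone. These are standard firewall-cmd queries against the permanent configuration (run as root):

```shell
# List interfaces assigned to the trusted zone in the permanent config
sudo firewall-cmd --permanent --zone=trusted --list-interfaces

# Or dump the whole zone definition that firewalld stores on disk
sudo cat /etc/firewalld/zones/trusted.xml
```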

The proper fix will arrive as system updates to the relevant packages, but in the meantime one workaround is to delete the trusted zone (or remove docker0 from it):

$ sudo rm /etc/firewalld/zones/trusted.xml
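If you would rather not delete the whole zone file, removing just the docker0 interface from the trusted zone should work too (a less destructive variant of the same workaround, using standard firewall-cmd options):

```shell
# Remove docker0 from the trusted zone in the permanent configuration...
sudo firewall-cmd --permanent --zone=trusted --remove-interface=docker0
# ...then apply the permanent configuration to the running firewall
sudo firewall-cmd --reload
```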

Then let’s restart the firewall and libvirt daemons:

$ sudo systemctl restart firewalld
$ sudo systemctl restart libvirtd
$ sudo virsh net-start default
Network default started

virsh net-start default starts the default libvirt network (the one with virbr0), since it wasn’t started automatically.
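To avoid starting the default network by hand after every reboot, we can also mark it to start automatically (a standard virsh subcommand):

```shell
# Make libvirt bring up the default network (and virbr0) on daemon start
sudo virsh net-autostart default
```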

This is how the zones look now:

$ firewall-cmd --get-active-zones
FedoraWorkstation
  interfaces: wlp4s0
docker
  interfaces: docker0
libvirt
  interfaces: virbr0

And finally, let’s continue with our original vagrant up:

$ vagrant up
...
$ vagrant ssh
[vagrant@localhost ~]$
Check out my book
Deployment from Scratch is a unique Linux book about web application deployment. Learn how deployment works from first principles rather than from the YAML files of a specific tool.
by Josef Strzibny