libvirt
About
Libvirt
KVM
The libvirt project
- is a toolkit to manage virtualization platforms
- is accessible from C, Python, Perl, Java and more
- is licensed under open source licenses
- supports KVM, QEMU, Xen, Virtuozzo, VMware ESX, LXC, bhyve and more
- targets Linux, FreeBSD, Windows and macOS
- is used by many applications
Clients
- virt-manager
- virsh
- cockpit
Installation
Install hypervisor utilities
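No package list is given here; on a Debian/Ubuntu host (which the apt/aptitude calls later on this page suggest) a typical set would be:

```shell
# Debian/Ubuntu package names; adapt for your distribution
apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst
```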
Configure
Networking
Prepare the definition files
Configuring libvirt networks in virt-manager is very limited, so we define the networks in virsh.
Use xmllint to check the files; it will save your nerves.
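For example:

```shell
# Exits non-zero and points at the offending line on malformed XML
xmllint --noout /root/libvirt/networks/net_ovs-virt.xml
```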
This is the definition file of a bare isolated bridge
/root/libvirt/networks/ovs-iso.xml
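The file content is not reproduced above; judging from the trunked definition below, a minimal isolated Open vSwitch bridge network would look like this (a sketch, with the bridge name assumed from the filename):

```xml
<network>
  <name>ovs-iso</name>
  <forward mode='bridge'/>
  <bridge name='ovs-iso'/>
  <virtualport type='openvswitch'/>
</network>
```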
Fake bridges are handled the same way.
/root/libvirt/networks/net_pub1.xml
Or a somewhat more complex definition, with portgroups and tagged vlans.
/root/libvirt/networks/net_ovs-virt.xml
<network>
  <name>net_ovs-virt</name>
  <forward mode='bridge'/>
  <bridge name='ovs-virt'/>
  <virtualport type='openvswitch'/>
  <portgroup name="trunk_internal">
    <vlan trunk="yes">
      <tag id='1000'/>
      <tag id='1500'/>
      <tag id='2000'/>
      <tag id='2500'/>
      <tag id='3000'/>
    </vlan>
  </portgroup>
  <portgroup name="trf1">
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name="pub1">
    <vlan>
      <tag id='500'/>
    </vlan>
  </portgroup>
  <portgroup name="pub2">
    <vlan>
      <tag id='501'/>
    </vlan>
  </portgroup>
  <portgroup name="lan_1a">
    <vlan>
      <tag id='1000'/>
    </vlan>
  </portgroup>
  <portgroup name="lan_1n">
    <vlan>
      <tag id='1500'/>
    </vlan>
  </portgroup>
  <portgroup name="lan_2a">
    <vlan>
      <tag id='2000'/>
    </vlan>
  </portgroup>
  <portgroup name="lan_2n">
    <vlan>
      <tag id='2500'/>
    </vlan>
  </portgroup>
  <portgroup name="lan_mon">
    <vlan>
      <tag id='3000'/>
    </vlan>
  </portgroup>
</network>
If trunk='yes' is not set on the top-level vlan element and only a single VLAN tag is given, the port is handled as an access port.
- To use port groups in virt-manager, virt-manager 4.0+ needs to be installed. ;-D
Please see
Volatile creation
root@infinitas ~ # virsh net-create net_ovs-iso.xml
Network ovs-iso created from net_ovs-iso.xml
root@infinitas ~ # virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes
 ovs-iso   active     no          no
Persistent definition
root@infinitas ~ # virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes

virsh # net-define /root/net_ovs-iso.xml
Network ovs-iso defined from /root/net_ovs-iso.xml

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes
 ovs-iso   inactive   no          yes

virsh # net-autostart ovs-iso
Network ovs-iso marked as autostarted

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes
 ovs-iso   inactive   yes         yes

virsh # net-start ovs-iso
Network ovs-iso started

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes
 ovs-iso   active     yes         yes

virsh #
Deleting a network
If the network is defined persistently, first undefine it, then destroy it.
virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes
 ovs-iso   inactive   yes         yes

virsh # net-start ovs-iso
Network ovs-iso started

virsh # net-undefine ovs-iso
Network ovs-iso has been undefined

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes
 ovs-iso   active     no          no

virsh # net-destroy ovs-iso
Network ovs-iso destroyed

virsh # net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------------------
 default   inactive   no          yes

virsh #
Configure a guest interface on Open vSwitch
The guest is configured as a member of the virtual network ovs-iso. Here is an excerpt from the virtual machine definition.
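The excerpt itself is not included above; a typical interface stanza for this setup looks like the following sketch (the portgroup and model choices are assumptions):

```xml
<interface type='network'>
  <!-- attach to the libvirt network defined earlier -->
  <source network='ovs-iso' portgroup='trunk_internal'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
```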
Static IP reservations via DHCP
Just edit your network and add the <host … /> tags:
virsh net-edit --network default
<network connections="1">
  <name>default</name>
  <uuid>55888abe-fde2-4ff5-812a-edb7968b915e</uuid>
  <forward mode="nat">
    <nat>
      <port start="1024" end="65535"/>
    </nat>
  </forward>
  <bridge name="virbr0" stp="on" delay="0"/>
  <mac address="52:54:00:53:18:d1"/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
      <host mac="52:54:00:e7:12:f0" name="fmc1" ip="192.168.122.11"/>
      <host mac="52:54:00:e7:12:f1" name="fmc2" ip="192.168.122.12"/>
    </dhcp>
  </ip>
</network>
Backup
Create backups of VMs
Mount some storage at e.g.
/mnt/qcow2
Encrypt it with an overlay filesystem like gocryptfs and mount it on
/mnt/qcow2_crypt
Create the following backup script /usr/local/sbin/kvmbackup
#!/usr/bin/env bash
#
# Save as: /usr/local/sbin/kvmbackup
#
# @implements Non-interactive backup operations cycle for active kvm machines
#
# Utilizes the virtnbdbackup tool under the hood
#
# The first backup of every VM in each new month is forced to be a full
# backup; the remaining backups of the month are incremental.
# Every backup is processed with lz4 compression, because this feature
# really saves a lot of space on the host drive.
#
# Implemented and tested under Ubuntu 20.04.2 LTS
#
# @see https://github.com/abbbi/virtnbdbackup for requirements and options of the virtnbdbackup tool

if [ "${UID}" -ne 0 ]
then
    exec sudo bash "$(realpath "${0}")"
fi

### INIT
APP="virtnbdbackup"
SOCKET="/var/tmp/${APP}.sock"
VMBUROOT="/mnt/qcow2_crypt"
TYPE="stream"
MOYE="$(date +'%m.%Y')"
VERBOSE=false

### MAIN
# NF skips the empty trailing line of the virsh table output
readarray -t VMLIST < <(virsh list | tail -n +3 | awk 'NF {print $2}')
for VM in "${VMLIST[@]}"; do
    # Target directory and backup level must be evaluated per VM,
    # i.e. inside the loop
    DIR_TARGET="${VMBUROOT}/${VM}/${MOYE}"
    LEVEL="inc"
    if [ ! -d "$DIR_TARGET" ]; then
        MKDIR_ARGS=( -p -m 0755 )
        $VERBOSE && MKDIR_ARGS+=( -v )
        mkdir "${MKDIR_ARGS[@]}" "$DIR_TARGET"
        # First backup of the month: force a full backup
        LEVEL="full"
    fi

    ${APP} --domain "$VM" --socketfile "$SOCKET" \
        --level "$LEVEL" --type "$TYPE" --compress \
        --output "$DIR_TARGET"
done
Set it executable
chmod a+x /usr/local/sbin/kvmbackup
Create a systemd-service for that script /lib/systemd/system/kvmbackup.service
/lib/systemd/system/kvmbackup.timer
Reload systemd and enable the service and timers
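The unit contents are not given above; a minimal sketch of the two units (the daily schedule is an assumption):

```ini
# /lib/systemd/system/kvmbackup.service
[Unit]
Description=Backup of running libvirt domains

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/kvmbackup

# /lib/systemd/system/kvmbackup.timer
[Unit]
Description=Run kvmbackup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Then: systemctl daemon-reload && systemctl enable --now kvmbackup.timer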
Cleanup Backups
Create a script
/usr/local/sbin/kvmbackup_clean
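The script body is not included above; a minimal sketch, assuming the directory layout created by kvmbackup and a retention window of roughly three months:

```shell
#!/usr/bin/env bash
# Sketch: remove per-VM monthly backup directories older than ~92 days.
# VMBUROOT and the retention window are assumptions; adapt as needed.
VMBUROOT="/mnt/qcow2_crypt"

# kvmbackup creates $VMBUROOT/<vm>/<mm.yyyy>/ directories
find "$VMBUROOT" -mindepth 2 -maxdepth 2 -type d -mtime +92 \
    -exec rm -rf {} +
```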
Set it executable
chmod a+x /usr/local/sbin/kvmbackup_clean
Create a systemd service
/usr/lib/systemd/system/kvmbackup_clean.service
Create a systemd timer
/usr/lib/systemd/system/kvmbackup_clean.timer
Reload systemd and enable the service and timers
libvirt and ansible
docs.ansible.com - community.libvirt.virt module – Manages virtual machines supported by libvirt
docs.ansible.com - community.libvirt.virt_net module – Manage libvirt network configuration
docs.ansible.com - community.libvirt.virt_pool module – Manage libvirt storage pools
Dynamic inventory
inventory/libvirt-localhost.yml
inventory/libvirt-remote.yml
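The file contents are not shown; with the community.libvirt inventory plugin they can be as small as the following sketch (the URIs are assumptions):

```yaml
# inventory/libvirt-localhost.yml
plugin: community.libvirt.libvirt
uri: qemu:///system

# inventory/libvirt-remote.yml would use a remote URI instead, e.g.
# uri: qemu+ssh://root@remote-machine/system
```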
First steps
List hosts
ansible -i inventory/libvirt-localhost.yml --list-hosts all
Tips and Tricks
Stop all running VMs
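No command is given here; a simple loop over the running domains does it (shutdown asks the guest politely, use destroy to pull the plug):

```shell
# Ask every running domain to shut down gracefully
for VM in $(virsh list --name); do
    virsh shutdown "$VM"
done
```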
Mount qemu-img
QEMU images can be attached as network block devices using the kernel module nbd
modinfo nbd
filename:       /lib/modules/5.3.0-rc5-amd64/kernel/drivers/block/nbd.ko
license:        GPL
description:    Network Block Device
depends:
retpoline:      Y
intree:         Y
name:           nbd
vermagic:       5.3.0-rc5-amd64 SMP mod_unload modversions
sig_id:         PKCS#7
signer:         Debian Secure Boot CA
sig_key:        A7:46:8D:EF
sig_hashalgo:   sha256
signature:      5B:51:56:EF:15:5D:8C:65:A2:F1:B4:60:9D:92:DA:D3:C1:0B:36:07:
                7E:2C:9F:EF:A9:6A:FF:18:93:39:B5:65:F9:45:3E:BB:C2:77:7F:A8:
                35:8E:CA:E0:2A:57:C9:15:4B:0E:A1:1C:DB:B0:45:1E:43:01:95:15:
                6F:F5:E7:3E:35:0A:8E:D2:62:90:35:1B:AB:73:66:88:E3:55:15:88:
                26:37:B7:7B:B2:67:52:35:15:39:D1:AA:57:8A:C8:1F:2C:AA:DB:A6:
                EF:00:0E:D4:39:43:D7:DD:EE:C2:85:AC:9C:CE:73:69:2D:F4:6E:D2:
                48:F3:09:CB:E6:01:C2:B5:78:3D:A2:91:2D:F8:BB:C6:E7:40:56:0D:
                63:31:5B:11:17:DF:19:A3:89:E3:14:48:B5:FB:3A:F4:3F:52:80:63:
                1B:2A:C0:AB:98:8A:50:D3:37:1B:70:7D:A6:BF:4F:5A:38:7E:92:E7:
                B6:53:58:8E:A2:1E:80:DB:7F:00:F2:77:43:7E:ED:20:1C:EB:58:39:
                6E:9F:E4:DC:2A:D0:C1:A2:D5:74:F3:90:C0:E5:1A:67:F2:86:A2:4A:
                4D:1E:18:FA:1D:59:CA:C3:45:6F:0F:1B:1F:FD:D1:E6:A5:ED:E1:A5:
                13:D7:B7:A2:A5:2A:A8:24:39:FB:68:DF:E7:37:A4:A9
parm:           nbds_max:number of network block devices to initialize (default: 16) (int)
parm:           max_part:number of partitions per device (default: 16) (int)
According to modinfo, nbd has two parameters; their defaults are fine.
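Should the defaults not suffice, the parameters can be set at load time, e.g.:

```shell
# Create 32 nbd devices instead of the default 16
modprobe nbd nbds_max=32
```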
Load module
modprobe nbd
Connect the image as an NBD device
qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/disk_image.qcow2
Now the image and all its partitions are available as block devices:
ll /dev/nbd*
brw-rw---- 1 root disk 43, 0   Oct 20 12:44 /dev/nbd0
brw-rw---- 1 root disk 43, 1   Oct 20 12:44 /dev/nbd0p1
brw-rw---- 1 root disk 43, 2   Oct 20 12:44 /dev/nbd0p2
brw-rw---- 1 root disk 43, 3   Oct 20 12:44 /dev/nbd0p3
brw-rw---- 1 root disk 43, 4   Oct 20 12:44 /dev/nbd0p4
brw-rw---- 1 root disk 43, 32  Oct 20 12:40 /dev/nbd1
brw-rw---- 1 root disk 43, 320 Oct 20 12:40 /dev/nbd10
brw-rw---- 1 root disk 43, 352 Oct 20 12:40 /dev/nbd11
brw-rw---- 1 root disk 43, 384 Oct 20 12:40 /dev/nbd12
brw-rw---- 1 root disk 43, 416 Oct 20 12:40 /dev/nbd13
brw-rw---- 1 root disk 43, 448 Oct 20 12:40 /dev/nbd14
brw-rw---- 1 root disk 43, 480 Oct 20 12:40 /dev/nbd15
brw-rw---- 1 root disk 43, 64  Oct 20 12:40 /dev/nbd2
brw-rw---- 1 root disk 43, 96  Oct 20 12:40 /dev/nbd3
brw-rw---- 1 root disk 43, 128 Oct 20 12:40 /dev/nbd4
brw-rw---- 1 root disk 43, 160 Oct 20 12:40 /dev/nbd5
brw-rw---- 1 root disk 43, 192 Oct 20 12:40 /dev/nbd6
brw-rw---- 1 root disk 43, 224 Oct 20 12:40 /dev/nbd7
brw-rw---- 1 root disk 43, 256 Oct 20 12:40 /dev/nbd8
brw-rw---- 1 root disk 43, 288 Oct 20 12:40 /dev/nbd9
Simply mount the device wherever you like.
Unmount it when done.
Disconnect the NBD device.
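The three steps above, sketched with an assumed mount point of /mnt:

```shell
mount /dev/nbd0p1 /mnt           # mount e.g. the first partition
# ... work on the files ...
umount /mnt                      # unmount when done
qemu-nbd --disconnect /dev/nbd0  # release the nbd device
```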
Optionally unload the module nbd from the kernel.
modprobe -r nbd
Libvirt with EFI
Little quote from this OVMF whitepaper
The Unified Extensible Firmware Interface (UEFI) is a specification that defines a software interface between an operating system and platform firmware. UEFI is designed to replace the Basic Input/Output System (BIOS) firmware interface.
Hardware platform vendors have been increasingly adopting the UEFI Specification to govern their boot firmware developments. OVMF (Open Virtual Machine Firmware), a sub-project of Intel's EFI Development Kit II (edk2), enables UEFI support for Ia32 and X64 Virtual Machines.
Install UEFI-firmware for qemu
aptitude install ovmf qemu-efi
Code-Snippet for UEFI-boot in KVM
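The snippet itself is not reproduced here; the usual <os> section for OVMF looks roughly like the following sketch (firmware paths are the Debian defaults and the VM name is a placeholder):

```xml
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <!-- read-only firmware code, per-VM writable NVRAM -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/myvm_VARS.fd</nvram>
</os>
```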
The VM boots fine with UEFI.
But snapshots are no longer supported.
To work around this limitation, we have to change type='pflash' to type='rom'. Okay, now the VM does not boot any more and we are dropped into a UEFI shell.
Obviously the NVRAM is missing. To get the system working again I had to point it to the bootloader.
You may also exit the UEFI shell to enter a BIOS-like UEFI menu. Select
- Boot Maintenance Manager
- Boot from file
- Select directory "EFI"
- Select directory "debian"
- Select file "grubx64.efi"
Well, I heard that TianoCore is the best UEFI implementation. Unfortunately I cannot give up the ability to take snapshots for UEFI, so I'll stick with BIOS for the moment. Everything is in place; when kvm/libvirt is ready, I'll be the first one to switch to the new interface.
Remove all the UEFI stuff
virsh edit $VM
So far …
btrfs nocow
To improve the IO-performance of your virtual machines on btrfs, please see filesystems/btrfs#No Copy on Write (NOCOW)
Sparsify qcow2 image
Install guestfs-tools
apt install guestfs-tools
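The actual sparsify call is not shown above; with the VM shut down, it is typically (output filename is an example):

```shell
# Writes a sparsified, compressed copy of the image
virt-sparsify --compress "$DOMAIN".qcow2 "$DOMAIN"_sparse.qcow2
```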
Resize qcow2 image
This procedure allows resizing the disk image from the host, without having to boot the VM with e.g. a live image.
Install utilities
apt install guestfs-tools
- Shutdown VM
- Create a backup copy of the original image
- Acquire some info about the qcow2 image
virt-filesystems --long -h --all -a "$DOMAIN".qcow2
- Resize the VM image to the new size
qemu-img resize "$DOMAIN".qcow2 256G
- Create a copy of the resized VM image
cp --reflink=always "$DOMAIN".qcow2 "$DOMAIN".qcow2_bak2
- Expand partitions and filesystems to the new disk size
virt-resize -v -expand /dev/sda4 "$DOMAIN".qcow2_bak2 "$DOMAIN".qcow2
- Acquire some info about the qcow2 image
virt-filesystems --long -h --all -a "$DOMAIN".qcow2
Name        Type        VFS      Label    MBR  Size  Parent
/dev/sda1   filesystem  unknown  -        -    1,0M  -
/dev/sda2   filesystem  unknown  -        -    244M  -
/dev/sda3   filesystem  swap     -        -    3,7G  -
/dev/sda4   filesystem  btrfs    rootfs   -    252G  -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/a7dbcdb5896eaf8f841705c9805f37609abcda0771601875eb32eb9b58fcdeda filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/8253cfad42dae565527e927d28b79468e0e4b348add029d3a6f48f0187ee0ea7 filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/e79011c2a2dcf3c4ae59ceb9c0c60fc364326467e066782442dbcc4305af2bac filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/aeaba1c43b3a78776ee8426b944c2cda31976e5651ca1f3bb9106665d5d33297 filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/0345843bacc85bb63378440906a8e5baa53fc05bacdcd01f304a1ddef2a5c6f1 filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/e43dc3eb04fa5f4e2c86051ec87575d6687b221f2a1623944c2cfec3efe8bd55 filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/8bf622e74770efed48316800b08d1eb43f34caf1bd13132cedaae8c01f6a0da1 filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/3104a94e82483e588c1623d6c075a9d7023368dab3e38d1ed0ec097dd6f7f390-init filesystem btrfs rootfs - - -
btrfsvol:/dev/sda4/var/lib/docker/btrfs/subvolumes/3104a94e82483e588c1623d6c075a9d7023368dab3e38d1ed0ec097dd6f7f390 filesystem btrfs rootfs - - -
/dev/sda1   partition   -        -        -    1,0M  /dev/sda
/dev/sda2   partition   -        -        -    244M  /dev/sda
/dev/sda3   partition   -        -        -    3,7G  /dev/sda
/dev/sda4   partition   -        -        -    252G  /dev/sda
/dev/sda    device      -        -        -    256G  -
virt-filesystems --long -h --all -a "$DOMAIN".qcow2  4,43s user 3,09s system 8% cpu 1:24,77 total
- Start and test the VM
virsh start "$DOMAIN"
- (Some days later) clean up the backup images
Remote viewer
Create a LocalForward
with the integrated ssh command line (escape sequence ~C, then e.g. -L 5900:localhost:5900)
or with CLI arguments
ssh -L 5900:localhost:5900 remote-machine
Then in a new shell start remote viewer with the connection URL as argument
remote-viewer 'spice://127.0.0.1:5900'
Or create a file
vm-name.ini
Start the remote viewer with this configuration file as argument
remote-viewer vm-name.ini
Daily snapshot report
Create a little script that queries libvirt for domains with attached snapshots, gathers some snapshot info and sends an email to some recipients.
/usr/local/sbin/libvirt_snapshot_report.sh
#!/bin/bash

DOMAINS_WITH_SNAPSHOTS="$(virsh list --with-snapshot)"
FQDN="$(hostname -f)"
DATE="$(date '+%F %T %Z')"
SUBJECT="Libvirt snapshot report of host: '$FQDN'"
RECIPIENTS=("root")
declare -A DOMAIN_SNAPSHOT_INFO
declare -a DOMAINS

### GATHER SNAPSHOT INFO
readarray -t DOMAINS \
    < <(virsh list --with-snapshot --name \
        | sed '/^\s*$/d')

for DOMAIN in "${DOMAINS[@]}"; do
    DOMAIN_SNAPSHOT_INFO["$DOMAIN"]="$(virsh snapshot-list "$DOMAIN")"
done

SNAPSHOT_DETAILS_PRETTY="$(
for DOMAIN in "${DOMAINS[@]}"; do
    cat <<-EOI
	#### Domain: '$DOMAIN'
	${DOMAIN_SNAPSHOT_INFO["$DOMAIN"]}

	EOI
done
)"

mail -s "$SUBJECT" "${RECIPIENTS[@]}" <<-EOM
	## Libvirt snapshot report

	### Metainformation

	fqdn: "$FQDN"
	date: "$DATE"

	### Domains with snapshots:

	$DOMAINS_WITH_SNAPSHOTS

	### Per domain snapshot information:

	$SNAPSHOT_DETAILS_PRETTY

	Please take care of dangling snapshots.
EOM
Note: The here-doc bodies must be indented with TAB characters (the <<- form strips leading tabs), or the script won't work. Make sure the tabs survive when copying.
Make script executable
chmod a+x /usr/local/sbin/libvirt_snapshot_report.sh
Create a systemd-service
/lib/systemd/system/libvirt-snapshot-report.service
Please also see
Systemd#systemd.timers
Create a systemd-timer
/lib/systemd/system/libvirt-snapshot-report.timer
Enable the systemd-timer
systemctl enable libvirt-snapshot-report.timer