Different distributions ship different boot parameters. Look them up via:
man 7 bootparam
Here are some important kernel command line parameters that should not be forgotten.
GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 cgroup.enable=memory swapaccount=1 scsi_mod.use_blk_mq=1 nomodeset"
Source of the hint: FreeIPA Deployment Recommendations
DO NOT use ipv6.disable=1 on the kernel command line: it disables the whole IPv6 stack and breaks Samba.
If necessary, add ipv6.disable_ipv6=1 instead: it keeps the IPv6 stack functional but does not assign IPv6 addresses to any of your network devices. This is the recommended approach when you do not use IPv6 networking.
Via sysctl you may also disable IPv6 on all interfaces or only on very specific ones.
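As a sketch, the corresponding sysctl settings could look like this (the file name and the interface name eth0 are assumptions):

```ini
# /etc/sysctl.d/40-disable-ipv6.conf (hypothetical file name)
# Stop IPv6 address assignment on all current and future interfaces ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
# ... or only on one specific interface, e.g. eth0:
# net.ipv6.conf.eth0.disable_ipv6 = 1
```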
The kernel parameter vm.swappiness controls how aggressively the kernel swaps out memory pages.
- Default: 60
- Adjustable at runtime via procfs
- Adjustable at boot time via sysctl

vm.swappiness = 5

Apply the configuration via sysctl.
# sysctl --system
* Applying /etc/sysctl.d/30-baloo-inotify-limit.conf ...
fs.inotify.max_user_watches = 524288
* Applying /etc/sysctl.d/30-postgresql-shm.conf ...
* Applying /etc/sysctl.d/30-tracker.conf ...
fs.inotify.max_user_watches = 65536
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %e
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/vm.conf ...
vm.swappiness = 5
vm.dirty_background_ratio = 8
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 32
vm.dirty_writeback_centisecs = 500
* Applying /etc/sysctl.conf ...
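The runtime (procfs) path mentioned above can be sketched as follows; reading works unprivileged, while writing requires root and is not persistent across reboots:

```shell
# read the current value (default: 60 on most distributions)
cat /proc/sys/vm/swappiness

# set it at runtime via procfs (root required, not persistent):
# echo 5 > /proc/sys/vm/swappiness
```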
Zswap is a lightweight compressed cache for swap pages. It takes pages that are in the process of being swapped out and attempts to compress them into a dynamically allocated RAM-based memory pool. zswap basically trades CPU cycles for potentially reduced swap I/O. This trade-off can also result in a significant performance improvement if reads from the compressed cache are faster than reads from a swap device.
grep -R . /sys/module/zswap/parameters
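Beyond zswap.enabled=1, further zswap module parameters can be set on the kernel command line; the values below are illustrative assumptions, not recommendations:

```ini
# appended to GRUB_CMDLINE_LINUX_DEFAULT (example values; lz4 requires
# the lz4 compression module to be available)
zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20
```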
On a hypervisor, the bfq scheduler seems to be a reasonable choice.
On a VM without disk or controller pass-through, none should be used. This avoids optimizing the queues twice, which is inefficient and counterproductive: the hypervisor will optimize the I/O requests anyway.
Make alternative schedulers available
BLK-MQ is nowadays broadly available and enabled in distributions. Using multiple queues on multicore systems with fast storage promises some performance gains.
But when I took a look at the available schedulers, only "mq-deadline" and "none" were offered.
This is because the other schedulers are shipped as kernel modules and need to be loaded into the kernel first via modprobe.
Modules may be loaded manually:
Modules may also be loaded automatically at boot time via /etc/modules.
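Assuming the bfq scheduler as an example, both paths might look like this (root required):

```shell
# load the scheduler module manually, right now
modprobe bfq

# load it automatically at boot: one module name per line in /etc/modules
echo bfq >> /etc/modules
```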
Set IO-Scheduler permanently
This method seems to no longer work; the elevator= boot parameter was removed from recent kernels together with the legacy block layer.
- Service affecting.
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=$SCHEDULER"
Refresh the GRUB config and reboot.
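Depending on the distribution, refreshing the GRUB config might look like one of these (the output path may differ, e.g. on EFI systems):

```shell
# Debian/Ubuntu
update-grub

# Fedora/RHEL
grub2-mkconfig -o /boot/grub2/grub.cfg
```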
Works at runtime and at boot time!
- More selective because disks may be filtered with a regex.
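A udev rule of this kind could look like the following sketch (the file name, the sd[a-z] pattern, and the bfq scheduler are assumptions):

```ini
# /etc/udev/rules.d/60-ioscheduler.rules (hypothetical file name)
# match whole SCSI/SATA disks via regex and set their I/O scheduler to bfq
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"
```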
The reload will probably happen automatically, but the "trigger" is necessary.
udevadm control --reload-rules && udevadm trigger
Drop FS Cache
echo 3 | tee /proc/sys/vm/drop_caches
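The written value controls what is dropped; running sync first writes dirty pages back so more cache can actually be freed (root required):

```shell
# 1 = page cache, 2 = dentries and inodes, 3 = both
sync
echo 3 | tee /proc/sys/vm/drop_caches
```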
Disable TCP Timestamping
hping3 -S -p 22 --tcp-timestamp $DESTINATION

root@libertas /home/tobias/Downloads # hping3 -S -p 22 --tcp-timestamp www.rockstable.it
HPING www.rockstable.it (bridge 126.96.36.199): S set, 40 headers + 0 data bytes
len=56 ip=188.8.131.52 ttl=53 DF id=0 sport=22 flags=SA seq=0 win=65160 rtt=24.2 ms
TCP timestamp: tcpts=2031225761

len=56 ip=184.108.40.206 ttl=53 DF id=0 sport=22 flags=SA seq=1 win=65160 rtt=19.8 ms
TCP timestamp: tcpts=2031226761
HZ seems hz=1000
System uptime seems: 23 days, 12 hours, 13 minutes, 46 seconds
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
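The echo above is lost on reboot; to persist the setting, a sysctl drop-in can be used (the file name is an assumption):

```ini
# /etc/sysctl.d/40-tcp-timestamps.conf (hypothetical file name)
# prevents remote uptime estimation via TCP timestamps, as demonstrated
# by the hping3 output above
net.ipv4.tcp_timestamps = 0
```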