We have a slew of ways to test the Linux kernel: selftests, kunit, and a variety of subsystem-specific test suites. Intel 0-day has done a fantastic job of helping find bugs, and so has syzkaller. Some subsystems, such as filesystems and memory management, have really complex test frameworks though and have fallen behind in automation. Is it possible to automate testing of complex subsystems? Should we? And what are the implications if we're successful?
To provide perspective, it takes roughly 10 years to stabilize a new Linux filesystem. Can we do better? The kdevops project was started with the goal of first addressing automation of testing for complex subsystems such as filesystems, to help reduce the amount of time it takes to stabilize new filesystems or new filesystem features. From the start the project aimed to support local virtualization, bare metal, and all cloud providers. Seven years after the project got started, and with the help of a lot of community collaboration, kdevops is now an integral part not only of testing pipelines but also of development workflows. It now enables continuous integration for different subsystems, starting with:
* Linux modules
* Linux radix tree
* Linux filesystems: xfs, btrfs, ext4, tmpfs
* Linux network filesystems: NFS
* Linux blktests
* Linux selftests
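To give a concrete sense of how this automation is driven, below is a minimal sketch of a typical kdevops run for a filesystem workflow. It assumes a local virtualization setup; the make targets shown are the commonly documented ones and may vary between kdevops releases, so treat this as illustrative rather than exact.

```
# Configure kdevops; like the kernel, it is kconfig-based. Select the
# workflow (e.g. fstests), the bring-up method (local virtualization,
# bare metal, or a cloud provider), and the kernel tree to test.
make menuconfig

# Generate the Ansible and node configuration from those selections.
make

# Bring up the test nodes.
make bringup

# Build and install the selected kernel on the nodes.
make linux

# Set up the chosen workflow and run a baseline, e.g. for fstests:
make fstests
make fstests-baseline
```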
A dashboard of results is also updated automatically based on these automated test runs:

https://kdevops.org

What have we learned from all this effort so far? What lies ahead on the roadmap? And if you want to contribute and help, how do you do that?