
June 2025 updates – Linaro and ELISA joined KernelCI!


A lot has happened in KernelCI since our last blog update. The community continues to grow, with Linaro and ELISA joining as members and more contributors and companies adding test results. The infrastructure as a whole continues to evolve, and we are in the first stages of developing the .kernelci.yml test plan standard.

Linaro and ELISA joined us as members

Linaro is joining as a Premier member and ELISA as an Associate member. We thank them both for their commitment to taking part in the KernelCI community and joining us in our mission to ensure the quality, stability, and long-term maintenance of the Linux kernel.

“Linaro is excited to be rejoining the KernelCI project. KernelCI’s mission to provide Linux Kernel developers with testing at scale across a diverse set of platforms is key to ensuring the long term quality, reliability and security of the Linux kernel. Linaro looks forward to helping KernelCI to grow and become an even more valuable resource.” said Grant Likely, Linaro CTO.

“Linking the requirements to the tests will enable more efficiency in regression testing,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “Being able to connect the traceability between code, requirements and tests will get us closer to improving the code coverage and quality of the Linux kernel images. The ELISA Project is focusing on kernel requirements and is looking forward to working with the KernelCI community to make the regression testing more effective over time.”

Automated Testing Summit coming up

KernelCI is hosting the Automated Testing Summit (ATS) 2025 in Denver, CO, USA, co-located with the Open Source Summit North America. The agenda is out, and a KernelCI presentation with the latest project updates is on the schedule.

There is still time to sign up and meet us there. It is a hybrid event, so both in-person and virtual attendees are welcome.

Qualcomm, RISC-V International and Texas Instruments submitting results

KernelCI gained data from three new submitters: Qualcomm, RISC-V International and Texas Instruments have connected their test systems to KernelCI. In our architecture, they are part of the CI ecosystem. They start by listening for new build events from Maestro, then download the built kernel binaries and artifacts from Maestro. With the kernel and artifacts, they can execute the testing in their own environments, sometimes hidden behind a firewall. When the tests are completed, they submit the complete results to KCIDB, which then become accessible through our Dashboard and kci-dev.
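At a high level, the flow looks something like the sketch below. This is an illustration only: the endpoint URLs, payload fields, and helper functions are hypothetical placeholders, not the actual Maestro or KCIDB APIs.

```python
# Hypothetical sketch of an external CI system feeding results back to KernelCI.
# Endpoint URLs, payload fields, and tokens are illustrative, not the real APIs.
import requests

MAESTRO_EVENTS_URL = "https://example.org/maestro/events"  # placeholder
KCIDB_SUBMIT_URL = "https://example.org/kcidb/submit"      # placeholder

def poll_build_events():
    """Listen for new kernel build events published by Maestro (assumed JSON list)."""
    return requests.get(MAESTRO_EVENTS_URL, timeout=30).json()

def run_tests_in_lab(kernel_url):
    """Download the built kernel and artifacts, then run tests in the private lab.
    The lab may sit behind a firewall; only the results leave it."""
    # ... fetch kernel_url, boot boards, collect results ...
    return [{"id": "lab:boot-test-1", "status": "PASS"}]

def submit_to_kcidb(results):
    """Push the completed results to KCIDB so they show up in the Dashboard and kci-dev."""
    requests.post(KCIDB_SUBMIT_URL, json={"tests": results}, timeout=30)

for event in poll_build_events():
    results = run_tests_in_lab(event["artifact_url"])
    submit_to_kcidb(results)
```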

.kernelci.yml test plan

We are proposing to introduce a standardized .kernelci.yml file in upstream kernel repositories to help the KernelCI community automatically discover and configure testing for each kernel tree. Part of the goal is also to transfer ownership of the test plan to kernel maintainers by keeping these files close to the code they cover, inside their subsystem folders.

This YAML file would specify which branches to test, which kernel configs to build, which tests to execute, and so on, enabling project maintainers to directly declare their KernelCI preferences. The main benefit is reducing the manual effort and guesswork currently involved in onboarding new trees to KernelCI, ultimately making kernel testing more scalable, transparent, and easier to maintain for both KernelCI maintainers and kernel developers.
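To make the idea concrete, here is a purely hypothetical sketch of what such a file could declare and how a tool might consume it; the field names are invented for illustration, as the actual schema is still being discussed.

```python
# Illustrative only: the .kernelci.yml schema is still under discussion,
# so every field name below is a hypothetical example, not the final format.
import yaml  # PyYAML

EXAMPLE_KERNELCI_YML = """
branches:
  - master
  - for-next
builds:
  - arch: arm64
    config: defconfig
  - arch: x86_64
    config: allmodconfig
tests:
  - boot
  - kselftest
"""

plan = yaml.safe_load(EXAMPLE_KERNELCI_YML)
print(plan["branches"], [build["arch"] for build in plan["builds"]])
```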

KCIDB-ng

With the amount of test result data received by KernelCI every day growing, KCIDB started to show signs of wear. To address its limitations, including a previous implementation heavily dependent on specific Google Cloud technologies, we created kcidb-ng. The new project brings a system that is easy to deploy locally for development and can also run in any cloud environment. It also greatly simplifies the ingestion process.

Essentially, we have an API that receives JSON files with the test result content and stores them in a spool directory. This entrypoint was written in Rust for efficiency. Then an ingester loop on the server picks up the files and ingests them into PostgreSQL.
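A minimal sketch of that spool-then-ingest pattern is shown below. It is not kcidb-ng's actual code (the real entrypoint is written in Rust), and the directory path, table name, and columns are assumptions made only for illustration.

```python
# Minimal sketch of the spool-then-ingest pattern described above.
# Directory layout, table name, and columns are assumptions, not kcidb-ng's schema.
import json
import time
from pathlib import Path

import psycopg2  # assumed PostgreSQL driver for this sketch

SPOOL_DIR = Path("/var/spool/kcidb")  # where the API entrypoint would drop files

def ingest_forever(dsn):
    conn = psycopg2.connect(dsn)
    while True:
        for path in SPOOL_DIR.glob("*.json"):
            submission = json.loads(path.read_text())
            with conn, conn.cursor() as cur:
                # Store the raw submission; a real ingester would validate it
                # against the KCIDB schema and split it into proper tables.
                cur.execute(
                    "INSERT INTO submissions (data) VALUES (%s)",
                    (json.dumps(submission),),
                )
            path.unlink()  # done: remove the spooled file
        time.sleep(5)     # poll the spool directory again after a short pause
```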

Additionally, kcidb-ng comes with logspec integration out of the box, so data from any origin can now be parsed to generate insights about build and test failures and to produce KCIDB issue objects. These issue objects are the bridge between seeing a test failure and being able to report it as a regression to the community.
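Conceptually, the step from a failing log to an issue object looks something like the sketch below; the pattern and field names are invented and do not reflect logspec's real matching rules or the exact KCIDB schema.

```python
# Conceptual sketch only: logspec's real patterns and the exact KCIDB issue
# schema differ; this just illustrates log line -> issue object -> incident.
import hashlib
import re

KERNEL_PANIC = re.compile(r"Kernel panic - not syncing: (?P<reason>.+)")

def log_to_issue(test_id, log_text):
    """Turn a recognizable failure signature into an issue-like record,
    plus an incident linking it back to the failing test."""
    match = KERNEL_PANIC.search(log_text)
    if not match:
        return None
    reason = match.group("reason").strip()
    issue_id = "sketch:" + hashlib.sha1(reason.encode()).hexdigest()[:12]
    issue = {"id": issue_id, "comment": f"Kernel panic: {reason}"}
    incident = {"issue_id": issue_id, "test_id": test_id}
    return issue, incident

print(log_to_issue("maestro:boot-1",
                   "Kernel panic - not syncing: Attempted to kill init!"))
```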

Right now, we are working with all KCIDB origins to move them over to the new API.

Strengthening core infra

Behind the scenes, we’ve been working hard to optimize our infrastructure costs and performance. Our build times on the Azure build cluster improved dramatically, from 88 to 17 minutes, after migrating to modern D8lds_v6 instances, while actually reducing costs. We also implemented a caching solution for linux-firmware that cut our data egress costs by over 95%, from a projected $69k annually down to manageable levels. These optimizations mean faster feedback for developers and more sustainable operations for the project. Additionally, we’ve begun migrating KCIDB components to more cost-effective cloud services, starting with kcidb-rest, which will help us maintain reliable service while keeping infrastructure costs under control.

Final thoughts

We will keep working on making KernelCI easier for the community to benefit from. From greater stability to an improved Web Dashboard and a more complete kci-dev CLI, there’s much more to enhance in KernelCI for everyone. A big thank you to the entire KernelCI community for making this progress possible! Talk to us at kernelci@lists.linux.dev, in the #kernelci IRC channel on Libera.Chat, or through our Discord server!

Exploring the KernelCI Dashboard


The KernelCI project is a critical initiative in the Linux kernel development ecosystem, providing automated testing and continuous integration for kernel builds. In this blog post, we’ll explore the current features available in the KernelCI Dashboard at https://dashboard.kernelci.org.

KernelCI Dashboard Overview

The current KernelCI Dashboard provides a comprehensive view of kernel testing activities, organizing information into three primary sections: Trees, Hardware, and Issues. This structure allows developers to easily navigate between different aspects of kernel testing. In this web dashboard, it is possible to see the results of CI systems from different maintainers, check their status, details, history, and issues, and gather valuable insight for building a better and more reliable kernel.

Trees

The Trees section serves as the main entry point to the dashboard, displaying a tabular view of kernel trees, branches, and commits being tested. The Dashboard visitor is then able to select the one they are interested in: a tree is a fork of the Linux kernel, identified by a specific repository URL and branch.

> Image: The Tree Listing page. It shows the website menu on the left, a header on top with a search bar, and a table listing the 10 trees with the most tests; each row has the tree information as well as counts of how many builds, boots, and tests were executed on it.

With options for sorting and searching for a specific tree, it is possible to check the details of a desired tree, where a user can see which configurations, architectures, and hardware were used, along with a history of the builds, boots, and tests.

This allows maintainers to see the successes, failures, and other results of the CI systems and to focus their attention on what matters most to them. A preview of a build or test can be seen right on this page, showing the test or build output with failures and errors highlighted, along with a list of issues that were triggered from it.

> Video: Navigating to the mainline Tree Details. There are cards with information about the specifications used, graphs for the result status and history of results, and a list of boot tests. A boot is selected and its output and issues are shown.

Builds and Tests

Each build and test also has a lot of data that a maintainer may find useful, so by pressing “View more details”, the user is directed to a page containing more information about that specific item, such as the platform it was tested on, the history of its results, the artifacts (logs or result files) it produced, and miscellaneous data. For a build, it is also possible to see every test that was executed on it, with links to the details of those tests.

> Video: A boot test details page. There are sections with basic information, as well as the history of its results, miscellaneous data from it, and the files it produced as artifacts.

> Video: A page with details of a build. There are sections with basic information, miscellaneous data, output files, and a table listing the tests that were performed on it.

Hardware

The Hardware section focuses on the physical devices used for testing, providing insights into how different kernels perform across various hardware platforms. Hardware maintainers might want to see the test results for their hardware regardless of which tree they came from. For that purpose, the dashboard also contains a tab listing the hardware that was tested.

The listing is similar to that of trees, but when entering the details page, it is possible to see all the trees that contributed to testing on that hardware, as well as to enable or disable the visualization of results from each of those trees.

> Video: Navigation from the hardware listing page to a hardware details page. The details page shows a table at the top listing the trees that tested or had builds on that hardware, with their corresponding commit and counting of total builds, boots and tests.

Beyond looking at trees on a specific piece of hardware or at hardware on a specific tree, it is also possible to filter by any of the card items, either by clicking on them or by using the Filters button, giving better visibility into the items a user is interested in.

Issues

The Issues section is dedicated to tracking problems identified during testing, making it easier for developers to identify and address failures.

Issues group builds or tests whose results share a certain status, a certain message in their logs, or other common conditions. In the dashboard, it is possible to list the most recent issues and see when and where they have appeared, in a layout resembling the other listing pages.

From that page, or from links throughout the dashboard, a user can navigate to a page with details of that issue, including the first time the issue was observed, the specific data on how it came to be, and a list of every incident of that issue, whether builds or tests.

> Video: Navigating from the issue listing to an issue details page. The detailed page shows sections of the issue’s information, its first incident, the specifications of its error, miscellaneous data, and a table with builds that triggered that issue.

Closing thoughts

The current KernelCI Dashboard provides a powerful interface for monitoring, analyzing, and troubleshooting kernel testing results. Its comprehensive features make it an essential tool for kernel developers, distributions, and hardware vendors who rely on Linux kernel stability and compatibility.

The dashboard allows users to inspect CI results from trees, hardware, builds, and tests, checking for specific issues, filtering for certain configurations, and looking over the results they are interested in. Coupled with detailed pages, interactions, and CI result history, it provides better tools for specific use cases, enhanced visualization, and improved troubleshooting capabilities. With a redesigned interface and easy shareability and filtering, the Dashboard can address the needs of different users in the kernel development ecosystem, from maintainers to lab operators.

Whether you’re a kernel developer tracking your patches, a distribution maintainer ensuring stability, or a hardware vendor verifying compatibility, the KernelCI Dashboard (both current and future versions) offers the insights and tools needed to ensure Linux kernel quality across the ecosystem.

Users are encouraged to report bugs and suggestions to kernelci-webdashboard@groups.io to help improve this vital project.

We’d like to thank ProFUSION for their contributions to this project as a supplier to the KernelCI Project. This blog post was written by them.