QA/Test Report Tool
From Yocto Project
Started by Joshua Lock (initial list of QA data sources); page moved from QA/Test Reporting Tool to QA/Test Report Tool by Benjamin (name change to simplify searching).
Latest revision as of 19:47, 10 November 2016
General Expectations
(add link-shareability of results to the list of requirements when it fits somewhere)
Here's a list of initial expectations of the Test Reporting Tool.
1. The TRT must be able to receive test reports via a remote communication protocol:
   - from expected sources of results (authenticated);
   - to existing test buckets (a logical separation of tests, e.g. by test component), through report sessions with the ability to:
     - resume
     - update existing results
     - abort/discard
   - using existing test report formats such as XML.
2. The TRT should present an interface with views organized around the following objectives:
   - Test buckets (browsing a component's test result history)
   - Test collections (a milestone/release or another defined collection)
3. The TRT should be able to identify the type of result it is consuming:
   - Boolean test results (passed/failed)
   - Numeric test results (measurements)
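As a rough illustration of the requirements above, here is a minimal in-memory sketch of report sessions posting results into test buckets, with updates, abort/discard, and boolean-versus-numeric result detection. All class and method names here are hypothetical, invented for illustration; they are not part of any existing TRT code.

```python
# Minimal sketch of the session/bucket model described above.
# All names are hypothetical illustrations, not an existing API.
from dataclasses import dataclass, field


@dataclass
class Result:
    name: str
    value: object  # bool for pass/fail, float for measurements

    @property
    def kind(self) -> str:
        # Requirement 3: identify the type of result being consumed.
        return "boolean" if isinstance(self.value, bool) else "numeric"


@dataclass
class Bucket:
    """A logical grouping of results, e.g. by test component."""
    component: str
    results: dict = field(default_factory=dict)


class ReportSession:
    """A resumable, updatable, abortable report session (requirement 1)."""

    def __init__(self, bucket: Bucket):
        self.bucket = bucket
        self.pending = {}

    def report(self, result: Result):
        # A later report with the same name updates the existing result.
        self.pending[result.name] = result

    def abort(self):
        # Discard everything reported in this session.
        self.pending.clear()

    def commit(self):
        # Merge the session's results into the target bucket.
        self.bucket.results.update(self.pending)
        self.pending.clear()
```

A real TRT would put this behind a remote protocol (e.g. HTTP) and authenticate the reporting source; the sketch only shows the session/bucket data model.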
QA Data Sources
There are several sources of quality data we'd like to be able to track in a QA dashboard:
- oe-selftest results
- bitbake-selftest results
- build performance metrics
- buildhistory trends
- ???
Prior Art
- QA maintains QA_sanity_history reports which list sanity test results and compare them to the previous run (anyone know how these are generated?).
- Someone (?) generates graphs of the performance data (https://wiki.yoctoproject.org/charts/perf_milestone/performance_test.html).
- CustomReports (https://github.com/fullvlad/CustomReports/) includes visualisation of test run pass rates and graphs changes to success rates.
- Jenkins (https://jenkins.io/) parses and displays JUnit XML output out of the box; see "How Jenkins CI Parses and Displays JUnit Output" (http://nelsonwells.net/2012/09/how-jenkins-ci-parses-and-displays-junit-output/) for a breakdown.
- OpenQA (https://openqa.opensuse.org/) from openSUSE has a visual status overview of the tests run against an installed OS. Code is on GitHub (https://github.com/os-autoinst/openQA), and the initial release announcement provides an overview: "openSUSE Announces First Public Release of openQA" (https://news.opensuse.org/2011/10/11/opensuse-announces-first-public-release-of-openqa/).
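Since JUnit XML keeps coming up as the de facto interchange format (it is what Jenkins consumes out of the box), here is a sketch of ingesting such a report with only Python's standard library. The report content and the `parse_junit` helper are invented for illustration; the element names (`testsuite`, `testcase`, `failure`, `error`) follow the common JUnit report layout.

```python
# Parse a minimal JUnit-style XML report using only the standard library.
import xml.etree.ElementTree as ET

# A hypothetical report, following the common JUnit layout.
JUNIT_XML = """\
<testsuite name="oe-selftest" tests="2" failures="1">
  <testcase classname="bbtests" name="test_build" time="12.4"/>
  <testcase classname="bbtests" name="test_sdk" time="3.1">
    <failure message="SDK build failed"/>
  </testcase>
</testsuite>
"""


def parse_junit(xml_text):
    """Return a list of (classname.name, passed, duration) tuples."""
    suite = ET.fromstring(xml_text)
    results = []
    for case in suite.iter("testcase"):
        # A testcase passes when it has no <failure> or <error> child.
        passed = case.find("failure") is None and case.find("error") is None
        full_name = f"{case.get('classname')}.{case.get('name')}"
        results.append((full_name, passed, float(case.get("time", 0))))
    return results
```

A TRT accepting this format would get compatibility with most existing test runners for free, which is presumably why Jenkins standardised on it.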