QA/Test Report Tool
From Yocto Project
Revision as of 16:40, 15 July 2016
General Expectations
Here's a list of initial expectations for the Test Reporting Tool (TRT).
- The TRT must be able to receive test reports via a remote communication protocol.
- From expected (authenticated) sources of results
- To existing test buckets (a logical grouping of tests, e.g. by test component) through report sessions with:
- ability to resume
- ability to update existing results
- ability to abort/discard
- Using existing test report protocols like XML
- The TRT should present an interface with views organized towards the following objectives:
- Test Buckets (browsing a component's test results history)
- Test Collections (e.g. a Milestone/Release or another defined collection)
- The TRT should be able to identify the type of result it is consuming.
- Boolean test results Passed/Failed
- Numeric test results (measurements)
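To make the session requirements above concrete, here is a minimal sketch of a report session against a test bucket, supporting update and abort/discard, with boolean and numeric results. All names and the API shape are hypothetical illustrations, not part of any existing TRT implementation:

```python
from dataclasses import dataclass, field
from typing import Dict, Union

# A result is either boolean (Passed/Failed) or numeric (a measurement).
Result = Union[bool, float]

@dataclass
class ReportSession:
    """One submission session against a shared test bucket (hypothetical model)."""
    bucket: Dict[str, Result]                      # bucket: test name -> result
    pending: Dict[str, Result] = field(default_factory=dict)
    open: bool = True

    def report(self, test: str, result: Result) -> None:
        assert self.open, "session was aborted or committed"
        self.pending[test] = result                # reporting again updates the result

    def commit(self) -> None:
        self.bucket.update(self.pending)           # flush session results into the bucket
        self.open = False

    def abort(self) -> None:
        self.pending.clear()                       # discard everything reported so far
        self.open = False

bucket: Dict[str, Result] = {}
s = ReportSession(bucket)
s.report("oe-selftest.devtool", True)              # boolean pass/fail result
s.report("build-perf.bitbake_time", 312.5)         # numeric measurement
s.report("oe-selftest.devtool", False)             # update an existing result
s.commit()
```

Resuming could then be modeled as opening a new `ReportSession` over the same bucket; a real TRT would carry the session over a remote protocol rather than in process memory.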
QA Data Sources
There are several sources of quality data we'd like to be able to track in a QA dashboard:
- oe-selftest results
- bitbake-selftest results
- build performance metrics
- buildhistory trends
- ???
Prior Art
- QA maintains QA_sanity_history reports which list sanity test results and compares them to the previous run (anyone know how these are generated?).
- Someone (?) generates graphs of the performance data.
- CustomReports includes visualisation for test run pass rates and graphs changes to success rates.
- Jenkins parses and displays JUnit XML output out of the box; see the breakdown in How Jenkins Displays JUnit output.
- OpenQA from openSUSE has a visual status overview for tests run against an installed OS. Code is on GitHub: https://github.com/os-autoinst/openQA and the initial release announcement provides an overview: openSUSE Announces First Public Release of openQA.
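As an illustration of the JUnit XML format that Jenkins consumes natively (and the kind of existing test report protocol the TRT could accept), here is a small sketch that parses a minimal JUnit-style result file. The suite and test case names are invented for illustration only:

```python
import xml.etree.ElementTree as ET

# Minimal JUnit-style report: a <testsuite> of <testcase> elements,
# where a failing case carries a nested <failure> element.
junit_xml = """
<testsuite name="oe-selftest" tests="2" failures="1">
  <testcase classname="devtool.DevtoolTests" name="test_create_workspace"/>
  <testcase classname="devtool.DevtoolTests" name="test_build">
    <failure message="do_compile failed"/>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(junit_xml)
results = {}
for case in suite.iter("testcase"):
    # A testcase with no <failure> child counts as a pass.
    failed = case.find("failure") is not None
    results[case.get("name")] = "FAIL" if failed else "PASS"
```

Because the format is this simple, a TRT ingesting JUnit XML would only need to map each testcase to a boolean result in the appropriate test bucket.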