QA/Test Report Tool

==General Expectations==
(TODO: add share-ability of result links to the list of requirements when it fits somewhere)


Here's a list of initial expectations of the Test Reporting Tool.
# The TRT must be able to receive test reports via a remote communication protocol.
## From expected sources of results (Auth)
## To existing test buckets (logical test belonging separation, could be by test component) through report sessions (see the sketch after this list) with:
##* ability to resume
##* ability to update existing results
##* ability to abort/discard
## Using existing test report formats such as XML (e.g. JUnit XML)
# The TRT should present an interface with views organized towards the following objectives:
## Test Buckets (browsing a component's test results history)
## Test Collections (The type of Milestone/Release or another defined collection)
# The TRT should be able to identify the type of result it is consuming.
## Boolean test results Passed/Failed
## Numeric test results (measurements)
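
To make the session requirements concrete, here is a rough Python sketch of what a client-side report session could look like. This is purely illustrative: no protocol has been chosen yet, so the endpoint URLs, JSON field names and token handling below are all invented.

<pre>
# Hypothetical TRT report-session client. The base URL, endpoints and
# JSON field names are invented for illustration; only the Python
# standard library is used.
import json
import urllib.request

BASE = "https://trt.example.com/api"   # hypothetical TRT endpoint


def post(path, payload, token):
    """POST a JSON payload, authenticating with a bearer token (Auth)."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


token = "secret-token"

# Open a session against an existing test bucket; passing a previous
# session id instead would resume it.
session = post("/sessions", {"bucket": "oe-selftest"}, token)

# A boolean Passed/Failed result ...
post("/sessions/%s/results" % session["id"],
     {"test": "devtool.DevtoolTests.test_create_workspace",
      "status": "PASSED"}, token)

# ... and a numeric result (measurement). Re-posting the same test
# name would update the existing result.
post("/sessions/%s/results" % session["id"],
     {"test": "buildperf.Test1.parse_time",
      "measurement": {"value": 312.4, "unit": "s"}}, token)

# Commit publishes the session; POSTing to .../abort instead would
# discard everything reported in it.
post("/sessions/%s/commit" % session["id"], {}, token)
</pre>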
==QA Data Sources==
There are several sources of quality data we'd like to be able to track in a QA dashboard (a sketch of a common record for them follows the list):
* oe-selftest results
* bitbake-selftest results
* build performance metrics
* buildhistory trends
* ???
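
These sources produce quite different data (pass/fail results, timings, size trends), so the dashboard would need some common record to index them all by. A minimal sketch of what such a normalized record might look like; all field names here are invented:

<pre>
# Hypothetical normalized record for the dashboard; the field names
# are invented. Each source above would be mapped into this shape.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class QARecord:
    source: str                    # e.g. "oe-selftest", "buildhistory"
    name: str                      # fully qualified test or metric name
    revision: str                  # the commit the result applies to
    status: Optional[str] = None   # "PASSED"/"FAILED" for boolean tests
    value: Optional[float] = None  # measurement for numeric results
    unit: Optional[str] = None     # e.g. "s", "kB"
    extra: dict = field(default_factory=dict)  # raw source-specific data


# One boolean result and one buildhistory trend measurement:
records = [
    QARecord(source="oe-selftest", revision="abc1234",
             name="devtool.DevtoolTests.test_create_workspace",
             status="PASSED"),
    QARecord(source="buildhistory", revision="abc1234",
             name="core-image-minimal/rootfs-size",
             value=38244.0, unit="kB"),
]
</pre>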
==Prior Art==
* QA maintains [[QA_sanity_history]] reports which list sanity test results and compare them to the previous run (anyone know how these are generated?).
* Someone (?) generates graphs of the [https://wiki.yoctoproject.org/charts/perf_milestone/performance_test.html performance data].
* [https://github.com/fullvlad/CustomReports/ CustomReports] includes visualisation of test run pass rates and graphs changes to success rates.
* [https://jenkins.io/ Jenkins] parses and displays JUnit XML output out of the box; see [http://nelsonwells.net/2012/09/how-jenkins-ci-parses-and-displays-junit-output/ How Jenkins CI Parses and Displays JUnit Output] for a breakdown, and the sketch below for the format itself.
* [https://openqa.opensuse.org/ OpenQA] from openSUSE provides a visual status overview of the tests run against an installed OS. The code is on GitHub at https://github.com/os-autoinst/openQA, and the initial release announcement provides an overview: [https://news.opensuse.org/2011/10/11/opensuse-announces-first-public-release-of-openqa/ openSUSE Announces First Public Release of openQA].
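
Since Jenkins and others already consume JUnit XML, emitting that format from our runners would let us reuse the most prior art. Here is a minimal sketch of producing such a report with the Python standard library; the suite and test names are made-up examples:

<pre>
# Minimal JUnit-style XML emitter using only the standard library.
# The element and attribute names (testsuite, testcase, failure) are
# the common JUnit XML conventions that Jenkins understands.
import xml.etree.ElementTree as ET

suite = ET.Element("testsuite", name="oe-selftest",
                   tests="2", failures="1")
ET.SubElement(suite, "testcase", time="3.2",
              classname="devtool.DevtoolTests",
              name="test_create_workspace")
failed = ET.SubElement(suite, "testcase", time="41.7",
                       classname="sstate.SStateTests",
                       name="test_sstate_reuse")
ET.SubElement(failed, "failure",
              message="sstate object not reused").text = "traceback ..."

ET.ElementTree(suite).write("results.xml", encoding="utf-8",
                            xml_declaration=True)
</pre>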
