..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

==============
Test Strategy
==============

A key part of the "four opens" is ensuring that OpenStack delivers well-tested
and usable software. For more details see:
http://docs.openstack.org/project-team-guide/introduction.html#the-four-opens

Experience has shown that untested features are frequently broken, in part
due to the velocity of upstream changes. As we aim to keep all features
working across upgrades, we must aim to test all features.

Reporting Test Coverage
=======================

For details on plans to report the current test coverage, refer to
:doc:`/user/feature-classification`.

Running tests and reporting results
===================================

Running tests locally
---------------------

Please see
https://opendev.org/openstack/nova/src/branch/master/HACKING.rst#running-tests
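
The HACKING guide linked above has the full details; as a quick orientation,
test runs are driven through tox. The following is a sketch of commonly used
invocations — the exact environment names are defined in ``tox.ini`` in your
checkout and may differ between branches:

```shell
# Run the unit test suite (environment name depends on tox.ini):
tox -e py3

# Run the in-tree functional tests:
tox -e functional

# Run style and lint checks:
tox -e pep8

# Run a subset of unit tests by passing a filter through to the test runner:
tox -e py3 -- nova.tests.unit.test_utils
```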

Voting in Gerrit
----------------

On every review in gerrit, check tests are run on every patch set, and are
able to report a +1 or -1 vote.
For more details, please see:
http://docs.openstack.org/infra/manual/developers.html#automated-testing

Before merging any code, there is an integrated gate test queue, to ensure
master is always passing all tests.
For more details, please see:
http://docs.openstack.org/infra/zuul/user/gating.html

Infra vs Third-Party
--------------------

Tests that use fully open source components are generally run by the
OpenStack Infra teams. Test setups that use non-open technology must
be run outside of that infrastructure, but should still report their
results upstream.
For more details, please see:
http://docs.openstack.org/infra/system-config/third_party.html

Ad-hoc testing
--------------

It is particularly common for people to run ad-hoc tests on each released
milestone, such as RC1, to stop regressions.
While these efforts can help stabilize the release, as a community we have a
much stronger preference for continuous integration testing. Partly this is
because we encourage users to deploy master, and we generally have to assume
that any upstream commit may already have been deployed in production.

Types of tests
==============

Unit tests
----------

Unit tests help document and enforce the contract for each component.
Without good unit test coverage it is hard to continue to quickly evolve the
codebase.
The correct level of unit test coverage is very subjective, and as such we are
not aiming for a particular percentage of coverage, rather we are aiming for
good coverage.
Generally, every code change should have a related unit test:
https://github.com/openstack/nova/blob/master/HACKING.rst#creating-unit-tests
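
As a minimal sketch of the style, a unit test mocks out everything around the
unit under test so that it runs in isolation, with no external services. The
``describe_flavor`` helper below is hypothetical, invented for illustration —
it is not nova code:

```python
# Illustrative only: describe_flavor and the client interface are
# hypothetical, not part of nova.
import unittest
from unittest import mock


def describe_flavor(client, flavor_id):
    """Return a human-readable description of a flavor (hypothetical)."""
    flavor = client.get_flavor(flavor_id)
    return "%s: %d MB RAM, %d vCPUs" % (
        flavor["name"], flavor["ram"], flavor["vcpus"])


class DescribeFlavorTestCase(unittest.TestCase):
    def test_describe_flavor(self):
        # Mock out the API client so the test exercises only our unit of
        # code, with no reliance on a running service.
        client = mock.Mock()
        client.get_flavor.return_value = {
            "name": "m1.small", "ram": 2048, "vcpus": 1}

        result = describe_flavor(client, "42")

        self.assertEqual("m1.small: 2048 MB RAM, 1 vCPUs", result)
        client.get_flavor.assert_called_once_with("42")
```

The mock both supplies canned data and verifies the interaction, which is what
lets the test document the component's contract.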

Integration tests
-----------------

Today, our integration tests involve running the Tempest test suite on a
variety of Nova deployment scenarios. The integration job setup is defined
in the ``.zuul.yaml`` file in the root of the nova repository. Jobs are
restricted by queue:

* ``check``: jobs in this queue automatically run on all proposed changes even
  with non-voting jobs
* ``gate``: jobs in this queue automatically run on all approved changes
  (voting jobs only)
* ``experimental``: jobs in this queue are non-voting and run on-demand by
  leaving a review comment on the change of "check experimental"

In addition, we have third parties running the tests on their preferred Nova
deployment scenario.
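
Structurally, attaching jobs to those queues in ``.zuul.yaml`` looks roughly
like the following sketch — the job names here are hypothetical placeholders,
not the actual jobs defined in the nova repository:

```yaml
# Hypothetical fragment in the style of .zuul.yaml; job names are invented.
- project:
    check:
      jobs:
        - example-tempest-full        # runs on every proposed patch set
    gate:
      jobs:
        - example-tempest-full        # voting; must pass before merge
    experimental:
      jobs:
        - example-next                # run via a "check experimental" comment
```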

Functional tests
----------------

Nova has a set of in-tree functional tests that focus on things that are out
of scope for tempest testing and unit testing.
Tempest tests run against a full live OpenStack deployment, generally deployed
using devstack. At the other extreme, unit tests typically use mock to test a
unit of code in isolation.
Functional tests don't run an entire stack, they are isolated to nova code,
and have no reliance on external services. They do have a WSGI app, nova
services and a database, with minimal stubbing of nova internals.
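
To make the contrast with unit tests concrete, the following is a minimal,
self-contained sketch of the functional style — a real WSGI app and a real
(in-memory) database, with nothing mocked in between. This is not nova's
actual test harness; the app and schema are invented for illustration:

```python
# Illustrative only: a tiny WSGI app backed by an in-memory SQLite
# database, exercised end-to-end rather than with mocks.
import sqlite3
import unittest
from wsgiref.util import setup_testing_defaults


def make_app(db):
    """Build a WSGI app that reports how many servers exist (hypothetical)."""
    def app(environ, start_response):
        count = db.execute("SELECT COUNT(*) FROM servers").fetchone()[0]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [("servers: %d" % count).encode("utf-8")]
    return app


class ServersFunctionalTest(unittest.TestCase):
    def setUp(self):
        # A real database, not a mock: in-memory SQLite keeps it fast.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE servers (id INTEGER)")
        self.db.execute("INSERT INTO servers VALUES (1)")
        self.app = make_app(self.db)

    def test_list_servers(self):
        # Drive the app through the WSGI interface itself.
        environ = {}
        setup_testing_defaults(environ)
        captured = {}

        def start_response(status, headers):
            captured["status"] = status

        body = b"".join(self.app(environ, start_response))
        self.assertEqual("200 OK", captured["status"])
        self.assertEqual(b"servers: 1", body)
```

The request passes through the full app-plus-database path, so the test covers
wiring that a mocked-out unit test would never exercise.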

Interoperability tests
----------------------

The DefCore committee maintains a list that contains a subset of Tempest tests.
These are used to verify if a particular Nova deployment's API responds as
expected. For more details, see: https://github.com/openstack/defcore