When trying to debug failed neutron VIF plugging events, it can be
tempting to just increase the timeout from the default to allow more
runway for those events to come in. In most cases, this is a fool's
errand because something is preventing them from coming at all. This
patch makes nova log the event wait times at debug level regularly so
that it is easy to look at the history, see how long we normally wait
on events, and judge whether increasing the timeout is warranted.
Change-Id: I1be011f4dbcace78a698f9700170b8884e98a49b
Apply the common nova irrelevant files filter for the new
tempest-integrated-compute-centos-8-stream job
Change-Id: I0bacb8884a75b5ae604383d73d60fc618123a8d3
autopep8 is a code formatting tool that makes python code pep8
compliant without changing everything. Unlike black it will
not radically change all code; the primary change to the
existing codebase is adding a new line after class level doc strings.
This change adds a new tox autopep8 env to manually run it on your
code before you submit a patch. It also adds autopep8 to pre-commit,
so if you use pre-commit it will run automatically.
This change runs autopep8 in diff mode with --exit-code in the pep8
tox env so it will fail if autopep8 would modify your code if run
in in-place mode. This allows us to gate on autopep8 not modifying
patches that are submitted. This will ensure authorship of patches is
maintained.
The intent of this change is to save the large amount of time we spend
on ensuring style guidelines are followed, by automating the checks, to
make it simpler for both new and old contributors to work on nova and
save time and effort for all involved.
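As a sketch, the tox setup described above might look like the
following (env names, flags and target paths are illustrative, not
copied from the actual change):

```ini
# Hypothetical tox.ini fragment: a manual autopep8 env, plus a
# diff-mode gate in the pep8 env that fails (--exit-code) if
# autopep8 would modify any file.
[testenv:autopep8]
deps = autopep8
commands = autopep8 --in-place --recursive --max-line-length=79 nova

[testenv:pep8]
deps =
    autopep8
    flake8
commands =
    autopep8 --diff --exit-code --recursive --max-line-length=79 nova
    flake8
```

With this layout a contributor runs `tox -e autopep8` locally to fix
style before submitting, while the gate only checks and never rewrites
the patch, which is what preserves authorship.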
Change-Id: Idd618d634cc70ae8d58fab32f322e75bfabefb9d
The commit replaces DefCore committee (a former name) with
Interop Working Group (the current name) and updates a few
more old interop references.
Change-Id: I578a21d610b5b680b4549bf34e1857307a1b8e74
We discovered that two unit test cases added in
I0647bb8545c1464b521a1d866cf5ee674aea2eae cause errors like
oslo_db.sqlalchemy.enginefacade.AlreadyStartedError:
this TransactionFactory is already started
when the db tests are run selectively with tox -e py38 nova.tests.unit.db
but not when the whole unit test suite is run.
This error happened because our db code uses two global transaction
factories, one for the api DB and one for the main DB. There was a global
flag SESSION_CONFIGURED in our Database fixture that guarded against
double initialization of the factory. But the faulty test cases in
question do not use our Database fixture but use the
OpportunisticDBTestMixin from oslo_db. Obviously that fixture does not
know about our SESSION_CONFIGURED global. So if one of the offending
test cases ran first in an executor, it initialized the transaction
factory globally, and a later test that used our Database fixture then
tried to configure it again, leading to the error. For some
unknown reason if these tests were run in the opposite order the faulty
re-initialization did not happen. Probably the OpportunisticDBTestMixin
was able to prevent that.
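The interference described above can be modeled with a minimal,
self-contained sketch (all class names here are hypothetical, not nova
code): a global factory that refuses double configuration, one test
that configures it without cleanup, and one test guarded by a
module-level flag like the old SESSION_CONFIGURED.

```python
# Minimal model of order-dependent test interference through a global
# transaction factory that cannot be configured twice.

class AlreadyStartedError(Exception):
    pass


class TransactionFactory:
    def __init__(self):
        self.started = False

    def configure(self):
        if self.started:
            raise AlreadyStartedError(
                'this TransactionFactory is already started')
        self.started = True


GLOBAL_FACTORY = TransactionFactory()


class OpportunisticTest:
    """Configures the global factory directly, with no cleanup.

    It checks the factory's own state, so it tolerates a factory
    started by someone else (which is why the opposite run order
    did not fail).
    """
    def run(self):
        if not GLOBAL_FACTORY.started:
            GLOBAL_FACTORY.configure()


class GuardedDatabaseFixtureTest:
    """Guards with its own flag, like the old SESSION_CONFIGURED.

    The flag knows nothing about configuration done outside the
    fixture, so running after OpportunisticTest blows up.
    """
    session_configured = False

    def run(self):
        if not type(self).session_configured:
            GLOBAL_FACTORY.configure()  # fails if another test got here first
            type(self).session_configured = True
```

Running OpportunisticTest first and GuardedDatabaseFixtureTest second
raises AlreadyStartedError, while the opposite order passes, which is
exactly the order dependence observed with the real fixtures.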
A previous patch already removed the global SESSION_CONFIGURED flag
from our fixture and replaced it with a per DB specific patch_factory
calls that allow resetting the state of the factory at the end of each
test case. This would already solve the current test case issue as only
our offending test cases would initialize the global factory without
cleanup and we have one test case per DB. So there would be no
interference. However if in the future we add similar test cases then
those can still interfere through the global factory.
So this patch fixes the two offending test cases. It also extends the
DatabasePoisonFixture used for the NoDbTestCase tests. The poison now
detects if the test case starts any transaction factory.
This poison caught another offending test case,
test_db_sync_with_special_symbols_in_connection_string, that was marked
NoDb but actually used the database. It is now changed to declare
itself as a test that sets up the DB manually, and also changed to
use the Database fixture instead of touching the global factory
directly.
Closes-Bug: #1948963
Change-Id: Id96f1795034490c13125ebbab49b029fb96af1c7
Now that the previous patch Ifc070d19a18a2d66f1a7bd5898428b12901dfe9e moved
most of the logic to the setUp of the Database fixture we can replace
our direct factory patching with the ReplaceEngineFacade fixture from
oslo_db.
Change-Id: Icd25adcc931cae2126e03c00af7e4420d3781b9a
This patch applies the following changes to the fixture to make the
intention clearer and remove some unnecessary complexity:
* the fixture does a lot of dynamic things in its __init__; these are
moved to setUp() instead to facilitate proper reset functionality of
the fixture
* the caching and applying of the DB schema is made explicit and
moved to setUp() too
* the explicit reset() function is removed as it probably
unintentionally overrode Fixture.reset(). Now the fixture can be
properly reset by calling Fixture.reset(), which by default is
implemented by calling cleanUp() and setUp()
Change-Id: Ic58e93d6aafb88be4abeb6e52089f7ee43d8db01
The SESSION_CONFIGURED global flag is used in the Database fixture to
guard against the reconfiguration of the DB context in each test as the
global transaction factory does not allow such reconfiguration. However
this global is error prone for multiple reasons:
* there are tests that actually configure the factory outside of the
fixture causing tests to interfere
* we use one single global flag but we always have two separate
Database fixtures, one for the api DB and one for the main DB. Still,
the fixture instantiated first will do the configuration of both
DB factories.
This patch replaces the global with two individual oslo_db enginefacade
patch_factory() calls that allow patching and resetting the global
factory per test case.
Change-Id: Ifc070d19a18a2d66f1a7bd5898428b12901dfe9e
The libvirt driver power on and hard reboot flows destroy the domain
first and unplug the vifs, then recreate the domain and replug the vifs.
However nova does not wait for the network-vif-plugged event before
unpausing the domain. This can cause the domain to start running and
request an IP via DHCP before the networking backend has finished
plugging the vifs.
So this patch adds a workaround config option to nova to wait for
network-vif-plugged events during hard reboot the same way as nova waits
for this event during new instance spawn.
This logic cannot be enabled unconditionally as not all neutron
networking backends send plug-time events to wait for. Also the logic
needs to be vnic_type dependent as ml2/ovs and the in-tree sriov backend
are often deployed together on the same compute. While ml2/ovs sends
plug-time events, the sriov backend does not send them reliably. So the
configuration is not just a boolean flag but a list of vnic_types
instead. This way, waiting for the plug-time event for a vif that is
handled by ml2/ovs is possible while the instance has other vifs handled
by the sriov backend where no event can be expected.
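An operator enabling this workaround would set something like the
following in nova.conf (the option name and values shown here are an
assumption based on this change's description; check the merged
configuration reference for the exact name and defaults):

```ini
[workarounds]
# Wait for network-vif-plugged events during hard reboot, but only
# for vifs whose vnic_type is listed here. An empty list (the
# assumed default) keeps the old behaviour of not waiting at all.
wait_for_vif_plugged_event_during_hard_reboot = normal
```

With this, a vif with vnic_type "normal" (typically ml2/ovs) is waited
on, while a "direct" sriov vif on the same instance is not.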
Change-Id: Ie904d1513b5cf76d6d5f6877545e8eb378dd5499
Closes-Bug: #1946729
There are a number of nova-network tables which we can now drop.
nova-network
Feature removed entirely in Ussuri.
- dns_domains
- fixed_ips
- floating_ips
- networks
- provider_fw_rules
- security_group_default_rules
Unfortunately we can't get rid of the security group-related entries due
to the unfortunate presence of the 'security_groups' attribute on the
'Instance' object and corresponding table, which in turn brings in a
load of other tables. We'll address that separately. For now, just drop
what we can easily drop.
Change-Id: I8858faa14119f4daa9630b0ff6dcf082d0ff8fba
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
We've built up a large number of tables that are no longer used for
anything, given the removal of their users in past releases. We'd like
to remove those, but before we do that we need to drop the models. This
means there are no references to the tables come N+2, at which point we
can remove the tables themselves.
XenAPI virt driver
Feature removed entirely in Victoria.
- agent_builds
- bw_usage_cache
- console_pools
- consoles
Cells v1
Feature removed entirely in Train.
- cells
Volume Snapshots
Feature removed entirely in Liberty.
- snapshots
EC2 API
Feature removed entirely in Mitaka. Note that these tables are *not*
used by the separate ec2-api project.
- snapshot_id_mappings
- volume_id_mappings
There are still some tables related to nova-network left here. Those are
unfortunately referenced from elsewhere, so we need to clean them up
separately.
Change-Id: I5e3d022fdf7328a1132f6e00998a3286b19be69a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Finish up removing these entries from the versioned instance
notifications. They're useless since we dropped support for the XenAPI
virt driver. The underlying model is retained for now: that will be
handled separately.
Change-Id: I774c50fca99bc655ca5010e3b9d8247b739293b3
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Remove all of the models that were moved to the API database many many
cycles ago.
Change-Id: Ib327f47b889dbccd5279f43c39203ed27689748b
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
When the nova-compute service starts, by default it attempts to
startup instance configuration states for aspects such as networking.
This is fine in most cases, and makes a lot of sense if the
nova-compute service is just managing virtual machines on a hypervisor.
This is done, one instance at a time.
However, when the compute driver is ironic, the networking is managed
as part of the physical machine lifecycle potentially all the way into
committed switch configurations. As such, there is no need to attempt
to call ``plug_vifs`` on every single instance managed by the
nova-compute process which is backed by Ironic.
Additionally, a nova-compute service instance using ironic tends to
manage far more physical machines than one operating co-installed
with a hypervisor. Often this means a cluster of a thousand machines,
with three controllers, will see thousands of unneeded API calls upon
service start, which elongates the entire process and negatively
impacts operations.
In essence, nova.virt.ironic's plug_vifs call now does nothing,
and merely issues a debug LOG entry when called.
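The resulting no-op behaviour can be sketched roughly as follows (class
name and log message are illustrative, not the actual nova source):

```python
# Sketch of the no-op plug_vifs described above: with the ironic
# driver, vif plugging is handled by Ironic as part of the physical
# machine lifecycle, so nova-compute has nothing to do here.
import logging

LOG = logging.getLogger(__name__)


class IronicDriverSketch:
    def plug_vifs(self, instance, network_info):
        # Intentionally a no-op: only record that we were called.
        LOG.debug('plug_vifs called for instance %s; no-op for the '
                  'ironic virt driver', instance)
```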
Closes-Bug: #1777608
Change-Id: Iba87cef50238c5b02ab313f2311b826081d5b4ab
Previously the volume_attachments show command would incorrectly use the
nova.objects.BlockDeviceMapping.get_by_volume helper, which does not
support multiattach volumes, to fetch the underlying BlockDeviceMapping
object from the database.
This is corrected by switching to the get_by_volume_and_instance helper,
which can pick out a unique BlockDeviceMapping object using both the
supplied volume and instance UUIDs.
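Why the volume-only lookup breaks down for multiattach can be shown with
a small self-contained model (plain dicts standing in for database rows;
the function names mirror the helpers above but the implementation is
illustrative, not nova's):

```python
# A multiattach volume is attached to two different instances, so a
# lookup by volume UUID alone matches more than one mapping row, while
# the (volume_uuid, instance_uuid) pair identifies a unique row.

BDM_TABLE = [
    {'volume_uuid': 'vol-1', 'instance_uuid': 'inst-a', 'device': '/dev/vdb'},
    {'volume_uuid': 'vol-1', 'instance_uuid': 'inst-b', 'device': '/dev/vdc'},
]


def get_by_volume(volume_uuid):
    rows = [r for r in BDM_TABLE if r['volume_uuid'] == volume_uuid]
    if len(rows) > 1:
        # Ambiguous for multiattach volumes: which attachment?
        raise ValueError('volume %s maps to multiple attachments'
                         % volume_uuid)
    return rows[0]


def get_by_volume_and_instance(volume_uuid, instance_uuid):
    rows = [r for r in BDM_TABLE
            if r['volume_uuid'] == volume_uuid
            and r['instance_uuid'] == instance_uuid]
    return rows[0]
```

Here get_by_volume('vol-1') is ambiguous and fails, while
get_by_volume_and_instance('vol-1', 'inst-b') resolves to exactly one
mapping, which is why the show command needs both UUIDs.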
Change-Id: Ifab05abf3775efb0f29f80c9300297208f60d5d9
Closes-Bug: #1945452
As a final patch for the series this adds release notes for the complete
feature.
Change-Id: I655f5144cbfa834ee089c474c5caa3cf8140354f
Implements: qos-minimum-guaranteed-packet-rate