Specifying a duplicate port ID is currently "allowed" but results in an
integrity error when nova attempts to create a duplicate
'VirtualInterface' entry. Start rejecting these requests by checking for
duplicate IDs and failing the offending requests early. This is arguably
an API change, even though there is no HTTP 5xx error involved (server
create is an async operation); however, users shouldn't have to opt in
to non-broken behavior, and the underlying instance was never actually
created previously, meaning automation that relied on this "feature" was
always going to fail at a later step. We're also silently failing to do
what the user asked (per the flow chart at [1]).
[1] https://docs.openstack.org/nova/latest/contributor/microversions.html#when-do-i-need-a-new-microversion
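As a minimal sketch of the duplicate-ID check described above (the
function, exception type, and request shape here are illustrative, not
Nova's actual code):

```python
# Hypothetical sketch: reject a server-create request that names the
# same Neutron port ID more than once, before any 'VirtualInterface'
# records would be created. Names are illustrative, not Nova's code.
def reject_duplicate_port_ids(requested_networks):
    seen = set()
    for request in requested_networks:
        port_id = request.get('port')
        if port_id is None:  # network-only request, nothing to check
            continue
        if port_id in seen:
            raise ValueError(
                "Port ID %s requested more than once" % port_id)
        seen.add(port_id)

# Distinct ports pass; a duplicate raises before anything is created.
reject_duplicate_port_ids([{'port': 'port-a'}, {'port': 'port-b'}])
```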
Change-Id: Ie90fb83662dd06e7188f042fc6340596f93c5ef9
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Closes-Bug: #1821088
This reverts commit 7a7a223602.
That commit was added because - tl;dr - upon revert resize, Neutron
with the OVS backend and the iptables security group driver would send
us the network-vif-plugged event as soon as we updated the port
binding.
That behaviour changed with commit 66c7f00e1d. With that commit, we
started unplugging the vifs on the source compute host when doing a
resize. When reverting the resize, the vifs have to be re-plugged,
regardless of the networking backend in use. This renders commit
7a7a223602 pointless, and it can be reverted.
Conflicts - most have to do with context around this commit's code:
nova/compute/manager.py
a2984b647a added provider_mappings to
_finish_revert_resize_network_migrate_finish()'s signature
750aef54b1 started using
_finish_revert_resize_network_migrate_finish() in
_finish_revert_snapshot_based_resize_at_source()
nova/network/model.py
8b33ac0644 added get_live_migration_plug_time_events() and
has_live_migration_plug_time_event()
7da94440db added has_port_with_allocation()
nova/objects/migration.py
f203da3838 added is_resize() and is_live_migration()
nova/tests/unit/compute/test_compute.py
a0e60feb3e added request_spec to the test
nova/tests/unit/compute/test_compute_mgr.py
be278006a5 added unit tests below ours
nova/tests/unit/network/test_network_info.py
7da94440db (again) added tests for has_port_with_allocation()
nova/tests/unit/virt/libvirt/test_driver.py and
nova/virt/libvirt/driver.py are different in that attempting to
identify individual conflicts is a pointless exercise, as so much has
changed (mdev, vtpm, the recent wait-for-events-during-hard-reboot
workaround config option, etc.). Their resolution can be treated as a
manual removal of any code that had to do with the bind-time events
logic (though guided by the conflict markers in git).
TODO(artom) There was a follow up commit,
78a08d44ea, that added the migration
parameter to finish_revert_migration(). This is no longer needed, as
the migration was only used to obtain plug-time events. We'll have to
undo that as well.
Closes-bug: 1952003
Change-Id: I3cb39a9ec2c260f422b3c48122b9db512cdd799b
We have a gap in our testing of the external events interaction between
Nova and Neutron. The nova-next job tests with the OVS network
backend, and Neutron has jobs that test the OVN network backend, but
nothing tests OVS + the iptables security group firewall driver, aka
"hybrid plug". Add a job to test that.
Related-bug: 1952003
Change-Id: Ie42eaa2a39ef097b0eb69b8863bb342bae007fff
We no longer have a Xen driver. This is an unnecessary dependency.
Change-Id: Ic298fa9ac4a8935ce4e0dc17d8842d399d4eb808
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
We recently added a hard failure to nova service startup for the case
where computes were more than one version old (as indicated by their
service record). This helps to prevent starting up new control
services when a very old compute is still running. However, during an
FFU, control services that have skipped multiple versions will be
started and find the older compute records (which could not be updated
yet due to their reliance on the control services being up) and refuse
to start. This creates a cross-dependency which is not resolvable
without hacking the database.
This patch adds a workaround flag that turns the hard failure into
a warning, allowing operators to proceed past the issue. This
less-than-ideal solution is simple and backportable; a better solution
can perhaps be implemented in the future.
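As a sketch, enabling the workaround would look like the nova.conf
fragment below; the option name is taken from this change but should be
verified against the release's configuration reference:

```ini
[workarounds]
# Turn the "compute too old" startup hard-fail into a warning during FFU.
disable_compute_service_check_for_ffu = True
```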
Related-Bug: #1958883
Change-Id: Iddbc9b2a13f19cea9a996aeadfe891f4ef3b0264
Commit I168fffac8002f274a905cfd53ac4f6c9abe18803 added a wrapper
around fasteners.ReaderWriterLock to fix up an issue with eventlet, but
the wrapper was added to the nova.utils module, which is used not only
by the nova tests but also by the nova production code. This made the
fixtures library a dependency of the nova production code, while the
current ReaderWriterLock usage is limited to the nova test subtree.
Commit I712f88fc1b6053fe6d1f13e708f3bd8874452a8f fixed the issue of
fixtures missing from nova's requirements.txt. However, a better fix is
to move the wrapper to the test subtree instead. This patch does that
and restores requirements.txt to its previous state.
Change-Id: I6903ce53b9b91325f7268cf2ebd02e4488579560
Related-Bug: #1958075
Commit 887c445a7a made the nova.utils module dependent on the fixtures
library, but the change missed updating the requirements, and the
fixtures library is not installed automatically. This change moves the
fixtures library from test-requirements.txt to requirements.txt so that
the library is installed even without the test dependencies.
Closes-Bug: #1958075
Change-Id: I712f88fc1b6053fe6d1f13e708f3bd8874452a8f
This is a follow up change to I168fffac8002f274a905cfd53ac4f6c9abe18803
which added a hackaround to enable our tests to pass with
fasteners>=0.15, which was upgraded recently as part of an
openstack/requirements update.
The ReaderWriterLock from fasteners (and thus lockutils) cannot work
correctly with eventlet patched code, so this adds a wrapper containing
the aforementioned hackaround along with a hacking check to do our best
to ensure that future use of ReaderWriterLock will be through the
wrapper.
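For illustration, a hacking check of the kind described is typically a
small flake8-style generator that flags forbidden patterns; the rule
number, message, and exemption below are hypothetical, not Nova's actual
check:

```python
import re

# Hypothetical sketch of a hacking-style check: flag direct use of
# fasteners.ReaderWriterLock (or the lockutils re-export) so callers go
# through the eventlet-safe wrapper instead. Rule number is invented.
_RWLOCK_RE = re.compile(r'\b(fasteners|lockutils)\.ReaderWriterLock\(')

def check_rwlock_usage(logical_line, filename):
    if filename.endswith('utils.py'):
        return  # the wrapper module itself may use the real class
    if _RWLOCK_RE.search(logical_line):
        yield (0, "N999: use the eventlet-safe ReaderWriterLock wrapper "
                  "instead of fasteners.ReaderWriterLock directly")

# Direct use yields an offence; the wrapper module is exempt.
hits = list(check_rwlock_usage(
    'lock = fasteners.ReaderWriterLock()', 'nova/compute/manager.py'))
```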
Change-Id: Ia7bcb40a21a804c7bc6b74f501d95ce2a88b09b5
This patch adds a regression test which asserts that if a live migration
is aborted while it's 'queued', the instance's status is never reverted
back to ACTIVE, and the instance remains in the MIGRATING state.
The idea behind the implemented LiveMigrationQueuedAbortTest is simple:
we start two instances on the same compute and try to migrate them
simultaneously with max_concurrent_live_migrations set to 1 and
nova.tests.fixtures.libvirt.Domain.migrateToURI3 locked. As a result,
we get two live migrations stuck in the 'migrating' and 'queued' states,
and we can issue an API call to abort the second one. The lock is then
removed, and the first instance is live migrated after the second
instance's live migration is aborted.
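The concurrency setup above can be illustrated with a toy model (this is
not Nova's fixture code; class and state names are invented): with a
limit of 1, the second migration sits queued until the first releases
the slot, so it can be aborted while still queued.

```python
import threading

# Toy model of the scenario: a concurrency limit of 1 leaves the second
# "migration" queued, where it can be aborted. Not Nova's actual code.
class MigrationQueue:
    def __init__(self, max_concurrent=1):
        self._slots = threading.Semaphore(max_concurrent)
        self.states = {}

    def submit(self, name):
        self.states[name] = 'queued'
        if self._slots.acquire(blocking=False):
            self.states[name] = 'migrating'

    def abort(self, name):
        # Aborting is only interesting here while the migration is queued.
        if self.states.get(name) == 'queued':
            self.states[name] = 'cancelled'

q = MigrationQueue(max_concurrent=1)
q.submit('vm1')   # takes the only slot -> 'migrating'
q.submit('vm2')   # no slot left -> stays 'queued'
q.abort('vm2')    # a queued migration can be aborted
```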
Co-Authored-By: Alex Stupnikov <aleksey.stupnikov@gmail.com>
Partial-Bug: #1949808
Change-Id: I67d41a8e439b1ff3c5983ee17823616b80698639
This patch adds a workaround that can be enabled to send an
announce_self QEMU monitor command post live-migration, sending out
RARP frames that were lost due to port bindings or flows not being
installed in time.
Please note that this marks the domain in libvirt as tainted.
See previous information about this issue in bug [1].
[1] https://bugs.launchpad.net/nova/+bug/1815989
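As a sketch, enabling the workaround would look like the nova.conf
fragment below; the option name matches the one introduced by this
change but should be verified against the release's configuration
reference:

```ini
[workarounds]
# Send announce_self via the QEMU monitor after live migration to
# re-emit RARP frames (taints the libvirt domain).
enable_qemu_monitor_announce_self = True
```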
Change-Id: I7a6a6fe5f5b23e76948b59a85ca9be075a1c2d6d
Related-Bug: 1815989