Without this, we won't notice errors raised in the operation thread.
Before 1cd1c472bd the unit test actually forced such errors to be
raised even though in the real code they would never be raised. That
patch made the unit test fixture more realistic, but did not realize
that such a fixture error also means we might have wrong assumptions
about the code under test.
Now we know that exceptions from the live migration thread were never
raised back to the monitor thread. To improve logging, a
future.result() call is added after the main monitoring code finishes.
The code also had a complex way of signalling the monitoring thread
that the migration thread returned early: a callback registered on the
migration thread and an event. This can be simplified to just checking
the status of the migration thread's future, so the event and the
callback are removed.
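A minimal sketch of the simplified pattern (the names and the monitor
loop are illustrative, not the exact Nova code):

    from concurrent import futures
    import time

    def _monitor_migration(guest, migrate_data):
        # Run the blocking migration call in its own thread and watch
        # its future instead of using a callback plus an event.
        with futures.ThreadPoolExecutor(max_workers=1) as executor:
            future = executor.submit(guest.migrate, migrate_data)
            while not future.done():
                # ... poll the libvirt job status, handle timeouts ...
                time.sleep(0.5)
            # Re-raise (and thereby log) any exception from the
            # migration thread instead of silently swallowing it.
            future.result()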
All this was found because commit 25fbf32f22 missed adding the new
parallel arg to the mock of guest.migrate() on master, but the
exception was never propagated to the unit test there. Backporting
that change showed that in the old unit test environment a valid
exception is raised.
Co-authored-by: Dan Smith <dms@danplanet.com>
Change-Id: I22683ad5118796c6406f80d8726053afa84fff56
Signed-off-by: Dan Smith <dansmith@redhat.com>
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This was missed in commit 25fbf32f22
because of a bug in our _live_migration_operation() post-eventlet
handling.
Change-Id: I39a7d6ebd72d9938bcb60143dfc50bd6a9c994b0
Signed-off-by: Dan Smith <dansmith@redhat.com>
This has not been supported for some time.
Change-Id: Ic7073740deb0bf9670eebe77f0f8b0daca100a5c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Make sure that a consistent program name is always set, so that the
same config sub-directory (/etc/{project}/{prog}.conf.d) is used
regardless of the way the API service is run.
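Roughly, the idea is to always pass a fixed prog when parsing the
config, along these lines (a sketch with oslo.config, not the exact
Nova call site; the 'nova-api' value is illustrative):

    from oslo_config import cfg

    CONF = cfg.CONF

    def parse_args(argv):
        # oslo.config derives the {prog}.conf.d search directory from
        # the prog argument, so pinning it keeps
        # /etc/nova/nova-api.conf.d in effect no matter how the service
        # was launched.
        CONF(argv[1:], project='nova', prog='nova-api')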
Closes-Bug: #2098514
Change-Id: Ib5c6d431176b83eefafddc1b35589015db6dfd04
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
States were added to the Ironic API to enable the node servicing
feature, which can be performed on nodes provisioned with Nova
instances. Currently, if Nova is asked to delete these instances, it
will only remove the instance metadata and not tear them down.
This change has two parts:
- I have added the new, relevant states to _UNPROVISION_STATES in
driver.py, which now allows Nova to know that SERVIC* states and
DEPLOYHOLD are safe to unprovision from.
- I have added all existing Ironic states to ironic_states.py and the
PROVISION_STATE_LIST constant, and the state is now checked against
it -- if a completely unknown state is returned, we should attempt an
unprovision (sketched below).
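A rough sketch of the resulting check (the state lists are abbreviated
here; the real constants live in ironic_states.py and driver.py, and
the helper name is hypothetical):

    # Abbreviated, illustrative state lists -- not the full constants.
    _UNPROVISION_STATES = ('active', 'error', 'deploy failed',
                           'servicing', 'service wait', 'service failed',
                           'service hold', 'deploy hold')
    PROVISION_STATE_LIST = _UNPROVISION_STATES + (
        'available', 'deploying', 'cleaning', 'manageable')

    def should_unprovision(provision_state):
        if provision_state not in PROVISION_STATE_LIST:
            # A completely unknown (likely newer) state: attempt an
            # unprovision rather than leaking the node.
            return True
        return provision_state in _UNPROVISION_STATES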
This fix needs to be backported as far as possible, as this bug has
existed since Antelope / 2023.1 (DEPLOYHOLD) or Bobcat / 2023.2
(SERVIC*).
Assisted-by: Claude Code
Closes-bug: #2131960
Change-Id: I31c70d35b0e6e9f8d2252bfb2f0bdec477cc6cc7
Signed-off-by: Jay Faulkner <jay@jvf.cc>
With the NFS, FC, and iSCSI Cinder volume backends, Nova explicitly
sets AIO mode ``io=native`` in the Libvirt guest XML. Operators may
set the new option to True to defer AIO mode selection to QEMU when
forcing ``io=native`` is not desired.
Closes-Bug: #2129788
Change-Id: I6e51706b5cb8be5becebbafe9108df1ba9e0f69f
Signed-off-by: melanie witt <melwittt@gmail.com>
Change Ife39b55eb40c9cb8e61f1b2295b6d42cefe3a680 migrated the mypy
configuration from setup.cfg to pyproject.toml, but a comment in
.pre-commit-config.yaml still says to keep it in sync with setup.cfg,
which is now incorrect.
This change updates the comment in .pre-commit-config.yaml to reflect
that move.
Signed-off-by: Rajesh Tailor <ratailor@redhat.com>
Change-Id: I4d35b989e8c90b629bcb15438ad82f60f7ca8957
Start supporting booting instances with the `host` vTPM secret
security policy. This means setting the `ephemeral` and `private`
attributes on the Libvirt secret correctly, and not undefining the
secret once the instance has spawned. The Libvirt fixture's Secret
support is extended so that all of this can be tested in a functional
test.
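Conceptually, the `host` policy boils down to something like this (a
sketch with libvirt-python; the helper name and exact attribute values
are illustrative, not the actual Nova code):

    import libvirt

    VTPM_SECRET_XML = """
    <secret ephemeral='no' private='yes'>
      <usage type='vtpm'>
        <name>%(instance_uuid)s</name>
      </usage>
    </secret>
    """

    def define_host_vtpm_secret(conn, instance_uuid, passphrase):
        # A persistent (ephemeral='no'), private secret which -- unlike
        # the previous behavior -- is not undefined after the instance
        # has spawned.
        secret = conn.secretDefineXML(
            VTPM_SECRET_XML % {'instance_uuid': instance_uuid})
        secret.setValue(passphrase)
        return secret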
For functional testing, we need to:
* Extend our libvirt fixture's Secret object to properly set the usage
id (which is just the instance UUID) when parsing vTPM secret XML.
Related to blueprint vtpm-live-migration
Change-Id: I5a38a0de76a78b28a205a8d19f2374830054e1ab
Signed-off-by: melanie witt <melwittt@gmail.com>
The `user` secret security policy is just the existing behavior. No
changes to the mechanics are necessary, so this patch just adds a
scheduler prefilter and tests. The functional tests also add some
groundwork to make future tests easier by making the helper methods
more flexible.
For functional testing, we need to:
* Have our libvirt fixture keep track of undefined secrets. Secrets
are undefined as soon as the VM that uses them successfully boots
(as mentioned previously, VM creation follows this pattern), but our
tests would still like to assert that the secret had been created on
a host. Just add a _removed_secrets dict that _remove_secret()
populates (sketched below).
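In the fixture this amounts to bookkeeping roughly like the following
(simplified sketch; the class name is illustrative and the surrounding
fixture code is omitted):

    class FakeSecretStore(object):
        def __init__(self):
            self._secrets = {}          # live secrets keyed by usage id
            self._removed_secrets = {}  # secrets undefined after VM boot

        def _remove_secret(self, usage_id):
            # Keep the undefined secret around so tests can still assert
            # that it was created on this host.
            self._removed_secrets[usage_id] = self._secrets.pop(usage_id)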
Related to blueprint vtpm-live-migration
Change-Id: Ib449dc2f1c4a9af9d423252594261947e811452e
Signed-off-by: melanie witt <melwittt@gmail.com>
Key manager service secret ownership can be a challenge when dealing
with vTPM instances. Some instance actions require access to the
secret and will fail if there is an ownership mismatch.
In preparation for the vTPM live migration changes, which will involve
different users accessing secrets (user|admin|Nova service user), this
removes ADMIN_ONLY from the functional test class and adds checking of
the RequestContext user_id in the FakeKeyManager.
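The added check is essentially the following (simplified sketch; the
real FakeKeyManager does more than this):

    class FakeKeyManager(object):
        def __init__(self):
            self._secrets = {}  # secret_id -> (owner_user_id, secret)

        def get(self, context, secret_id):
            owner_user_id, secret = self._secrets[secret_id]
            if context.user_id != owner_user_id:
                # A different user (or the Nova service user) must not
                # transparently read someone else's secret.
                raise PermissionError(secret_id)
            return secret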
Change-Id: I2790cd274a4776ab306b39df1e591e8304b63f96
Signed-off-by: melanie witt <melwittt@gmail.com>
If a host has multiple instances using the same shared multiattach
volume and you delete them in parallel, Nova needs to correctly clean
up the volume connection on the host when the last instance is
removed.
Currently we do not have a volume-level lock to guard the critical
section that determines whether the current disconnect is removing the
final usage of the volume.
This can lead to leaking the volume or other issues as noted in
bug: #2048837
This change introduces a FairLockGuard to ensure we acquire and
release the locks in a fair and ordered manner. The FairLockGuard is
used to lock the server delete with one lock per multiattach volume.
This ensures that disconnects of different volumes can happen in
parallel, but if we are disconnecting the same volume in multiple
greenthreads concurrently they will be serialised.
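In spirit, the guarded critical section looks like this (a sketch
using oslo.concurrency fair locks; the FairLockGuard helper itself,
the function name and the attribute names are illustrative):

    from oslo_concurrency import lockutils

    def disconnect_multiattach_volume(volume_id, instances_on_host,
                                      do_host_disconnect):
        # One fair lock per multiattach volume: disconnects of different
        # volumes run in parallel, while concurrent deletes sharing the
        # same volume are serialised in FIFO order.
        with lockutils.lock('multiattach-%s' % volume_id, fair=True):
            still_used = any(volume_id in inst.attached_volume_ids
                             for inst in instances_on_host)
            if not still_used:
                # This disconnect removes the final usage on the host,
                # so it is safe to tear down the host connection now.
                do_host_disconnect()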
Assisted-By: Cursor Auto
Closes-Bug: #2048837
Change-Id: I67e10cace451259127a5d7da8fbdf7739afe3e51
Signed-off-by: Sean Mooney <work@seanmooney.info>
As part of I0b5e13673cb4cc7c57aeae50914ace443dfc18fa a new dependency
was created on a placement config option and the workarounds config
group.
This enabled the workaround added in
I13ab83a165c229ae57876df4570e8af25221a45e, which is present on
placement master but not in any release.
That works in CI because CI uses placement master, but locally and in
the requirements repo we do not.
Closes-Bug: #2131032
Change-Id: I744049b5cf0ef69624fc4b6db1e5f415ab89a5af
Signed-off-by: Sean Mooney <work@seanmooney.info>
This patch implements parallel live migrations for the libvirt
driver. It is achieved through the introduction of a new configuration
parameter, `live_migration_parallel_connections`.
This helps eliminate the bottleneck on live migration speed by
establishing multiple connections for memory transfer, thus leveraging
QEMU's multi-threaded migration behavior.
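Under the hood this maps onto libvirt's parallel migration support,
roughly like the following (sketch only, not the actual driver code):

    import libvirt

    def migrate(dom, dest_uri, parallel_connections):
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
        params = {}
        if parallel_connections:
            # Enables QEMU multifd so memory is streamed over several
            # connections in parallel.
            flags |= libvirt.VIR_MIGRATE_PARALLEL
            params[libvirt.VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS] = (
                parallel_connections)
        dom.migrateToURI3(dest_uri, params, flags)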
Implements-blueprint: libvirt-parallel-migrate
Change-Id: I98ff5f07f94d94f3aa0227591f425d532773adb0
Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com>
The virt driver interface assumes that init_host is called before any
other query to the virt driver. The libvirt virt driver cannot fully
function otherwise. If any connection is made to the hypervisor before
driver.init_host then the libvirt lifecycle events will not function and
libvirt returns the warning:
  URI qemu:///system does not support events: internal error: could not
  initialize domain event timer: libvirt.libvirtError: internal error:
  could not initialize domain event timer
During the first startup of the nova-compute service,
ComputeManager.init_host checks whether the hypervisor has any
instances, to detect if this is not really the first start of the
compute service on the host. But that code path happens before
ComputeManager.init_host initializes the virt driver via
driver.init_host.
This patch reorders the calls to make sure the driver is initialized
before use.
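In outline, the reordering looks like this (heavily simplified; the
real init_host does much more):

    import logging

    LOG = logging.getLogger(__name__)

    class ComputeManager(object):
        def init_host(self, service_ref):
            # Initialize the virt driver before anything talks to the
            # hypervisor so that, e.g., the libvirt lifecycle event
            # handling is set up first.
            self.driver.init_host(host=self.host)

            if service_ref is None:
                # Only now is it safe to ask the hypervisor whether it
                # already has instances, i.e. whether this really is the
                # first start of the compute service on this host.
                if self.driver.list_instance_uuids():
                    LOG.warning('Existing instances found on the first '
                                'start of the compute service.')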
Closes-Bug: #2130881
Change-Id: I814a2f3982d481a1f926fe13465a19955c4f48f2
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Show that ComputeManager.init_host calls the driver before calling
driver.init_host.
Related-Bug: #2130881
Change-Id: I364ecd4277fe8d5e62629355105fa799d7dabf19
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This patch switches the default concurrency mode to native threading
for the services that gained native threading support in Flamingo:
nova-scheduler, nova-api, and nova-metadata.
The OS_NOVA_DISABLE_EVENTLET_PATCHING env variable can still be used
to explicitly switch the concurrency mode back to eventlet by setting
OS_NOVA_DISABLE_EVENTLET_PATCHING=false.
We also ensure that the cover, docs, py3xx and functional tox targets
still run with eventlet, while py312-threading keeps running with
native threading.
Change-Id: I86c7f31f19ca3345218171f0abfa8ddd4f8fc7ea
Signed-off-by: Balazs Gibizer <gibi@redhat.com>