States were added to the Ironic API to enable the node servicing
feature, which can be performed on nodes provisioned with Nova
instances. Currently, if asked to delete these instances, Nova will only
remove the instance metadata and not tear the nodes down.
This change has two parts:
- I have added the new, relevant states to _UNPROVISION_STATES in
driver.py, which now allows Nova to know that SERVIC* states and
DEPLOYHOLD are safe to unprovision from.
- I have added all existing Ironic states to ironic_states.py and the
PROVISION_STATE_LIST constant, and we now check the state against it --
if a completely unknown state is returned, we attempt an
unprovision.
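As a hedged sketch of the decision described above (the state names and list contents here are illustrative assumptions, not the actual constants in nova/virt/ironic/):

```python
# Illustrative sketch of the unprovision decision described above.
# State names and list contents are assumptions for illustration; the
# real constants live in ironic_states.py and driver.py.
ACTIVE = 'active'
AVAILABLE = 'available'  # already unprovisioned; nothing to tear down
SERVICE = 'servicing'
SERVICE_WAIT = 'service wait'
DEPLOYHOLD = 'deploy hold'

# Every state Ironic is known to return.
PROVISION_STATE_LIST = (ACTIVE, AVAILABLE, SERVICE, SERVICE_WAIT,
                        DEPLOYHOLD)

# States from which a Nova-triggered unprovision is safe.
_UNPROVISION_STATES = (ACTIVE, SERVICE, SERVICE_WAIT, DEPLOYHOLD)

def should_unprovision(state):
    # A completely unknown state means Ironic is newer than this list:
    # default to attempting an unprovision rather than silently leaking
    # the node.
    if state not in PROVISION_STATE_LIST:
        return True
    return state in _UNPROVISION_STATES
```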
This fix needs to be backported as far as possible, as this bug has
existed since Antelope / 2023.1 (DEPLOYHOLD) or Bobcat / 2023.2
(SERVIC*).
Assisted-by: Claude Code
Closes-bug: #2131960
Change-Id: I31c70d35b0e6e9f8d2252bfb2f0bdec477cc6cc7
Signed-off-by: Jay Faulkner <jay@jvf.cc>
Update the server shares API policies to use
PROJECT_READER_OR_ADMIN and PROJECT_MEMBER_OR_ADMIN instead of
PROJECT_READER and PROJECT_MEMBER.
This aligns the server shares policies with other compute API
policies and ensures administrators can list, attach, show and
detach shares regardless of project policy overrides.
Signed-off-by: René Ribaud <rene.ribaud@gmail.com>
Change-Id: I2b237d56b08e3080475dc500e204298018af29c7
With the NFS, FC, and iSCSI Cinder volume backends, Nova explicitly
sets AIO mode ``io=native`` in the Libvirt guest XML. Operators may set
this option to True in order to defer AIO mode selection to QEMU if
forcing ``io=native`` is not desired.
Closes-Bug: #2129788
Change-Id: I6e51706b5cb8be5becebbafe9108df1ba9e0f69f
Signed-off-by: melanie witt <melwittt@gmail.com>
The change Ife39b55eb40c9cb8e61f1b2295b6d42cefe3a680 migrated the mypy
configuration from setup.cfg to pyproject.toml, but a
comment in .pre-commit-config.yaml still says to keep it in sync with
setup.cfg, which is incorrect.
This change updates the comment in .pre-commit-config.yaml to
reflect the migration.
Signed-off-by: Rajesh Tailor <ratailor@redhat.com>
Change-Id: I4d35b989e8c90b629bcb15438ad82f60f7ca8957
Start supporting booting instances with the `host` TPM secret
security. This means setting the `ephemeral` and `private` attributes
on the Libvirt secret correctly, and not undefining the secret once
the instance has spawned. The Libvirt fixture's Secret support is
extended to be able to test all that in a functional test.
For functional testing, we need to:
* Extend our libvirt fixture's Secret object to properly set the usage
id (which is just the instance UUID) when parsing vTPM secret XML.
Related to blueprint vtpm-live-migration
Change-Id: I5a38a0de76a78b28a205a8d19f2374830054e1ab
Signed-off-by: melanie witt <melwittt@gmail.com>
The `user` secret security policy is just existing behavior. No
changes are necessary in the mechanics, so this patch just adds a
scheduler prefilter and tests. The functional tests add some
groundwork to make future tests easier as well by making the helper
methods more flexible.
For functional testing, we need to:
* Have our libvirt fixture keep track of undefined secrets. Secrets
are undefined as soon as the VM that uses them successfully boots
(as mentioned previously, VM creation follows this pattern), but our
tests would still like to assert that the secret had been created on
a host. Just add a _removed_secrets dict that _remove_secret()
populates.
Related to blueprint vtpm-live-migration
Change-Id: Ib449dc2f1c4a9af9d423252594261947e811452e
Signed-off-by: melanie witt <melwittt@gmail.com>
Key manager service secret ownership can be a challenge when dealing
with vTPM instances. Some instance actions require access to the secret
and will fail if there is a mismatch.
In preparation for vTPM live migration changes which will involve
different users accessing secrets (user|admin|Nova service user), this
removes ADMIN_ONLY from the functional tests class and adds checking of
RequestContext user_id in the FakeKeyManager.
Change-Id: I2790cd274a4776ab306b39df1e591e8304b63f96
Signed-off-by: melanie witt <melwittt@gmail.com>
If a host has multiple instances with the same shared
multi-attach volume and you delete them in parallel,
Nova needs to correctly clean up the volume connection on
the host when the last instance is removed.
Currently we do not have a volume-level lock to guard the
critical section that determines whether the current disconnect is
removing the final usage of the volume.
This can lead to leaking the volume or other issues as
noted in bug: #2048837
This change introduces a FairLockGuard to ensure we acquire
and release the locks in a fair and ordered manner.
The FairLockGuard is used to lock the server delete with
one lock per multi-attach volume.
This ensures that disconnects of different volumes can happen
in parallel, but if we are disconnecting the same volume in multiple
greenthreads concurrently they will be serialised.
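As a rough illustration of the serialisation this buys (stdlib locks standing in for the fair oslo locks; the function and data shapes below are hypothetical, not the real Nova code):

```python
import threading
from collections import defaultdict

# Hypothetical sketch of per-volume serialisation; the real change uses
# a FairLockGuard around the disconnect path, while this stand-in only
# demonstrates the locking pattern with stdlib primitives.
_volume_locks = defaultdict(threading.Lock)

def disconnect_volume(volume_id, attach_counts):
    """Return True if the caller should remove the host connection."""
    # One lock per multi-attach volume: different volumes disconnect in
    # parallel, but concurrent disconnects of the same volume are
    # serialised so the "last usage" check cannot race.
    with _volume_locks[volume_id]:
        attach_counts[volume_id] -= 1
        # Only the disconnect that drops the count to zero tears down
        # the shared host connection.
        return attach_counts[volume_id] == 0
```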
Assisted-By: Cursor Auto
Closes-Bug: #2048837
Change-Id: I67e10cace451259127a5d7da8fbdf7739afe3e51
Signed-off-by: Sean Mooney <work@seanmooney.info>
As part of I0b5e13673cb4cc7c57aeae50914ace443dfc18fa,
a new dependency was created on a placement config
option and the workarounds config group.
This enabled the workaround added in
I13ab83a165c229ae57876df4570e8af25221a45e,
which is present on master but not in a release.
That works in CI because in CI we use placement
master, but locally and in the requirements repo
we do not.
Closes-Bug: #2131032
Change-Id: I744049b5cf0ef69624fc4b6db1e5f415ab89a5af
Signed-off-by: Sean Mooney <work@seanmooney.info>
This patch implements parallel live migrations for the libvirt driver.
It is achieved through the introduction of a new configuration parameter,
`live_migration_parallel_connections`.
This eliminates a bottleneck on live migration speed by
establishing multiple connections for memory transfer, thus
leveraging multi-threaded behavior in QEMU.
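A minimal sketch of what enabling this might look like in nova.conf; the option name comes from this change, but the `[libvirt]` group placement and the value 4 are illustrative assumptions:

```ini
[libvirt]
# Number of connections QEMU opens for transferring memory during live
# migration; values above 1 enable parallel migration in QEMU.
live_migration_parallel_connections = 4
```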
Implements-blueprint: libvirt-parallel-migrate
Change-Id: I98ff5f07f94d94f3aa0227591f425d532773adb0
Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com>
The virt driver interface assumes that init_host is called before any
other query to the virt driver. The libvirt virt driver cannot fully
function otherwise. If any connection is made to the hypervisor before
driver.init_host then the libvirt lifecycle events will not function and
libvirt returns the warning:
URI qemu:///system does not support events: internal error: could not
initialize domain event timer: libvirt.libvirtError: internal error:
could not initialize domain event timer
During the first startup of the nova-compute service
ComputeManager.init_host checks if the hypervisor has any instances to
detect if this is not really the first start of the compute service on
the host. But that code path happens before ComputeManager.init_host
initializes the virt driver via driver.init_host.
This patch reorders the calls to make sure the driver is initialized
before use.
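The ordering contract can be sketched with stand-in classes (FakeDriver and the method names below are illustrative, not the real Nova code):

```python
# Minimal sketch of the fixed ordering: the driver is initialized before
# any other hypervisor query. All names here are illustrative.
class FakeDriver:
    def __init__(self):
        self.calls = []

    def init_host(self, host):
        # In the real libvirt driver this sets up lifecycle event
        # handling; it must run before any connection is made.
        self.calls.append('init_host')

    def list_instances(self):
        self.calls.append('list_instances')
        return []

class ComputeManager:
    def __init__(self, driver, host):
        self.driver = driver
        self.host = host

    def init_host(self):
        # Fix: initialize the virt driver first ...
        self.driver.init_host(host=self.host)
        # ... and only then probe the hypervisor for pre-existing
        # instances to detect whether this is really the first startup.
        return len(self.driver.list_instances()) == 0

mgr = ComputeManager(FakeDriver(), 'compute1')
first_startup = mgr.init_host()
```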
Closes-Bug: #2130881
Change-Id: I814a2f3982d481a1f926fe13465a19955c4f48f2
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Show that ComputeManager.init_host calls the driver before calling
driver.init_host.
Related-Bug: #2130881
Change-Id: I364ecd4277fe8d5e62629355105fa799d7dabf19
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This patch switches the default concurrency mode to native threading
for the services that gained native threading support in Flamingo:
nova-scheduler, nova-api, and nova-metadata.
The OS_NOVA_DISABLE_EVENTLET_PATCHING env variable can still be used to
explicitly switch the concurrency mode to eventlet via
OS_NOVA_DISABLE_EVENTLET_PATCHING=false.
We also ensure that the cover, docs, py3xx and functional tox targets
still run with eventlet, while py312-threading keeps running
with native threading.
Change-Id: I86c7f31f19ca3345218171f0abfa8ddd4f8fc7ea
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This move is needed so that we can define a per service default for
monkey patching.
And yes, the single line with both noqa and autopep8 decorators is
needed to convince autopep8 that this code is OK to be at the start of
the file.
After moving the monkey patching earlier in the wsgi entrypoint, I needed
to move the functional test monkey_patching call earlier too, to keep it
early enough for the tests where the wsgi entry point is not directly
imported.
Change-Id: Idedd2a440adc1cde1e8ffe6636854d5a891e66d2
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This patch renames the nova-ovs-hybrid-plug job to
nova-alt-configurations and ensures that all nova services are
running with eventlet even after some of the services switch to
native threading by default. This ensures we keep eventlet test
coverage in place.
Change-Id: Id2b70aa3870f2bf5a28c875a7564f84c012c9456
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This is a technical dead end and not something we're going to be able to
support long-term in pbr. We need to push users away from this. Doing so
highlights quite a few places where our docs need some work, particularly
in light of the recent removal of the eventlet servers.
Change-Id: I2ffaed710fac2612f5337aca5192af15eab46861
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
When retrieving multiple - or all - server groups, the code tries to
find non-deleted members for each server group in every cell
individually. This is highly inefficient, which is especially noticeable
when the number of server groups rises.
We change this to query all members of all server groups we will reply
with (i.e. from the already limited list) in advance and pass this set
of existing uuids into the function formatting the server group. This is
more efficient, because we only do one large query instead of up to 1000
times the number of cells.
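The batching can be sketched as follows (function names and data shapes are illustrative, not the real Nova DB API):

```python
# Illustrative sketch: one members query per cell for ALL groups,
# instead of one query per group per cell.
def existing_member_uuids(cells, groups, query_cell):
    # Collect every member uuid of the (already limited) group list.
    wanted = set()
    for group in groups:
        wanted.update(group['members'])
    found = set()
    for cell in cells:
        # query_cell stands in for a per-cell DB lookup returning the
        # subset of `wanted` uuids that still exist (not deleted) there.
        found.update(query_cell(cell, wanted))
    return found

def format_group(group, existing):
    # Format against the precomputed set: no further DB calls needed.
    return {'name': group['name'],
            'members': [m for m in group['members'] if m in existing]}
```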
Change-Id: I3459ce7a8bec9a9e6f3a3b496a3e441078b86af0
Signed-off-by: Johannes Kulik <johannes.kulik@sap.com>
Partial-Bug: #2122109