During graceful shutdown, the compute service keeps a 2nd RPC
server active which can be used to finish in-progress
operations. Like live migration, resize and cold migration
also perform RPC calls between the source and destination
computes. For those operations too, we can use the 2nd RPC
server and make sure they complete during graceful shutdown.
A quick overview of which RPC methods are involved in
resize/cold migration and which of them will use the 2nd RPC
server (a hedged client-side sketch follows the list):
Resize/cold migration:
- prep_resize: NO, the resize/migration has not started yet.
- resize_instance: YES, this is where the resize/migration starts.
- finish_resize: YES
- cross-cell resize case:
  - prep_snapshot_based_resize_at_dest: NO, this is an initial check
    and the migration has not started.
  - prep_snapshot_based_resize_at_source: YES, this starts the
    migration.
Confirm resize:
- confirm_resize: NO
- cross-cell confirm resize case:
  - confirm_snapshot_based_resize: NO
Revert resize:
- revert_resize: NO
- check_instance_shared_storage: YES. This is called from dest to
  source, so we need the source to respond to it so that the revert
  can continue.
- finish_revert_resize on source: YES. At this stage the revert resize
  is in progress and abandoning it here can leave the migration in an
  unrecoverable state.
- cross-cell revert case:
  - revert_snapshot_based_resize_at_dest: NO
  - finish_revert_snapshot_based_resize_at_source: YES
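A hedged sketch of the caller side for the methods marked YES above,
using the oslo.messaging API; the client wiring, topic handling and
method signature here are illustrative, not Nova's actual RPC API
code:

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_rpc_transport(cfg.CONF)
    client = messaging.RPCClient(
        transport, messaging.Target(topic='compute'))

    def resize_instance(ctxt, host, **kwargs):
        # Route to the RPC server that stays up during graceful
        # shutdown.
        cctxt = client.prepare(topic='compute-alt', server=host)
        cctxt.cast(ctxt, 'resize_instance', **kwargs)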
Partial implement blueprint nova-services-graceful-shutdown-part1
Change-Id: If08b698d012a75b587144501d829403ec616f685
Signed-off-by: Ghanshyam Maan <gmaan.os14@gmail.com>
For graceful shutdown, the compute service will have two RPC servers.
One RPC server is used for new requests and is stopped during graceful
shutdown; the 2nd RPC server (listening on the 'compute-alt' topic)
is used to complete in-progress operations.
We select the operations (case by case) and their RPC methods to use
the 2nd RPC server so that they will not be interrupted when shutdown
is initiated, and graceful shutdown will keep the 2nd RPC server
active for graceful_shutdown_timeout. A new method
'prepare_for_alt_rpcserver' is added which falls back to the first RPC
server if it detects an old compute.
As this has upgrade impact, it bumps the compute service version and
adds release notes for the change.
The list of operations that should use the 2nd RPC server will grow
eventually; this commit moves the operations below to the 2nd
RPC server:
* Live migration
  - Live migration: it uses the 2nd RPC server and will try to
    complete the operation during shutdown.
  - live_migration_force_complete does not need to use the 2nd RPC
    server. It is a direct RPC request from the API to the compute and
    if it is rejected during shutdown, that is fine; it can be
    initiated again once the compute is up.
  - live_migration_abort does not need to use the 2nd RPC server.
    Ditto, it is a direct RPC request from the API to the compute. It
    cancels a queued live migration, but if the migration has already
    started, the driver cancels it. If the request is rejected during
    shutdown because RPC is stopped, that is fine; it can be initiated
    again.
* Server external events
* Get server console
As graceful shutdown cannot be tested in tempest, this adds a new job
to test it. Currently it tests the live migration operation, which can
be extended to other operations that will use the 2nd RPC server.
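A minimal sketch of the two-server arrangement with plain
oslo.messaging; the endpoints, host and timeout handling are
illustrative, not the actual Nova code:

    import time

    import oslo_messaging as messaging

    def start_rpc_servers(conf, endpoints, host):
        transport = messaging.get_rpc_transport(conf)
        main = messaging.get_rpc_server(
            transport, messaging.Target(topic='compute', server=host),
            endpoints)
        alt = messaging.get_rpc_server(
            transport, messaging.Target(topic='compute-alt', server=host),
            endpoints)
        main.start()
        alt.start()
        return main, alt

    def graceful_shutdown(main, alt, graceful_shutdown_timeout):
        main.stop()   # stop accepting new requests right away
        main.wait()
        # Crude stand-in for the real logic: keep the alt server up so
        # in-progress operations can finish their RPC round trips.
        time.sleep(graceful_shutdown_timeout)
        alt.stop()
        alt.wait()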
Partial implement blueprint nova-services-graceful-shutdown-part1
Change-Id: I4de3afbcfaefbed909a29a831ac18060c4a73246
Signed-off-by: Ghanshyam Maan <gmaan.os14@gmail.com>
Previous patches removed direct eventlet usage from nova-compute so
now we can run it with native threading as well. This patch documents
the possibility and switches both nova-compute processes to native
threading mode in the nova-next job.
Change-Id: I7bb29c627326892d1cf628bbf57efbaedda12f1a
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Move the execution of build_and_run_instance and snapshot_instance to
one common long task executor. Originally snapshot ran on the RPC
pool, while build_and_run_instance ran on the default pool. Each of
these tasks also had a separate concurrency limit enforced by a
semaphore.
After this patch each of these tasks uses a common Executor. The size
of that executor and the way we limit the concurrency differ between
eventlet and native threading mode.
In eventlet mode we have one big Executor with "unlimited" size, and
individual semaphores are used for each task type to enforce the
configured limits.
In threading mode we require the admin to configure the two limits to
the same number, and we warn if not. We use that limit (or the max of
the two limits) as the size of the long task Executor. As the limits
are the same, we no longer enforce individual limits; the executor
size ensures the shared limit is kept. Since the limit is shared, a
single operation type can consume the whole limit.
Note that while live migration is also a long-running task, we cannot
put it into the same long_task_executor as build and snapshot because
we need:
1. a very small limit of concurrent live migrations compared to
   builds and snapshots
2. a way to easily cancel live migrations that are waiting due to the
   limit
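As a rough sketch of the two sizing strategies, assuming futurist (the
executor library commonly used in OpenStack services); the names and
the figures mirror the text above, everything else is illustrative:

    import threading

    import futurist

    def build_long_task_executor(eventlet_mode, build_limit, snap_limit):
        if eventlet_mode:
            # One big pool (GreenThreadPool's default size is 1000);
            # per-task-type semaphores enforce the configured limits.
            executor = futurist.GreenThreadPoolExecutor(max_workers=1000)
            limits = {'build': threading.Semaphore(build_limit),
                      'snapshot': threading.Semaphore(snap_limit)}
        else:
            # The pool size itself is the shared limit; no per-type
            # semaphores, so one operation type may consume it all.
            executor = futurist.ThreadPoolExecutor(
                max_workers=max(build_limit, snap_limit))
            limits = None
        return executor, limits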
Change-Id: I88a6a593af8a5b518715e1245a76ee54752afe83
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
We already deprecated the unlimited max_concurrent_live_migrations
config value and now we do the same for max_concurrent_builds and
max_concurrent_snapshots as well. The reason is similar.
* The unlimited meaning was a lie; it was limited by other constructs
  in the code. For these options the limit was the size of the RPC
  executor, which defaults to 64.
* In native threading mode having unlimited concurrent tasks is
  infeasible due to the memory cost of a native thread for each task.
The deprecation is done in a way that keeps a similar behavior as
before in eventlet mode, but in native threading mode we enforce a
strict maximum even if unlimited is requested.
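Conceptually the resolution of a configured value becomes something
like this hedged sketch; the 64 comes from the text above, while
STRICT_MAX and the function itself are hypothetical:

    STRICT_MAX = 64  # hypothetical cap for threading mode

    def effective_limit(configured, eventlet_mode):
        if configured == 0:  # deprecated "unlimited"
            # eventlet keeps roughly the old de-facto behavior (the
            # RPC executor size); threading enforces a strict maximum
            # even though unlimited was requested.
            return 64 if eventlet_mode else STRICT_MAX
        return configured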
Change-Id: Ibbf76c2c85729820035c9791719bf2c864bce12b
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Use the firmware auto-selection feature in libvirt to find the best
UEFI firmware file according to the requested features.
Firmware files may be reselected when a libvirt domain is created from
scratch, while they are kept during hard reboot (or live migration,
which preserves the loader/nvram elements filled by libvirt).
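For reference, libvirt's documented firmware auto-selection is driven
by domain XML along these lines; this is an illustrative fragment, not
necessarily the exact elements Nova emits:

    <os firmware='efi'>
      <firmware>
        <feature enabled='yes' name='secure-boot'/>
      </firmware>
    </os>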
Closes-Bug: #2122296
Related-Bug: #2122288
Implements: blueprint libvirt-firmware-auto-selection
Change-Id: Ie48b020597a1a2fb3280815eec5ba3565e396f9b
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
Preserve NVRAM variable store during stop/start, hard reboot, live
migration, and volume retype.
This does not affect cold migration or shelve.
Previously, for UEFI guests (hw_firmware_type=uefi), every time the
instance was started, the UEFI variable storage for that instance
(/var/lib/libvirt/qemu/nvram/instance-xxxxxxxx_VARS.fd) was deleted
and reinitialized from the default template.
The changes are based on this patch by Jonas Schäfer to preserve the
vTPM state:
https://review.opendev.org/c/openstack/nova/+/955657
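One plausible mechanism, shown only as a hedged sketch with the
libvirt Python binding (not necessarily how this patch implements the
preservation), is to keep the NVRAM file when undefining the domain:

    import libvirt

    def undefine_keep_nvram(dom):
        # Keep the per-instance _VARS.fd so the UEFI variable store
        # survives the next start instead of being re-created from
        # the default template.
        dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_KEEP_NVRAM)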
Closes-Bug: #1633447
Closes-Bug: #2131730
Change-Id: I444a9285c07a04bf08a73772235f8dd73d75e513
Signed-off-by: Nicolai Ruckel <nicolai.ruckel@cloudandheat.com>
Add support for os-vif TAP device pre-creation when Neutron sets
the 'ovs_create_tap' flag in vif_details. This reduces live
migration downtime by ensuring the network is fully wired before
the VM starts.
Changes:
- Add VIF_DETAILS_OVS_CREATE_TAP constant to model.py
- Propagate create_tap from binding details to os-vif port profile
in os_vif_util.py
- Set managed='no' in libvirt XML when create_tap is enabled so
libvirt uses the pre-created TAP device
- Set multiqueue on port profile in _plug_os_vif based on instance
flavor/image hw:vif_multiqueue_enabled property
When checking oslo.versionedobjects fields for backward compat (see
the sketch below):
- Use 'field in obj.fields' to check whether the field exists in the
  schema
- Use 'field in obj' to check whether the field's value is set
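A small sketch of that pattern, with a hypothetical 'create_tap' field
on a versioned object:

    def wants_precreated_tap(vif):
        # Old schema: the field does not exist at all.
        if 'create_tap' not in vif.fields:
            return False
        # New schema, but the sender may not have set a value.
        return 'create_tap' in vif and vif.create_tap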
Depends-On: https://review.opendev.org/c/openstack/os-vif/+/971231
Generated-By: Cursor claude-opus-4.5
Closes-Bug: #2069718
Change-Id: I32343658b53e317696d1bd8b984793bfeeccd409
Signed-off-by: Sean Mooney <work@seanmooney.info>
QEMU's scsi-block device driver does not support the physical_block_size
and logical_block_size properties. When Cinder reports disk geometry
for LUN volumes, Nova was incorrectly including a <blockio> element
in the libvirt XML, causing QEMU to fail with:
Property 'scsi-block.physical_block_size' not found
This fix adds a check to skip blockio generation when source_device
is 'lun', following the existing pattern used for serial at line 1356.
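The guard amounts to something like the following hedged sketch; the
variable names are illustrative, not the actual ones in Nova's libvirt
driver:

    # Skip <blockio> for LUN-backed (scsi-block) disks; QEMU rejects
    # the block size properties there.
    if source_device != 'lun' and block_size_info:
        conf.logical_block_size = block_size_info.get('logical_block_size')
        conf.physical_block_size = block_size_info.get('physical_block_size')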
Generated-By: claude-code (Claude Opus 4.5)
Closes-Bug: #2127196
Change-Id: Idf87e936edd97aac719222942c9842a9aca4c270
Signed-off-by: Sean Mooney <work@seanmooney.info>
Ironic is adding support for VNC consoles tracked under the following
spec[1]. This change provides support for the Nova Ironic driver to
access the consoles created by this feature effort.
This supersedes an existing Nova spec[2] to add VNC console support to
the Ironic driver, so this change can be considered to implement this
spec also. This change can be merged independently of the Ironic work,
as the Ironic driver handles the VNC console not being available.
The prerequisites for a graphical console being available for an
Ironic driver node are:
- Ironic is configured to enable graphical consoles
- The node ``console_interface`` is a graphical driver such as
``redfish-graphical`` or ``fake-graphical``
- ``nova-novncproxy`` can make network connections to the VNC servers
which run adjacent to ``ironic-conductor``
The associated Depends-On adds the novnc validation check to the
baremetal basic ops test, which runs in the
ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa job.
In the support matrix, console.vnc support is set to partial for
ironic due to the current lack of vencrypt support on the ironic side.
[1] https://specs.openstack.org/openstack/ironic-specs/specs/approved/graphical-console.html
[2] https://specs.openstack.org/openstack/nova-specs/specs/2023.1/approved/ironic-vnc-console.html
Related-Bug: 2086715
Implements: blueprint ironic-vnc-console
Change-Id: Iec26c67e29f91954eafc6a5a81086e36798d3f26
Signed-off-by: Steve Baker <sbaker@redhat.com>
This changes the thread pool usage of the ComputeManager to go through
the concurrency-mode-aware util functions.
The concurrent live migration pool had a seemingly unlimited option
when configured with the value 0, but in reality GreenThreadPool has a
default worker size of 1000. In practice it is almost never right to
have more than one live migration running concurrently. Also, with
native threading, having 1000 workers is just too costly. So we
decided to deprecate the value 0 and changed the implementation of
unlimited to mean 5 threads in native threading mode. We kept the 1000
greenthreads in eventlet mode for backward compatibility.
The _sync_power_states periodic task also spawns a task for each
instance to be synced. As these tasks and the caller share a data
structure, a lock is needed to avoid race conditions.
Also, the default pool size for these tasks in our configuration is
1000. That would use a lot of memory on a busy host in native
threading mode, so we changed the default value from 1000 to 5.
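The locking described above amounts to roughly this sketch; the helper
is hypothetical, and in eventlet mode threading.Lock is monkey-patched
to a green lock:

    import threading

    _sync_lock = threading.Lock()
    _db_power_state = {}  # shared by the caller and all spawned tasks

    def _sync_one(instance):
        state = query_driver_power_state(instance)  # hypothetical helper
        with _sync_lock:
            _db_power_state[instance.uuid] = state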
Change-Id: I9567d5fabdf086b5d0493103d9f6bde4f66af387
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This is a follow-up for the release notes added in commit
35207ee8b5, which changed the default mode
for the scheduler and the API services. At that time we failed to note
the upgrade impact of that change, so this patch extends the reno with
an upgrade note.
Change-Id: I280e7eb9c1da6eeaf50e96e8b19e296961f2651a
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
The fault field is empty in the response of the GET /servers/detail
API if the instance (and hence its instance_faults DB entry) is in a
nova cell DB. By contrast, for the /servers/:id API the fault is
retrieved correctly no matter which nova cell the instance belongs to.
Closes-Bug: #1856329
Change-Id: I1726f53cfeac0a67a5dacdddda2af2cc1db0af0f
Signed-off-by: Marius Leustean <marius.leustean@sap.com>
Make sure that a consistent program name is always set, so that
the same config sub-directory (/etc/{project}/{prog}.conf.d) is used
regardless of how the API service is run.
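With oslo.config, pinning the program name looks roughly like this
sketch; the exact call site in Nova may differ:

    import sys

    from oslo_config import cfg

    def parse_args(argv=None):
        # A fixed prog means oslo.config always resolves the same
        # /etc/nova/nova-api.conf.d directory, however the service is
        # launched (console script, WSGI module, etc.).
        cfg.CONF(argv or sys.argv[1:], project='nova', prog='nova-api')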
Closes-Bug: #2098514
Change-Id: Ib5c6d431176b83eefafddc1b35589015db6dfd04
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
Ignore (1) stateless mode firmware and (2) memory device firmware,
which do not include a few core keys such as nvram-template. This is
a temporary (and backportable) workaround until firmware detection
using libvirt's internal feature is implemented by [1].
[1] https://blueprints.launchpad.net/nova/+spec/libvirt-firmware-auto-selection
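A hedged sketch of the filtering, assuming the QEMU firmware
descriptor JSON layout; the helper itself is illustrative:

    def is_supported_descriptor(desc):
        mapping = desc.get('mapping', {})
        # Skip memory-mapped firmware and stateless flash firmware,
        # which carry no nvram-template.
        if mapping.get('device') != 'flash':
            return False
        return 'nvram-template' in mapping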
Closes-Bug: #2122288
Change-Id: I99bc36fdd5df816c9ae374db71e4734fb7fc467b
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
States were added to the Ironic API to enable the node servicing
feature, which can be performed on nodes provisioned with Nova
instances. Currently, if asked to delete these instances, Nova will
only remove the instance metadata and not tear them down.
This change has two parts:
- I have added the new, relevant states to _UNPROVISION_STATES in
driver.py, which now allows Nova to know that SERVIC* states and
DEPLOYHOLD are safe to unprovision from.
- I have added all existing ironic states to ironic_states.py and the
PROVISION_STATE_LIST constant and check the state against it -- in a
case where a completely unknown state is returned, we should attempt
an unprovision.
This fix needs to be backported as far as possible, as this bug has
existed since Antelope / 2023.1 (DEPLOYHOLD) or Bobcat / 2023.2
(SERVIC*).
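The resulting check is roughly the following sketch; _UNPROVISION_STATES
and PROVISION_STATE_LIST are the constants named above, while the
function itself is illustrative:

    def should_unprovision(provision_state):
        if provision_state in _UNPROVISION_STATES:
            return True
        # A completely unknown state: attempt an unprovision rather
        # than silently leaving the node deployed.
        return provision_state not in PROVISION_STATE_LIST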
Assisted-by: Claude Code
Closes-bug: #2131960
Change-Id: I31c70d35b0e6e9f8d2252bfb2f0bdec477cc6cc7
Signed-off-by: Jay Faulkner <jay@jvf.cc>
Update the server shares API policies to use
PROJECT_READER_OR_ADMIN and PROJECT_MEMBER_OR_ADMIN instead of
PROJECT_READER and PROJECT_MEMBER.
This aligns the server shares policies with other compute API
policies and ensures administrators can list, attach, show and
detach shares regardless of project policy overrides.
Signed-off-by: René Ribaud <rene.ribaud@gmail.com>
Change-Id: I2b237d56b08e3080475dc500e204298018af29c7
With the NFS, FC, and iSCSI Cinder volume backends, Nova explicitly
sets AIO mode ``io=native`` in the libvirt guest XML. Operators may set
the new option to True in order to defer AIO mode selection to QEMU if
forcing ``io=native`` is not desired.
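For illustration, the forced mode shows up in the guest disk XML as a
fragment like the following; when selection is deferred, the io
attribute is simply omitted so QEMU applies its own default:

    <driver name='qemu' type='raw' cache='none' io='native'/>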
Closes-Bug: #2129788
Change-Id: I6e51706b5cb8be5becebbafe9108df1ba9e0f69f
Signed-off-by: melanie witt <melwittt@gmail.com>
If a host has multiple instances with the same shared
multiattach volume and you delete them in parallel,
Nova needs to correctly clean up the volume connection on
the host when the last instance is removed.
Currently we do not have a volume-level lock to guard the
critical section that determines if the current disconnect is
removing the final usage of the volume.
This can lead to leaking the volume or other issues as
noted in bug #2048837.
This change introduces a FairLockGuard to ensure we acquire
and release the locks in a fair and ordered manner.
The FairLockGuard is used to lock the server delete with
one lock per multiattach volume.
This ensures that disconnects of different volumes can happen
in parallel, but if we are disconnecting the same volume in multiple
greenthreads concurrently they will be serialised.
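A hedged sketch of the critical section using oslo.concurrency's fair
locks; FairLockGuard itself is this change's construct and the helpers
below are hypothetical:

    from oslo_concurrency import lockutils

    def disconnect_if_last_user(volume_id, instance):
        with lockutils.lock('connect_volume-%s' % volume_id, fair=True):
            if not volume_in_use_on_host(volume_id, instance.host):
                disconnect_volume_on_host(volume_id, instance.host)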
Assisted-By: Cursor Auto
Closes-Bug: #2048837
Change-Id: I67e10cace451259127a5d7da8fbdf7739afe3e51
Signed-off-by: Sean Mooney <work@seanmooney.info>
This patch implements parallel live migrations for the libvirt driver.
It is achieved through the introduction of a new configuration
parameter, `live_migration_parallel_connections`.
This makes it possible to eliminate the bottleneck on live migration
speed by establishing multiple connections for memory transfer, thus
leveraging QEMU's multi-threaded behavior.
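At the libvirt API level, parallel (multifd) migration is requested
roughly as follows; this is a hedged sketch, not the driver's actual
code:

    import libvirt

    def live_migrate_parallel(dom, dest_uri, connections):
        params = {
            libvirt.VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS: connections,
        }
        flags = (libvirt.VIR_MIGRATE_LIVE
                 | libvirt.VIR_MIGRATE_PEER2PEER
                 | libvirt.VIR_MIGRATE_PARALLEL)
        dom.migrateToURI3(dest_uri, params, flags)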
Implements-blueprint: libvirt-parallel-migrate
Change-Id: I98ff5f07f94d94f3aa0227591f425d532773adb0
Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com>
This patch switches the default concurrency mode to native threading
for the services that gained native threading support in Flamingo:
nova-scheduler, nova-api, and nova-metadata.
The OS_NOVA_DISABLE_EVENTLET_PATCHING env variable can still be used
to explicitly switch the concurrency mode back to eventlet via
OS_NOVA_DISABLE_EVENTLET_PATCHING=false.
We also ensure that the cover, docs, py3xx and functional tox targets
keep running with eventlet while py312-threading keeps running
with native threading.
Change-Id: I86c7f31f19ca3345218171f0abfa8ddd4f8fc7ea
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This is a technical dead end and not something we're going to be able
to support long-term in pbr. We need to push users away from this.
Doing so highlights quite a few places where our docs need some work,
particularly in light of the recent removal of the eventlet servers.
Change-Id: I2ffaed710fac2612f5337aca5192af15eab46861
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
When retrieving multiple (or all) server groups, the code tries to
find the non-deleted members of each server group in every cell
individually. This is highly inefficient, which is especially
noticeable when the number of server groups rises.
We change this to query all members of all server groups we will reply
with (i.e. from the already limited list) in advance and to pass this
set of existing uuids into the function formatting the server group.
This is more efficient because we only do one large query instead of
up to 1000 times the number of cells.
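In sketch form, with illustrative helpers rather than the actual Nova
code:

    def format_server_groups(ctxt, server_groups):
        member_uuids = set()
        for sg in server_groups:
            member_uuids.update(sg.members)
        # One large query for all groups instead of one per group per
        # cell.
        existing = fetch_existing_instance_uuids(ctxt, member_uuids)
        return [format_server_group(sg, existing) for sg in server_groups]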
Change-Id: I3459ce7a8bec9a9e6f3a3b496a3e441078b86af0
Signed-off-by: Johannes Kulik <johannes.kulik@sap.com>
Partial-Bug: #2122109
When using the weigher, we need to target the right cell context for
the existing instances on the host.
fill_metadata also had an issue: we need to pass the dict value from
the updated dict, keyed by the instance uuid, not the whole dict of
updated instances.
Change-Id: I18260095ed263da4204f21de27f866568843804e
Closes-Bug: #2125935
Signed-off-by: Sylvain Bauza <sbauza@redhat.com>
Previous patches removed direct eventlet usage from nova-conductor so
now we can run it with native threading as well. This patch documents
the possibility and switches both nova-conductor processes to native
threading mode in the nova-next job.
Change-Id: If26c0c7199cbda157f24b99a419697ecb6618fa6
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Add file to the reno documentation build to show release notes for
stable/2025.2.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2025.2.
Sem-Ver: feature
Change-Id: I7d967c1d5b1ac7fa2e601acfa25c3b5c3880056e
Signed-off-by: OpenStack Release Bot <infra-root@openstack.org>
Generated-By: openstack/project-config:roles/copy-release-tools-scripts/files/release-tools/add_release_note_page.sh