This is a technical dead end and not something we're going to be able to
support long-term in pbr. We need to push users away from this. Doing so
highlights quite a few places where our docs need some work, particularly
in light of the recent removal of the eventlet servers.
Change-Id: I2ffaed710fac2612f5337aca5192af15eab46861
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
When using the weigher, we need to target the right cell context for the
existing instances in the host.
fill_metadata also had an issue: we need to pass the value from the
updated dict keyed by the instance UUID, not the whole dict of updated
instances.
Change-Id: I18260095ed263da4204f21de27f866568843804e
Closes-Bug: #2125935
Signed-off-by: Sylvain Bauza <sbauza@redhat.com>
Previous patches removed direct eventlet usage from nova-conductor so
now we can run it with native threading as well. This patch documents
that possibility and switches both nova-conductor processes to native
threading mode in the nova-next job.
Change-Id: If26c0c7199cbda157f24b99a419697ecb6618fa6
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Add file to the reno documentation build to show release notes for
stable/2025.2.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2025.2.
Sem-Ver: feature
Change-Id: I7d967c1d5b1ac7fa2e601acfa25c3b5c3880056e
Signed-off-by: OpenStack Release Bot <infra-root@openstack.org>
Generated-By: openstack/project-config:roles/copy-release-tools-scripts/files/release-tools/add_release_note_page.sh
The /os-hypervisors/detail API endpoint was experiencing significant
performance issues in environments with many compute nodes when using
microversion 2.88 or higher, as it made sequential RPC calls to gather
uptime information from each compute node.
This change optimizes uptime retrieval by:
* Adding uptime to periodic resource updates sent by nova-compute to the
database, eliminating synchronous RPC calls during API requests
* Restricting RPC-based uptime retrieval to hypervisor types that support
it (libvirt and z/VM), avoiding unnecessary calls that would always fail
* Preferring cached database uptime data over RPC calls when available
Closes-Bug: #2122036
Assisted-By: Claude <noreply@anthropic.com>
Change-Id: I5723320f578192f7e0beead7d5df5d7e47d54d2b
Co-Authored-By: Sylvain Bauza <sbauza@redhat.com>
Signed-off-by: Sean Mooney <work@seanmooney.info>
When the VMCoreInfo device is enabled, the QEMU fw_cfg device in the
guest OS requires DMA between the host OS and the guest OS through the
device. However, DMA is prohibited when guest memory is encrypted using
SEV, and the attempt results in a guest kernel crash.
Do not add VMCoreInfo when memory encryption is enabled.
Closes-Bug: #2117170
Change-Id: I05c7b1ae46ccd8d9aa42456b493ac6ee7ddd8bae
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
This is the last piece to allow users to request AMD SEV-ES for memory
encryption instead of AMD SEV. The CPU feature for memory encryption
can now be requested via the hw:mem_encryption_model flavor extra spec
or via the hw_mem_encryption_model image property.
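For example, the extra spec could be set along these lines (the flavor
and image names, and the exact accepted value token, are illustrative
assumptions; check the flavor extra spec documentation for your
release):

```shell
# Illustrative only: names and the value token are assumptions.
openstack flavor set sev-es.flavor --property hw:mem_encryption_model=amd-sev-es
openstack image set sev-es-image --property hw_mem_encryption_model=amd-sev-es
```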
Implements: blueprint amd-sev-es-libvirt-support
Change-Id: Ifc9b86ad7db887cc22b2cd252fe8adc81fdc29c6
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
Detect AMD SEV-ES support by kernel/qemu/libvirt and generate a nested
RP for ASID slots for SEV-ES under the compute node RP.
Deprecate the [libvirt] num_memory_encryption_guests option because
the option is effective only for SEV, and the maximum number of
SEV/SEV-ES guests can now be detected from the domain capabilities
presented by libvirt.
Note that creating an instance with memory encryption enabled now
requires the AMD SEV trait, because these instances can't run in the
SEV-ES slots, which are added by this change.
Partially-Implements: blueprint amd-sev-es-libvirt-support
Change-Id: I5968e75325b989225ed1fc6921257751ae227a0b
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
This patch refines our logging, docs, and release notes about the
native threading mode of the scheduler, api, and metadata services to
ask for pre-production testing before enabling it in production.
Change-Id: I04bbb3d7e4664a0cab8b30f4c34ee71774536353
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This change tightens the validation around the attachment
update API to ensure that it can only be called if the source
volume has a non-empty migration status.
That means it will only accept a request to swap the volume if
it is the result of a cinder volume migration.
This change is being made to prevent the instance domain
XML from getting out of sync with the nova BDM records
and cinder connection info. In the future support for direct
swap volume actions can be re-added if and only if the
nova libvirt driver is updated to correctly modify the domain.
The libvirt driver is the only driver that supported this API
outside of a cinder orchestrated swap volume.
If the domain XML and BDMs are allowed to get out of sync and an admin
later live-migrates the VM, the host path will not be modified for the
destination host. Normally this results in a live migration failure,
which often prompts the admin to cold migrate instead. However, if the
source device path exists on the destination, the migration will
proceed. This can lead to two VMs using the same host block device.
At best this will cause a crash or data corruption.
At worst it will allow one guest to access the data of another.
Prior to this change there was an explicit warning in the nova API
reference stating that humans should never call this API because it can
lead to this situation. It is now considered a hard error due to the
security implications.
Closes-Bug: #2112187
Depends-on: https://review.opendev.org/c/openstack/tempest/+/957753
Change-Id: I439338bd2f27ccd65a436d18c8cbc9c3127ee612
Signed-off-by: Sean Mooney <work@seanmooney.info>
It turns out that nova-api and nova-metadata only depend on spawning
threads via scatter-gather. The scatter-gather already supports both
eventlet and threading mode so we can switch these services.
Our WSGI services (nova-api, nova-metadata) are not relying on
oslo.service to fork worker processes, but expect the web server to
handle that (uwsgi, apache mod_wsgi). This means we don't need to handle
any forking issues as no nova code runs before the fork.
Change-Id: Id3a339c605dfc730bdb7994c3ca45baafeb5af80
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
At service startup nova needs to initialize either the eventlet or the
threading backend of oslo.service, so this patch reuses the existing
logic behind OS_NOVA_DISABLE_EVENTLET_PATCHING.
When the OS_NOVA_DISABLE_EVENTLET_PATCHING environment variable is set
to true the service will select the threading backend; otherwise it
selects the eventlet backend.
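As a hedged sketch (not nova's actual code), the selection rule described above amounts to something like:

```python
import os

def select_backend():
    # Hypothetical illustration of the rule: the threading backend is
    # chosen only when the env var is explicitly set to "true";
    # otherwise the eventlet backend is selected.
    value = os.environ.get("OS_NOVA_DISABLE_EVENTLET_PATCHING", "")
    if value.lower() == "true":
        return "threading"
    return "eventlet"
```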
Also, to avoid later monkey patch calls invalidating the selection, the
monkey_patch code is poisoned when the threading backend is selected.
This patch also makes sure that oslo.messaging is initialized with the
matching executor backend.
As this is the last step to make nova-scheduler run in threading mode,
this patch adds a release note as well.
Change-Id: I6e2e6a43df78d23580b5e7402352a5036100ab36
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Neutron has used the term project instead of tenant for a long time now.
Rename the option accordingly and drop deprecated group and deprecated
name aliases from other options in the '[api]' group.
Change-Id: I5a547c7b6232c24b3a0f0c6d0ac916229a91b038
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Should be using 'upgrade', not 'upgrades'.
This fixes the upgrade note not being shown in the release notes.
Change-Id: I9ba3751988bb5ca2ddd89e8cffbc88d818068e88
Signed-off-by: Callum Dickinson <callum.dickinson@catalystcloud.nz>
This patch adds the image_meta used to launch an instance to its
libvirt domain metadata.
Nova exposes the image_meta structure when publishing
instance notifications. Downstream services that consume these
notifications such as Ceilometer use this to provide metadata about
the image originally used to create an ephemeral instance or
instance boot volume.
Ceilometer also polls the running instances using the Compute Agent
by reading the metadata of active instances from the libvirt socket.
Adding the data stored in image_meta to the libvirt metadata allows
Ceilometer to discover and expose the actual image metadata used to
launch instances using its compute pollsters, without performing
additional API queries to Nova, Cinder and Glance to get this
information (and even if that was done, it could be different to what
is actually running if images are updated after the fact).
To match the existing image_meta definition from Nova notifications,
depending on the type of instance, the behaviour of the metadata is:
* Instance built from image
=> UUID set for image, image metadata added to the XML
* Instance launched from volume built from image
=> UUID empty, volume image metadata added to the XML
* Instance launched from volume NOT built from image
=> UUID empty, no attributes from image meta defined
Signed-off-by: Callum Dickinson <callum.dickinson@catalystcloud.nz>
Implements: blueprint xml-image-meta
Change-Id: I09f4f76fff30f9cccf35f4832b9c870095c380ad
This change adds the following new information to the existing
flavor metadata structure in the libvirt guest XML:
* Flavor ID
* Extra specs
Downstream clients that query this guest XML such as Ceilometer
may also require this information. If it's not defined in this
metadata, clients are forced to perform a Nova API query just
to fetch this additional information.
This change should almost eliminate the need to perform such
API queries.
Signed-off-by: Callum Dickinson <callum.dickinson@catalystcloud.nz>
Implements: blueprint xml-image-meta
Change-Id: I249bc117a796f28e9929e12707a5afb6c869eb89
Nova adds the temporary shelved image ID to libvirt metadata
when unshelving image-backed instances. This is corrected when
the instance is cold restarted, resized or migrated but causes
issues for other services such as Ceilometer which rely on this
data being correct.
This patch ensures the correct image ID is set in the libvirt
domain metadata when image-backed instances are unshelved.
Signed-off-by: Callum Dickinson <callum.dickinson@catalystcloud.nz>
Co-Authored-By: Jeremy Lamb <jeremy.lamb@catalystcloud.nz>
Closes-Bug: #2100588
Change-Id: Ifd9f092299912606931848b2b25b4be6b36effac
This is the implementation for the USB controller extra spec as
desired by the new libvirt spice-direct console mode. USB device
redirection support is a frequently requested feature for VDI users.
Change-Id: I71edd03b5c63a8028c23a746c01c59d303994144
Signed-off-by: Michael Still <mikal@stillhq.com>
The Keystone project manager role can be used for the project-level
management APIs. Nova introduced the manager role in its policy
defaults.
To introduce the manager role, we need to make the migration policies
more granular. Adding separate policies for host-related operations
allows us to open the migration operations to the project manager role.
The existing policy is checked when a migration is requested without
specifying a host, and the new policy is checked when a host is
specified. The same applies to listing migrations, where the new policy
controls whether the host info is returned.
Also adds docs and release notes.
Partially-Implements: blueprint policy-manager-role-default
Change-Id: Ie7d135e4d24ac6d53c46a4c69ade0b0bda554e71
Signed-off-by: Ghanshyam Mann <gmaan@ghanshyammann.com>
Signed-off-by: ghanshyam <gmaan@ghanshyammann.com>
This is the implementation for the sound model extra spec as
desired by the new libvirt spice-direct console mode. Sound
support is a frequently requested feature for VDI users.
Change-Id: I33b8fc0136b4c1783b5c493e8ca9a15110767f6c
Signed-off-by: Michael Still <mikal@stillhq.com>
The libvirt driver now automatically enables autodeflate and
freePageReporting attributes for virtio memory balloon devices.
The autodeflate feature allows the QEMU virtio memory balloon
to release memory before the Out of Memory killer activates.
The freePageReporting feature enables returning unused pages
back to the hypervisor for use by other guests or processes,
improving overall memory efficiency on compute hosts.
These features are always enabled when a memballoon device is
configured, requiring no additional configuration from operators.
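In libvirt domain XML terms, the resulting device definition looks
roughly like this (attribute names are from libvirt's domain format;
exact placement within the XML is illustrative):

```xml
<memballoon model='virtio' autodeflate='on' freePageReporting='on'/>
```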
Implements: blueprint automatic-memballoon-freeing
Generated-By: claude-code
Change-Id: If47a6d38cd311b08b78acffb307a99a7a2a080a1
Signed-off-by: Sean Mooney <work@seanmooney.info>
The _create_ephemeral() method is responsible for creating ephemeral
disks with image type "raw" and formatting them with mkfs. In the case
of [libvirt]images_type "qcow2", _create_ephemeral() will create
backing files.
Currently we are not using a consistent naming convention for choosing
the filesystem label for ephemeral disks. When we create a server for
example, we go through the disks and label them "ephemeral0",
"ephemeral1", "ephemeral2", etc.
When we hard reboot a server, there is a check for missing backing
files; if any are missing, a new backing file will be created, but
instead of being labeled "ephemeralN" the code attempts to label it
with the name of the backing file itself, for example
"ephemeral_1_40d1d2c". This will fail if the filesystem used for
ephemeral disks limits the length of filesystem labels (VFAT, XFS,
...). For example:
mkfs.vfat: Label can be no longer than 11 characters
This adds a helper method for obtaining ephemeral disk filesystem
labels and uses it consistently in the few places where fs_label is
specified.
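A minimal sketch of what such a helper might look like (the name and
signature here are hypothetical, not nova's actual implementation):

```python
def ephemeral_fs_label(index):
    # Hypothetical helper: always produce "ephemeralN" style labels,
    # which stay within the 11-character limit of filesystems like VFAT.
    return "ephemeral%d" % index
```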
Closes-Bug: #2061701
Change-Id: Id033a5760272e4fb06dee2342414b26aa16ffe24
This change bumps to the latest version of each of our pre-commit
hooks. Of note, this adds py3.13 support to autopep8.
Codespell was also updated and the new spelling
issues resolved.
Change-Id: I1aab019ffb0ee9366a7d26515bef1335d09044df
The tl;dr is to 1) avoid trying to disconnect volumes on the
destination if they were never connected in the first place and
2) avoid trying to disconnect volumes on the destination using block
device info for the source.
Details:
* Only remotely disconnect volumes on the destination if the failure
was not during pre_live_migration(). When pre_live_migration() fails,
its exception handling deletes the Cinder attachment that was created
before re-raising and returning from the RPC call. And the BDM
connection_info in the database is not guaranteed to reference the
destination because a failure could have happened after the Cinder
attachment was created but before the new connection_info was saved
back to the database. In this scenario, there is no way to reliably
disconnect volumes in the destination remotely from the source because
the destination connection_info needed to do it might not be
available.
* Due to the first point, this adds exception handling to disconnect
the volumes while still on the destination, while the destination
connection_info is still available instead of trying to do it
remotely from the source afterward.
* Do not pass Cinder volume block_device_info when calling
rollback_live_migration_on_destination() because volume BDM records
have already been rolled back to contain info for the source by
that point. Not passing volume block_device_info will prevent
driver.destroy() and subsequently driver.cleanup() from attempting to
disconnect volumes on the destination using connection_info for the
source.
Closes-Bug: #1899835
Change-Id: Ia62b99a16bfc802b8ba895c31780e9956aa74c2d
The nova debugger functionality was intended to help debug running
processes. However, it has never been reliable due to our use of
eventlet and is generally not required when not using eventlet, i.e.
you can just run the nova console scripts from a debugger or add pdb
statements as required.
As part of the eventlet removal, the debugger functionality is removed
given that it is untested and undocumented.
Change-Id: I7bf88f06f3d1dbd2c7e342b27a21440a123c631d