The Nova Hyper-V driver is not tested in OpenStack upstream and has no maintainers.
This driver was marked as deprecated in the Antelope release. It has a dependency
on the OpenStack Winstackers project, which has been retired[1].
As discussed at the vPTG[2], this change removes the Hyper-V driver, its tests, and its config options.
[1] https://review.opendev.org/c/openstack/governance/+/886880
[2] https://etherpad.opendev.org/p/nova-caracal-ptg#L301
Change-Id: I568c79bae9b9736a20c367096d748c730ed59f0e
list_instances and list_instance_uuids, as written in the Ironic driver,
do not currently respect conductor_group partitioning. Given that a
nova-compute is intended to limit its scope of work to the conductor group
it is configured to work with, this is a bug.
Additionally, this should be a significant performance boost, for a
couple of reasons. Firstly, instead of calling the Ironic API and
getting all nodes rather than just the subset (when using a conductor
group), we now properly get only the subset of nodes -- this is the
optimized path in the Ironic DB and API code. Secondly, we now use the
driver's node cache to respond to these requests. Since list_instances
and list_instance_uuids are used by periodic tasks, having them operate
on data that may be slightly stale should have minimal impact compared
to the performance benefits.
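A minimal sketch of the cache-based listing idea follows; the names are
illustrative, not the actual Nova Ironic driver code:

    # Hypothetical sketch -- the real driver keeps a much richer node cache.
    def list_instance_uuids(node_cache):
        # node_cache only holds nodes from the conductor group this
        # nova-compute is configured for, so no extra filtering is needed.
        return [node.instance_uuid
                for node in node_cache.values()
                if node.instance_uuid is not None]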
Closes-bug: #2043036
Change-Id: If31158e3269e5e06848c29294fdaa147beedb5a5
This option was deprecated in favor of the HTTPProxyToWSGI middleware
in the 26.0.0 release[1].
[1] cf906cdcc2
Related-Bug: #1967686
Change-Id: Iad8880127531dc2788d646f8a05b5c17fd9d0969
In I008841988547573878c4e06e82f0fa55084e51b5 we started enabling a
bunch of libvirt enlightenments for Windows unconditionally. It turns
out the `evmcs` enlightenment only works on Intel hosts, and we broke
the ability to run Windows guests on AMD machines. Until we become
smarter about conditionally enabling evmcs (with something like traits
for host CPU features), just stop enabling it altogether.
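For reference, the enlightenment in question corresponds to the `evmcs`
Hyper-V feature in the libvirt domain XML, roughly along these lines
(illustrative snippet; the surrounding features are only examples):

    <features>
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <evmcs state='on'/>  <!-- only usable on Intel (VMX) hosts -->
      </hyperv>
    </features>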
Change-Id: I2ff4fdecd9dc69de283f0e52e07df1aeaf0a9048
Closes-bug: 2009280
Libvirt has implemented the capability to expose the maximum number of
SEV guests and SEV-ES guests in 8.0.0[1][2]. This allows nova to detect
the maximum number of memory-encrypted guests using that feature.
To preserve the current behavior, the detection is not used if the
[libvirt]num_memory_encrypted_guests option is set.
Note that current nova supports only SEV and does not support SEV-ES,
so this implementation only uses the maximum number of SEV guests.
The maximum number of SEV-ES guests will be used once we implement
support for SEV-ES.
[1] https://gitlab.com/libvirt/libvirt/-/commit/34cb8f6fcd6a56a7bbcef2f7402def1682509e16
[2] https://gitlab.com/libvirt/libvirt/-/commit/7826148a72c97367fc6aaa76397fe92d32169723
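With those libvirt versions, the limits show up in the domain capabilities
XML, roughly as follows (illustrative values):

    <sev supported='yes'>
      ...
      <maxGuests>15</maxGuests>
      <maxESGuests>0</maxESGuests>
    </sev>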
Implements: blueprint libvirt-detect-sev-max-guests
Change-Id: I502e1713add7e6a1eb11ecce0cc2b5eb6a14527a
The CPU power management feature of the libvirt driver, enabled with
[libvirt]cpu_power_management, only manages dedicated CPUs and does not
touch shared CPUs. Today nova-compute refuses to start if configured
with [libvirt]cpu_power_management=true and [compute]cpu_dedicated_set=None.
While this is not functionally limiting, it does prevent enabling
power management and defining the
cpu_dedicated_set independently. E.g. there might be a need to enable the former in
the whole cloud in a single step, while not all nodes of the cloud will
have dedicated CPUs configured.
This patch removes the strict config check. The implementation already
handles each PCPU individually, so if the list of PCPUs is empty
it simply does nothing.
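For example, with this change the following combination is accepted, and
power management is simply a no-op until dedicated CPUs are configured:

    [libvirt]
    cpu_power_management = True

    [compute]
    # cpu_dedicated_set intentionally left unset; no PCPUs to manage yet.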
Closes-Bug: #2043707
Change-Id: Ib070e1042c0526f5875e34fa4f0d569590ec2514
For live migration the libvirt driver already supports generating the
migration URL based on the compute host hostname if so configured.
However, for non-live move operations the driver has always used the IP
address of the compute host, based on [DEFAULT]my_ip.
Some deployments rely on DNS to abstract the IP address management. In
these environments it is beneficial if nova allows connection between
compute hosts based on the hostname (or FQDN) of the host instead of
trying to configure [DEFAULT]my_ip to an IP address.
This patch introduces a new config option
[libvirt]migration_inbound_addr that is used to determine the address
for incoming move operations (cold migrate, resize, evacuate). This
config is defaulted to [DEFAULT]my_ip to keep the configuration backward
compatible. However, it allows an explicit hostname or FQDN to be
specified, or '%s', which is then resolved to the
hostname of the compute host.
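For example, to make incoming move operations target the compute host's
hostname instead of an IP address:

    [libvirt]
    migration_inbound_addr = %s

An explicit hostname or FQDN can be set instead of '%s' if the resolved
hostname is not the desired address.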
blueprint: libvirt-migrate-with-hostname-instead-of-ip
Change-Id: I6a80b5620f32770a04c751143c4ad07882e9f812
The nova-cert service was removed during the Pike cycle by 2bcee77e3 and
the upgrade_levels option for this service was formally deprecated
during Rocky by f0d2925bc7. The other upgrade_levels options which
were deprecated at the same time were already removed.
Change-Id: I385dc41a3a69c51d60acced21cfdf6c6dd0cc724
This is being reverted because it is overly strict, complaining that
upgrade-related work has not been done before it should have or
needed to have been done. This may be re-added later when we start
depending on these linkages.
Closes-Bug: #2039597
This reverts commit 27f384b7ac.
Change-Id: Ifa5b82ca3b83d0ba481aa7a062827bd8e838989a
Libvirt's node device driver accumulates and reports information
about host devices. The network capabilities reported by the node device
driver for a NIC contain information about the HW offloads supported
by that NIC.
One of the possible features reported by the node device driver is
switchdev: a NIC capability to implement VFs similar to actual
HW switch ports (also referred to as SR-IOV OVS hardware offload).
From the Neutron perspective, the vnic-type should be set to "direct" and
the "switchdev" capability should be added to the port binding profile to
enable HW offload (there are also configuration steps on compute
hosts to tune the NIC config).
This patch was written to automatically translate "switchdev" from the
VF network capabilities reported by the node device driver to the Neutron
port binding profile and to allow the user to skip a manual step that
requires admin privileges.
Other capabilities are also translated: they are not used right
now, but provide visibility and can be utilized later.
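As an illustration, a port prepared for HW offload ends up looking roughly
like this (illustrative snippet):

    binding:vnic_type: direct
    binding:profile:   {"capabilities": ["switchdev"]}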
Closes-bug: #2020813
Closes-bug: #2008238
Change-Id: I3b17f386325b8f42c0c374f766fb21c520161a59
This change fixes typos in the release notes:
"""
codespell --ignore-words=doc/dictionary.txt -i 3 -w releasenotes/
"""
Change-Id: I29cd5268cd129b194c43a9f6b08a2b7b1c254b65
This is the initial patch of applying codespell to nova.
codespell is a programming-focused spellchecker that
looks for common typos and corrects them.
I am breaking this into multiple commits to make them simpler
to read, and will automate the execution of codespell
at the end of the series.
Change-Id: If24a6c0a890f713545faa2d44b069c352655274e
Add a file to the reno documentation build to show release notes for
stable/2023.2.
Use the pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2023.2.
Sem-Ver: feature
Change-Id: Ifd0b0ebbe148a323304a9e422e4c7f2bf39757f8
When people transition from three ironic nova-compute processes down
to one process, we need a way to move the ironic nodes, and any
associated instances, between nova-compute processes.
For safety, a nova-compute process must first be forced_down via
the API, similar to when using evacuate, before moving the associated
ironic nodes to another nova-compute process. The destination
nova-compute process should ideally not be running, but not forced
down.
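For example, the source service can be forced down with something like the
following command (the host name is illustrative):

    openstack compute service set --down <source-host> nova-compute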
blueprint ironic-shards
Change-Id: I7ef25e27bf8c47f994e28c59858cf3df30975b05
On reboot, check the instance volume status on the cinder side.
Verify that the volume exists and that cinder has an attachment ID; otherwise,
delete its BDM data from the nova DB, and vice versa.
Updated existing test cases to use CinderFixture while rebooting, as
reboot calls get_all_attachments.
Implements: blueprint https://blueprints.launchpad.net/nova/+spec/cleanup-dangling-volume-attachments
Closes-Bug: 2019078
Change-Id: Ieb619d4bfe0a6472aefb118b58283d7ad8d24c29
Ironic in API 1.82 added the option for nodes to be associated with
a specific shard key. This can be used to partition up the nodes within
a single ironic conductor group into smaller sets of nodes that can
each be managed by its own nova-compute ironic service.
We add a new [ironic]shard config option to allow operators to say
which shard each nova-compute process should target.
As such, when the shard is set we ignore the peer_list setting
and always have a hash ring of one.
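For example, a nova-compute process can be pointed at a single shard like
this (the shard name is illustrative):

    [ironic]
    shard = shard-1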
blueprint ironic-shards
Change-Id: I5c1b5688c96096f4cfecfc5b16ea59d2ee5756d6
As part of the move to using Ironic shards, we document that the best
practice for scaling Ironic and Nova deployments is to shard Ironic
nodes between nova-compute processes, rather than attempting to
use the peer_list.
Currently, we only allow users to do this using conductor groups.
This works well for those wanting a conductor group per L2 network
domain. But in general, conductor groups per nova-compute are
a very poor trade-off in terms of ironic deployment complexity.
Further patches will look to enable the use of ironic shards,
alongside conductor groups, to more easily shard your ironic nodes
between nova-compute processes.
To avoid confusion, we rename the partition_key configuration
value to conductor_group.
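In config terms, deployments move from the old option name to the new one
(values are illustrative):

    [ironic]
    # was: partition_key = rack1
    conductor_group = rack1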
blueprint ironic-shards
Change-Id: Ia2e23a59dbd2f13c6f74ca975c249751bebf54b2