I did not have a clear understanding of when a security group would or
would not be applied to a port and reading the documentation did not
help. Massively expand the security groups document, adding a couple of
important notes along the way as well as references to the nova-specific
security group operations. The document is moved from the admin guide to
the user guide (with redirects) since these are not admin-only
operations by default.
Change-Id: I212bc99112aad2f1e3057befca381a26d702be2e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
When the cpu_state power management strategy is requested, nova-compute
should not fail to start if no cpufreq governor is supported by the
host.
Closes-Bug: #2045966
Change-Id: Ice2fa47bdab320a7e472fbb4767761448d176bad
This is another follow up for change
I1a6468fbfa51eedec0ab91d73f313784a9a618a0 which missed setting the
*uec_image_vars for jobs that are not defined in Nova. For example,
the tempest-integrated-compute job is not defined in Nova and if we
don't set *uec_image_vars for it, it will not use the UEC image and
will instead use the default full image.
This also sets *uec_image_vars for one job defined in Nova,
nova-osprofiler-redis, that was missed in the original change.
Change-Id: Ia8741d46c28277e9addadf0e2a568c3ad86fb8dc
This is a follow up for change
I1a6468fbfa51eedec0ab91d73f313784a9a618a0 which had residue from an
earlier PS that set DEFAULT_IMAGE_NAME and DEFAULT_IMAGE_FILE_NAME
intended to override project.vars. The project.vars approach did not
work because we needed to set IMAGE_URLS as well and the
devstack-tempest ancestor job sets IMAGE_URLS which overrides
project.vars.
The approach was changed to use YAML anchors instead of project.vars to
reduce duplication, which made the setting of DEFAULT_IMAGE_NAME and
DEFAULT_IMAGE_FILE_NAME in nova-next redundant as the default values
from Devstack can be used.
This removes the setting of DEFAULT_IMAGE_NAME and
DEFAULT_IMAGE_FILE_NAME in nova-next.
Change-Id: I3929b6c55d77575a6c0bd205f933cc2a690db91e
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
In this change, we migrate the remaining network-related operations.
Change-Id: Iebf3f4352083755c9e93b10a87ca58e90ed35500
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
In this change, we migrate the calls to 'vif_attach' and 'vif_detach'
across.
Note that a workaround is removed since the referenced change merged
over 4 years ago (in Queens) and openstacksdk is doing the correct thing
wrt retrying [1].
[1] https://github.com/openstack/openstacksdk/blob/0.103.0/openstack/baremetal/v1/node.py#L682-L688
Change-Id: Ieda5636a5e80ea4af25db2e90be241869759e30c
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
In this change, we migrate all 'set_power_state' calls across. There are
a few of these but they're nothing complicated.
Note that the SDK does not expose a 'soft' flag on the
'set_node_power_state' method. Instead, it simply expects the real power
state strings rather than a combination of strings and a flag as in
ironicclient [1].
Note that there's actually a small feature regression here in that
openstacksdk doesn't support server-side timeouts yet [2]. We pass
through the option and will get this for free once SDK gains this
functionality. Until then, we can rely on client-side timeouts.
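As a minimal sketch of the translation described above (hypothetical helper, not nova's actual code; the target strings follow the ironic power state names):

```python
# Hypothetical sketch: translating the old ironicclient-style
# (state, soft flag) pair into the plain target strings that
# openstacksdk's set_node_power_state expects.
def to_sdk_power_state(state, soft=False):
    if soft:
        # Only 'off' and 'reboot' have soft variants in the ironic API.
        try:
            return {'off': 'soft power off',
                    'reboot': 'soft rebooting'}[state]
        except KeyError:
            raise ValueError("'soft' is only valid for off/reboot")
    return {'on': 'power on',
            'off': 'power off',
            'reboot': 'rebooting'}[state]
```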
[1] https://github.com/openstack/python-ironicclient/blob/5.0.1/ironicclient/v1/node.py#L27-L33
[2] https://github.com/openstack/openstacksdk/blob/0.103.0/openstack/baremetal/v1/node.py#L638-L639
Change-Id: Ie9b975a5200b9465e4408995df33927182cfe56b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
In this change, we migrate all 'set_provision_state' calls across. There
are a few of these but they're nothing complicated. The only significant
change we need to make is that SDK does not automatically convert bytes
to str so we need to do this ourselves. Tests are included to prevent
regressions.
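The bytes-to-str conversion amounts to something like the following sketch (hypothetical helper name, not the actual patch):

```python
# Hypothetical sketch: openstacksdk does not decode bytes for us the
# way ironicclient did, so payloads built as bytes (e.g. a configdrive)
# must be decoded to str before being passed to the SDK.
def ensure_text(value, encoding='utf-8'):
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value
```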
Change-Id: I5efbf0dd685ca4432b68ee625637fac741dee24b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
In this change, we migrate the various volume_target related API calls
to openstacksdk. There are only three and they're all relatively simple,
which is nice 🎉
Note that the SDK does not expose 'uuid' properties but rather 'id'
properties, thus some small refactors are necessary.
Change-Id: I20eb470dc0c7208f9a9aa8240c25a49f458abc23
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
This change updates how we retrieve network metadata from a node.
Change-Id: If36fde647348099d5a5519dc6300d625869a4862
We would like nova not to use ironicclient, but instead to invoke the
ironic API directly through an openstacksdk connection.
The parent commits set up the framework, and this commit uses it
for adding and removing instance info from the node.
This depends on field mapping in the OpenStack SDK. That work maps SDK
field names to Ironic API field names within the SDK, which allows
for consistency between fields in calls and parameters on returned
objects.
Change-Id: Id427e7923ff3a9d2957586fba5ccef0216318e6f
Keys such as HW_CPU_X86_SVM and HW_CPU_X86_VMX are placed in
TRAITS_CPU_MAPPING as part of a tuple instead of being direct keys.
This patch flattens those tuples.
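The flattening can be illustrated by the following sketch (generic helper, not the actual patch; the mapping shape is illustrative):

```python
# Hypothetical sketch: expand tuple keys so every entry becomes a
# direct key in the mapping, all pointing at the same value.
def flatten(mapping):
    flat = {}
    for key, trait in mapping.items():
        if isinstance(key, tuple):
            for k in key:
                flat[k] = trait
        else:
            flat[key] = trait
    return flat
```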
Closes-Bug: #2043030
Change-Id: Ia600ceb22c5939117095593b97ed94735c8f953c
In the nova-live-migration job, evacuation failures are causing
POST_FAILURE.
As per the discussion on the bug, it looks like a 500 error coming from
cinder is the cause of this failure.
Similar to the attachment_delete method, this change adds a retry
mechanism to the cinder API attachment_update method calls.
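The retry pattern looks roughly like this sketch (hypothetical names; nova actually uses a retry decorator around the cinderclient call, and InternalServerError here stands in for the client's HTTP 500 exception):

```python
import time

class InternalServerError(Exception):
    """Stand-in for the cinder client's HTTP 500 exception."""

# Hypothetical sketch: retry the attachment_update call a few times
# when cinder answers with an internal error, re-raising on the last
# attempt, similar to what attachment_delete already does.
def attachment_update_with_retry(update_fn, attachment_id, connector,
                                 attempts=3, delay=1.0):
    for attempt in range(attempts):
        try:
            return update_fn(attachment_id, connector)
        except InternalServerError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```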
Closes-Bug: #1970642
Change-Id: I1da3c8481f7e7a1e8776cf03f5c4cf117b7fabaa
Libvirt has implemented the capability to expose the maximum number of
SEV guests and SEV-ES guests in 8.0.0[1][2]. This allows nova to detect
the maximum number of memory-encrypted guests using that feature.
The detection is not used if the [libvirt] num_memory_encrypted_guests
option is set, to preserve the current behavior.
Note that nova currently supports only SEV and does not support SEV-ES,
so this implementation only uses the maximum number of SEV guests.
The maximum number of SEV-ES guests will be used in case we implement
support for SEV-ES.
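Conceptually, the detection reads the SEV feature block of libvirt's domainCapabilities XML, which gains maxGuests/maxESGuests elements in 8.0.0. A minimal parsing sketch (illustrative XML fragment and helper name, not nova's actual code):

```python
import xml.etree.ElementTree as ET

# Illustrative domainCapabilities fragment; libvirt >= 8.0.0 reports
# maxGuests and maxESGuests under the <sev> feature element.
DOMCAPS = """
<domainCapabilities>
  <features>
    <sev supported='yes'>
      <maxGuests>255</maxGuests>
      <maxESGuests>0</maxESGuests>
    </sev>
  </features>
</domainCapabilities>
"""

def max_sev_guests(domcaps_xml):
    # Returns None when the element is absent (older libvirt).
    root = ET.fromstring(domcaps_xml)
    node = root.find('./features/sev/maxGuests')
    return int(node.text) if node is not None else None
```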
[1] https://gitlab.com/libvirt/libvirt/-/commit/34cb8f6fcd6a56a7bbcef2f7402def1682509e16
[2] https://gitlab.com/libvirt/libvirt/-/commit/7826148a72c97367fc6aaa76397fe92d32169723
Implements: blueprint libvirt-detect-sev-max-guests
Change-Id: I502e1713add7e6a1eb11ecce0cc2b5eb6a14527a
The CPU power management feature of the libvirt driver, enabled with
[libvirt]cpu_power_management, only manages dedicated CPUs and does not
touch shared CPUs. Today nova-compute refuses to start if configured
with [libvirt]cpu_power_management=true [compute]cpu_dedicated_set=None.
While this is not functionally limiting, it does prevent independently
enabling the power management and defining the cpu_dedicated_set. E.g.
there might be a need to enable the former across the whole cloud in a
single step, while not all nodes of the cloud will have dedicated CPUs
configured.
This patch removes the strict config check. The implementation already
handles each PCPU individually, so if the list of PCPUs is empty it
does nothing.
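The reason removing the check is safe can be sketched as follows (hypothetical function, not the driver's actual code): the management loop acts per dedicated CPU, so an unset or empty cpu_dedicated_set simply yields an empty loop.

```python
# Hypothetical sketch: apply a governor to each dedicated CPU; an
# unset/empty cpu_dedicated_set produces an empty loop, i.e. a
# harmless no-op rather than a startup failure.
def manage_dedicated_cpus(cpu_dedicated_set, set_governor):
    for cpu in sorted(cpu_dedicated_set or set()):
        set_governor(cpu, 'performance')
```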
Closes-Bug: #2043707
Change-Id: Ib070e1042c0526f5875e34fa4f0d569590ec2514