As agreed at the Rocky PTG [1], it is useful to have the request_id
in the payload of every instance action versioned notification. For
example, it can help the deployer connect the state change described
in the notification with the user action (the request) on the REST
API.
So this patch proposes to extend the InstanceActionPayload versioned
object with a new request_id field and to populate the request_id
from the context object used for emitting the instance action
notifications.
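For illustration, a standalone sketch of the new field and its
population, using oslo.versionedobjects directly (nova's actual
payload base classes and version numbers differ):

    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields

    @ovo_base.VersionedObjectRegistry.register
    class InstanceActionPayload(ovo_base.VersionedObject):
        # Minor version bump for the added field (number illustrative).
        VERSION = '1.1'
        fields = {
            'request_id': fields.StringField(nullable=True),
        }

    def build_payload(context):
        payload = InstanceActionPayload()
        # Populate from the RequestContext used to emit the
        # notification; contexts that never passed through the REST
        # API have no request id, hence the nullable field.
        payload.request_id = getattr(context, 'request_id', None)
        return payload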
[1] https://etherpad.openstack.org/p/nova-ptg-rocky L391
Implements: bp add-request-id-to-instance-action-notifications
Change-Id: I7243b60938d6e9c7c2bc2aacdba5c667cca8ec9b
Accept forbidden traits in the processing of extra_specs, with the
format of:
trait:CUSTOM_MAGIC=forbidden
This will be transformed into required=!CUSTOM_MAGIC when the traits
are assembled into a request to GET /allocation_candidates.
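A minimal sketch of the transformation (function and variable names
are illustrative, not nova's actual implementation):

    def trait_spec_to_required(key, value):
        # 'trait:CUSTOM_MAGIC' -> 'CUSTOM_MAGIC'
        trait = key.split(':', 1)[1]
        # Forbidden traits get a '!' prefix in the placement request.
        return '!' + trait if value == 'forbidden' else trait

    assert trait_spec_to_required(
        'trait:CUSTOM_MAGIC', 'forbidden') == '!CUSTOM_MAGIC'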
Implements blueprint forbidden-traits-in-nova
Change-Id: I31e609aef47d2fea03f279e4bfdd30f072d062b4
In a new microversion (1.22) expose support for processing
forbidden traits in GET /resource_providers and GET
/allocation_candidates. A forbidden trait is expressed as
part of the required parameter with a "!" prefix:
required=CUSTOM_FAST,!CUSTOM_SLOW
This change uses db and query processing code adjustments already
present in the code but guarded by a flag. If the currently requested
microversion is 1.22 or beyond, that flag is True; otherwise it is
False.
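For illustration, splitting such a parameter value into required and
forbidden trait sets could look like this (names hypothetical):

    def split_required(value):
        required, forbidden = set(), set()
        for trait in value.split(','):
            target = forbidden if trait.startswith('!') else required
            target.add(trait.lstrip('!'))
        return required, forbidden

    assert split_required('CUSTOM_FAST,!CUSTOM_SLOW') == (
        {'CUSTOM_FAST'}, {'CUSTOM_SLOW'})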
Reno, api-ref and api history updates are included. Because this
microversion changes the value of an existing parameter, it was
unclear how best to express that in the api-ref; in this case the
existing parameter references were annotated.
Partially implements blueprint placement-forbidden-traits
Change-Id: I43e92bc5f97db7a2b09e64c6cb953c07d0561e63
The same pattern as the rest of the changes. This means that privsep now
needs to let you pass flags to e2fsck, which I don't love and will remove
in a later patch.
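A standalone sketch of what such an entrypoint might look like with
oslo.privsep (the context name, capability set and default flags are
assumptions, not nova's exact code):

    from oslo_concurrency import processutils
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'demo',
        cfg_section='demo_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def e2fsck(image, flags='-fp'):
        # The caller can now override the e2fsck flags.
        processutils.execute('e2fsck', *flags.split(), image)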
Change-Id: I6c695c04ae586fec6adc354257638116277dda88
blueprint: hurrah-for-privsep
Exposes flavor extra_specs in the flavor representation starting with
microversion 2.61. Users can now see the flavor extra-specs directly
in the flavor API responses and no longer need to call the
``GET /flavors/{flavor_id}/extra_specs`` API separately.
Flavor extra_specs will be included in the response body of the
following APIs:
* ``GET /flavors/detail``
* ``GET /flavors/{flavor_id}``
* ``POST /flavors``
* ``PUT /flavors/{flavor_id}``
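An illustrative call with the new microversion (endpoint and token
are placeholders):

    import requests

    resp = requests.get(
        'http://controller:8774/v2.1/flavors/detail',
        headers={'X-Auth-Token': '<token>',
                 'OpenStack-API-Version': 'compute 2.61'})
    for flavor in resp.json()['flavors']:
        # extra_specs is now embedded in each flavor body.
        print(flavor['name'], flavor['extra_specs'])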
Part of blueprint add-extra-specs-to-flavor-list
Change-Id: I048747633babf690a63c6de9773bff5547872053
This introduces a new PowerVM conf option, proc_units_factor, which can
range from 0.05 to 1.0 and will default to 0.1. It is used to calculate
the physical processing power to assign per vCPU, where 1.0 is a whole
physical processor and 0.05 is 1/20th of a physical processor.
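For example, with the default factor of 0.1, an instance with 4 vCPUs
would be assigned 0.4 processing units (4 * 0.1).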
Change-Id: I67bfe2a6eff86f1947ada7661fc7c3fed81ed28f
This option is in the driver interface but was hard-coded in the
manager class.
It defaults to the old value (10 seconds) if not set in the
configuration file.
Change-Id: I0c8db2efec6098c017aad2f6588938bc548db139
The recent "Meltdown" CVE fixes have resulted in a critical performance
penalty[*] that will impact every Nova guest with certain CPU models.
I.e. assume you have applied all the "Meltdown" CVE fixes, and performed
a cold reboot (explicit stop & start) of all Nova guests, for the
updates to take effect. Now, if any guests that are booted with certain
named virtual CPU models (e.g. "IvyBridge", "Westmere", etc), then those
guests, will incur noticeable performance degradation[*], while being
protected from the CVE itself.
To alleviate this guest performance impact, it is now important to
specify an obscure Intel CPU feature flag, 'PCID' (Process-Context ID)
-- for the virtual CPU models that don't already include it (more on
this below). To that end, this change will allow Nova to explicitly
specify CPU feature flags via a new configuration attribute,
`cpu_model_extra_flags`, e.g. in `nova.conf`:
...
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid
...
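With the above settings, the generated guest XML would be expected to
contain a CPU definition along these lines (illustrative; check with
`virsh dumpxml <guest>`):

    <cpu mode='custom' match='exact'>
      <model>IvyBridge</model>
      <feature policy='require' name='pcid'/>
    </cpu>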
NB: In the first iteration, the choice for `cpu_model_extra_flags` is
restricted to 'pcid' only (the option is case-insensitive) -- to
address the earlier mentioned guest performance degradation. A future
patch will remove this restriction, allowing multiple CPU feature
flags to be added / removed, thus making way for other useful
features.
Some have asked: "Why not simply hardcode the 'PCID' CPU feature flag
into Nova?" That's not graceful, and more importantly, impractical:
(1) Not every Intel CPU model has 'PCID':
- The only Intel CPU models that include the 'PCID' capability
are: "Haswell", "Broadwell", and "Skylake" variants.
- The libvirt / QEMU Intel CPU models: "Nehalem", "Westmere",
"SandyBridge", and "IvyBridge" will *not* expose the 'PCID'
capability, even if the host CPUs by the same name include it.
I.e. 'PCID' needs to be explicitly specified when using the said
virtual CPU models.
(2) Magically adding new CPU feature flags under the user's feet
impacts live migration.
[*] https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU
Closes-Bug: #1750829
Change-Id: I6bb956808aa3df58747c865c92e5b276e61aff44
Blueprint: libvirt-cpu-model-extra-flags
In Pike we started requiring that ironic instances have their
embedded flavor migrated to track the ironic node custom
resource class. This can be done either via the normal running
of the nova-compute service and ironic driver or via the
'nova-manage db ironic_flavor_migration' command.
This change adds a nova-status check to see if there are any
unmigrated ironic instances across all non-cell0 cells, and
is based mostly on the same logic within the nova-manage command
except it's multi-cell aware and doesn't use the objects.
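The new check is reported via the existing ``nova-status upgrade
check`` command.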
Change-Id: Ifd22325e849db2353b1b1eedfe998e3d6a79591c
With these new options, users can enable or disable a cell through
the CLI.
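For example (option names as added here; the exact CLI spelling may
differ):

    nova-manage cell_v2 update_cell --cell_uuid <uuid> --disable
    nova-manage cell_v2 update_cell --cell_uuid <uuid> --enable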
Related to blueprint cell-disable
Change-Id: I761f2e2b1f1cc2c605f7da504a8c8647d6d6a45e
Nova allows deployers to configure the command line which is used to create
a filesystem of a given type. This is frankly a little bit weird, but it's
also historical. Move this functionality to privsep, including doing a
dance at startup to load config flags into privsep in a hopefully secure
manner.
Honestly, all of this code should be deprecated, but that's above my pay
grade and would take time to do. Oh, and maybe deployers love it the way
it is.
Change-Id: Id8eeb21e10f98a448946f178c8c5a36e48c7cac6
blueprint: hurrah-for-privsep
By default this schedules to all cells, since all cells are enabled
at the time of creation unless specified otherwise. Since the list of
enabled cells is stored as a global cache on the host_manager, a
reset() handler for the SIGHUP signal has also been added to the
scheduler. Hence, upon every create-cell/enable-cell/disable-cell
operation the scheduler has to be signaled so that the cache is
refreshed.
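For example, the refresh can be triggered on a running scheduler with
a SIGHUP (process name illustrative):

    kill -HUP $(pgrep -f nova-scheduler)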
Co-Authored-By: Dan Smith <dms@danplanet.com>
Implements blueprint cell-disable
Change-Id: I6a9007d172b55238d02da8046311f8dc954703c5
This adds the ability to hotplug network interfaces for the powervm
virt driver.
Blueprint: powervm-network-hotplug
Change-Id: I78b94c9731c35e3291d46b9bf9f5554e21c2429e
This adds a require_tenant_aggregate request filter which uses overlaid
nova and placement aggregates to limit placement results during scheduling.
It uses the same `filter_tenant_id` metadata key as the existing scheduler
filter we have today, so people already doing this with that filter will
be able to enable this and get placement to pre-filter those hosts for
them automatically.
This also allows making this filter advisory but not required, and supports
multiple tenants per aggregate, unlike the original filter.
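For example, an aggregate is flagged for a tenant via the same
metadata key, and the filter is enabled in nova.conf (the option name
here is an assumption and may differ):

    openstack aggregate set --property filter_tenant_id=$TENANT_ID agg1

    [scheduler]
    limit_tenants_to_placement_aggregate = True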
Related to blueprint placement-req-filter
Change-Id: Idb52b2a9af539df653da7a36763cb9a1d0de3d1b
/os-fping was deprecated in API version 2.36. ``fping_path`` was only
used by this API, so it can be deprecated as well.
Change-Id: I7d3faae0013315d595386ff262cadf8b18f70c68
Change I4e755b9c66ec8bc3af0393e81cffd91c56064717 made the
[glance]/api_servers option optional. If not set, we attempt
to get the image service endpoint via keystoneauth adapter and
the service catalog via the request context.
Periodic tasks run without an actual token, so there is no way to get
the service catalog, and the KSA adapter code that gets the endpoint
raises EndpointNotFound when trying to build the "image_ref_url"
entry in the legacy instance notification payload.
This change simply handles the EndpointNotFound and sets the
image_ref_url to the instance.image_ref, which for non-volume-backed
instances is the image id (for volume-backed instances it's an empty
string).
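A simplified sketch of the fallback (the exception comes from
keystoneauth1; helper names are illustrative):

    from keystoneauth1 import exceptions as ks_exc

    def image_ref_url(image_api, context, instance):
        try:
            return image_api.generate_image_url(
                instance.image_ref, context)
        except ks_exc.EndpointNotFound:
            # No service catalog available (e.g. a periodic task
            # context); fall back to the raw image ref.
            return instance.image_ref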
This doesn't affect versioned notifications since those don't use the
"image_ref_url" entry in the payload that is created, they just have
an "image_uuid" entry in the versioned notification payload which is
populated via instance.image_ref.
An upgrade impact release note is added in case some consuming
service is actually relying on that legacy notification field being a
URL and parsing it as such. The thinking here, however, is that this
is better than not sending the field at all, or setting it to None.
Long-term this code all gets replaced with versioned notifications.
Change-Id: Ie23a9c922776b028da0720c939846cba233ac472
Closes-Bug: #1753550
This fixes the following typos in release notes:
Keytone
specifed
availabilty
maange
expetected
migratons
maintanance
Change-Id: Ifbecf095f2f549d4ec40892484ec1b725927fb44
This allows us to discover and map compute hosts by service instead of
by compute node, which will solve a major deployment ordering problem for
people using ironic. This also allows closing a really nasty race when
doing HA of nova-compute/ironic.
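E.g. (flag name illustrative):

    nova-manage cell_v2 discover_hosts --by-service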
Change-Id: Ie9f064cb9caf6dcba2414acb24d12b825df45fab
Closes-Bug: #1755602
This option has been deprecated for a long time and any kinks, where
they existed, should have long since been worked out. Time to kill it.
Change-Id: Ifa686b5ce5e8063a8e5f2f22c89124c1d4083b80
The call to GET /allocation_candidates now accepts a 'member_of'
parameter, representing one or more aggregate UUIDs. If this parameter
is supplied, the allocation_candidates returned will be limited to those
with resource_providers that belong to at least one of the supplied
aggregates.
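An illustrative query (the 'in:' prefix for multiple aggregate UUIDs
follows the pre-existing member_of syntax):

    GET /allocation_candidates?resources=VCPU:1&member_of=in:<agg1>,<agg2>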
Blueprint: alloc-candidates-member-of
Change-Id: I5857e927a830914c96e040936804e322baccc24c
Currently nova-manage map_instances uses a marker setup by which
repeated runs of the command start from where the last run finished.
Even deleting the cell with the instance_mappings will not remove the
marker, since the marker mapping has a NULL cell_mapping field. There
needs to be a way to reset this marker so that the user can run
map_instances from the beginning, instead of the command reporting
"all instances are already mapped" as is the current behavior.
Change-Id: Ic9a0bda9314cc1caed993db101bf6f874c0a0ae8
Closes-Bug: #1745358
To facilitate opaqueness of resource provider generation internals, we
need to return the (initial) generation when a provider is created. For
consistency with other APIs, we will do this by returning the entire
resource provider record (which includes the generation) from POST
/resource_providers.
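An illustrative exchange (fields abbreviated; exact status code per
the api-ref):

    POST /resource_providers
    {"name": "compute-0"}

    200 OK
    {"uuid": "...", "name": "compute-0", "generation": 0, ...}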
Change-Id: I8624e194fe0173531c5aa2119c903e3c68b8c6cd
blueprint: generation-from-create-provider
Placement API microversion 1.19 enhances the payloads for the `GET
/resource_providers/{uuid}/aggregates` response and the `PUT
/resource_providers/{uuid}/aggregates` request and response to be
identical, and to include the ``resource_provider_generation``. As with
other generation-aware APIs, if the ``resource_provider_generation``
specified in the `PUT` request does not match the generation known by
the server, a 409 Conflict error is returned.
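An illustrative 1.19 request body (aggregate UUID shortened):

    PUT /resource_providers/{uuid}/aggregates
    {
        "aggregates": ["42896e0d-..."],
        "resource_provider_generation": 5
    }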
Change-Id: I86416e35da1798cdf039b42c9ed7629f0f9c75fc
blueprint: placement-aggregate-generation