This addresses comments from code review to add handling of PCPU during
the migration/copy of limits from the Nova database to Keystone. In
legacy quotas, there is no settable quota limit for PCPU, so the limit
for VCPU is used for PCPU. With unified limits, PCPU will have its own
quota limit, so for the automated migration command, we will simply
create a dedicated limit for PCPU that is the same value as the limit
for VCPU.
On the docs side, this adds more detail about the token authorization
settings needed to use the nova-manage limits migrate_to_unified_limits
CLI command and documents more OSC limit commands like show and delete.
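For example (invocations are illustrative; see the referenced docs for
the full set of flags):

    nova-manage limits migrate_to_unified_limits --region-id RegionOne
    openstack registered limit show <registered-limit-id>
    openstack registered limit delete <registered-limit-id>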
Related to blueprint unified-limits-nova-tool-and-docs
Change-Id: Ifdb1691d7b25d28216d26479418ea323476fee1a
On reboot, check the status of the instance's volumes on the cinder
side. Verify that each volume exists and that cinder has an attachment
ID for it; otherwise delete its BDM data from the nova DB, and vice
versa.
Updated existing test cases to use CinderFixture while rebooting, as
reboot now calls get_all_attachments.
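A rough sketch of the reconciliation (names and signatures are
hypothetical, not the exact nova internals):

    # on reboot, reconcile nova's BDMs with cinder's attachments
    attachments = volume_api.get_all_attachments(context, instance)  # hypothetical helper
    attachment_ids = {att['id'] for att in attachments}
    for bdm in bdms:
        if bdm.attachment_id and bdm.attachment_id not in attachment_ids:
            bdm.destroy()  # drop the dangling BDM record from the nova DB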
Implements: blueprint https://blueprints.launchpad.net/nova/+spec/cleanup-dangling-volume-attachments
Closes-Bug: 2019078
Change-Id: Ieb619d4bfe0a6472aefb118b58283d7ad8d24c29
As part of the move to using Ironic shards, we document that the best
practice for scaling Ironic and Nova deployments is to shard Ironic
nodes between nova-compute processes, rather than attempting to
use the peer_list.
Currently, we only allow users to do this using conductor groups.
This works well for those wanting a conductor group per L2 network
domain. But in general, a conductor group per nova-compute process is
a very poor trade-off in terms of ironic deployment complexity.
Further patches will look to enable the use of ironic shards,
alongside conductor groups, to more easily shard your ironic nodes
between nova-compute processes.
To avoid confusion, we rename the partition_key configuration
value to conductor_group.
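After the rename, the setting reads as follows in nova.conf (the value
is illustrative):

    [ironic]
    conductor_group = my-conductor-group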
blueprint ironic-shards
Change-Id: Ia2e23a59dbd2f13c6f74ca975c249751bebf54b2
This adds documentation for unified limits and signals deprecation of
the nova.quota.DbQuotaDriver.
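For reference, unified limits are selected via the quota driver in
nova.conf (a sketch, assuming the driver name used by this series):

    [quota]
    driver = nova.quota.UnifiedLimitsDriver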
Related to blueprint unified-limits-nova-tool-and-docs
Change-Id: I3951317111396aa4df36c5700b4d4dd33e721a74
Despite having a NumInstancesFilter, we lack a weigher that would rank
hosts based on their instance usage.
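The weigher would be tuned with a multiplier along these lines (option
name and semantics as proposed by the blueprint; treat this as an
assumption):

    [filter_scheduler]
    # positive values favor hosts with more instances (packing),
    # negative values favor emptier hosts (spreading)
    num_instances_weight_multiplier = 1.0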
Change-Id: Id232c2caf29d3443c61c0329d573a34a7481fd57
Implements-Blueprint: bp/num-instances-weigher
This change removes the AZ filter and always enables
the placement pre-filter. As part of this removal,
the config option to control enabling the pre-filter
is removed, as the pre-filter is now mandatory.
The AZ filter docs and tests are also removed and an upgrade
release note is added.
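Deployers who list the filter explicitly should drop it on upgrade,
e.g. (filter list shortened for illustration):

    [filter_scheduler]
    # AvailabilityZoneFilter removed; the mandatory placement
    # pre-filter now handles AZ scheduling
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter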
Depends-On: https://review.opendev.org/c/openstack/devstack/+/886972
Change-Id: Icc8580835beb2b4d40341f81c25eb1f024e70ade
The 'force' parameter of os-brick's disconnect_volume() method allows
callers to ignore flushing errors and ensure that devices are being
removed from the host.
We should use force=True when we are going to delete an instance to
avoid leaving leftover devices connected to the compute host, which
could then potentially be reused to map volumes to an instance that
should not have access to those volumes.
We can use force=True even when disconnecting a volume that will not be
deleted on termination because os-brick will always attempt to flush
and disconnect gracefully before forcefully removing devices.
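The resulting call shape is roughly (simplified; real call sites pass
connector-specific arguments):

    # os-brick flushes and disconnects gracefully first, and only
    # falls back to forceful removal if that fails
    connector.disconnect_volume(connection_properties, device_info, force=True)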
Closes-Bug: #2004555
Change-Id: I3629b84d3255a8fe9d8a7cea8c6131d7c40899e8
Also cleaned up the text of the message to say *when* we need to bump
the min service version.
Given 2023.1 is our first SLURP release, we need to clarify the level of
support we now have for rolling upgrades.
Next cycle, we should update the min version to be Zed.
Change-Id: I2dd906f34118da02783bb7755e0d6c2a2b88eb5d
The deprecated `--live` option of the `server migrate` command
was removed in change I37ef09eca0db9286544a4b0bb33f845311baa9b2.
Update the docs with the new arguments.
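For example, a live migration is now requested as:

    openstack server migrate --live-migration --host <target-host> <server>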
Change-Id: Id7a9a7509ca5e7811b6d3ce060390ea23c93d4ce
This adds some admin guide documentation about the stable compute_id
file. It covers upgrade, greenfield generation, and greenfield
pre-provisioning by deployment tools.
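For instance, a deployment tool could pre-provision the file before the
first start of nova-compute (path assumes the default state_path):

    uuidgen > /var/lib/nova/compute_id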
Related to blueprint stable-compute-uuid
Change-Id: I078b3f9e1919f2008628dc7b889e8696f1f6159a
This patch adds the following SPICE-related options to the 'spice'
configuration group of a Nova configuration:
- image_compression
- jpeg_compression
- zlib_compression
- playback_compression
- streaming_mode
These configuration options can be used to enable and set the SPICE
compression settings for libvirt (QEMU/KVM) provisioned instances.
Each configuration option is optional and can be set explicitly to
configure the associated SPICE compression setting for libvirt. If none
of the configuration options are set, then none of the SPICE
compression settings will be configured for libvirt, which corresponds
to the behavior before this change. In that case, the built-in defaults
of the libvirt backend (e.g. QEMU) are used.
Note that these options are only taken into account if SPICE support is
enabled (and VNC support is disabled).
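A sample configuration enabling several of the settings (values are
examples of what libvirt/QEMU accepts):

    [spice]
    image_compression = auto_glz
    jpeg_compression = auto
    zlib_compression = auto
    playback_compression = true
    streaming_mode = filter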
Implements: blueprint nova-support-spice-compression-algorithm
Change-Id: Ia7efeb1b1a04504721e1a5bdd1b5fa7a87cdb810
A new configuration option, [filter_scheduler]pci_in_placement, is
added that allows enabling the scheduler logic for PCI device handling
in Placement for flavor-based PCI requests.
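To enable the new scheduling logic:

    [filter_scheduler]
    pci_in_placement = True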
blueprint: pci-device-tracking-in-placement
Change-Id: I5ddf6d3cdc7e05cc4914b9b1e762fa02a5c7c550
While collecting information for a question I received about soft
delete and shadow tables, I realized that the documentation contains
only bits and pieces here and there, and I couldn't find more.
This change summarizes what I found from the docs and from asking
around. I hope you find it useful.
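As a pointer, the mechanism that moves soft-deleted rows into the
shadow tables is driven by nova-manage, e.g.:

    nova-manage db archive_deleted_rows --before 2023-01-01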
Change-Id: I5ff90224cc27c57dc463604559d25298ed7b3f98
The [pci]alias configuration option now accepts two new optional fields:
* resource_class: can be used to request a PCI device by placement
  resource class name.
* traits: a comma-separated list of placement trait names that can be
  used to filter placement PCI resource providers by traits.
These fields have matching counterparts in [pci]device_spec,
implemented already.
These fields are matched by the Placement GET /allocation_candidates
query; therefore they are ignored when PCI device pools are matched
against InstancePCIRequest by nova.
Note that the InstancePCIRequest object's spec field is defined as a
list of dicts, but in reality nova always creates the request with a
single dict, so we restricted the placement logic to handle a single
spec.
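An illustrative alias using the new fields (name and values are made
up):

    [pci]
    alias = {"name": "gpu", "resource_class": "CUSTOM_GPU", "traits": "CUSTOM_FAST,HW_GPU_API_VULKAN"}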
blueprint: pci-device-tracking-in-placement
Change-Id: I5c8f05c3c5d7597175e60b29e4ab2f22e6496ecd
This change updates the initial CPU and RAM allocation
ratios to 4.0 and 1.0 respectively, to reflect
the typical values that are suitable for non-web-hosting
workloads.
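That is, the new built-in defaults are equivalent to setting:

    [DEFAULT]
    initial_cpu_allocation_ratio = 4.0
    initial_ram_allocation_ratio = 1.0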
Change-Id: I283eb270a4e47da15cf01d098b4f3952f7b5b570
implements: bp/nova-change-default-overcommit-values
This fixes the doc comments for the already merged (or being merged)
patches in the series.
blueprint: pci-device-tracking-in-placement
Change-Id: Ia99138d603722a66c9a6ac61b035384d86ccca75
This patch introduces the [pci]report_in_placement config option that is
False by default but if set to True will enable reporting of the PCI
passthrough inventories to Placement.
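To turn the reporting on:

    [pci]
    report_in_placement = True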
blueprint: pci-device-tracking-in-placement
Change-Id: I49a3dbf4c5708d2d92dedd29a9dc3ef25b6cd66c
PCI devices which are allocated to instances can be removed from the
[pci]device_spec configuration or can be removed from the hypervisor
directly. The existing PciTracker code handles these cases by keeping
the PciDevice in the nova DB existing and allocated, and by issuing a
warning in the logs during compute service startup that nova is in an
inconsistent state. Similar behavior is now added to the PCI placement
tracking code, so the PCI inventories and allocations in placement are
kept in such situations.
There is one case where we cannot simply accept the PCI device
reconfiguration by keeping the existing allocations and applying the
new config: when a PF that is configured and allocated is removed and
VFs from this PF are now configured in the [pci]device_spec, and vice
versa, when VFs are removed and their parent PF is configured. In these
cases, keeping the existing inventory and allocations and adding the
new inventory to placement would result in a placement model where a
single PCI device would provide both PF and VF inventories. This
dependent device configuration is not supported as it could lead to
double consumption. In such a situation, the compute service will
refuse to start; see the example below.
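For example (addresses are illustrative), a reconfiguration like the
following is refused while the PF allocation still exists, because the
VF depends on the previously exposed PF:

    [pci]
    # before: the PF itself was exposed (and got allocated)
    # device_spec = {"address": "0000:81:00.0"}
    # after: a VF of that PF is exposed instead
    device_spec = {"address": "0000:81:00.1"}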
blueprint: pci-device-tracking-in-placement
Change-Id: Id130893de650cc2d38953cea7cf9f53af71ced93