This demonstrates far more complex response schemas, including the
response to the rebuild action which is effectively the response to the
server show API.
Change-Id: I6dc355f3c3f164d0bc7887a58e8b13979f0b476e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This adds config options for unified limits quotas that allow
admins/operators to specify which resources must have unified limits
set in Keystone and which may be ignored.
The options are only used when ``[quota]driver`` is set to
``UnifiedLimitsDriver``.
When the resource strategy is set to 'require' (the default), the
resource list will represent the resources that are required to have a
registered limit set in Keystone.
When the resource strategy is set to 'ignore', the resource list will
represent the resources that will be ignored for quota enforcement if
they do not have a registered limit set in Keystone.
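As a minimal sketch, the resulting nova.conf could look like the
following; the option names `unified_limits_resource_strategy` and
`unified_limits_resource_list` and the driver path are assumptions for
illustration, since this message does not spell them out:

```ini
[quota]
# Enable the unified limits driver (exact import path may differ).
driver = nova.quota.UnifiedLimitsDriver
# Assumed option names, shown for illustration only.
# 'require' (the default): resources listed below must have a
# registered limit set in Keystone.
# 'ignore': listed resources are skipped for quota enforcement if
# they have no registered limit in Keystone.
unified_limits_resource_strategy = require
unified_limits_resource_list = servers
```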
Related to blueprint unified-limits-nova-unset-limits
Change-Id: Icb08fadbdbf9c1bf354c3091f05edce80ebf68c3
This makes 'nova-manage limits migrate_to_unified_limits' scan the API
database for flavors and detect if any resource classes are missing
registered limits in Keystone.
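A sketch of running the scan described above; any flags and the exact
output format are not specified in this message:

```shell
# Scan the API database for flavors and report any resource classes
# that are missing registered limits in Keystone.
nova-manage limits migrate_to_unified_limits
```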
Related to blueprint unified-limits-nova-unset-limits
Change-Id: I431176fd4d09201c551d8f82c71515cd4616cfea
QEMU 8.0 and libvirt 9.3.0 added support for the QEMU-emulated igb
network device. This patch adds the new igb value for hw_vif_model
so nova can eventually support booting VMs with such devices.
Subsequent patches will enable libvirt support.
Implements: blueprint igb-vif-model
Change-Id: I9c8dc1a663d0534d62798c5b4c8d4539551f7ae4
We dropped use of these some time ago but forgot to remove them from the
'doc/requirements.txt' file. Fix that oversight now.
Change-Id: I88e5e12d18264ce848457191ba3de2fbd8d8bf5c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This changes the "note" about the requirement of configuring
[service_user] to a "warning" for better visibility.
Also replaces a few setting values with variables instead of the
defaults from Devstack.
Change-Id: I561690582436832f4070a2d17aa8ff79b0f788fd
This file is automatically generated during the docs build, so there
is no need to persist it in git.
Change-Id: Ib45f722cc305e1d828d31724535e31ad3dda6c2e
This patch adds the following SPICE-related configuration option
to the 'spice' configuration group:
- require_secure
When set to true, libvirt will be provided with domain XML which
configures SPICE VDI consoles to require secure connections (that
is, connections protected by TLS). Attempts to connect without
TLS will receive an error indicating they should retry the connection
on the TLS port.
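A minimal nova.conf sketch of the option described above (the option
name and group are as stated in this message):

```ini
[spice]
# Require TLS-protected connections for SPICE VDI consoles; clients
# attempting to connect without TLS receive an error telling them to
# retry on the TLS port.
require_secure = true
```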
Change-Id: Ica7083b0836f8d66cad8a4b4097613103fc91560
This change implements the actual functionality to allow users to
launch instances with stateless firmware (read-only firmware image +
no NVRAM).
Note that this feature is supported by the libvirt virt driver, and
also requires libvirt >= 8.6.0.
Implements: blueprint libvirt-stateless-firmware
Change-Id: I7219bfa11ae98e65c326bec1a99c49d3e245cb9a
Add the new image property to request stateless firmware. The property
will be used by the libvirt driver once the actual logic to enable
the feature is implemented.
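As a sketch, the property could be set on an image like this; the
property name `hw_firmware_stateless` is an assumption for
illustration, since the message does not name it:

```shell
# Hypothetical property name: request stateless firmware (read-only
# firmware image, no NVRAM) for instances booted from this image.
openstack image set --property hw_firmware_stateless=true my-image
```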
Partially-Implements: blueprint libvirt-stateless-firmware
Change-Id: I05d4ff89d2b713b217b6c690e40fd4a16a397b63
The resource tracker Claim object works on a copy of the instance
object received from the compute manager, but the PCI claim logic does
not use the copy; it uses the original instance object. However, the
abort claim logic, including the abort PCI claim logic, worked on the
copy only. Therefore the claimed PCI devices remained visible to the
compute manager in the instance.pci_devices list even after the claim
was aborted.
There was another bug in the PCIDevice object where the instance object
wasn't passed to the free() function and therefore the
instance.pci_devices list wasn't updated when the device was freed.
Closes-Bug: #1860555
Change-Id: Iff343d4d78996cd17a6a584fefa7071c81311673
The 'openstack (registered )limit set' command examples are incorrectly
showing use of the --resource-name option. The --resource-name option
is only to be used to update the name of a limit's resource and will
actually result in a 409 error if the specified name already exists.
This removes --resource-name from those examples.
Change-Id: I785fce1ba927894cb3b1a2a13c4e8eaf91930f5b
In the evacuate vs rebuild doc, it is stated that nova does not
support volume-backed server rebuild, which is not correct.
With the introduction of microversion 2.93, we support volume-backed
server rebuild, and this patch aims at correcting that
information.
Change-Id: I5da86ad115f628582404dee52bcbfb250fdb7cd4
This addresses review feedback on change
I7e1d10e66a260efd0a3f2d6522aeb246c7582178 by adding some clarifying
text to the docs and release note.
Related to blueprint persistent-mdevs
Change-Id: I472552c64cc2c2ce06896158664faac0199d90bd
The --before argument is currently described in an ambiguous way: it
is not actually used to filter entries archived before the specified
date. Instead, it compares the provided date with the "deleted_at"
value for most rows and the "updated_at" or "created_at" value for the
remaining ones.
Since we already talk about the time of deletion when describing the
--before argument of "nova-manage db archive_deleted_rows",
it makes sense to not provide extra details here as well.
Change-Id: Ib5940e88a52dc8d32303e27237e567c3481fc3dc
The admin docs are missing some details about enabling unified limits,
like oslo.limit configuration and Keystone roles. This adds more
information about what roles are needed for what actions, how to set
quota limits, quota enforcement, and unified limits in general.
This also removes a couple of tables from the user docs that show
obsolete/deprecated quota limits because they may be more confusing
than helpful considering we don't want new deployments to use them and
they add more clutter to the page.
More info is also added regarding the CLI commands for unified limits,
and it is made consistent between the user and admin docs.
Change-Id: Id93f9997d1b217e0c2151c88323564f7a7fefc02
After this patch, nova rejects the add host to aggregate API action
if the host has instances and the new aggregate for the host would
mean that these instances need to move from one AZ (even from the
default one) to another. Such AZ change is not implemented in nova
and currently leads to stuck instances.
Similarly, nova will reject the remove host from aggregate API action if the
host has instances and the aggregate removal would mean that the
instances need to change AZ.
Depends-On: https://review.opendev.org/c/openstack/tempest/+/821732
Change-Id: I19c4c6d34aa2cc1f32d81e8c1a52762fa3a18580
Closes-Bug: #1907775
These are detected as errors since the clean up was done[1] in
the requirements repository.
[1] 314734e938f107cbd5ebcc7af4d9167c11347406
Bump the minimum versions to avoid installing these known bad versions.
Change-Id: I5ab0c3a1ac208e3967e65c298573079283a7b6cd
This commit removes the previous limitation on the number of tenants
that can be filtered using the `filter_tenant_id` aggregate property
in the AggregateMultitenancyIsolation scheduler filter.
The `filter_tenant_id` key can now carry an arbitrary suffix, allowing
an unlimited number of tenant ID properties to be set on the
aggregate. This update maintains backward compatibility.
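A sketch of how the suffixed keys could be set on an aggregate; the
aggregate name and tenant IDs are placeholders:

```shell
# With the limitation removed, any number of suffixed keys can be
# set on the same aggregate; the suffix itself is arbitrary.
openstack aggregate set --property filter_tenant_id=tenant-a-id agg1
openstack aggregate set --property filter_tenant_id2=tenant-b-id agg1
openstack aggregate set --property filter_tenant_id3=tenant-c-id agg1
```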
Implements: blueprint aggregatemultitenancyisolation-to-support-unlimited-tenant
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Change-Id: Ic87d142647774b62a6af2cc5eb7a3cd66f9afeb7
This change mainly fixes incorrect use of backticks
but also addresses some other minor issues like unbalanced
backticks, incorrect spacing, and missing _ in links.
This change adds a tox target to run sphinx-lint
as well as adding it to the relevant tox envs to enforce
it in CI. pre-commit is leveraged to install and execute
sphinx-lint, but it does not require you to install the
hooks locally into your working dir.
Change-Id: Ib97b35c9014bc31876003cef4362c47a8a3a4e0e
As of now, the server show and server list --long output
shows the availability zone, that is, the AZ to which the
host of the instance belongs. There is no way to tell from
this information if the instance create request included an
AZ or not.
This change adds a new api microversion to add support for
including availability zone requested during instance create
in server show and server list --long responses.
Change-Id: If4cf09c1006a3f56d243b9c00712bb24d2a796d3
cmd: nova-manage volume_attachment refresh vm-id vol-id connector
There were cases where the instance said to live in compute#1 but the
connection_info in the BDM record was for compute#2, and when the script
called `remote_volume_connection` then nova would call os-brick on
compute#1 (the wrong node) and try to detach it.
In some cases os-brick would mistakenly think that the volume was
attached (because the target and lun matched an existing volume on the
host) and would try to disconnect, resulting in errors on the compute
logs.
- Added HostConflict exception
- Fixes dedent in cmd/manage.py
- Updates nova-manage doc
Closes-Bug: #2012365
Change-Id: I21109752ff1c56d3cefa58fcd36c68bf468e0a73