This adds the "nova-manage placement sync_aggregates"
command which will compare nova host aggregates to
placement resource provider aggregates and add any
missing resource provider aggregates based on the nova
host aggregates.
At this time, the command is only additive: it does not remove
resource provider aggregates whose hosts are no longer found in
the corresponding nova host aggregates.
That likely needs to happen in a change that provides
an opt-in option for that behavior since it could be
destructive for externally-managed provider aggregates
for things like ironic nodes or shared storage pools.
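A rough sketch of the additive half of the sync; the helper
names on the placement client here are hypothetical, not nova's
actual code:

    def sync_aggregates(nova_aggregates, placement):
        """Mirror nova host aggregate membership into placement.

        Additive only: provider aggregates are never removed.
        """
        for agg in nova_aggregates:
            for host in agg.hosts:
                # Hypothetical helpers on the placement client.
                rp_uuid = placement.get_provider_uuid_for_host(host)
                current = set(placement.get_provider_aggregates(rp_uuid))
                if agg.uuid not in current:
                    placement.set_provider_aggregates(
                        rp_uuid, current | {agg.uuid})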
Part of blueprint placement-mirror-host-aggregates
Change-Id: Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8
If we're updating existing allocations for an instance due
to the project_id/user_id not matching the instance, we should
use the consumer_generation parameter, new in placement 1.28,
to ensure we don't overwrite the allocations while another
process is updating them.
As a result, the include_project_user kwarg to the
get_allocations_for_consumer method is removed since nothing else
is using it now, and the minimum required version of placement
checked by nova-status is updated to 1.28.
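A minimal sketch of the read-modify-write flow at microversion
1.28, assuming a plain HTTP client and omitting auth:

    import requests

    PLACEMENT = 'http://placement'  # assumed endpoint, no auth shown
    HEADERS = {'OpenStack-API-Version': 'placement 1.28'}

    def update_consumer(consumer_uuid, project_id, user_id):
        url = '%s/allocations/%s' % (PLACEMENT, consumer_uuid)
        body = requests.get(url, headers=HEADERS).json()
        payload = {
            'allocations': body['allocations'],
            'project_id': project_id,
            'user_id': user_id,
            # Echoing the generation back makes placement reject the
            # write (409) if another process updated the consumer
            # in the meantime.
            'consumer_generation': body['consumer_generation'],
        }
        return requests.put(url, headers=HEADERS, json=payload)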
Change-Id: I4d5f26061594fa9863c1110e6152069e44168cc3
Allocations created before microversion 1.8 didn't have project_id
/ user_id consumer information. In Rocky those will be migrated
to have consumer records, but using configurable sentinel values.
As part of heal_allocations, we can detect this and heal the
allocations using the instance.project_id/user_id information.
This is something we'd need if we ever use Placement allocation
information for counting quotas.
Note that we should be using Placement API version 1.28 with
consumer_generation when updating the allocations, but since
people might backport this change, the use of consumer
generations is left for a follow-up patch.
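Sketched detection logic, assuming the sentinels come from the
configurable options mentioned above (the option values and
helper names here are assumptions):

    # Assumed defaults for the configurable sentinels.
    INCOMPLETE_PROJECT_ID = '00000000-0000-0000-0000-000000000000'
    INCOMPLETE_USER_ID = '00000000-0000-0000-0000-000000000000'

    def heal_consumer_info(instance, alloc_body, put_allocations):
        """Re-PUT the same allocations with the instance's real
        project/user if the consumer has sentinel identity."""
        if (alloc_body.get('project_id') == INCOMPLETE_PROJECT_ID or
                alloc_body.get('user_id') == INCOMPLETE_USER_ID):
            put_allocations(instance.uuid, alloc_body['allocations'],
                            instance.project_id, instance.user_id)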
Related to blueprint add-consumer-generation
Change-Id: Idba40838b7b1d5389ab308f2ea40e28911aecffa
We can't easily add a blocker db sync migration to make
sure the migrate_instances_add_request_spec online data
migration has been run since we have to iterate both cells
(for instances) and the API DB (for request specs) and that's
not something we should do during a db sync call.
But we want to eventually drop the online data migration and
the accompanying compat code found in the api and conductor
services.
This adds a nova-status upgrade check for missing request specs
and fails if any existing non-deleted instances are found which
don't have a matching request spec.
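The shape of the check, sketched with hypothetical helpers:

    def check_request_specs(cells, api_db):
        """Fail if any non-deleted instance lacks a request spec."""
        missing = []
        for cell in cells:  # every cell except cell0
            for uuid in cell.get_nondeleted_instance_uuids():
                if not api_db.request_spec_exists(uuid):
                    missing.append(uuid)
        # nova-status reports failure with the offending instances.
        return ('FAILURE', missing) if missing else ('SUCCESS', [])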
Related to blueprint request-spec-use-by-compute
Change-Id: I1fb63765f0b0e8f35d6a66dccf9d12cc20e9c661
There were a few changes needed here:
1. There is no "API cell database", just the API
database, so this removes mentions of cells.
2. The VERSION argument was missing from the sync help.
3. The sync command does not create a database, it upgrades
the schema. Wording for that was borrowed from the
nova-manage db sync help.
4. Starting in Rocky, the api_db sync command also upgrades
the schema for the optional placement database if configured
so that's mentioned here as well.
Change-Id: Ibc49f93b8bd51d9a050acde5ef3dc8aad91321ca
Closes-Bug: #1778733
Mention that if no transport_url is provided, the one in the
configuration file will be used for the command
``nova-manage cell_v2 simple_cell_setup [--transport-url <transport_url>]``,
just as it is for other cell_v2 commands.
Change-Id: Ifededa59f7ffe5887e67e29b93f70fa70dfaef33
Change I496e8d64907fdcb0e2da255725aed1fc529725f2 made nova-scheduler
require placement >= 1.25, so this change updates the minimum required
version checked in the nova-status upgrade check command along with the
upgrade docs.
Related to blueprint granular-resource-requests
Change-Id: I0a17ee362461a8ae2113804687799bb9d9216110
This adds a new CLI which iterates all non-cell0 cells
looking for instances that (1) have a host,
(2) aren't undergoing a task state transition and
(3) don't have allocations in placement, and then tries
to allocate resources, based on the instance's embedded
flavor, against the compute node resource provider
on which the instance is currently running.
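The candidate filter, sketched in Python (helper names are
illustrative, not the real code):

    def needs_healing(instance, placement):
        if not instance.host:
            return False  # (1) not on a host yet
        if instance.task_state is not None:
            return False  # (2) mid-operation; allocations in flux
        allocs = placement.get_allocations(instance.uuid)
        return not allocs  # (3) nothing in placement -> heal it

    # For each candidate, allocations are built from the embedded
    # flavor (vcpus/memory_mb/root_gb etc.) and written against the
    # compute node provider matching instance.host/instance.node.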
This is meant as a way to help migrate users off the
CachingScheduler by first shoring up instance allocations
in placement for any instances created after Pike, when
the nova-compute resource tracker stopped creating
allocations in placement since the FilterScheduler creates
them at scheduling time (but the CachingScheduler doesn't).
This will be useful beyond just getting deployments
off the CachingScheduler, however, since operators
will be able to use it to fix incorrect allocations
resulting from failed operations.
There are several TODOs and NOTEs inline about things
we could build on top of this or improve, but for now
this is the basic idea.
Change-Id: Iab67fd56ab4845f8ee19ca36e7353730638efb21
Change Id7eecbfe53f3a973d828122cf0149b2e10b8833f made
nova-scheduler require placement >= 1.24, so this change
updates the minimum required version checked in the
nova-status upgrade check command along with the upgrade
docs.
Change-Id: I4369f7fb1453e896864222fa407437982be8f6b5
We were using `warning` and `important` admonitions to mark
deprecations in various places. We have a `deprecated` role, so this
change switches to use it.
Note that I also found the following files that mentioned deprecation,
but not in a way where using this role seemed appropriate. I'm
recording them here so you know I considered them.
doc/source/admin/configuration/hypervisor-kvm.rst
doc/source/admin/configuration/schedulers.rst
doc/source/cli/index.rst
doc/source/cli/nova-rootwrap.rst
doc/source/contributor/api.rst
doc/source/contributor/code-review.rst
doc/source/contributor/policies.rst
doc/source/contributor/project-scope.rst
doc/source/reference/policy-enforcement.rst
doc/source/reference/stable-api.rst
doc/source/user/feature-classification.rst
doc/source/user/flavors.rst
doc/source/user/upgrade.rst
Change-Id: Icd7613d9886cfe0775372c817e5f3d07d8fb553d
This ensures we have version-specific references to other projects [1].
Note that this doesn't mean the URLs are actually valid - we need to do
more work (linkcheck?) here, but it's an improvement nonetheless.
[1] https://docs.openstack.org/openstackdocstheme/latest/#external-link-helper
Change-Id: Ifb99e727110c4904a85bc4a13366c2cae300b8df
This is done in a couple of places in the documentation and is broken
rST. 'prog' is semantic markup that fits right in here, so use that.
Change-Id: Ic654e33daaf68b01f561ac8d792934d5a57a07e5
Change Ib984c30543acb3ca9cb95fb53d44d9ded0f5a5c8, which was added
in Newton when cells v2 was optional, added some transitional code
to the API for looking up an instance, which didn't rely on instance
mappings in a cell to find the instance if the minimum nova-osapi_compute
service version was from before Ocata.
People have reported this being a source of confusion when upgrading
from before Ocata, when cells v2 wasn't required, to Ocata+ where cells
v2 along with the mapping setup is required. That's because they might
have older nova-osapi_compute service version records in their 'nova'
(cell) database which makes the API think the code is older than it
actually is, and results in an InstanceNotFound error.
This change does two things:
1. Adds a warning to the compute API code in this scenario to serve
as a breadcrumb if a deployment hits this issue.
2. Adds a nova-status check to look for the minimum
nova-osapi_compute service version across all cells and report
the issue as a warning. It's not an upgrade failure since we
don't know how the nova-api service is configured, so that
investigation is left up to the deployer.
This is also written in such a way that we should be able to backport
this through to stable/ocata.
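Sketch of the new check, with the Ocata threshold left as a
parameter since the exact service version number isn't spelled
out here (helper names illustrative):

    def check_api_service_versions(cells, ocata_min_version):
        for cell in cells:
            minimum = cell.get_min_service_version('nova-osapi_compute')
            if minimum is not None and minimum < ocata_min_version:
                # Warn rather than fail: we can't tell from here
                # how the nova-api service is actually configured.
                return 'WARNING'
        return 'SUCCESS'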
Change-Id: Ie2bc4616439352850cf29a9de7d33a06c8f7c2b8
Closes-Bug: #1759316
In Pike we started requiring that ironic instances have their
embedded flavor migrated to track the ironic node custom
resource class. This can be done either via the normal running
of the nova-compute service and ironic driver or via the
'nova-manage db ironic_flavor_migration' command.
This change adds a nova-status check to see if there are any
unmigrated ironic instances across all non-cell0 cells. It is
based mostly on the same logic as the nova-manage command,
except it's multi-cell aware and doesn't use the objects.
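Roughly what the check does, sketched with illustrative helpers
(the real logic keys off the node's custom resource class in the
embedded flavor):

    def count_unmigrated_ironic_instances(cells):
        count = 0
        for cell in cells:  # cell0 is skipped; nothing runs there
            for inst in cell.get_instances(hypervisor_type='ironic'):
                # Migrated instances carry a resources:CUSTOM_*
                # override in their embedded flavor extra_specs.
                specs = inst.flavor.get('extra_specs', {})
                if not any(k.startswith('resources:CUSTOM_')
                           for k in specs):
                    count += 1
        return count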
Change-Id: Ifd22325e849db2353b1b1eedfe998e3d6a79591c
With these new options, users can enable or disable a cell
via the CLI.
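A minimal sketch of what the toggle amounts to (the attribute
lives on the cell mapping; names here are illustrative):

    def set_cell_disabled(cell_mapping, disabled):
        # A disabled cell is skipped when scheduling new instances;
        # instances already in it keep running.
        cell_mapping.disabled = disabled
        cell_mapping.save()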
Related to blueprint cell-disable
Change-Id: I761f2e2b1f1cc2c605f7da504a8c8647d6d6a45e
This uses the member_of query parameter to placement to ensure that the
candidates returned are within the appropriate aggregate(s) if so
specified.
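What the resulting request looks like, roughly (a sketch with
placeholder values; member_of landed in placement microversion
1.21):

    import requests

    aggregate_uuids = ['agg-uuid-1', 'agg-uuid-2']  # placeholders
    resp = requests.get(
        'http://placement/allocation_candidates',
        params={
            'resources': 'VCPU:1,MEMORY_MB:512,DISK_GB:1',
            # 'in:' means a provider in any of the listed aggregates.
            'member_of': 'in:' + ','.join(aggregate_uuids),
        },
        headers={'OpenStack-API-Version': 'placement 1.21'})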
Related to blueprint placement-req-filter
Change-Id: If8ac06039ac9d647efdc088fbe944938e205e941
This patch adds the new 'disabled' column to the list of columns
displayed by the list_cells command.
Related to blueprint cell-disable
Change-Id: I96a6d5e59d33c65314fc187c0286ce3408d30bdc
This patch removes the unnecessary maintenance of a date and version
from the CLI documentation.
NOTE: The Cinder team did the same removal in
commit Idf78bbed44f942bb6976ccf4da67c748d9283ed9.
Change-Id: I0a9dd49e68f2d47c58a46b107c77975e7b2aeaf7
This allows us to discover and map compute hosts by service instead of
by compute node, which will solve a major deployment ordering problem for
people using ironic. This also allows closing a really nasty race when
doing HA of nova-compute/ironic.
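The difference, sketched with hypothetical helpers: mapping by
service lets an ironic nova-compute host be mapped before any
of its compute node records exist:

    def discover_hosts_by_compute_node(cell):
        for node in cell.get_unmapped_compute_nodes():
            cell.map_host(node.host)

    def discover_hosts_by_service(cell):
        # Services exist as soon as nova-compute starts, so this
        # works before any ironic nodes have been enrolled.
        for svc in cell.get_unmapped_compute_services():
            cell.map_host(svc.host)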
Change-Id: Ie9f064cb9caf6dcba2414acb24d12b825df45fab
Closes-Bug: #1755602
Currently nova-manage map_instances uses a marker scheme whereby
repeated runs of the command start from where the last run
finished. Even deleting the cell with the instance_mappings will
not remove the marker, since the marker mapping has a NULL
cell_mapping field. There needs to be a way to reset this marker
so that the user can run map_instances from the beginning,
instead of the command reporting "all instances are already
mapped" as it currently does.
Change-Id: Ic9a0bda9314cc1caed993db101bf6f874c0a0ae8
Closes-Bug: #1745358
Change https://review.openstack.org/#/c/515034/ (added in Queens)
makes the archive_deleted_rows CLI remove instance mappings and
request specs from the API database when instances are archived
from the main nova/cell database. For this to work, the API
database connection must be set in the config file. If the API
database is not configured in the config file used to run the
CLI, we should gracefully handle the condition and stop
archiving, prompting the user to set the api_db config and try
the archival operation again. This patch fixes the graceful
handling.
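The intended shape of the handling, sketched (the exception and
helper names are hypothetical):

    class DatabaseNotConfigured(Exception):
        """Hypothetical stand-in for the real error."""

    def archive_with_api_cleanup(archive_cell_rows, connect_api_db):
        deleted_uuids = archive_cell_rows()
        try:
            api_db = connect_api_db()
        except DatabaseNotConfigured:
            # Stop cleanly instead of stack-tracing; tell the user
            # to set [api_database]/connection and re-run.
            raise SystemExit(
                'Set [api_database]/connection and retry archiving.')
        api_db.remove_mappings_and_request_specs(deleted_uuids)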
Change-Id: I0c7b802a453aa423c7273ab724ce78eac0cfed4c
Closes-Bug: #1753833
This makes purge iterate over all cells if requested. This also makes our
post_test_hook.sh use the --all-cells variant with just the base config
file.
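Sketch of the --all-cells iteration (helpers illustrative):

    def purge_all_cells(cell_mappings, purge_shadow_tables,
                        before=None):
        for cell in cell_mappings:
            # Target each cell's database in turn via its mapping.
            with cell.target_db_context() as db:
                purge_shadow_tables(db, before=before)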
Related to blueprint purge-db
Change-Id: I7eb5ed05224838cdba18e96724162cc930f4422e
Since many people will want to fully purge shadow table data after archiving,
this adds a --purge flag to archive_deleted_rows which will automatically do
a full db purge when complete.
Related to blueprint purge-db
Change-Id: Ibd824a77b32cbceb60973a89a93ce09fe6d1050d
This patch adds a note that the deleted rows are also
permanently removed from the instance_mappings and request_specs
tables, so that users are aware of this behavior.
Change-Id: I183cc9f80b3feec6789332860b5aeb7591b710df
This adds a simple purge command to nova-manage. It deletes all
shadow table (archived) data, or only data older than a given
date if one is provided.
This also adds a post-test hook to run purge after archive to validate
that it at least works on data generated by a gate run.
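The core semantics, sketched against a generic shadow table
object (illustrative names):

    def purge_shadow_tables(shadow_tables, before=None):
        for table in shadow_tables:
            if before is None:
                table.delete_all()  # purge everything
            else:
                # Only rows archived before the given date.
                table.delete_older_than(before)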
Related to blueprint purge-db
Change-Id: I6f87cf03d49be6bfad2c5e6f0c8accf0fab4e6ee