When we write out the glance policy for the multistore job, we do
so after glance has already started. After the json->yaml change,
glance will decide too early in the g-api lifetime that the legacy
file isn't present and thus will never honor it later. So, create the
file first and then restart the glance services.
Change-Id: Ic1c01366dbfdcfb85750b85f960b76aea934db59
An empty value for the 'all_tenants' query parameter of '/servers' and
'/servers/detail' means the value defaults to 'True', i.e. requesting
'/servers?all_tenants' is the same as '/servers?all_tenants=1'. Clarify
this, since the current wording is confusing.
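A minimal sketch of the documented semantics: an 'all_tenants' key that is present with an empty value is treated the same as a truthy value. The `parse_all_tenants` helper is hypothetical, written only to illustrate the rule; it is not Nova code.

```python
# Hypothetical helper illustrating the documented 'all_tenants'
# semantics for '/servers': a present-but-empty value defaults to True.
from urllib.parse import parse_qs


def parse_all_tenants(query_string):
    """Return True if all_tenants is present, even with an empty value."""
    params = parse_qs(query_string, keep_blank_values=True)
    if 'all_tenants' not in params:
        return False
    value = params['all_tenants'][0]
    # An empty value defaults to True, per the api-ref clarification.
    return value == '' or value.lower() in ('1', 'true', 't', 'yes', 'y', 'on')


# '/servers?all_tenants' behaves the same as '/servers?all_tenants=1'.
assert parse_all_tenants('all_tenants') is True
assert parse_all_tenants('all_tenants=1') is True
```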
Change-Id: Ib5fdd3b73aa5179e0379ee8f465e4118107786be
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Closes-Bug: #1909745
This change extends the conductor manager to append the cyborg
resource request to the request spec when performing an unshelve.
On shelve offload, the instance's ARQ binding info is deleted to
free up the bound ARQs in the Cyborg service.
This change also passes the ARQs to spawn when unshelving an instance.
This change extends the ``shelve_instance``, ``shelve_offload_instance``
and ``unshelve_instance`` rpcapi functions to carry the arq_uuids.
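The rpcapi change follows the usual pattern of adding an argument behind a version bump and dropping it for older computes. The class, method signature, and version numbers below are hypothetical stand-ins to illustrate that pattern, not the real Nova interfaces.

```python
# Toy model of rpcapi version negotiation; not the real Nova class.
class FakeComputeRPCAPI:

    def __init__(self, client_version):
        self.client_version = client_version

    def can_send_version(self, version):
        # Real code uses oslo.messaging's version comparison; a string
        # compare is good enough for this two-version sketch.
        return self.client_version >= version

    def unshelve_instance(self, instance, image=None, accel_uuids=None):
        kwargs = {'instance': instance, 'image': image}
        version = '5.13'  # hypothetical version that adds accel_uuids
        if self.can_send_version(version):
            kwargs['accel_uuids'] = accel_uuids
        else:
            # Older computes do not know the new argument; drop it.
            version = '5.12'
        return version, kwargs
```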
Co-Authored-By: Wenping Song <songwenping@inspur.com>
Implements: blueprint cyborg-shelve-and-unshelve
Change-Id: I258df4d77f6d86df1d867a8fe27360731c21d237
This needs to be mocked to ensure the bus used for cdrom devices remains
consistent when the unit tests are run on hosts with non-x86_64
architectures.
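The shape of the fix can be sketched as pinning the apparent host architecture in the test so the chosen bus is deterministic. The helpers below are simplified stand-ins, not Nova code; the real test mocks Nova's own architecture lookup rather than `platform.machine`.

```python
# Sketch: pin the detected architecture so bus selection for cdrom
# devices does not vary with the machine running the tests.
import platform
from unittest import mock


def pick_cdrom_bus(arch):
    # Simplified stand-in for the driver's bus selection logic.
    return 'ide' if arch == 'x86_64' else 'scsi'


def cdrom_bus_for_host():
    return pick_cdrom_bus(platform.machine())


# Without the mock, the result would depend on the test host.
with mock.patch('platform.machine', return_value='x86_64'):
    assert cdrom_bus_for_host() == 'ide'
with mock.patch('platform.machine', return_value='aarch64'):
    assert cdrom_bus_for_host() == 'scsi'
```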
Closes-Bug: #1909969
Change-Id: Iccdbbb8cf2a9c0e01f0f32ed6d78a267486824af
This is a bit of a mess. Make it a little easier to parse.
Change-Id: I0c02299955a22a86fbb6b99d56050c8a21bef35b
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Introduce API microversion 2.88, which makes the following changes to
a number of the 'os-hypervisors' APIs. Specifically, the following fields are
dropped from both the '/os-hypervisors/detail' (detailed list) and
'/os-hypervisors/{hypervisor_id}' (show) APIs:
- current_workload
- cpu_info
- vcpus
- vcpus_used
- free_disk_gb
- local_gb
- local_gb_used
- disk_available_least
- free_ram_mb
- memory_mb
- memory_mb_used
- running_vms
In addition, the '/os-hypervisors/statistics' API, which provided a
summary of the above stats but for all hypervisors in the deployment, is
dropped entirely.
Finally, the '/os-hypervisors/{hypervisor}/uptime' API, which provided a
similar response to the '/os-hypervisors/{hypervisor}' API but with an
additional 'uptime' field, has been removed in favour of including this
field in the primary '/os-hypervisors/{hypervisor}' API.
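The field-dropping behaviour can be sketched as a microversion-gated filter over the response body. This is an illustrative sketch only, not the actual Nova view builder.

```python
# Fields dropped from hypervisor 'show'/'detail' responses at >= 2.88.
DROPPED_AT_2_88 = {
    'current_workload', 'cpu_info', 'vcpus', 'vcpus_used', 'free_disk_gb',
    'local_gb', 'local_gb_used', 'disk_available_least', 'free_ram_mb',
    'memory_mb', 'memory_mb_used', 'running_vms',
}


def show_hypervisor(hypervisor, microversion):
    """Illustrative stand-in for a view builder: filter fields by
    requested microversion (given as a (major, minor) tuple)."""
    body = dict(hypervisor)
    if microversion >= (2, 88):
        for field in DROPPED_AT_2_88:
            body.pop(field, None)
    return body
```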
A small tweak to 'tox.ini' that allows us to share some venvs is
included.
Part of blueprint modernize-os-hypervisors-api
Change-Id: I515e484ade6c6455f82a3067940a418a0d7d965a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Many of the functions implementing the various 'os-hypervisors'
endpoints share common code. In particular, many of these functions
contain calls to both the 'instance_get_all_by_host' and
'service_get_by_compute_host' APIs of 'nova.compute.api.HostAPI' so we
can include instance and service information in the responses. All of
these calls are guarded with exception handlers, but the exceptions
handled differ between resources.
There is one exception we need to care about for
'instance_get_all_by_host': 'HostMappingNotFound', which is raised
because the API is decorated with the 'target_cell' decorator. The
'service_get_by_compute_host' API is also decorated with the
'target_host_cell' decorator; however, it can additionally raise
'ComputeHostNotFound'. This exception is possible because the
'service_get_by_compute_host' API calls
'nova.objects.Service.get_by_compute_host', which in turn calls
'nova.db.sqlalchemy.api.service_get_by_compute_host', via the
'_db_service_get_by_compute_host' helper. Not all of the functions that
called 'service_get_by_compute_host' were correctly guarding against
'ComputeHostNotFound'.
In addition to this, the call to the 'get_host_uptime' API used by the
'/os-hypervisors/uptime' API can raise 'HostNotFound' if the service has
been deleted but the compute node still has to be manually cleaned up.
Conversely, a number of functions were handling 'ValueError' even though
this couldn't realistically be raised by the code in question.
Resolve all of the above.
Change-Id: Iacabaea31311ae14084b55341608e16e531e6bd5
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Related-Bug: #1646255
This is a hangover from the days of API extensions. Everything that's
covered here is covered in 'test_hypervisors' now.
Change-Id: Ie824b972c9f63af7c38d63ade1d293c3acc7538b
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The commit fd351903a1 removed the last
real user of get_allocations_for_consumer_by_provider() so this function
is now removed.
Change-Id: I7b883348bb610a435ee1b79e921da0e698037534
The 'availability_zone' description in the unshelve api-ref is
confusing, so add a NOTE about the unshelve request body:
Since microversion 2.77, the allowed request body schemas are
{"unshelve": null} or {"unshelve": {"availability_zone": <string>}}. A
request body of {"unshelve": {}} is not allowed.
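The 2.77 rule stated above can be sketched as a small validator. This is an illustrative stand-in for Nova's JSON-schema validation, not the actual schema code.

```python
def is_valid_unshelve_body(body):
    """Illustrative check of the >= 2.77 unshelve request body rule."""
    if 'unshelve' not in body:
        return False
    value = body['unshelve']
    if value is None:
        return True  # {"unshelve": null} is allowed.
    if not isinstance(value, dict):
        return False
    # {"unshelve": {}} is explicitly not allowed; only an
    # 'availability_zone' string is accepted.
    return set(value) == {'availability_zone'} and isinstance(
        value['availability_zone'], str)
```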
Closes-Bug: #1908336
Change-Id: I66c209baf11c37ffebca52764263daae9e1dd50b
This change adds some basic notes to the api-ref about a wrinkle in the
current os-volume_attachments API: both the attach and detach actions
are async, returning to the caller *before* the underlying action is
complete. As such, callers need to make separate calls to ensure these
actions complete successfully.
There are also currently two different ways of polling for completion
when attaching and detaching.
- When attaching, callers should use the volume status and list of
attachments as reported by the volume API.
- When detaching, callers should instead use the list of volume
attachments reported by the os-volume_attachments API.
This is because Nova currently completes the attachment via c-api as
the last step of an attach, and deletes the bdm record as the last step
of a detach.
It would be useful to one day centralise on just using the
os-volume_attachments API to poll both actions but to do that we would
need to start tracking our own state within the BDMs.
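The two polling strategies described above can be sketched as follows. The client objects and their method names are hypothetical stand-ins, not the real Cinder or Nova client interfaces.

```python
import time


def wait_for_attach(volume_api, volume_id, server_id, timeout=300):
    """Attach is async: poll the *volume* API until the volume is
    'in-use' and lists an attachment for our server."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        volume = volume_api.get_volume(volume_id)
        servers = {a['server_id'] for a in volume['attachments']}
        if volume['status'] == 'in-use' and server_id in servers:
            return volume
        time.sleep(1)
    raise TimeoutError('volume %s never reported the attachment' % volume_id)


def wait_for_detach(compute_api, server_id, volume_id, timeout=300):
    """Detach is async: poll os-volume_attachments until the volume no
    longer appears, i.e. the bdm record has been deleted."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        attached = {a['volumeId']
                    for a in compute_api.list_volume_attachments(server_id)}
        if volume_id not in attached:
            return
        time.sleep(1)
    raise TimeoutError('volume %s still attached to %s'
                       % (volume_id, server_id))
```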
Change-Id: Id367ee53ef1458b6a90fc107ab14f5b3cbba7a86
In a heavily IO-deprived CI VM, the db migration tests can take a
significant amount of time and eventually time out. This patch moves the
tests into the same test executor worker process to spread the load
generated by these tests over time until a final solution is found. For
example, we hope that [1] will eventually help to decrease the load.
[1] https://review.opendev.org/q/topic:bp/compact-db-migrations-wallaby
Change-Id: I6ce930fa86c82da1008089791942b1fff7d04c18
Related-Bug: #1823251