The server status values exposed out of the API and used
for filtering when listing instances come from the values
in nova.api.openstack.common._STATE_MAP. Some of the values
listed in the docs were incorrectly using variable names from
the code, which don't necessarily match the actual values exposed
out of the API.
The compute API server concepts guide actually had this all
correct, so this just updates the API reference.
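For illustration only, an abbreviated sketch of the kind of mapping
involved (the real table in nova.api.openstack.common is larger and
also keys on task_state):

    # Abbreviated, illustrative sketch; not the full map.
    _STATE_MAP = {
        vm_states.ACTIVE: {'default': 'ACTIVE'},
        vm_states.BUILDING: {'default': 'BUILD'},
        vm_states.STOPPED: {'default': 'SHUTOFF'},
        vm_states.SOFT_DELETED: {'default': 'SOFT_DELETED'},
    }

    # The docs bug: listing the code-side vm_state values (e.g.
    # 'stopped', 'soft-delete') where the API actually returns
    # 'SHUTOFF' and 'SOFT_DELETED'.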
Change-Id: I30b6f27c6e7fc9365c203b620b311785f8b4b489
Closes-Bug: #1722403
Instead of having ResourceProvider._get_aggregates() and
ResourceProvider._set_aggregates() be @classmethods, move them to
module-level functions to be consistent with the similar functions
for inventory and allocation information.
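Schematically, the change looks like this (signatures abbreviated
for illustration):

    # Before: tied to the object as classmethods.
    class ResourceProvider(base.NovaObject):
        @classmethod
        def _get_aggregates(cls, context, rp_id):
            ...

    # After: plain module-level functions in resource_provider.py,
    # matching the existing inventory/allocation helper style.
    def _get_aggregates(context, rp_id):
        ...

    def _set_aggregates(context, rp_id, aggregate_uuids):
        ...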
blueprint: de-orm-resource-providers
Change-Id: I52db9b4ca89aeb2a4ce9d10820bdac6fabf43ea4
This patch series aims to remove the use of SQLAlchemy ORM querying
from the nova/objects/resource_provider.py module. Currently, the
module mixes non-ORM usage (the SQLAlchemy core expression API) with
ORM usage (SQLAlchemy's orm module and the associated query
generation from model reflection).
While implementing the database schema for the nested resource
providers table structure, which uses both an adjacency list model as
well as a cached root tree ID, it became obvious that using
the SQLAlchemy ORM modeling to generate the required queries for
various methods related to resource providers was resulting in awkward
and hard-to-reason-about code. Even using the recommended handling of
self-referential tables [1] for adjacency list modeling, the way
session and query handling was done in the resource_provider.py
module led to a number of lazy-loading problems and inactive-session
errors.
In this starter patch, we tackle the ResourceProvider.get_by_uuid()
method, converting it to use the SQLAlchemy core expression API instead
of an ORM query.
[1] http://docs.sqlalchemy.org/en/latest/orm/self_referential.html
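A minimal sketch of the direction (helper name and column list are
illustrative; the real query selects the full set of provider
columns):

    import sqlalchemy as sa

    def _get_provider_by_uuid(context, uuid):
        # Core expression API: build the SELECT explicitly rather
        # than going through an ORM session query built from model
        # reflection.
        rp = models.ResourceProvider.__table__
        sel = sa.select([rp.c.id, rp.c.uuid, rp.c.name,
                         rp.c.generation]).where(rp.c.uuid == uuid)
        res = context.session.execute(sel).first()
        if not res:
            raise exception.NotFound()
        return dict(res)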
blueprint: de-orm-resource-providers
Change-Id: I2f14afa8fc01b0ec1b7ea3eaa0bf6c459a8681d2
In I18e7483ec9a484a660e1d306fdc0986e1d5f952b the BDM was added to the
instance notifications. In general, adding the BDM to the payload
requires an extra DB query. However, the BDM is already loaded before
notify_about_instance_create is called to send the notification. In
these cases loading the BDM again is unnecessary, as the already
loaded BDM can be reused.
This patch makes sure that notify_about_instance_create is called
with the already loaded BDM.
The remaining instance-related versioned notification calls do not
have the BDM already loaded.
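Schematically, on the caller side (argument names simplified):

    # Before: the helper re-queried the BDMs internally.
    compute_utils.notify_about_instance_create(
        context, instance, self.host,
        phase=fields.NotificationPhase.START)

    # After: hand over the BDM list the build path already loaded so
    # the helper can skip its own DB query.
    compute_utils.notify_about_instance_create(
        context, instance, self.host,
        phase=fields.NotificationPhase.START,
        bdms=block_device_mapping)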
Change-Id: Ic25de45c18348206f0309da6d4997f4bf336acb2
Closes-Bug: #1718226
I8849ae0f54605e003d5b294ca3d66dcef89d7d27 made it possible for
_get_instance_block_device_info to take a BDM parameter instead of
loading the BDM from the db. This allows us to load the BDM a bit
earlier in the call chain and pass that BDM to the notification calls
too.
The remaining calls of notify_about_instance_action do not have the
BDM loaded already.
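The resulting pattern in the compute manager is roughly (simplified):

    # Load the BDMs once, early in the call chain...
    bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
        context, instance.uuid)

    # ...and reuse the same list downstream instead of letting each
    # helper query the DB again.
    block_device_info = self._get_instance_block_device_info(
        context, instance, bdms=bdms)
    compute_utils.notify_about_instance_action(
        context, instance, self.host,
        action=fields.NotificationAction.SHELVE,
        phase=fields.NotificationPhase.START, bdms=bdms)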
Change-Id: Icc3ffe4037a44f4f323bec2f80d99ca226742e22
Related-Bug: #1718226
In I18e7483ec9a484a660e1d306fdc0986e1d5f952b the BDM was added to the
instance notifications. In general, adding the BDM to the payload
requires an extra DB query. However, in some places the BDM is
already loaded separately before notify_about_instance_action is
called to send the notification. In these cases loading the BDM again
is unnecessary, as the already loaded BDM can be reused.
This patch makes sure that notify_about_instance_action is called
with the already loaded BDM. There will be subsequent patches to do
the same with the other notify calls.
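On the helper side the pattern is roughly (simplified; payload
construction elided):

    def notify_about_instance_action(context, instance, host, action,
                                     phase=None, exception=None,
                                     bdms=None):
        # Only hit the database when the caller could not supply an
        # already loaded BDM list.
        if bdms is None:
            bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                context, instance.uuid)
        ...  # build and emit the versioned payload from instance + bdms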
Change-Id: I391554d3904a5a60b921ef4714a1cfd0a64a25c2
Related-Bug: #1718226
The if_notifications_enabled decorator skips the execution of the
decorated function if the versioned notifications are not configured
to be emitted. The send_instance_update_notification() call was
wrongly decorated with this decorator, as it not only sends the
versioned notification but also sends the legacy
compute.instance.update notification. This caused the legacy
instance.update notification not to be emitted when the
notification_format config option was set to unversioned.
As the _send_versioned_instance_update() call already has the
decorator, the solution is simply to remove the decorator from
send_instance_update_notification().
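In sketch form (signatures simplified):

    # Correctly decorated: emits only the versioned notification, so
    # it may be skipped when notification_format is 'unversioned'.
    @rpc.if_notifications_enabled
    def _send_versioned_instance_update(context, instance, payload,
                                        host, service):
        ...

    # Decorator removed by this patch: this path also emits the
    # legacy compute.instance.update notification, so it must always
    # run regardless of notification_format.
    def send_instance_update_notification(context, instance, **kwargs):
        # ... emit the legacy notification, then delegate to
        # _send_versioned_instance_update(), which guards itself.
        ...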
Closes-Bug: #1721843
Change-Id: I9904adeb3de60cff4e29f1ab3c95399bbe9ff2e7
The errors_out_migration_ctxt context manager sets the Migration
object status to 'error', not 'failed', which is also used in some
places in the code. There is no technical difference between the two;
they mean essentially the same thing to the end user, but if we're
going to use both, let's be clear about it in code comments.
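For reference, the rough shape of the context manager (simplified):

    import contextlib

    from oslo_utils import excutils

    @contextlib.contextmanager
    def errors_out_migration_ctxt(migration):
        try:
            yield
        except Exception:
            with excutils.save_and_reraise_exception():
                if migration:
                    # Uses 'error', not 'failed'; both read the same
                    # to the end user.
                    migration.status = 'error'
                    migration.save()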
Change-Id: Id51f3e2524ae6ee25f8d382070aeed9d666d992d
Provide a new method:
nova.utils.get_ksa_adapter(service_type, ks_auth=None, ks_session=None,
                           min_version=None, max_version=None)
...to configure a keystoneauth1 Adapter for a service. The Adapter, and
its component keystoneauth1 artifacts not passed into the method, are
loaded based on options in the conf group corresponding to the specified
service_type.
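Typical usage looks something like this (illustrative):

    from nova import utils

    # The Adapter's auth plugin, session, interface, region and
    # service name all come from the [glance] conf group here.
    adapter = utils.get_ksa_adapter('image')
    # keystoneauth1 then handles endpoint discovery for us.
    endpoint = adapter.get_endpoint()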
The ultimate goal is to replace the various disparate mechanisms used by
different services to do endpoint URL and version discovery. In Queens,
the original mechanisms will still take precedence, but (other than
[glance]api_servers - see the spec) will be deprecated. In Rocky, the
deprecated options will be removed.
This change incorporates the above utility into endpoint discovery for
glance and ironic. Future change sets will do the same for other
services (cinder, neutron, placement).
Change-Id: If625411f40be0ba642baeb02950f568f43673655
Partial-Implements: bp use-ksa-adapter-for-endpoints
Closes-Bug: #1707860
If we're listing by sort keys that yield many ambiguous results, we
may exacerbate issues in client pagination because we're not even
bound by insertion order given that we have multiple databases being
queried in parallel. So, even if the client didn't ask for it, throw
'uuid' into the end of sort_keys to provide us a stable ordering. This
was done for the default case by always including 'id' in the default
set of sort_keys, although a user could still break the ordering by
requesting their own keys.
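The fix amounts to something like this (helper name hypothetical):

    def _normalize_sort_keys(sort_keys, sort_dirs):
        # 'uuid' is globally unique, so appending it guarantees a
        # total order even when the requested keys are ambiguous
        # across cells.
        if 'uuid' not in sort_keys:
            sort_keys = list(sort_keys) + ['uuid']
            sort_dirs = list(sort_dirs) + ['asc']
        return sort_keys, sort_dirs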
Note this also removes the recently-added explicit sort in the
test_bug_1689692 case, since we're enforcing a strict ordering with
this patch. Also, mriedem is awesome.
Change-Id: Ida446acb1286a8b215451a5d8d7d23882643ef13
Closes-Bug: #1721791
This method would not actually work for any query where multiple sort
keys were provided. Since it effectively ANDed all of the sort_key > val
conditions in the query, any multi-key sort would exclude a lot of
results.
This fix actually replicates much of the logic from the base
paginate_query() utility method, which properly handles multiple
keys by creating the key1>val1 OR (key1=val1 AND key2>=val2) WHERE
clauses necessary for proper ordering.
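For two ascending sort keys that translates to roughly the following
(helper and column/value names are illustrative):

    import sqlalchemy as sa

    def _marker_criteria(key1_col, key2_col, val1, val2):
        # Rows at-or-after the marker (val1, val2) in the total
        # order: strictly greater on key1, or tied on key1 and
        # at-or-after on key2.
        return sa.or_(
            key1_col > val1,
            sa.and_(key1_col == val1, key2_col >= val2))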
Change-Id: I3dac96759f7c7f11a0e0e9d86731dd4d22462d33
Partial-Bug: #1721791
Change 82f16b88f3 deprecated
the TrustedFilter for removal in Queens, but there is an
entire document about using it which doesn't mention this,
so it's noted here.
Change-Id: I4f772a50cfdbc1f50759c67b234e5c7e29e81100
Since ostestr switched to running stestr under the covers, we lost the
old magic setting of the environment variables via .testr.conf for
capturing stderr/stdout and the test timeout. This makes the unit,
functional, and api-samples envs consistent with the py27 env that was
already updated to set those variables.
This also updates the pretty_tox3.sh script to run stestr directly.
Change-Id: I27fa9b7e25c1a1dc921653eec84864423f898a85
The cells API doesn't route the os-server-external-events API
and this test relies on that working, so we have to blacklist it.
Change-Id: I92e316cb9cfa5d47c415ba06edf45d7de68677f4
Closes-Bug: #1721644
We need to do this so we can have a migration uuid at the time we
call the scheduler to allocate for the new host. This just does the
plumbing through the RPC layers. The compute-side code can already
tolerate a migration having been already created for things like
live migration, so we just have to plumb it through.
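Only as a sketch (which RPC call carries the migration and the exact
signatures are simplified and assumed here):

    # Conductor creates the Migration up front so its uuid exists
    # when we ask the scheduler for allocations against the new host.
    migration = objects.Migration(context=context,
                                  status='pre-migrating')
    migration.create()

    # The compute RPC API then carries the migration through (the
    # argument list here is abbreviated), so the compute side reuses
    # this record instead of creating its own.
    self.compute_rpcapi.prep_resize(
        context, instance, image, flavor, host,
        migration=migration)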
Related to blueprint migration-allocations
Change-Id: I6bc6d28655368084f08fed9c4f56a285b7063338
The functional tests that shelve offload instances and assert that
the resource allocations of the instance are freed were unstable.
These tests only waited for the instance state to become
SHELVED_OFFLOADED before checking the allocations.
However the compute manager sets the instance state to
SHELVED_OFFLOADED before deleting the allocations [1]. Therefore
these tests were racy.
With this patch the tests will wait not only for the instance status
to change but also for the instance host to be nulled, as that
happens after the resources are freed.
[1] https://github.com/openstack/nova/blob/e4f89ed5dd4259188d020749fa8fb1c77be2c03a/nova/compute/manager.py#L4502-L4521
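The wait condition in the tests becomes roughly (a sketch, assuming
the usual functional-test API helper):

    import time

    def _wait_for_offload(self, server_id, max_retries=20):
        # SHELVED_OFFLOADED alone races with the allocation cleanup,
        # so also wait for the host attribute to be nulled; that
        # happens after the allocations are removed.
        for _ in range(max_retries):
            server = self.api.get_server(server_id)
            if (server['status'] == 'SHELVED_OFFLOADED'
                    and server['OS-EXT-SRV-ATTR:host'] is None):
                return server
            time.sleep(0.5)
        self.fail('Server was never offloaded from its host')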
Change-Id: Ibb90571907cafcb649284e4ea30810a307f1737e
Closes-Bug: #1721514