If > 2.103, return an HTTP 404 (Not Found). Otherwise, proxy through to
the ServersController.
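A minimal sketch of the microversion gate described above; the helper names and the proxy callable are illustrative stand-ins, not Nova's actual controller API.

```python
# Hypothetical sketch: requests newer than microversion 2.103 get a 404,
# older ones are proxied through. Names here are illustrative only.

class HTTPNotFound(Exception):
    """Stand-in for webob.exc.HTTPNotFound."""


def show(req_version, server_id, proxy):
    # 2.103 is the last microversion at which this API responds;
    # anything newer gets a 404 instead of being proxied through.
    if req_version > (2, 103):
        raise HTTPNotFound()
    return proxy(server_id)
```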
Change-Id: Ic6b487316bb1fbf2cf57de5d8e6aabf06f0cdf52
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
None of the other controllers do this. Don't do it here either. This is
mostly a bulk rename, with the exception of the combination of the
'_action_resize' and '_resize' methods, which were unnecessarily
separated, and the move of the 'delete' method to be next to the
'_delete' inner method.
Change-Id: I87381c6721e7a040c82f8124523116a1d4e2c684
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
All APIs except the root version APIs now use strict query string
parsing. A test is added to ensure this.
A couple of tests need to be updated since they were using the wrong
path: while the path is ignored when calling the controllers directly,
the query strings are not.
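A minimal sketch of what strict query string parsing means here, assuming a simple allow-list model; Nova's real validation is schema-based, so this is illustrative only.

```python
# Strict mode: any query parameter not explicitly allowed is rejected
# instead of being silently ignored. Names here are illustrative.
from urllib.parse import parse_qs


class ValidationError(Exception):
    pass


def validate_query(query_string, allowed):
    """Reject any query parameter not in the allowed set."""
    params = parse_qs(query_string)
    unknown = set(params) - set(allowed)
    if unknown:
        raise ValidationError("invalid query parameters: %s" % sorted(unknown))
    return params
```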
Change-Id: I6dcb5b8f1f865df8f6b17cd7f0d730c3bdff241e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Adding more tests for graceful shutdown:
- shut down the destination compute and see how live and cold migration
progress
- start building an instance and, once the compute starts building it,
shut down the compute service and see whether the build finishes or not
- revert resize server
Partially implements blueprint nova-services-graceful-shutdown-part1
Change-Id: I57132fb7b7fa614dfc138508581ff5a67aaed906
Signed-off-by: Ghanshyam Maan <gmaan.os14@gmail.com>
During graceful shutdown, the compute service keeps a 2nd RPC
server active which can be used to finish the in-progress
operations. Like live migration, resize and cold migration
also perform RPC calls between the source and destination computes.
For those operations too, we can use the 2nd RPC server and make
sure they will be completed during graceful shutdown.
A quick overview of which RPC methods are involved in
resize/cold migration and which of them will use the 2nd RPC server:
Resize/cold migration
- prep_resize: No, resize/migration is not started yet.
- resize_instance: Yes, here the resize/migration starts.
- finish_resize: Yes
- cross cell resize case:
- prep_snapshot_based_resize_at_dest: NO, this is an initial check and
the migration has not started yet
- prep_snapshot_based_resize_at_source: Yes, this starts the migration
Confirm resize:
- confirm_resize: NO
- cross cell confirm resize case:
- confirm_snapshot_based_resize - NO
Revert resize:
- revert_resize - NO
- check_instance_shared_storage: YES. This is called from the dest to
the source, so we need the source to respond to it so that the revert
can continue.
- finish_revert_resize on source: YES. At this stage the revert resize
is in progress and abandoning it here can leave the migration in an
unrecoverable state.
- cross cell revert case:
- revert_snapshot_based_resize_at_dest: NO
- finish_revert_snapshot_based_resize_at_source: YES
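The selection above can be summarized as a lookup: only the methods marked YES are dispatched on the alternate topic during graceful shutdown. This is an illustrative sketch; the set contents come from the list above, but the topic names and helper are placeholders, not Nova's wire API.

```python
# Methods marked YES above: they run mid-migration, so they must be
# served by the 2nd RPC server to let in-progress operations finish.
USE_ALT_SERVER = {
    "resize_instance",
    "finish_resize",
    "prep_snapshot_based_resize_at_source",
    "check_instance_shared_storage",
    "finish_revert_resize",
    "finish_revert_snapshot_based_resize_at_source",
}


def topic_for(method, base_topic="compute"):
    """Pick the RPC topic for a compute RPC method during shutdown."""
    if method in USE_ALT_SERVER:
        return base_topic + "-alt"
    return base_topic
```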
Partially implements blueprint nova-services-graceful-shutdown-part1
Change-Id: If08b698d012a75b587144501d829403ec616f685
Signed-off-by: Ghanshyam Maan <gmaan.os14@gmail.com>
For graceful shutdown of the compute service, it will have two RPC
servers. One RPC server is used for new requests and will be stopped
during graceful shutdown; the 2nd RPC server (listening on the
'compute-alt' topic) will be used to complete in-progress operations.
We select the operations (case by case) and their RPC methods to use
the 2nd RPC server so that they will not be interrupted when shutdown
is initiated, and graceful shutdown keeps the 2nd RPC server active
for graceful_shutdown_timeout. A new method 'prepare_for_alt_rpcserver'
is added which falls back to the first RPC server if it detects an old
compute.
As this has an upgrade impact, it bumps the compute service version and
adds release notes for the same.
The list of operations that should use the 2nd RPC server will grow
eventually; this commit moves the operations below to use the 2nd
RPC server:
* Live migration
- Live migration: It uses the 2nd RPC server and will try to complete
the operation during shutdown.
- live_migration_force_complete does not need to use the 2nd RPC
server. It is a direct RPC request from the API to the compute, and
if it is rejected during shutdown, that is fine; it can be initiated
again once the compute is up.
- live_migration_abort does not need to use the 2nd RPC server. Ditto,
it is a direct RPC request from the API to the compute. It cancels a
queued live migration, but if the migration has already started, the
driver cancels it. If it is rejected during shutdown because RPC is
stopped, that is fine and it can be initiated again.
* server external event
* Get server console
As graceful shutdown cannot be tested in tempest, this adds a new job
to test it. Currently it tests the live migration operation, which can
be extended to other operations that will use the 2nd RPC server.
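The fallback in 'prepare_for_alt_rpcserver' can be sketched as below; this is a hedged illustration, and the minimum service version constant is made up rather than the real bump in this commit.

```python
# Hypothetical sketch of falling back to the primary RPC server when
# the target compute predates the 'compute-alt' topic. The version
# constant is illustrative, not the actual service version bump.

ALT_RPC_MIN_SERVICE_VERSION = 70  # assumption, not the real number


def prepare_for_alt_rpcserver(service_version, base_topic="compute"):
    if service_version < ALT_RPC_MIN_SERVICE_VERSION:
        return base_topic          # old compute: fall back to 1st server
    return base_topic + "-alt"     # new compute: use the 2nd RPC server
```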
Partially implements blueprint nova-services-graceful-shutdown-part1
Change-Id: I4de3afbcfaefbed909a29a831ac18060c4a73246
Signed-off-by: Ghanshyam Maan <gmaan.os14@gmail.com>
Unlike the check for extra specs, the check for whether to include a
description field or not is driven entirely by API version rather than
API version and policy. We can therefore move the checks inside the
functions that generate the response rather than duplicating them
elsewhere.
Change-Id: I86aa4e1c62a0b0e6fa4d27e559d3197fb73851ba
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Ahead of adding additional tests, we do the following:
* Move the controller to an instance attribute rather than a class
attribute
* Modify tests so they all call controller methods directly rather than
setting up a fake router (this is the cause of the largest changes)
* Remove unnecessary aliasing of exceptions
* Remove unnecessary setUp arguments
* Split a test into multiple tests
* Standardize test class names
Change-Id: I2cac4cc79288f7b3bacc4a63a1d36d4cf12013d7
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
So that we can use them for API DB methods, which are found in
nova.objects instead of nova.db.
Change-Id: Ifb15ee90ac6a6400b7268ed80f727080e98c4cdf
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Previous patches removed direct eventlet usage from nova-compute so
now we can run it with native threading as well. This patch documents
the possibility and switches both nova-compute processes to native
threading mode in the nova-next job.
Change-Id: I7bb29c627326892d1cf628bbf57efbaedda12f1a
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Move the execution of build_and_run_instance and snapshot_instance to
one common long task executor. Originally, snapshot ran
on the RPC pool and build_and_run_instance ran on the default pool.
Also each of these tasks had a separate concurrency limit enforced by a
semaphore.
After this patch each of these tasks uses a common Executor. The size of
that executor and the way we limit concurrency differ between eventlet
and native threading mode.
In eventlet mode we have one big Executor with "unlimited" size, and
individual semaphores are used for each task type to enforce the
configured limits.
In threading mode we ask the admin to configure the 2 limits to the
same number, and we warn if they are not. We use that limit (or the max
of the 2 limits) as the size of the long task Executor. As the limits
are the same, we don't enforce individual limits any more; the executor
size ensures the shared limit is kept. As the limit is shared, a single
operation type can consume the whole limit.
Note that while live migration is a long-running task we cannot put it into
the same long_task_executor as build and snapshot as we need:
1. a very small limit of concurrent live migrations compared to
builds and snapshots
2. a way to cancel live migrations easily that are waiting due to the
limit
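The two modes above can be sketched with standard-library primitives standing in for Nova's executors; the function name, pool sizing, and task-type keys are illustrative assumptions, not the actual implementation.

```python
# Sketch of the two concurrency models: eventlet mode uses one big pool
# plus per-task-type semaphores; threading mode uses one shared pool
# whose size itself enforces the (common) limit.
import threading
from concurrent.futures import ThreadPoolExecutor


def make_long_task_submitter(mode, build_limit, snapshot_limit):
    if mode == "eventlet":
        # One big executor (the real one is effectively unbounded);
        # a semaphore per task type enforces each configured limit.
        pool = ThreadPoolExecutor(max_workers=build_limit + snapshot_limit)
        sems = {"build": threading.Semaphore(build_limit),
                "snapshot": threading.Semaphore(snapshot_limit)}

        def submit(task_type, fn, *args):
            def wrapped():
                with sems[task_type]:
                    return fn(*args)
            return pool.submit(wrapped)
    else:
        # Native threading: a single shared pool sized to the common
        # limit; no per-type semaphores, so one task type may consume
        # the whole capacity.
        pool = ThreadPoolExecutor(max_workers=max(build_limit, snapshot_limit))

        def submit(task_type, fn, *args):
            return pool.submit(fn, *args)
    return submit
```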
Change-Id: I88a6a593af8a5b518715e1245a76ee54752afe83
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
We already deprecated the unlimited max_concurrent_live_migrations
config value and now we do the same for max_concurrent_builds and
max_concurrent_snapshots as well. The reason is similar.
* The unlimited meaning was a lie; it was limited by other constructs
in the code. For these options the limit was the size of the RPC
executor, which defaulted to 64.
* In native threading mode having unlimited concurrent tasks is
unfeasible due to the memory cost of native threads for each task.
The deprecation is done in a way that keeps a behavior similar to
before in eventlet mode, but in native threading mode we enforce a
strict maximum even if unlimited is requested.
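The enforced-maximum behavior can be sketched as below, assuming 0 is the config value that historically meant "unlimited"; the helper name and the 64 fallback (the old implicit RPC-pool cap mentioned above) are illustrative.

```python
# Sketch: map a configured concurrency limit to the one actually
# enforced. "Unlimited" (0) was never truly unlimited, so in native
# threading mode we enforce the old implicit cap explicitly.

OLD_IMPLICIT_CAP = 64  # the RPC executor default size mentioned above


def effective_limit(configured):
    """Return the limit that will actually be enforced."""
    if configured == 0:  # historical "unlimited" request
        return OLD_IMPLICIT_CAP
    return configured
```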
Change-Id: Ibbf76c2c85729820035c9791719bf2c864bce12b
Signed-off-by: Balazs Gibizer <gibi@redhat.com>