This patch adds the logic to initialize our DEFAULT_EXECUTOR based on
the concurrency mode.
If the OS_NOVA_DISABLE_EVENTLET_PATCHING env variable is set to "true"
then we use ThreadPoolExecutor so the service will use native threads.
Otherwise the service will keep using the GreenThreadPoolExecutor and
therefore Eventlet.
There is a new config option `[DEFAULT]default_thread_pool_size`
added to define the size of the ThreadPoolExecutor and the existing
`[DEFAULT]default_green_pool_size` is deprecated. The two config options
are independent and each of them is only used if the service is started
with the matching concurrency mode.
Our test fixture that catches leaked threads is adapted to handle both
threading modes. At the moment it is only tested in eventlet mode as no
service can be switched to threading mode yet.
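The mode selection described above can be sketched roughly as follows.
This is a minimal illustration only: the function name is made up, and
the real code initializes DEFAULT_EXECUTOR with futurist executor
classes rather than returning a string.

```python
import os

# Illustrative sketch of the concurrency mode selection described
# above; choose_executor_mode is a hypothetical name, not Nova's code.
def choose_executor_mode():
    if os.environ.get(
            "OS_NOVA_DISABLE_EVENTLET_PATCHING", "").lower() == "true":
        # native threads: futurist.ThreadPoolExecutor, sized by
        # [DEFAULT]default_thread_pool_size
        return "native"
    # green threads: futurist.GreenThreadPoolExecutor, sized by the
    # now-deprecated [DEFAULT]default_green_pool_size
    return "eventlet"
```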
Change-Id: I17b8065e8b14f0401297258e43dc2a98031d61f4
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Now that we are using executors we can clarify the naming of the global
and the function accessing it.
Change-Id: Iddaa1e9c72f8f32168996edcfacf93d6df16e7a7
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
All our thread spawning should pass the context and all should go
through either spawn to use the default executor or spawn_on with a
specific executor. Right now pass_context is only called locally in the
utils module. We make it private to signal that it is intentional.
This is a pure refactor with two steps:
* remove the public pass_context, as it is unused, by removing it from
  the internal call chain of spawn and spawn_on
* rename _pass_context_wrapper
This change does not affect any existing behavior so no test changes
were needed.
Change-Id: I5ad19b5c1a6d489c0f7df7fcbd9f87858b03760c
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
So far the ConductorManager.cache_images functionality used an
explicit GreenPool. We change this pool to a GreenThreadPoolExecutor
and then use spawn_on to start the thread. This way we can:
* replace the direct eventlet dependency with an indirect one behind
  an interface where the impl can easily be switched out to
  ThreadPoolExecutor in the future
* get rid of the explicit pass_context call and centralize that in
  spawn_* instead
We don't have unit test changes here as our unit tests assert behavior
rather than implementation. So when the implementation changes (like
in this refactor patch) but the behavior does not, we don't need to
change the existing unit test coverage.
The existing test coverage is in:
nova.tests.unit.conductor.test_conductor.ConductorTaskAPITest
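The pattern described above can be sketched as follows. spawn_on is
paraphrased here with a stdlib executor; the real code uses
futurist.GreenThreadPoolExecutor and also handles context passing:

```python
from concurrent import futures

# Illustrative sketch only: the manager owns an executor and submits
# work through a single spawn_on helper, so the executor
# implementation can later be swapped without touching call sites.
def spawn_on(executor, func, *args, **kwargs):
    return executor.submit(func, *args, **kwargs)

# hypothetical pool standing in for the manager's cache_images pool
cache_pool = futures.ThreadPoolExecutor(max_workers=2)
future = spawn_on(cache_pool, lambda image: ("cached", image), "image-1")
```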
Change-Id: I15297fd7869ed5f8a58f0ca599ef11df91e04de6
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Small fixes for Ibff6c73ad9af911a42204e53fee31ed5537c829d
Change-Id: I288c48476a5cd6f0f657f29aab6246500fe52765
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Yet another proxy API documented, albeit very loosely. We also remove a
conditional that can never be reached: we will always have a network
from neutron by the time we attempt to show it. If we didn't, we'd
have exited early due to an exception.
Change-Id: I008975b3eabf5f3552ebad7e5bbe847b9c7eaa16
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
These are all empty and are purely here to satisfy the check for
schemas and to allow us to potentially populate them for documentation
purposes later.
Change-Id: Ia52bc78b3392ec69382f3427f5676d52f9abee6d
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* Set additionalProperties to False, as expected
* Correct copy calls (not that it matters)
Change-Id: I97d8206d2df5deee0521ae69a73a32a7136c37be
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
We also fix some issues in the api-ref, since it'll be another while
before we can replace that.
Change-Id: If661e3af796475637c0e76b3dfbfd5b7a7f38c24
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This patch adds just the objects and notifications required to
support an extra spec to configure a USB controller inside
the guest. This is useful for SPICE consoles using the native
protocol.
Change-Id: I791b16c5bf0e860a188783c863e95dc423998b0a
Signed-off-by: Michael Still <mikal@stillhq.com>
This is the implementation for the sound model extra spec as
desired by the new libvirt spice-direct console mode. Sound
support is a frequently requested feature for VDI users.
Change-Id: I33b8fc0136b4c1783b5c493e8ca9a15110767f6c
Signed-off-by: Michael Still <mikal@stillhq.com>
This patch adds just the objects and notifications required to
support an extra spec to configure a sound device inside
the guest. This is useful for SPICE consoles using the native
protocol.
Change-Id: I2faeda0fd0fb9c8894d69558a1ccaab8da9f6a1b
Signed-off-by: Michael Still <mikal@stillhq.com>
So far the ComputeManager._sync_power_state functionality used an
explicit GreenPool. We change this pool to a GreenThreadPoolExecutor
and then use spawn_on to start the thread. This way we can:
* replace the direct eventlet dependency with an indirect one behind
  an interface where the impl can easily be switched out to
  ThreadPoolExecutor in the future
* get rid of the explicit pass_context call and centralize that in
  spawn_* instead
The ComputeManager live_migration_executor is already a
GreenThreadPoolExecutor so there we only switch to spawn_on to get rid
of the explicit pass_context call.
Change-Id: I51c5339d3f54f5c66856b6d4e06cd91ac04977cb
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Now that spawn is using the default executor and returns a Future we can
have a spawn_on that takes the executor as a parameter so the
scatter-gather code path can use the same context passing logic but use
a separate executor.
This also changes spawn() to use spawn_on() with the default executor,
and warns if the executor is busy and the task is therefore queued.
While testing this we discovered that the SpawnIsSynchronous fixture
made the wrong assumption that eventlet.spawn(f) would raise the
exception if f raises. It does not. So when we changed this fixture to
the new executor structure we fixed this test bug. There were two unit
test cases that depended on the wrong behavior; they are adapted now.
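The corrected assumption can be demonstrated with any Future-based
executor (stdlib shown here; futurist behaves the same way for this
case): submitting a raising function does not raise, the exception
only surfaces when the result is consumed.

```python
from concurrent import futures

def boom():
    raise ValueError("boom")

pool = futures.ThreadPoolExecutor(max_workers=1)
# Submitting does not raise even though boom() raises...
fut = pool.submit(boom)
# ...the exception only surfaces when the result is consumed.
try:
    fut.result()
    caught = None
except ValueError as exc:
    caught = str(exc)
```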
Change-Id: I3bb40f5d2446dfd06253371d6abe8d3449b11265
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
As [1] switched over the implementation of spawn and spawn_n to the same
futurist Executor.submit we can now replace all spawn_n usage with spawn
and drop spawn_n from nova.utils.
[1]I3494660e1aaa1db46f9f08494cb5817ec7020cc5
Change-Id: I0027f119c0fbe8d5298307324eaf30c5e9e152d3
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
Nova uses nova.utils.spawn* to create new threads. So far this relied
on a GreenPool to provide GreenThreads. We change this pool to a
futurist.GreenThreadPoolExecutor to have an interface where the
implementation can be swapped out to futurist.ThreadPoolExecutor to
get native threads instead.
This is an interface change for utils.spawn as it will return a
futurist.Future instead of a GreenThread. So a couple of fixes were
needed across nova to use:
* .result() instead of .wait()
* .add_done_callback() instead of .link(). Here we needed to change
  the usage as the new callback does not forward args, so we rely on
  closures instead.
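The two call-site changes above look roughly like this with a Future
(stdlib executor shown for illustration; futurist.Future offers the
same interface for these calls):

```python
from concurrent import futures

pool = futures.ThreadPoolExecutor(max_workers=1)
fut = pool.submit(lambda x: x + 1, 41)

# .result() replaces GreenThread.wait()
value = fut.result()

# .add_done_callback() replaces .link(); the callback receives only
# the future, so extra arguments are captured via a closure instead
# of being forwarded as .link() did.
results = []
tag = "migration-1"
fut.add_done_callback(lambda f: results.append((tag, f.result())))
```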
This is also an interface and a behavior change for utils.spawn_n as
it now calls utils.spawn internally. This means that on top of the
interface change detailed above there is a behavior change for
spawn_n. spawn creates a GreenThread, a wrapper around a greenlet,
while spawn_n created only the underlying greenlet. A greenlet cannot
be managed the same way as the more intelligent GreenThread: among
other things, return value handling and the whole cancellation
mechanism are missing from greenlet. After this patch spawn_n will
also use a GreenThread instead of a naked greenlet. We consider the
resulting small performance change negligible.
Also the way we implement SpawnIsSynchronousFixture in our tests is
adapted, along with other test fixture adaptations, to call / mock the
right functions.
Change-Id: I3494660e1aaa1db46f9f08494cb5817ec7020cc5
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
* Add a note explaining presence of xvpvnc console type
* Make 'url' mandatory in create response
* Remove unnecessary description fields: we will populate these later
* De-duplicate request body schemas
* Re-add references to the rdp console to the api-ref
Change-Id: I5555b8cf7a83fad689e98522850b5550b49566ed
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The RPC handler build_and_run_instance has a normal set of error
handler decorators, but the call immediately spawns a separate thread
and does not wait for its result, so the decorators cannot see any
exceptions. Therefore the error handling should happen inside the
thread. And it does. Mostly.
The _locked_do_build_and_run_instance is the function run by the
thread and it calls _do_build_and_run_instance. The latter has
`except Exception` handling that drops the active exception and
returns an error value instead. So most exceptions are handled by
_do_build_and_run_instance without going through any of the error
decorators.
Whatever exception still bubbles up from _do_build_and_run_instance
goes through the error handling decorators on top of
_do_build_and_run_instance.
The only source of exception outside of _do_build_and_run_instance but
within _locked_do_build_and_run_instance is
delete_allocation_for_instance. Exceptions from that call do not go
through any error handling decorators.
These error decorators have multiple tasks:
* wrap_exception calls _emit_legacy_exception_notification and
  _emit_versioned_exception_notification, thus sending error
  notifications
* reverts_task_state sets instance.task_state to None
* wrap_instance_fault records an InstanceFault object in the DB
So there are two cases when an error happens but the decorators are
not called:
* _do_build_and_run_instance catches the exception, does its own error
  handling and cleanup, and then returns an error value. The task
  state revert and the InstanceFault creation are replicated in the
  individual cleanup sections here.
  Note that notification sending is not missing either, as that is
  handled by _build_and_run_instance called from
  _do_build_and_run_instance.
* delete_allocation_for_instance raises when called from
  _locked_do_build_and_run_instance. We only reach this call if
  _do_build_and_run_instance already failed and did its own cleanup.
  So again the task state is already reset, the InstanceFault already
  recorded for the original fault, and the notification already sent.
So the only change here is to add a note to build_and_run_instance
explaining that the real error handling happens deeper in the stack,
so that readers will not assume it happens at the top of the call
stack.
There were two unit test cases that partially relied on the wrong
assumption that exceptions propagate to build_and_run_instance.
They could make that wrong assumption because the
SpawnIsSynchronousFixture simulates the GreenThread interface
incorrectly. The goal of that fixture is to make any spawn (and
spawn_n) call synchronous, meaning the function passed in is executed
right away, before spawn returns. That is OK. But the fixture forgot
that if the function raises then that exception is now raised by
spawn. Such a thing never happens with a normal spawn: the exception
(and the function's return value) only propagates when
GreenThread.wait() is called.
So this patch fixes the fixture and corrects the two unit tests as
well.
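The fixture fix can be sketched like this (class and function names
are illustrative, not the actual fixture code): a synchronous spawn
must capture the exception and only re-raise it from wait(),
mirroring real GreenThread semantics.

```python
class _FinishedThread:
    """Illustrative stand-in for an already-finished GreenThread."""

    def __init__(self, func, *args, **kwargs):
        self._exc = None
        self._result = None
        try:
            # run the function synchronously, right away
            self._result = func(*args, **kwargs)
        except Exception as exc:
            # defer the exception instead of letting spawn() raise
            self._exc = exc

    def wait(self):
        # exception and return value only propagate here, as with a
        # real GreenThread.wait()
        if self._exc is not None:
            raise self._exc
        return self._result

def synchronous_spawn(func, *args, **kwargs):
    # runs func immediately but never raises from the spawn itself
    return _FinishedThread(func, *args, **kwargs)
```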
Change-Id: I440fff6663d0663fb1630dd096f352403982aa37
Due to a bug in python3.13 [1] the following code leads to the dict
being emptied by the GC even though we hold a reference to it:

    import gc

    class A:
        def __init__(self, client):
            self.__dict__ = client.__dict__
            self.client = client

    class B:
        def __init__(self):
            self.test_attr = "foo"

    a = A(B())
    print(a.__dict__)
    print(a.client.__dict__)
    gc.collect()
    print("## After gc.collect()")
    print(a.__dict__)
    print(a.client.__dict__)

Output with Python 3.13:

    {'test_attr': 'foo', 'client': <__main__.B object at 0x73ea355a8590>}
    {'test_attr': 'foo', 'client': <__main__.B object at 0x73ea355a8590>}
    ## After gc.collect()
    {'test_attr': 'foo', 'client': <__main__.B object at 0x73ea355a8590>}
    {}

Output with Python 3.12:

    {'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
    {'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
    ## After gc.collect()
    {'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
    {'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}

Our neutron client has this kind of code and therefore fails on
python3.13. This patch adds __getattr__ instead of trying to hold a
direct reference to the __dict__. This seems to work around the
problem.
Co-Authored-By: Johannes Kulik <johannes.kulik@sap.com>
[1] https://github.com/python/cpython/issues/130327
Closes-Bug: #2103413
Change-Id: I87c9fbb9331135674232c6e77d700966a938b0ac
As part of this BP [1], we are going to add the
Project Manager role as a default in API policies.
This commit adds the project manager role context
to the policies unit tests so that when we change
the policy default to project manager we can
see what the changes/permissions will look like.
NOTE: This does not change any policy defaults; it
only adds the manager context to existing unit tests.
Partial implement blueprint policy-manager-role-default
[1] https://blueprints.launchpad.net/nova/+spec/policy-manager-role-default
Change-Id: Ib1622e7593108aa4fac65fcc34cb616502abed94
Signed-off-by: Ghanshyam Mann <gmaan@ghanshyammann.com>
This rewrites the core logic of scatter-gather to use
the futurist lib that provides a consistent API to use either Green
or native ThreadPoolExecutor. This way the scatter-gather code
can be made independent from the actual concurrency backend.
In this change we also create a separate executor for the scatter-gather
calls to better control it.
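The core pattern can be sketched with a stdlib executor (the function
name and per-cell error handling are illustrative; the real code uses
futurist and Nova's own timeout/sentinel handling):

```python
from concurrent import futures

def scatter_gather(executor, cells, fn, timeout=10.0):
    # scatter: one task per cell on the dedicated executor
    futs = {cell: executor.submit(fn, cell) for cell in cells}
    # gather: collect results, recording failures per cell instead of
    # letting one failing cell abort the whole operation
    results = {}
    for cell, fut in futs.items():
        try:
            results[cell] = fut.result(timeout=timeout)
        except Exception as exc:
            results[cell] = exc
    return results
```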
A new config option [DEFAULT]cell_worker_thread_pool_size is added
to define the number of threads in this ThreadPoolExecutor. For the
GreenThreadPoolExecutor case this config is ignored, as only native
threads are expensive; green threads are not.
As oslo.service uses os.fork(), and fork copies the parent process
state to the child processes, we need to make sure that the child
processes get a proper, fresh executor state, so we destroy the
executor just before the fork.
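The pre-fork cleanup can be sketched as follows (names are
illustrative; the real code hooks this into the oslo.service fork
path and uses futurist executors):

```python
from concurrent import futures

_executor = None  # module-level executor, created lazily

def get_executor():
    global _executor
    if _executor is None:
        _executor = futures.ThreadPoolExecutor(max_workers=4)
    return _executor

def destroy_executor():
    # Called just before os.fork() so children do not inherit a pool
    # whose worker threads only exist in the parent; each child then
    # lazily creates a fresh executor of its own.
    global _executor
    if _executor is not None:
        _executor.shutdown(wait=False)
        _executor = None
```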
The fixtures.SynchronousThreadPoolExecutorFixture is deleted. It was
globally mocking futurist.GreenThreadPoolExecutor, which would now
mock more than what we want. There was only one usage of it, in the
ComputeManager testing, where it is replaced with a direct change of
the manager's internal executor in the test setup to get the same,
but localized, result.
Note that testing the threading mode is done in nova-next job
in I36c68740fae3e3a9bd3286a1b66d86fd3341aff5, pool statistics
reporting will be added by Id4244f5ae0fd49c99af2898789cdd510859e150d,
and documentation about our threading tunables is in
I003177de3a9f69c71c19eb8eaa7232785e03e669.
Change-Id: Ibff6c73ad9af911a42204e53fee31ed5537c829d