This change simply bumps the version of os-brick required by Nova to
version 4.2.0. This is itself required by
I9ad90817648ca12f80a6b53f6ba728df15cbafab that introduces support for
rbd volumes within the HyperV driver.
After much back and forth it was decided to bump the required version
of os-brick ahead of that change so that the following requirements.txt
and lower-constraints.txt changes, caused by direct and indirect
dependency changes, could be documented clearly:
os-brick 4.2.0 depends on oslo.log>=4.4.0
\_ oslo.log 4.4.0 depends on python-dateutil>=2.7.0
os-brick 4.2.0 depends on oslo.serialization>=4.0.1
os-brick 4.2.0 depends on pbr>=5.5.0
os-brick 4.2.0 depends on oslo.privsep>=2.4.0
\_ oslo.privsep 2.4.0 depends on msgpack>=0.6.0
os-brick 4.2.0 depends on oslo.service>=2.4.0
os-brick 4.2.0 depends on tenacity>=6.2.0
os-brick 4.2.0 depends on oslo.context>=3.1.1
os-brick 4.2.0 depends on oslo.concurrency>=4.3.0
os-brick 4.2.0 depends on oslo.i18n>=5.0.1
os-brick 4.2.0 depends on six>=1.15.0
os-brick 4.2.0 depends on os-win>=5.1.0
The above changes have been tested with pip 21.0.1 to ensure the new
resolver is happy and that nothing has been missed.
Change-Id: Ic83f3c7c955d0df89d75f700ee4fe2bd7f715794
Implements: blueprint hyperv-rbd
The 'nova.exception_wrapper.wrap_exception' decorator accepted either a
pre-configured notifier or a 'get_notifier' function, but the former was
never provided and the latter was invariably a notifier created via a
call to 'nova.rpc.get_notifier'. Simplify things by passing the
arguments relied on by 'get_notifier' into 'wrap_exception', allowing
the decorator to create the notifier for us.
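The shape of the rework can be sketched roughly as follows. This is an illustrative stand-in, not Nova's actual code: the `get_notifier` stub and the notification it "emits" are hypothetical.

```python
import functools


def get_notifier(service, host=None):
    # Hypothetical stand-in for nova.rpc.get_notifier: just records who
    # would publish the notification.
    return {"publisher": "%s.%s" % (service, host or "localhost")}


def wrap_exception(service, host=None):
    """Sketch of the reworked decorator: it takes the arguments that
    'get_notifier' relies on and builds the notifier itself, instead of
    accepting a pre-built notifier or a get_notifier callable."""
    def outer(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception:
                # In Nova this is where an error notification would be
                # emitted before re-raising.
                notifier = get_notifier(service, host=host)
                print("notifying via", notifier["publisher"])
                raise
        return wrapped
    return outer


@wrap_exception(service="compute")
def risky(x):
    if x < 0:
        raise ValueError("negative")
    return x * 2
```

With this shape, callers only pass the service (and optionally the host) and never construct a notifier themselves.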
While doing this rework, it became obvious that 'get_notifier' accepted
a 'publisher_id' that is never provided nowadays, so that is dropped. In
addition, a number of calls to 'get_notifier' were passing in
'host=CONF.host', which duplicated the default value for this parameter
and is therefore unnecessary. Finally, the unit tests are split up by
file, as they should be.
Change-Id: I89e1c13e8a0df18594593b1e80c60d177e0d9c4c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The '_prepare_pci_devices_for_use' function is Xen-specific and should
have been removed in change I73305e82da5d8da548961b801a8e75fb0e8c4cf1
("libvirt: Drop support for Xen"). Remove it now.
Change-Id: Ie7dc3247a0d6526224bed1af429234cc2403a70a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The bugfix I759aa36dc00a6c0612b9755dacd9aa414c408498 ensured that nova
rejects a volume detach if the compute is down. However, the functional
test still had a short RPC timeout set, which sometimes caused the
server create to time out on a slow node. Before the bugfix the short
timeout was needed to speed up the test; after the fix it is no longer
needed, so it is removed to stabilize the test.
Change-Id: I12990eaca3820e56047e4d0e526c436fd2cfcf31
Related-Bug: #1909120
Since only Wallaby compute nodes will support the 'socket' PCI NUMA
affinity policy, this patch adds a ResourceRequest translator that adds
a required trait if the value of '(hw_|hw:)pci_numa_affinity_policy' is
'socket'.
The actual trait reporting by the libvirt driver will be added in a
future patch. Until then the 'socket' value remains a hidden no-op.
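The translation step can be sketched as below. The trait name and function shape here are illustrative assumptions, not Nova's actual ResourceRequest code.

```python
# Hypothetical trait name; the real trait is defined by os-traits.
SOCKET_TRAIT = 'COMPUTE_SOCKET_PCI_NUMA_AFFINITY'


def translate_pci_numa_affinity(flavor_extra_specs, image_props):
    """Sketch: map a 'socket' PCI NUMA affinity policy, from either the
    flavor extra spec or the image property, to a required trait."""
    policy = (flavor_extra_specs.get('hw:pci_numa_affinity_policy')
              or image_props.get('hw_pci_numa_affinity_policy'))
    required = set()
    if policy == 'socket':
        required.add(SOCKET_TRAIT)
    return required
```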
Implements: blueprint pci-socket-affinity
Change-Id: I908ff07e1107304ca5926cc04d2fdc8fe0da5ed9
This patch adds the 'socket' value to the allowed PCI NUMA affinity
policies, both to the 'hw:pci_numa_affinity_policy' flavor extra spec,
and the 'hw_pci_numa_affinity_policy' image property.
For now the new value is a no-op and remains undocumented. It will be
wired up in a subsequent patch.
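Validation of the extended value set can be sketched as follows; the function and the exact enum tuple are illustrative, not Nova's validation code.

```python
# The pre-existing policies plus the new 'socket' value.
ALLOWED_POLICIES = ('required', 'legacy', 'preferred', 'socket')


def validate_policy(value):
    """Sketch: reject any hw:pci_numa_affinity_policy value outside the
    allowed set."""
    if value not in ALLOWED_POLICIES:
        raise ValueError(
            "invalid hw:pci_numa_affinity_policy %r; allowed: %s"
            % (value, ', '.join(ALLOWED_POLICIES)))
    return value
```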
Implements: blueprint pci-socket-affinity
Change-Id: I0680d4e21f3e317ac702b55afef4c87e8acbfc3a
This patch registers for VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED and
VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED libvirt events and transforms
them to nova virt events.
This patch also extends the libvirt driver to have a driver specific
event handling function for these events instead of using the generic
virt driver event handler that passes all the existing lifecycle events
up to the compute manager.
This is part of the longer series trying to transform the existing
device detach handling to use libvirt events.
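The event translation can be sketched like this. The constant values match libvirt-python's event IDs, but the event classes and dispatch function are illustrative stand-ins for the driver-specific handling, not Nova's actual code.

```python
# Event ID values as defined by libvirt.
VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED = 15
VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED = 22


class DeviceRemovedEvent:
    """Hypothetical nova virt event for a successful device removal."""
    def __init__(self, uuid, dev):
        self.uuid = uuid
        self.dev = dev


class DeviceRemovalFailedEvent(DeviceRemovedEvent):
    """Hypothetical nova virt event for a failed device removal."""


def make_virt_event(event_id, dom_uuid, dev_alias):
    """Sketch: transform a libvirt device event into a nova virt event,
    returning None for event IDs we do not handle."""
    if event_id == VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED:
        return DeviceRemovedEvent(dom_uuid, dev_alias)
    if event_id == VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED:
        return DeviceRemovalFailedEvent(dom_uuid, dev_alias)
    return None
```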
Co-Authored-By: Lee Yarwood <lyarwood@redhat.com>
Related-Bug: #1882521
Change-Id: I92eb27b710f16d69cf003712431fe225a014c3a8
This patch adds a `socket` field to NUMACell, and the libvirt driver
starts populating it. For testing, we need to fix how fakelibvirt's
HostInfo handled sockets: it previously assumed one or more sockets
within a NUMA node, but we want the reverse - one or more NUMA nodes
within a socket.
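The corrected topology can be sketched as follows; the dataclass is a minimal stand-in for Nova's NUMACell object, and the builder mirrors the fakelibvirt fix rather than reproducing it.

```python
from dataclasses import dataclass


@dataclass
class NUMACell:
    """Minimal stand-in for nova's NUMACell object."""
    id: int
    socket: int  # the new field populated by the libvirt driver


def build_cells(total_cells, cells_per_socket):
    """Sketch: one or more NUMA cells per socket (not the reverse), so
    consecutive cells share a socket ID."""
    return [NUMACell(id=i, socket=i // cells_per_socket)
            for i in range(total_cells)]
```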
Implements: blueprint pci-socket-affinity
Change-Id: Ie4deb265f6093558ab86dc69f6ffab9da62ca15d
This is already handled by the LibvirtFixture.
Change-Id: I9135f37fe29a7f6ecf54a2c2c1019c17c9815404
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
By adding this to the 'LibvirtFixture', we ensure virtually every test
that validates libvirt behaviour will have this stubbed automatically.
Change-Id: I03febf6ad7d76c7eec818d3b16a3ef8b26dcd84c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Based on review feedback, we prefer that the exception for routed
networks not be prefilter-specific, and instead re-raise with the right
exception type in the prefilter.
Change-Id: I9ccbbf3be8efc65fe7f480ad545fb5fc70767988
As explained in the spec, in order to support routed networks, we need
to add a new scheduler pre-filter with a specific conf option
so we can then look at the nova aggregates related to the segments.
Since this pre-filter is called every time we run the scheduler, it
also means that when we move instances, we should be able to restrict
the candidates to hosts in the same related aggregate.
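The filtering idea can be sketched as below. This is a hypothetical simplification: the real pre-filter works on placement aggregates and a conf option, while here both are reduced to plain lists and a flag.

```python
def filter_hosts_by_segment_aggregates(hosts, aggregate_hosts,
                                       enabled=True):
    """Sketch: keep only candidate hosts that belong to the aggregates
    matching the requested network's segments.

    hosts: iterable of candidate host names.
    aggregate_hosts: hosts belonging to the segment-related aggregates.
    enabled: stands in for the pre-filter's conf option.
    """
    if not enabled:
        return list(hosts)
    members = set(aggregate_hosts)
    return [h for h in hosts if h in members]
```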
NOTE(sbauza): We're just setting the admin_api variable in the
integrated helpers as it's used by _get_provider_uuid_by_host()
Implements: blueprint routed-networks-scheduling
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Co-Authored-By: Balazs Gibizer <balazs.gibizer@est.tech>
Change-Id: I667f56612d7f63834863476694cb1f4c71a58302
Before adding a prefilter, we need to add new methods for asking
Neutron about the related aggregates that we then need to look up in
Placement when the user creates or moves an instance with a requested
network or a port.
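The first half of that lookup can be sketched as follows. The dict shapes stand in for Neutron API responses; the helper name is illustrative, not the actual method added to nova.network.neutron.

```python
def get_segment_ids_for_network(network_id, subnets):
    """Sketch: given a requested network, collect the segment IDs of its
    subnets; those IDs are what we would then map to placement
    aggregates.

    subnets: list of dicts with 'network_id' and (optionally)
    'segment_id' keys, mimicking Neutron subnet records.
    """
    return {s['segment_id'] for s in subnets
            if s['network_id'] == network_id and s.get('segment_id')}
```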
NOTE(sbauza): Looks like mypy doesn't like the current methods in
nova.network.neutron, so we need to fix the issues before adding this
module to mypy-files.txt.
Partially-Implements: blueprint routed-networks-scheduling
Change-Id: Ie166f3b51fddeaf916cda7c5ac34bbcdda0fd17a
Change I4d3193d8401614311010ed0e055fcb3aaeeebaed added some
additional local delete cleanup to prevent leaking of placement
allocations. The change introduced a regression in our "delete while
booting" handling as the _local_delete_cleanup required a valid
instance object to do its work and in two cases, we could have
instance = None from _lookup_instance if we are racing with a create
request and the conductor has deleted the instance record while we
are in the middle of processing the delete request.
This handles those scenarios by doing two things:
(1) Changing the _local_delete_cleanup and
_update_queued_for_deletion methods to take an instance UUID
instead of a full instance object as they really only need the
UUID to do their work
(2) Saving a copy of the instance UUID before doing another instance
lookup which might return None and passing that UUID to the
_local_delete_cleanup and _update_queued_for_deletion methods
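The two steps above can be sketched together like this; the callables stand in for _lookup_instance and _local_delete_cleanup, and the function itself is a hypothetical simplification of the delete path.

```python
def delete_instance(lookup_instance, local_delete_cleanup, instance):
    """Sketch of the race fix: save the UUID before the second lookup,
    since the conductor may delete the instance record while we process
    the delete request, leaving the lookup returning None."""
    # (2) Save a copy of the UUID up front.
    instance_uuid = instance['uuid']
    instance = lookup_instance(instance_uuid)
    if instance is None:
        # (1) The cleanup helper only needs the UUID, so it still works
        # when the instance object is gone.
        local_delete_cleanup(instance_uuid)
        return 'cleaned-up'
    return 'deleted'
```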
Closes-Bug: #1914777
Change-Id: I03cf285ad83e09d88cdb702a88dfed53c01610f8
Nova doesn't log the actual time taken to live migrate an instance,
only the number of cycles/2 it performs while checking the migration
job. If any operation inside the loop takes longer, the reported
time is wrong.
This behavior can cause some confusion when operators are
debugging issues.
This patch ensures that Nova logs the elapsed time of the
live migration.
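The intent can be sketched as a wall-clock measurement around the monitoring loop; this is an illustrative loop, not the libvirt driver's actual migration monitor.

```python
import time


def monitor_migration(is_complete, sleep=0.0):
    """Sketch: measure real elapsed time for the migration-monitoring
    loop instead of inferring it from the loop-cycle count."""
    start = time.monotonic()
    while not is_complete():
        time.sleep(sleep)
    elapsed = time.monotonic() - start
    # In Nova this elapsed value would be logged on completion.
    return elapsed
```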
Closes-Bug: #1916031
Change-Id: I1d622c28a09ddd2aa7d33fa7057b2f78dcaf97dc
The libvirt XML already contains useful configuration information, such
as instance name, flavor and image, as metadata. This change extends
that metadata to include the IP addresses of the instance's ports.
Example:
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
...
<nova:ports>
<nova:port uuid="567a4527-b0e4-4d0a-bcc2-71fda37897f7">
<nova:ip type="fixed" address="192.168.1.1" ipVersion="4"/>
<nova:ip type="fixed" address="fe80::f95c:b030:7094" ipVersion="6"/>
<nova:ip type="floating" address="11.22.33.44" ipVersion="4"/>
</nova:port>
</nova:ports>
...
</nova:instance>
</metadata>
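Generating the ports fragment can be sketched with ElementTree as below; Nova builds this via its own config objects, so this is only an illustration of the output format.

```python
import xml.etree.ElementTree as ET

NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'


def build_ports_metadata(ports):
    """Sketch: emit a <nova:ports> element for the given mapping of
    port UUID -> [(type, address, ip_version), ...]."""
    ET.register_namespace('nova', NOVA_NS)
    root = ET.Element('{%s}ports' % NOVA_NS)
    for uuid, ips in ports.items():
        port = ET.SubElement(root, '{%s}port' % NOVA_NS, uuid=uuid)
        for ip_type, addr, ver in ips:
            ET.SubElement(port, '{%s}ip' % NOVA_NS, type=ip_type,
                          address=addr, ipVersion=str(ver))
    return ET.tostring(root, encoding='unicode')
```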
Change-Id: I45f1df4935905170957c2ea2496c8a698a7464a2
blueprint: libvirt-driver-ip-metadata
Signed-off-by: Nobuhiro MIKI <nmiki@yahoo-corp.jp>
This adds two tests to cover a regression where racing create and
delete requests could result in a user receiving a 500 error when
attempting to delete an instance:
Unexpected exception in API method: AttributeError: 'NoneType' object
has no attribute 'uuid'
Related-Bug: #1914777
Change-Id: I8249c572c6f727ef4ca434843106b9b57c47e585
The "API unexpected exception" message can now be configured
by the cloud provider.
By default it continues to display the "launchpad" webpage to
report the nova bug, but it can be configured by the cloud
provider to point to a custom support page.
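The behaviour can be sketched like this; the default URL and message text here are illustrative, and in Nova the URL comes from an oslo.config option rather than a function argument.

```python
# Hypothetical default; stands in for the launchpad bug-report page.
DEFAULT_SUPPORT_URL = 'http://bugs.launchpad.net/nova/'


def unexpected_exception_message(support_url=None):
    """Sketch: build the 'unexpected API error' message, pointing at the
    operator-configured support page when one is set."""
    return ('Unexpected API Error. Please report this at %s and attach '
            'the Nova API log if possible.'
            % (support_url or DEFAULT_SUPPORT_URL))
```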
Change-Id: Ib262b91b57f832cbcc233f24f15572e1ea6803bd
Closes-Bug: #1810342
I4f551dc4b57905cab8aa005c5680223ad1b57639 introduced the environment
variable to disable the check-cherry-pick.sh script but forgot to allow
it to be passed into the pep8 tox env.
Change-Id: Ie8a672fd21184c810bfe9c0e3a49582189bf2111
This change starts to record the machine type of instances registered on
a given host during init_host or later in _configure_guest_by_virt_type
when spawning a new instance.
The machine type is recorded in the system metadata of the instance
under the ``image_hw_machine_type`` key as already used to store the
image metadata property ``hw_machine_type``. As a result no changes are
required to the code of libvirt_utils.get_machine_type, which looks up
the machine type of an instance based on its image metadata and host
config.
Functional tests are included to verify the basic behaviour of both
this new registration code and how it leads to instances being
effectively pinned to the machine type used during their initial spawn.
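The recording step can be sketched as follows; the key name matches the commit, but the helper and the plain-dict system metadata are illustrative simplifications.

```python
MT_KEY = 'image_hw_machine_type'


def record_machine_type(system_metadata, machine_type):
    """Sketch: record the machine type on first use only, so an
    already-registered instance stays pinned to the type it was
    originally spawned with."""
    system_metadata.setdefault(MT_KEY, machine_type)
    return system_metadata[MT_KEY]
```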
blueprint: libvirt-default-machine-type
Change-Id: I30c780a7729f1e7d791256bdc38d73b976c56268