In the contributor documentation, don't reference specific
distribution versions since they inevitably get out of date. These
instructions have been valid for all releases in at least the last
five years and are not likely to change any time soon.
Change-Id: I7e7391a8850cf8a9dda763d9b85242fbbbb42af7
fractions.gcd is deprecated starting in Python 3.5, so this change
uses math.gcd on py3 and fractions.gcd on py2.
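The compatibility import can be sketched like this (a minimal sketch
of the approach, not the exact diff):

```python
try:
    from math import gcd  # Python >= 3.5
except ImportError:
    from fractions import gcd  # Python 2, where fractions.gcd exists

# gcd now refers to the right implementation on either interpreter
result = gcd(12, 18)
```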
Change-Id: Ib3dd924e967bc9b48d81dc81e1fcdeba0120985c
During live migration we update bdm.connection_info for attached volumes
in pre_live_migration to reflect the new connection on the destination
node. This means that after migration completes the BDM no longer has a
reference to the original connection_info to do the detach on the source
host. To address this, change I3dfb75eb added a second call to
initialize_connection on the source host to re-fetch the source host
connection_info before calling disconnect.
Unfortunately the cinder driver interface does not strictly require that
multiple calls to initialize_connection will return consistent results.
Although they normally do in practice, there is at least one cinder
driver (delliscsi) which doesn't. This results in a failure to
disconnect on the source host post migration.
This change avoids the issue entirely by fetching the BDMs prior to
modification on the destination node. As well as working around this
specific issue, it also avoids a redundant cinder call in all cases.
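The ordering fix can be sketched as follows. This is a hypothetical
illustration, not the actual nova code: the function and parameter
names are stand-ins, and the point is only that the BDMs are
snapshotted before pre_live_migration mutates their connection_info.

```python
import copy

def live_migrate(bdms, pre_live_migration, post_live_migration):
    # Snapshot the BDMs, including the original source connection_info,
    # *before* pre_live_migration rewrites them for the destination.
    source_bdms = copy.deepcopy(bdms)
    pre_live_migration(bdms)             # updates connection_info for dest
    post_live_migration(source_bdms)     # detach on source with original info
    return source_bdms
```

With this ordering there is no need to call initialize_connection a
second time on the source host to reconstruct the original data.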
Note that this massively simplifies post_live_migration in the libvirt
driver. The complexity removed was concerned with reconstructing the
original connection_info. This required considering the cinder v2 and v3
use cases, and reconstructing the multipath_id which was written to
connection_info by the libvirt fibrechannel volume connector on
connection. These things are not necessary when we just use the original
data unmodified.
Other drivers affected are Xenapi and HyperV. Xenapi doesn't touch
volumes in post_live_migration, so is unaffected. HyperV did not
previously account for differences in connection_info between source and
destination, so was likely previously broken. This change should fix it.
Closes-Bug: #1754716
Closes-Bug: #1814245
Change-Id: I0390c9ff51f49b063f736ca6ef868a4fa782ede5
In tox versions after 3.0.0rc1 [1], setting the environment variable
PYTHONDONTWRITEBYTECODE causes tox not to write .pyc files, which
means they no longer need to be deleted, making things faster.
Older tox versions ignore the env var.
If we bump the minimum tox version to something later than 3.0.0rc1, we
can remove the commands that find and remove .pyc files.
[1] https://github.com/tox-dev/tox/commit/336f4f6bd8b53223f940fc5cfc43b1bbd78d4699
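The tox.ini side of this is a one-line setenv entry (sketch of the
shape of the change):

```ini
[testenv]
setenv =
    PYTHONDONTWRITEBYTECODE=1
```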
Change-Id: I779a17afade78997ab084909a9e6a46b0f91f055
This is really because I wanted to be able to copy/paste from
object_hashes.txt in a subsequent patch and didn't want to introduce
unrelated ordering changes.
Change-Id: I064a52ebd17488334f4ecb88eaae69703a101ae6
Starting in Pike, we disallowed trying to update (enable/disable/force down)
non-nova-compute services, both because multi-cell support uses host mappings
to look up service records and simply because disabling non-compute services
doesn't do anything.
However, before microversion 2.53, the error the user gets back is confusing:
HTTP exception thrown: Host 'p024.domain.com' is not mapped to any cell
This change provides a useful error message in this case and also changes
the 404 response to a 400 response to align with the type of error and the
behavior of the 2.53 microversion.
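The shape of the API-side check can be sketched like this. This is a
hypothetical illustration (HTTPBadRequest and update_service are
stand-ins, not nova's actual classes): reject non-nova-compute
services with a 400 before the cell-mapping lookup can produce the
confusing 404.

```python
class HTTPBadRequest(Exception):
    """Stand-in for a 400 response."""

def update_service(host, binary):
    if binary != "nova-compute":
        # Fail early with a useful message instead of letting the
        # host-mapping lookup 404 with "not mapped to any cell".
        raise HTTPBadRequest(
            "Updating a %s service is not supported. Only "
            "nova-compute services can be updated." % binary)
    return "updated %s on %s" % (binary, host)
```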
Change-Id: I44f09aec60b0b18c458f9ba6d8b725db962e9cc7
Closes-Bug: #1805164
This addresses review comments from the following changes:
I61a3e8902a891bac36911812e4e7c080570e3850
I48e6db9693e470b177bf4c75211d8b883c768433
Ic70d2bb781b6a844849a5cf2fe4d271b5a81093d
I5a956513f3485074023e027430cc52ee7a3f92e4
Ica6152ccb97dce805969d964d6ed032bfe22a33f
Part of blueprint bandwidth-resource-provider
Change-Id: Idffaa6d206cda3f507e6be095356537f22302ad7
DriverVolumeBlockDevice will delete the volume attachment when attach
fails; see:
https://github.com/openstack/nova/blob/907c7d2cf/nova/virt/block_device.py#L561-L568
However, nova.compute.manager will delete it again, which raises a
VolumeAttachmentNotFound exception. This produces a misleading error
log; the exception should be ignored.
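The fix amounts to tolerating the double delete. A hypothetical
sketch (the exception and helper names are stand-ins for the nova
internals):

```python
class VolumeAttachmentNotFound(Exception):
    """Stand-in for nova's exception."""

def delete_attachment(attachment_id, store):
    # Simulates cinder deleting an attachment record.
    if attachment_id not in store:
        raise VolumeAttachmentNotFound(attachment_id)
    store.remove(attachment_id)

def cleanup(attachment_id, store):
    try:
        delete_attachment(attachment_id, store)
    except VolumeAttachmentNotFound:
        # Already cleaned up by DriverVolumeBlockDevice on attach
        # failure; safe to ignore rather than log an error.
        pass
```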
Change-Id: I939c09e5b0efb3b17a9855af227e6d60c64d23e2
Closes-Bug: #1812969
Add a new microversion that removes support for the aforementioned
argument, which cannot be adequately guaranteed in the new placement
world.
Change-Id: I2a395aa6eccad75a97fa49e993b0300bdcfc7258
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Implements: blueprint remove-force-flag-from-live-migrate-and-evacuate
APIImpact
This patch lays the groundwork for supporting the
``all-tenants`` filter.
Related to blueprint handling-down-cell
Change-Id: I7dcef20aed0178c81d6580aa9534288eaa383dab
This patch adds the plumbing required to ignore the value of the
``[api]/list_records_by_skipping_down_cells`` config starting with the
new microversion if cell_down_support is True. The config is
considered only if cell_down_support is False, in which case we
look at ``[api]/list_records_by_skipping_down_cells`` and
accordingly either skip the records from the down cells or generate an
API exception as the response.
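The decision logic described above can be sketched like this
(a hypothetical illustration; the function name and return values are
stand-ins, not nova's actual code):

```python
def down_cell_strategy(cell_down_support, skip_down_cells):
    if cell_down_support:
        # New microversion: config is ignored; minimal records are
        # built for instances in down cells.
        return "minimal-records"
    if skip_down_cells:
        # Honor [api]/list_records_by_skipping_down_cells.
        return "skip"
    return "api-error"  # raise an API exception instead
```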
The description of list_records_by_skipping_down_cells is updated
in Id9f12532897912b39093f63e9286540d9029edeb when the microversion
is added.
Related to blueprint handling-down-cell
Change-Id: Icbe27c941c9b934f8f1894e9b9da1d34f047e942
1) Change it to accommodate querying it for 'None' project_ids
in the "--all-tenants" case.
2) If the online data migration for populating queued_for_delete
has not been run for some reason, the values could be NULL
in the database for instance_mapping.queued_for_delete. Under
such circumstances, we assume that mappings with a NULL
queued_for_delete have *not* been queued for deletion.
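The NULL handling in point 2 reduces to treating None as False, e.g.
(an illustrative helper, not the actual nova code):

```python
def is_queued_for_delete(db_value):
    # A NULL (None) value means the online data migration has not
    # populated the column; treat the mapping as not queued.
    return bool(db_value)
```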
Related to blueprint handling-down-cell
Change-Id: I80a65bc026e26a272a9dc041b27f9839511db765
Now that os_traits provides symbols for each trait, we can refer to
traits by symbol rather than by string. This gives us compile-time
checking for free, in order to ensure that nova's use of traits lines
up with what os_traits provides.
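The benefit can be illustrated as follows (the registry here is a
stand-in; os_traits exposes constants such as HW_CPU_X86_AVX2 whose
value is the trait string):

```python
HW_CPU_X86_AVX2 = "HW_CPU_X86_AVX2"  # as provided by os_traits

# Referencing the symbol means a typo (e.g. HW_CPU_X86_AVX22) raises
# NameError immediately, whereas a misspelled string literal would
# only fail at runtime, if it is validated at all.
required_traits = [HW_CPU_X86_AVX2]
```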
Change-Id: Id1461c444f0f67b29e0a6a10181267ef1d1d8bc0
This patch adds two new fields to the RequestGroup ovo, requester_id and
provider_uuids. These two fields are needed to be able to hold and
communicate the mapping between the requester of the RequestGroup (e.g.
Neutron port) and the resource providers that are fulfilling the request
(e.g. network device RPs). If the RequestGroup represents the unnumbered
group then more than one RP can fulfill the request, hence provider_uuids
is a list.
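A simplified stand-in for the two new fields (not the real o.vo
definition, which uses nova's object framework):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RequestGroup:
    # The requester of the group, e.g. a Neutron port UUID.
    requester_id: Optional[str] = None
    # The RPs fulfilling the request; a list because the unnumbered
    # group can be fulfilled by more than one provider.
    provider_uuids: List[str] = field(default_factory=list)
```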
Later in the series these new fields will be populated based on logic
in the nova-conductor. However, in the long run we expect that
these fields will be populated from the Placement allocation
candidates response.
blueprint bandwidth-resource-provider
Change-Id: Ic4735f92542e5e0ca36b459874dc486f6b360317
This patch collects the resource requests from each neutron port
involved in a server create request, converts each request to
a RequestGroup object, and includes them in the RequestSpec.
This way the requests reach the scheduler, where
they are included in the generation of the allocation_candidates
query.
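The conversion can be sketched like this. The "resources" and
"required" keys match the Neutron port resource_request format;
to_request_group and the dict it returns are illustrative stand-ins,
not the actual nova helper:

```python
def to_request_group(port):
    rr = port["resource_request"]
    return {
        "requester_id": port["id"],          # ties the group to the port
        "resources": rr["resources"],        # e.g. bandwidth resource classes
        "required_traits": rr["required"],   # e.g. physnet / vnic-type traits
    }
```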
This patch only handles the happy path of a server create request, but
it adds a couple of TODOs in places where the server move operation
code paths need to be implemented. That implementation will be
part of subsequent patches.
Note that this patch technically makes it possible to boot a server with
one neutron port that has a resource request. But it does not handle
multiple such ports, SRIOV ports where two PFs support the
same physnet, or many server lifecycle operations like resize,
migrate, live-migrate, and unshelve. To avoid possible resource allocation
inconsistencies due to the partial support, nova rejects any request
that involves such ports. See the previous patches in this patch
series for details.
Also note that the simple boot cases are verified with functional tests,
and in those tests we need to mock out the above-described logic that
rejects such requests. See more background about this approach on the
ML [1].
[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001129.html
blueprint bandwidth-resource-provider
Change-Id: Ica6152ccb97dce805969d964d6ed032bfe22a33f