This makes purge iterate over all cells if requested. This also makes our
post_test_hook.sh use the --all-cells variant with just the base config
file.
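A minimal sketch of the per-cell iteration, assuming an admin context and a purge_shadow_tables helper at the DB layer (the helper name here is illustrative, not necessarily the real entry point):

    from nova import context as nova_context
    from nova import objects

    def purge_all_cells(before=None):
        # Iterate every cell mapping and purge its shadow tables in turn.
        # purge_shadow_tables is a hypothetical stand-in for the DB-layer call.
        ctxt = nova_context.get_admin_context()
        for cell in objects.CellMappingList.get_all(ctxt):
            with nova_context.target_cell(ctxt, cell) as cctxt:
                purge_shadow_tables(cctxt, before)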
Related to blueprint purge-db
Change-Id: I7eb5ed05224838cdba18e96724162cc930f4422e
Since many people will want to fully purge shadow table data after archiving,
this adds a --purge flag to archive_deleted_rows which will automatically do
a full db purge when complete.
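A rough sketch of the intended control flow, with hypothetical helper names, showing the purge being triggered once archiving completes:

    def archive_deleted_rows(max_rows=1000, purge=False):
        # Archive soft-deleted rows into the shadow tables first.
        archived = do_archive(max_rows)       # hypothetical archive helper
        if purge:
            # --purge: remove everything from the shadow tables afterwards.
            purge_shadow_tables(before=None)   # hypothetical purge helper
        return archived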
Related to blueprint purge-db
Change-Id: Ibd824a77b32cbceb60973a89a93ce09fe6d1050d
On x86-64/q35 and aarch64/virt instances libvirt adds as many
pcie-root-port entries (aka virtual PCIe slots) as it needs, plus one
free one. If we want to hotplug network interfaces or storage devices,
we quickly run out of available PCIe slots.
This patch allows the number of PCIe slots in an instance to be
configured. The method was discussed with upstream libvirt developers.
To get the requested number of pcie-root-port entries we have to create
the whole PCIe structure, starting with pcie-root/0 and then adding as
many pcie-root-port/0 entries as we want slots. Too low a value may get
bumped by libvirt to match the number of inserted cards.
Systems not using the new option will work the same way as they did before.
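A sketch of how the guest topology might be assembled, assuming a [libvirt] num_pcie_ports option and illustrative config-object names (the actual classes in nova.virt.libvirt.config may differ):

    import nova.conf
    from nova.virt.libvirt import config as vconfig

    CONF = nova.conf.CONF

    def add_pcie_ports(guest):
        # One pcie-root controller, then as many pcie-root-port controllers
        # as slots requested; libvirt may still bump a too-low value up to
        # the number of cards actually plugged in.
        root = vconfig.LibvirtConfigGuestPCIeRootController()          # assumed name
        guest.add_device(root)
        for _ in range(CONF.libvirt.num_pcie_ports):
            port = vconfig.LibvirtConfigGuestPCIeRootPortController()  # assumed name
            guest.add_device(port)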
Implements: bp configure-amount-of-pcie-ports
Change-Id: Ic3c8761bcde3e842d1b8e1feff1d158630de59ae
This patch adds a note that the deleted rows are also permanently
removed from the instance_mappings and request_specs tables, so that
users are aware of this.
Change-Id: I183cc9f80b3feec6789332860b5aeb7591b710df
This adds a simple purge command to nova-manage. It either deletes all
archived shadow table data, or only data older than a date if one is provided.
This also adds a post-test hook to run purge after archive to validate
that it at least works on data generated by a gate run.
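A minimal sketch of the deletion itself, assuming SQLAlchemy shadow tables that carry a deleted_at column (the real command will differ in detail):

    from sqlalchemy import MetaData

    from nova.db.sqlalchemy import api as db_api

    def purge_shadow_tables(ctxt, before=None):
        # Delete everything from each shadow table, or only rows whose
        # deleted_at column is older than the supplied date.
        engine = db_api.get_engine(context=ctxt)
        metadata = MetaData(bind=engine)
        metadata.reflect()
        for table in metadata.tables.values():
            if not table.name.startswith('shadow_'):
                continue
            delete = table.delete()
            if before is not None and hasattr(table.c, 'deleted_at'):
                delete = delete.where(table.c.deleted_at < before)
            engine.execute(delete)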
Related to blueprint purge-db
Change-Id: I6f87cf03d49be6bfad2c5e6f0c8accf0fab4e6ee
If there is a request to create a snapshot of an instance and
another request to delete the instance at the same time, the
snapshot task might fail with a libvirt error, and this error
is not handled correctly by the compute manager. As a result,
a traceback was printed in the compute log.
This patch fixes it by handling the libvirt exception during live
snapshot and raising an instance-not-found exception if the libvirt
exception was raised because the domain was not found.
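A sketch of the error handling, assuming the standard libvirt-python error codes and a hypothetical snapshot helper:

    import libvirt

    from nova import exception

    def _live_snapshot_with_handling(guest, instance, *args):
        try:
            _do_live_snapshot(guest, *args)   # hypothetical snapshot helper
        except libvirt.libvirtError as ex:
            if ex.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                # The guest disappeared (e.g. a concurrent delete); surface a
                # clean InstanceNotFound instead of an unhandled traceback.
                raise exception.InstanceNotFound(instance_id=instance.uuid)
            raise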
Change-Id: I585b7b03753ed1d28a313ce443e6918687d76a8b
Closes-Bug: #1722571
We have an API for setting the admin password for an already created
instance and we have a metadata API for retrieving the encrypted
password. In the libvirt driver, when a request to set the admin
password is received, it is indeed set in the guest but the instance
system metadata is never updated with the encrypted password, so
attempts to retrieve the password via the metadata service API result
in an empty string returned instead of the encrypted password.
This has been broken in the libvirt driver since the set admin password
feature was added, as far as I can tell. The xenapi driver, however,
handles the same thing correctly, and this adds similar logic to the
libvirt driver to fix the problem.
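A sketch of the missing step, modelled on the xenapi behaviour and assuming nova.api.metadata.password.convert_password is the helper used to chunk the encrypted password into system metadata:

    from nova.api.metadata import password as password_meta

    def _save_admin_password(context, instance, encrypted_password):
        # Store the encrypted password in the instance system_metadata so
        # the metadata API can return it later, mirroring the xenapi driver.
        instance.system_metadata.update(
            password_meta.convert_password(context, encrypted_password))
        instance.save()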
Closes-Bug: #1748544
Change-Id: Icf44c4c94529cb75232abe1f3ecc5a4d3646b0cc
The file nova/api/openstack/__init__.py had imported a lot of
modules, notably nova.utils. This means that any code which
runs within that package, notably the placement service, imports
all those modules, even if it is not going to use them. This
results in scripts/binaries that are heavier than they need
to be and that in some cases include modules, like eventlet, which we
would prefer not to have in the stack.
Unfortunately we cannot simply rename nova/api/openstack/__init__.py
to another name because it contains FaultWrapper and FaultWrapper
is referred to, by package path, from the paste.ini file and that
file is out there in config land, and something we prefer not to
change. Therefore alternate methods of cleaning up were explored
and this has led to some useful changes:
FaultWrapper is the only consumer of walk_class_hierarchy, so
there is no reason for it to be in nova.utils.
nova.wsgi contains a mishmash of WSGI middleware and applications,
which need only a small number of imports, and Server classes
which are more complex and not required by the WSGI wares.
Therefore nova.wsgi was split into nova.wsgi and nova.api.wsgi.
The name choices may not be ideal, but they were chosen to limit
the cascades of changes that are needed across code and tests.
Where utils.utf8 was used it has been replaced with the similar (but not
exactly equivalent) method from oslo_utils.encodeutils.
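For illustration, the replacement looks roughly like the following; the message does not say which encodeutils method was chosen, so treat the exact call as an assumption:

    from oslo_utils import encodeutils

    def to_bytes(value):
        # Roughly what utils.utf8 did: return the value encoded as UTF-8 bytes.
        return encodeutils.safe_encode(value, encoding='utf-8')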
Change-Id: I297f30aa6eb01fe3b53fd8c9b7853949be31156d
Partial-Bug: #1743120
Add more test cases for placement.aggregates to cover some edge cases.
blueprint placement-test-enhancement
Change-Id: Ia18de50f3265b358e64523229140ce9a6e70dbbb
When a zero service version is returned, it means that we have no
services running for the requested binary. In that case, we should
assume the latest version available until told otherwise. This usually
happens in first-start cases, where everything is likely to be up to
date anyway.
This change addresses an issue where the version returned had been
hard-coded to 4.11 (mitaka).
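A sketch of the intended behaviour, with the surrounding version lookup simplified to a single function:

    from nova.objects import service as service_obj

    def effective_service_version(reported_version):
        # A reported minimum of 0 means no services of that binary are
        # running yet (typically a first start), so assume the latest
        # known version rather than a hard-coded historical one.
        if reported_version == 0:
            return service_obj.SERVICE_VERSION
        return reported_version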
Change-Id: I696a8ea8adbe9481e11407ecafd5e47b2bd29804
Closes-bug: 1753443
With the addition of multiattach we need to ensure that we
don't make brick calls to remove connections when detaching a volume
if that volume is attached to another instance on the same
node.
This patch adds a new helper method (_should_disconnect_target)
to the virt driver that will inform the caller if the specified
volume is attached multiple times to the current host.
The general strategy for this call is to fetch a current reference
of the specified volume and then:
1. Check if that volume has >1 active attachments
2. Fetch the attachments for the volume and extract the server_uuids
for each of the attachments.
3. Check the server_uuids against a list of all known server_uuids
on the current host. Increment a connection_count for each item
found.
If the connection_count is >1 we return `False` indicating that the
volume is being used by more than one attachment on the host and
we therefore should NOT destroy the connection.
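A sketch of that strategy, with the Cinder lookups reduced to simple calls and the attachment field names treated as illustrative rather than exact:

    def should_disconnect_target(volume_api, context, volume_id,
                                 host_instance_uuids):
        # host_instance_uuids: UUIDs of every instance on the current host.
        volume = volume_api.get(context, volume_id)
        attachments = volume.get('attachments') or []
        if len(attachments) < 2:
            # One attachment (or none) anywhere: safe to tear the target down.
            return True
        connection_count = sum(
            1 for attachment in attachments
            if attachment.get('server_id') in host_instance_uuids)
        # More than one attachment on this host: do NOT destroy the connection.
        return connection_count < 2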
*NOTE*
This scenario is very different than the `shared_targets`
case (for which we supply a property on the Volume object). The
`shared_targets` scenario is specifically for Volume backends that
present >1 Volumes using a single Target. This mechanism is meant
to provide a signal to consumers that locking is required for the
creation and deletion of initiator/target sessions.
Closes-Bug: #1752115
Change-Id: Idc5cecffa9129d600c36e332c97f01f1e5ff1f9f
When the doc structure was changed the location of the notification
devref also changed. This patch updates the reference to this doc in
the AssertionError emitted in the test if a new legacy notification is
introduced.
Change-Id: Iff30752bac64801ad8950eea5861d2b230f30fdf
We need this in a later change to pull volume attachment
information from cinder for the volume being detached so
that we can do some attachment counting for multiattach
volumes being detached from instances on the same host.
Change-Id: I751fcb7532679905c4279744919c6cce84a11eb4
Related-Bug: #1752115