These options were deprecated way back in Rocky due to buggy behavior
they introduced. We can remove them now.
Change-Id: I9266edfd4ea6315239c54ff8d91e37d197c760c0
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
None of these backends has had upstream testing in a very long time, if
ever, and their usage levels are unknown. Deprecate them now so that we
can at least remove the worst of them (UML, Xen) in the next cycle.
Change-Id: Id5b15aa846a5ddaf4ac26fe586327aef8c08c89d
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The 'architecture', 'hypervisor_type', 'hypervisor_version_requires' and
'vm_mode' image metadata properties have had new names for many cycles
now.
The example for the freshly renamed 'img_hv_requested_version' option
has been updated to use Hyper-V, since the Xen virt driver is not tested
and will likely be removed in the near future.
Change-Id: I5684d7d462d3f7cecd887216c5618139787ef5d7
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Only allow one detach/attach at a time for the same instance-port_id
pair, in order to avoid a race condition when multiple detach/attach
requests are run concurrently.
When multiple detach requests run concurrently for a specific
instance-port_id, the manager considers many of them valid because
info_cache still contains the port; info_cache is refreshed only once
the first request completes. During this window, while the first request
accomplishes the task, all subsequent requests are destined to fail and
log a warning [1] in different locations of the code, depending on the
outcome of the first request.
The issue is that all those caught requests finally run
deallocate_port_for_instance, which will unbind the port.
This may cause a race condition, because a successful attach can complete
between those unbind calls and be silently unbound, resulting in an
infrastructure/DB inconsistency.
[1] 'Detaching interface %(mac)s failed because the device is no longer found
on the guest.'
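A minimal sketch of the serialization intended here, using an
oslo.concurrency lock keyed on the instance/port pair (names below are
illustrative, not the exact implementation):

    from oslo_concurrency import lockutils

    def attach_or_detach_interface(instance, port_id, do_work):
        # Serialize attach/detach for the same instance/port pair so a
        # concurrent request cannot act on a stale info_cache and unbind
        # a port that another request has just successfully attached.
        lock_name = 'interface-%s-%s' % (instance.uuid, port_id)

        @lockutils.synchronized(lock_name)
        def _locked():
            return do_work(instance, port_id)

        return _locked()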
Closes-Bug: #1892870
Change-Id: Iea5969d0bd16dc9a6f1ba950224b0115e466ce66
Previously, the default value of num_retries for glance was 0,
meaning a request to glance was sent only once and never retried.
The neutron and cinder clients, on the other hand, default to 3 retries.
To align the retry default with those other components, change the
default value to 3.
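For example, with this change the default matches what an operator would
otherwise have to set explicitly in nova.conf:

    [glance]
    # Number of retries for failed image download/upload requests,
    # now aligned with the neutron and cinder client defaults.
    num_retries = 3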
Closes-Bug: #1888168
Change-Id: Ibbd4bd26408328b9e1a1128b3794721405631193
When an attempt to delete an instance doesn't succeed and nova retries
on the next nova-compute restart, an instance no longer existing in the
back end can lead to an uncaught exception in the vmware driver that
prevents instance deletion. This is the case if the instance had
volumes attached, because `_detach_instance_volumes()` always powers off
the instance - which cannot work if the instance doesn't exist anymore.
While the code already caught `ManagedObjectNotFoundException`, it also
needs to catch `InstanceNotFound` raised by `vm_util.get_vm_ref()` to
complete the deletion, as seen in the traceback below (which comes from a
queens codebase):
Traceback (most recent call last):
  File "/nova/compute/manager.py", line 874, in _init_instance
    self._delete_instance(context, instance, bdms)
  File "/nova/hooks.py", line 154, in inner
    rv = f(*args, **kwargs)
  File "/nova/compute/manager.py", line 2500, in _delete_instance
    self._shutdown_instance(context, instance, bdms)
  File "/nova/compute/manager.py", line 2392, in _shutdown_instance
    requested_networks)
  File "/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/nova/compute/manager.py", line 2379, in _shutdown_instance
    block_device_info)
  File "/nova/virt/vmwareapi/driver.py", line 574, in destroy
    self._detach_instance_volumes(instance, block_device_info)
  File "/nova/virt/vmwareapi/driver.py", line 536, in _detach_instance_volumes
    self._vmops.power_off(instance)
  File "/nova/virt/vmwareapi/vmops.py", line 1762, in power_off
    vm_util.power_off_instance(self._session, instance)
  File "/nova/virt/vmwareapi/vm_util.py", line 1732, in power_off_instance
    vm_ref = get_vm_ref(session, instance)
  File "/nova/virt/vmwareapi/vm_util.py", line 171, in wrapper
    return _vm_ref_cache(id, func, session, instance)
  File "/nova/virt/vmwareapi/vm_util.py", line 162, in _vm_ref_cache
    vm_ref = func(session, data)
  File "/nova/virt/vmwareapi/vm_util.py", line 1214, in get_vm_ref
    raise exception.InstanceNotFound(instance_id=uuid)
InstanceNotFound: Instance 2af34cc5-22e0-400c-8b80-f130e86027fd could not be found.
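A minimal sketch of the kind of handling this change adds (simplified;
the exact placement follows the driver code shown in the traceback):

    from oslo_log import log as logging
    from nova import exception

    LOG = logging.getLogger(__name__)

    def _detach_instance_volumes(self, instance, block_device_info):
        try:
            # Powering off fails if the VM was already removed from the
            # back end by a previous, partially completed deletion.
            self._vmops.power_off(instance)
        except exception.InstanceNotFound:
            # Nothing left to power off or detach from; let the caller
            # continue with the rest of the instance deletion.
            LOG.warning('Instance does not exist on the back end, '
                        'skipping volume detach', instance=instance)
            return
        ...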
Change-Id: I65d2f76068e4b033ffd20959c9e74c870c8aa8e0
This series implements the referenced blueprint to allow for specifying
custom resource provider traits and inventories via yaml config files.
This fourth commit adds the config option, release notes, documentation,
functional tests, and calls to the previously implemented functions in
order to load provider config files and merge them into the provider tree.
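A hedged example of what such a provider config file can look like (the
format follows the blueprint documentation; the resource class and trait
names below are placeholders):

    meta:
      schema_version: '1.0'
    providers:
      - identification:
          # Apply to this host's own compute node provider.
          uuid: $COMPUTE_NODE
        inventories:
          additional:
            - CUSTOM_EXAMPLE_RESOURCE_CLASS:
                total: 100
                reserved: 0
        traits:
          additional:
            - CUSTOM_EXAMPLE_TRAIT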
Change-Id: I59c5758c570acccb629f7010d3104e00d79976e4
Blueprint: provider-config-file
A recent release note is preventing Nova from being cloned on
Windows since the file name contains pipe characters, which are not
valid in Windows file names.
Change-Id: I373e31e3776e6733b00d5536982228b8bf97877d
When _poll_unconfirmed_resizes runs or a user tries to confirm
a resize in the API, if the source compute service is down the
migration will be stuck in "confirming" status if the confirmation
never reached the source compute. Subsequent runs of
_poll_unconfirmed_resizes will not be able to auto-confirm the
resize nor will the user be able to manually confirm the resize.
An admin could reset the status on the server to ACTIVE or ERROR
but that means the source compute never gets cleaned up since you
can only confirm or revert a resize on a server with VERIFY_RESIZE
status.
This adds a check in the API before updating the migration record
such that if the source compute service is down the API returns a
409 response as an indication to try again later.
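A minimal sketch of the shape of that check (helper and exception names
are illustrative, not the exact API code):

    from nova import exception
    from nova import objects

    def assert_source_compute_is_up(context, servicegroup_api, migration):
        # If the source compute never receives the confirm, the migration
        # would be left in "confirming" forever, so check the service is
        # up before updating the migration record.
        service = objects.Service.get_by_compute_host(
            context, migration.source_compute)
        if not servicegroup_api.service_is_up(service):
            # Translated to a 409 response by the REST API layer so the
            # user knows to retry once the source compute is back up.
            raise exception.ServiceUnavailable()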
SingleCellSimple._fake_target_cell is updated so that tests using
it can assert when a context was targeted without having to stub
nova.context.target_cell. As a result some HostManager unit tests
needed to be updated.
Change-Id: I33aa5e32cb321e5a16da51e227af2f67ed9e6713
Closes-Bug: #1855927
Well, don't actually detail them. Just note that things are incomplete.
People can read the docs for more info.
Change-Id: Ie470af3e738327c6f2800f386dbe43319f896222
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This module will be used within nova.image.glance as part of the
nova-image-download-via-rbd blueprint. As this can technically be used
by multiple virt drivers, it's time to break rbd_utils out from the
libvirt driver into a more generic place in the codebase.
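After the move, both the libvirt driver and the image download code can
import the helpers from the shared location (the path and class name
shown are assumptions about the new home, not confirmed by this message):

    # Shared RBD helpers, no longer tied to the libvirt driver.
    from nova.storage import rbd_utils

    rbd = rbd_utils.RBDDriver()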
Change-Id: I25baf5edd25d9e551686b7ed317a63fd778be533
As with previous changes, we're going to be doing some surgery to this
file shortly, so enable type hints now. These are *super* incomplete but
at least we have a starting point.
Part of blueprint add-emulated-virtual-tpm
Change-Id: Iee44ea525deb0b43ae43df3ba08c95ea8a4e317c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This series implements the referenced blueprint to allow for specifying
custom resource provider traits and inventories via yaml config files.
This third commit includes functions on the provider tree to merge
additional inventories and traits to resource providers and update
those providers on the provider tree. Those functions are not currently
being called, but will be in a future commit.
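A rough sketch of the kind of merge these functions perform (simplified;
the real code lives on the provider tree object and also handles
conflicts and validation):

    def merge_provider_config(provider_data, extra_inventories, extra_traits):
        # Layer operator-defined inventories and traits on top of what
        # the virt driver already reported, without overwriting existing
        # resource classes.
        inventories = dict(provider_data.get('inventories', {}))
        for rc, inv in extra_inventories.items():
            inventories.setdefault(rc, inv)
        traits = set(provider_data.get('traits', set())) | set(extra_traits)
        return {'inventories': inventories, 'traits': traits}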
Co-Author: Tony Su <tao.su@intel.com>
Author: Dustin Cowles <dustin.cowles@intel.com>
Blueprint: provider-config-file
Change-Id: I142a1f24ff2219cf308578f0236259d183785cff
In vSphere 7.0, VirtualDevice.key values can no longer be duplicated,
so assign a different value to each VirtualDevice.key.
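A minimal illustration of the idea: hand out a distinct placeholder key
for every device added in a reconfigure spec (the helper below is
hypothetical, not the driver's actual code):

    import itertools

    # Negative keys are placeholders assigned real values by vSphere on
    # creation, but they must be unique within a single reconfigure.
    _device_key = itertools.count(-101, -1)

    def next_device_key():
        return next(_device_key)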
Change-Id: I574ed88729d2f0760ea4065cc0e542eea8d20cc2
Closes-Bug: #1892961
What it is, why you'd want it and how you can configure it.
Part of blueprint add-emulated-virtual-tpm
Change-Id: I8e52a397bca8f09e6aaa6cab44eee7dded529c55
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Previous patches added support for parsing the vTPM-related flavor extra
specs and image metadata properties, the necessary integrations with the
Castellan key manager API etc. This change adds the ability to enable
support in the libvirt driver and create guests with vTPM functionality
enabled. Cold migration and resize are not yet supported. These will be
addressed in follow-on changes.
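For reference, the emulated TPM ends up expressed in the guest's libvirt
domain XML along these lines (illustrative snippet, not the literal
driver output; the secret UUID is a placeholder):

    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'>
        <encryption secret='00000000-0000-0000-0000-000000000000'/>
      </backend>
    </tpm>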
Functional tests are included. These require expansion of the
fakelibvirt stubs to implement basic secret management.
Part of blueprint add-emulated-virtual-tpm
[1] https://review.opendev.org/686804
Change-Id: I1ff51f608b85dbb621814e70079ecfdd3d1a1d22
Co-Authored-By: Eric Fried <openstack@fried.cc>
Co-Authored-By: Stephen Finucane <stephenfin@redhat.com>
The VIR_MIGRATE_PARAM_PERSIST_XML parameter was introduced in libvirt
v1.3.4 and is used to provide the new persistent configuration for the
destination during a live migration:
https://libvirt.org/html/libvirt-libvirt-domain.html#VIR_MIGRATE_PARAM_PERSIST_XML
Without this parameter the persistent configuration on the destination
will be the same as the original persistent configuration on the source
when the VIR_MIGRATE_PERSIST_DEST flag is provided.
As Nova does not currently provide the VIR_MIGRATE_PARAM_PERSIST_XML
param but does provide the VIR_MIGRATE_PERSIST_DEST flag this means that
a soft reboot by Nova of the instance after a live migration can revert
the domain back to the original persistent configuration from the
source.
Note that this is only possible in Nova as a soft reboot actually
results in the virDomainShutdown and virDomainLaunch libvirt APIs being
called, which recreate the domain using the persistent configuration.
virDomainReboot does not do this, but it is not called at this time.
The impact of this on the instance after the soft reboot is pretty
severe: host devices referenced in the original persistent configuration
on the source may not exist or could even be used by other users on the
destination. CPU and NUMA affinity could also differ drastically between
the two hosts, resulting in the instance being unable to start, etc.
As MIN_LIBVIRT_VERSION is now > v1.3.4 this change simply includes the
VIR_MIGRATE_PARAM_PERSIST_XML param using the same updated XML for the
destination as is already provided to VIR_MIGRATE_PARAM_DEST_XML.
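A hedged sketch of what this looks like at the libvirt-python level
(simplified; Nova drives this through its own guest and migration
wrappers):

    import libvirt

    def live_migrate(dom, dest_uri, updated_xml):
        params = {
            # XML used for the running domain on the destination.
            libvirt.VIR_MIGRATE_PARAM_DEST_XML: updated_xml,
            # Without this, VIR_MIGRATE_PERSIST_DEST persists the original
            # source configuration, which a later soft reboot resurrects.
            libvirt.VIR_MIGRATE_PARAM_PERSIST_XML: updated_xml,
        }
        flags = (libvirt.VIR_MIGRATE_LIVE
                 | libvirt.VIR_MIGRATE_PEER2PEER
                 | libvirt.VIR_MIGRATE_PERSIST_DEST)
        dom.migrateToURI3(dest_uri, params, flags)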
Co-authored-by: Tadayoshi Hosoya <tad-hosoya@wr.jp.nec.com>
Closes-Bug: #1890501
Change-Id: Ia3f1d8e83cbc574ce5cb440032e12bbcb1e10e98
No need for the libvirt driver in all its complexity here.
Change-Id: Ifea9a15fb01c0b25e9973024f4f61faecc56e1cd
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
We're going to gradually introduce support for the various instance
operations when using vTPM, due to the complications of having to worry
about the state of the vTPM device on the host. Add API checks to
reject each of these operations until support for it has been added.
With this change, the upcoming patch to turn everything on will
allow a user to create, delete and reboot an instance with vTPM, while
evacuate, rebuild, cold migration, live migration, resize, rescue and
shelve will not be supported immediately.
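A rough sketch of the shape of these checks (the extra spec and image
property names are the ones used for vTPM, but the helper itself is
illustrative, not the actual API code):

    import webob.exc as exc

    def reject_vtpm_unsupported_operation(flavor, image_meta, operation):
        # Until the driver can manage the vTPM device's host-side state
        # for these operations, fail the request up front with a 409.
        vtpm_requested = (
            flavor.get('extra_specs', {}).get('hw:tpm_model') or
            image_meta.get('properties', {}).get('hw_tpm_model'))
        if vtpm_requested:
            raise exc.HTTPConflict(
                explanation='Operation %s is not yet supported for '
                            'instances with an emulated TPM' % operation)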
While we're here, we rename two unit test files so that their names
match the files they are testing and one doesn't have to spend time
finding them.
Change-Id: I3862a06ca28b383d525bcc9dcbc6fb1d4062f193
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>