For the xenapi driver, there needs to be some way to delete cached
images based on when they were created. Add an optional argument to
control the delete operation.
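A minimal sketch of what age-based cleanup could look like (the
helper name and the age argument are hypothetical, not the actual
patch):

    import os
    import time

    def cleanup_cached_images(cache_dir, max_age_seconds=None):
        """Delete cached images, optionally only old ones."""
        now = time.time()
        for name in os.listdir(cache_dir):
            path = os.path.join(cache_dir, name)
            # With no age limit, delete everything in the cache.
            if (max_age_seconds is None
                    or now - os.path.getctime(path) > max_age_seconds):
                os.unlink(path)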
Change-Id: I24fc45e989aa951aab55a261fce77f7e3667d988
Closes-bug: 1481689
Reasoning:
- Setting custom properties on glance images allows us to select the
type of disk bus, e.g. VIRTIO/IDE/SCSI.
Although the SATA disk bus works perfectly for qemu/kvm, it is not
allowed due to a check in virt/libvirt/blockinfo.py:
is_disk_bus_valid_for_virt (sketched below).
- Some custom Linux images require the SATA bus rather than any of
the buses that are currently allowed.
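For context, the check in question is shaped roughly like this (a
simplified sketch of nova/virt/libvirt/blockinfo.py; the exact bus
lists are illustrative):

    def is_disk_bus_valid_for_virt(virt_type, disk_bus):
        # Each hypervisor type accepts only a whitelist of disk
        # buses; adding 'sata' for qemu/kvm is what this change does.
        valid_bus = {
            'qemu': ['virtio', 'scsi', 'ide', 'usb', 'fdc', 'sata'],
            'kvm': ['virtio', 'scsi', 'ide', 'usb', 'fdc', 'sata'],
            'xen': ['xen', 'ide'],
        }
        if virt_type not in valid_bus:
            return False
        return disk_bus in valid_bus[virt_type]

Users then pick the bus with an image property such as
hw_disk_bus=sata.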
Change-Id: Ie32ff7acf31d80d4fc1adbeadaaf30a886d10e49
Closes-Bug: #1686136
An OSError puts the instance into the ERROR state; raising
MigrationPreCheckError instead leaves the instance status unchanged.
Also, modify some test cases to make unit testing easier.
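A minimal sketch of the pattern (assuming the pre-check touches a
test file on shared storage; the function name is illustrative):

    import os

    from nova import exception

    def check_shared_storage_test_file(filename):
        try:
            os.utime(filename, None)
        except OSError as e:
            # Raising MigrationPreCheckError instead of letting
            # OSError escape keeps the instance out of ERROR state.
            raise exception.MigrationPreCheckError(reason=str(e))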
Closes-Bug: 1694636
Change-Id: I3286c32ca205ffd2d5d1aaab88cc96699476e410
Reviews of other patches in this series raised the criticism that the
term 'allocation request' was being used, when the proper name for it
should be 'allocation_request'. I've broken these simple corrections
into a separate patch so as not to clutter up the other patches.
Blueprint: return-alternate-hosts
Change-Id: Idd2b9e3b0000fa8eeb2e0e9c3337b1d99b13ada7
The nova.cmd.status.UpgradeCommands._check_placement docstring claimed
that the method checks that compute nodes are registered in placement.
It doesn't. Removed that sentence.
Change-Id: Id8c99f3c8410aed43a77faeebb1f1eee53b24487
The new Cinder attach/detach flow stores the connection_info for each
attachment. This patch makes refresh_connection_info use
attachment_get, as opposed to initialize_connection, when the new flow
has been used to create the attachment.
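A simplified sketch of the branching (the argument plumbing is
illustrative; the real helper lives in nova/virt/block_device.py):

    def refresh_connection_info(context, bdm, instance, volume_api,
                                virt_driver):
        if bdm.attachment_id:
            # New flow: connection_info was stored on the Cinder
            # attachment when it was created, so just fetch it.
            attachment = volume_api.attachment_get(
                context, bdm.attachment_id)
            connection_info = attachment.get('connection_info', {})
        else:
            # Legacy flow: ask Cinder to initialize the connection.
            connector = virt_driver.get_volume_connector(instance)
            connection_info = volume_api.initialize_connection(
                context, bdm.volume_id, connector)
        bdm.connection_info = connection_info
        bdm.save()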
Co-authored-by: Matt Riedemann <mriedem.os@gmail.com>
Change-Id: I9a62b915fc0b9406613c14434793eb52e602df1e
Add a 'delete_host' command to 'nova-manage cell_v2'.
Add an optional 'force' option to 'nova-manage cell_v2 delete_cell'.
When the 'force' option is specified, a cell can be deleted even if
the cell still has hosts.
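The force handling amounts to something like this (a sketch with
simplified return codes):

    from nova import context
    from nova import objects

    def delete_cell(cell_uuid, force=False):
        ctxt = context.get_admin_context()
        cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid)
        host_mappings = objects.HostMappingList.get_by_cell_id(
            ctxt, cell_mapping.id)
        if host_mappings and not force:
            # Refuse to delete a cell that still has hosts unless
            # the operator forces it.
            print('There are existing hosts mapped to cell %s.'
                  % cell_uuid)
            return 4
        for host_mapping in host_mappings:
            host_mapping.destroy()
        cell_mapping.destroy()
        return 0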
Change-Id: I8cd7826c2c03687c6b85731519778f09d542b236
Closes-Bug: #1721179
When ironic updates the instance.flavor to require the new custom
resource class, we really need the allocations to get updated. The
easiest way to do that is to make the resource tracker keep updating
allocations for the ironic virt driver. This can be dropped once the
transition to custom resource classes is complete.
If we did not claim the extra resources, placement would pick nodes
that already have instances running on them when you boot an instance
with a flavor that only requests the custom resource class. All ironic
flavors should request the custom resource class before the upgrade to
Queens is performed.
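For reference, an ironic flavor that requests only the custom
resource class does so via extra specs along these lines (the class
name is illustrative):

    # The node's resource class 'baremetal-gold' is exposed in
    # placement as CUSTOM_BAREMETAL_GOLD.
    extra_specs = {
        'resources:CUSTOM_BAREMETAL_GOLD': '1',
        # Zero out the standard resources so placement does not also
        # require VCPU/MEMORY_MB/DISK_GB inventory.
        'resources:VCPU': '0',
        'resources:MEMORY_MB': '0',
        'resources:DISK_GB': '0',
    }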
Closes-Bug: #1724589
Change-Id: Ibbf65a8d817d359786abcdffa6358089ed1107f6
The serial console feature is little known and a bit confusing at
first. This change adds a doc to explain it better.
Change-Id: Ia5a336694aec95db29545e31b2c6b364dd825a15
This change adds a new 'allocations' kwarg to the
nova.virt.driver.ComputeDriver.spawn method and its overrides. The
value is the 'allocations' member of the dict returned from GET
/allocations/{instance_uuid}, representing the resources allocated to
the instance from the scheduler via placement.
This provides the virt driver with a way to know exactly which inventory
has been allocated to an instance by placement, from which resource
providers. Previously, virt drivers would process the flavor to glean
this information. This works fine as long as there's one monolithic
resource provider - the compute host - providing all the inventory in
a known fashion. However, as we expand to using nested and shared
resource providers, it becomes necessary for the virt driver to know
that e.g. CPU inventory has been allocated from a specific NUMA node; or
e.g. an SR-IOV VF has been allocated from a specific PF. Placement
allows such allocations to be represented generically; this change
makes the virt driver aware of them.
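Sketched, the new signature and the shape of the dict are roughly as
follows (the provider UUID and amounts are illustrative):

    class ComputeDriver(object):
        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, allocations, network_info=None,
                  block_device_info=None):
            # 'allocations' is keyed by resource provider UUID:
            # {
            #     '4e8e5957-649f-477b-9e5b-f1f75b21c03c': {
            #         'resources': {'VCPU': 2,
            #                       'MEMORY_MB': 4096,
            #                       'DISK_GB': 40},
            #     },
            # }
            raise NotImplementedError()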
Change-Id: I00eab47edf1150788777300680e853a872c1db40
Add a new filesystem mounting helper in privsep, and then start
moving things across to it. This currently implements mount and
unmount. We get to clean up some rmdir calls while we're at it,
which is nice as well.
I've added an upgrade note mentioning that we no longer ignore
the value of stderr from mount calls, as requested in code review.
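The helper's shape, simplified (error handling elided):

    from oslo_concurrency import processutils

    import nova.privsep

    @nova.privsep.sys_admin_pctxt.entrypoint
    def mount(fstype, device, mountpoint, options):
        # Runs with elevated privileges via privsep.
        mount_cmd = ['mount']
        if fstype:
            mount_cmd.extend(['-t', fstype])
        if options is not None:
            mount_cmd.extend(options)
        mount_cmd.extend([device, mountpoint])
        return processutils.execute(*mount_cmd)

    @nova.privsep.sys_admin_pctxt.entrypoint
    def umount(mountpoint):
        processutils.execute('umount', mountpoint, attempts=3,
                             delay_on_retry=True)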
Change-Id: Ib5e585fa4bfb99617cd3ca983674114d323a3cce
blueprint: hurrah-for-privsep
With cinder v3, the new live migration flow is to
create new volume attachments on the destination host
during pre_live_migration, and then after a successful
live migration, delete the attachments on the source host.
If the live migration fails, the attachments on the
destination host will be deleted.
A new dictionary, old_vol_attachment_ids, is added (without
persistence) to migrate_data. This will store the original
attachment_ids of the source host's attachments. If the migration
fails, the attachment_id in the bdm will be replaced with the old
attachment_id.
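Roughly, on the destination host during pre_live_migration (the
function name and argument plumbing are illustrative):

    def create_new_attachments(context, volume_api, instance, bdms,
                               migrate_data):
        migrate_data.old_vol_attachment_ids = {}
        for bdm in bdms:
            if bdm.attachment_id is None:
                continue  # old-style attachment; nothing to do
            attachment = volume_api.attachment_create(
                context, bdm.volume_id, instance.uuid)
            # Remember the source host's attachment id for rollback.
            migrate_data.old_vol_attachment_ids[bdm.volume_id] = (
                bdm.attachment_id)
            bdm.attachment_id = attachment['id']
            bdm.save()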
An existing unit test, test_pre_live_migration_handles_dict, is
updated to avoid an exception thrown by the new code. A fake
instance is now used instead of a simple dict.
Six new unit tests are added.
Partially Implements: blueprint cinder-new-attach-apis
Depends-On: I9a62b915fc0b9406613c14434793eb52e602df1e
Change-Id: I0bfb11296430dfffe9b091ae7c3a793617bd9d0d
We add a has_inventory_changed() and an update_inventory() method to
the nova.compute.provider_tree.ProviderTree object. These methods will
be used by both the resource tracker and the virt driver when they need
to set inventory information for a particular resource provider.
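Typical usage would look something like this (inventory values are
illustrative; generation handling is elided):

    inventory = {
        'VCPU': {
            'total': 8,
            'reserved': 0,
            'min_unit': 1,
            'max_unit': 8,
            'step_size': 1,
            'allocation_ratio': 16.0,
        },
    }
    # Only push inventory to placement when it actually changed.
    if provider_tree.has_inventory_changed(rp_uuid, inventory):
        provider_tree.update_inventory(rp_uuid, inventory)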
Change-Id: Ieea566e273fd26c8321e1183ffd0e55aa6b00c55
blueprint: nested-resource-providers
Restarting a compute service properly in the functional test
environment is tricky. During a recent bugfix a util function was
introduced to do the restart. This patch moves the util to the base
class and extends its usage to more functional tests to make them more
realistic.
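The helper is along these lines (a sketch; the exact signature in
the base class may differ):

    def restart_compute_service(self, compute):
        # Stop the running fake compute service and start a new one
        # with the same host name so it goes through init_host again.
        host = compute.host
        compute.stop()
        return self.start_service('compute', host=host)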
Change-Id: I17f67a02b27a90658df48856963ea3fb327e81dc