UEFI support in the VMware driver has been added with commit fc0c6d2.
This patch fixes the support matrix to reflect this.
Change-Id: I8b08e11ae4dd7f1101758b29ae3424d790b26ed1
When online_data_migrations raise exceptions, nova/cinder-manage catches
the exceptions, prints fairly useless "something didn't work" messages,
and moves on. Two issues:
1) The user(/admin) has no way to see what actually failed (exception
detail is not logged)
2) The command returns exit status 0, as if all possible migrations have
been completed successfully - this can cause failures to be missed,
especially when the command is automated
This change adds logging of the exceptions, and introduces a new exit
status of 2, which indicates that no updates took effect in the last
batch attempt, but some are (still) failing, which requires intervention.
Change-Id: Ib684091af0b19e62396f6becc78c656c49a60504
Closes-Bug: #1796192
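A minimal sketch of the intended exit-status behavior (the migration
callables and function names here are hypothetical, not nova's actual
code):

```python
import logging

LOG = logging.getLogger(__name__)


def run_migrations(migrations, max_count=50):
    """Run each migration once; return an exit status.

    0 -> everything that could be migrated was migrated
    2 -> this batch made no progress but some migrations are
         still failing, which requires operator intervention
    """
    made_progress = False
    failures = False
    for migrate in migrations:
        try:
            found, done = migrate(max_count)
        except Exception:
            # Log the exception detail instead of just printing a
            # "something didn't work" message and moving on.
            LOG.exception('Error attempting migration %s',
                          getattr(migrate, '__name__', migrate))
            failures = True
            continue
        if done:
            made_progress = True
    if failures and not made_progress:
        return 2
    return 0
```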
Add a new microversion 2.67 to support specifying ``volume_type``
when booting instances.
Part of bp boot-instance-specific-storage-backend
Change-Id: I13102243f7ce36a5d44c1790f3a633703373ebf7
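For illustration, a boot-from-volume request body sketch; the new key
is ``volume_type``, while every other value here is a made-up example
following the existing block_device_mapping_v2 fields:

```python
# Boot-from-volume request body; "volume_type" is new in 2.67.
create_server_body = {
    "server": {
        "name": "bfv-server",
        "flavorRef": "1",
        "block_device_mapping_v2": [{
            "boot_index": 0,
            "source_type": "image",
            "destination_type": "volume",
            "uuid": "<image-uuid>",
            "volume_size": 10,
            "volume_type": "fast-ssd",  # new in microversion 2.67
        }],
    },
}
# The request must be made at microversion 2.67 or later:
headers = {"OpenStack-API-Version": "compute 2.67"}
```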
This patch implements live migration of instances across compute nodes.
Each compute node must manage a cluster in the same vCenter, and the
ESX hosts must have vMotion enabled [1].
If the instance is located on a datastore shared between source
and destination cluster, then only the host is changed. Otherwise, we
select the most suitable datastore on the destination cluster and
migrate the instance there.
[1] https://kb.vmware.com/s/article/2054994
Co-Authored-By: gkotton@vmware.com
blueprint vmware-live-migration
Change-Id: I640013383e684497b2d99a9e1d6817d68c4d0a4b
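The datastore selection rule above can be sketched like this (a
simplification; the names and the free-space heuristic are
assumptions, not the driver's actual code):

```python
def plan_live_migration(instance_ds, dest_host, dest_datastores):
    """Pick a (host, datastore) pair for the migration.

    instance_ds: name of the datastore the instance lives on
    dest_datastores: mapping of datastore name -> free space (GB)
                     accessible from the destination cluster
    """
    if instance_ds in dest_datastores:
        # Shared datastore: only the host changes.
        return dest_host, instance_ds
    # Otherwise pick the "most suitable" destination datastore;
    # here, simply the one with the most free space.
    target = max(dest_datastores, key=dest_datastores.get)
    return dest_host, target
```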
The placement API version 1.28 introduced consumer generation as a way
to make updating allocation safe even if it is done from multiple
places.
This patch changes delete_allocation_for_instance to use PUT
/allocations instead of DELETE /allocations to benefit from the consumer
generation handling.
In this patch the report client will GET the current allocation of the
instance including the consumer generation and then try to PUT an empty
allocation with that generation. If this fails due to a consumer
generation conflict, meaning something modified the allocation of the
instance between the GET and the PUT, then the report client will
raise an AllocationDeleteFailed exception. This will cause the
instance to go to the ERROR state.
This patch only detects a small portion of possible cases when
allocation is modified outside of the delete code path. The rest can
only be detected if nova cached at least the consumer generation
of the instance.
To be able to put the instance into the ERROR state, the
instance.destroy() call is moved to the end of the deletion call path.
To keep the
instance.delete.end notification behavior consistent with this move
(e.g. deleted_at field is filled) the notification sending needed to
be moved too.
Blueprint: use-nested-allocation-candidates
Change-Id: I77f34788dd7ab8fdf60d668a4f76452e03cf9888
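The GET-then-PUT flow can be sketched as follows (the client interface
and exception name here are stand-ins, not nova's actual
SchedulerReportClient):

```python
class AllocationDeleteFailed(Exception):
    pass


def delete_allocation_for_instance(client, consumer_uuid):
    """Delete allocations via PUT, guarded by the consumer generation.

    GET the current allocations (which carry the consumer
    generation), then PUT an empty allocation set with that
    generation. A 409 means something modified the allocations
    between the GET and the PUT, so fail rather than clobber.
    """
    url = '/allocations/%s' % consumer_uuid
    current = client.get(url, version='1.28').json()
    payload = {
        'allocations': {},  # empty allocations == delete
        'consumer_generation': current['consumer_generation'],
        'project_id': current['project_id'],
        'user_id': current['user_id'],
    }
    resp = client.put(url, payload, version='1.28')
    if resp.status_code == 409:
        raise AllocationDeleteFailed(consumer_uuid)
```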
This adds some background, guidelines and structural
notes on writing nova-status upgrade checks.
This is intentionally written with some potentially
redundant information for nova developers, as it's
also meant to be consumed outside nova as part of the
community-wide "upgrade-checkers" goal for Stein [1].
Story: 2003570
[1] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html
Change-Id: I340b25edeab3ac19c5d0bedfc69acd037d57bdd2
Some operators could be confused if they start conductor workers with an
incomplete setup. Just adding a clear note on the dependency.
Change-Id: I142de27f045ddb4c298ecae5a35bcb98ac863e3d
Scheduler hints are not documented very well at all, except
for being mentioned per scheduler filter in the admin configuration
guide, nor are they documented in relation to flavor extra
specs; both are used for impacting scheduling decisions and
are choices that a deployer has to make based on how they configure
their cloud.
This change adds a document about scheduler hints and how they are
similar to and different from flavor extra specs, including end
user discoverability and interoperability, and thoughts on which
should be used if writing a custom scheduler filter.
The TODO in the API guide is also resolved by linking to this
document.
Change-Id: Ib1f35baacf59efafb9e4bccfcc4f0025d99ad5b2
Native QEMU LUKS decryption support was added for the
libvirt driver in Queens, but there are no docs in the
feature support matrix about encrypted volume support
at all, so this attempts to close that gap.
Change-Id: I035164a0c4222814784306381f9a11413c8de9e2
The time has come.
These filters haven't been necessary since Ocata [1]
when the filter scheduler started using placement
to filter on VCPU, DISK_GB and MEMORY_MB. The
only reason to use them with any in-tree scheduler
drivers is if using the CachingScheduler which doesn't
use placement, but the CachingScheduler itself has
been deprecated since Pike [2]. Furthermore, as of
change [3] in Stein, the ironic driver no longer
reports vcpu/ram/disk inventory for ironic nodes
which will make these filters filter out ironic nodes
thinking they don't have any inventory. Also, as
noted in [4], the DiskFilter does not account for
volume-backed instances and may incorrectly filter
out a host based on disk inventory when it would
otherwise be OK if the instance is not using local
disk.
The related aggregate filters are left intact for
now, see blueprint placement-aggregate-allocation-ratios.
[1] Ie12acb76ec5affba536c3c45fbb6de35d64aea1b
[2] Ia7ff98ff28b7265058845e46b277317a2bfc96d2
[3] If2b8c1a76d7dbabbac7bb359c9e572cfed510800
[4] I9c2111f7377df65c1fc3c72323f85483b3295989
Change-Id: Id62136d293da55e4bb639635ea5421a33b6c3ea2
Related-Bug: #1787910
Add a thin wrapper to invoke the POST /reshaper placement API with
appropriate error checking. This bumps the placement minimum to the
reshaper microversion, 1.30.
Change-Id: Idf8997d5efdfdfca6967899a0882ffb9ecf96915
blueprint: reshape-provider-tree
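A sketch of such a thin wrapper (the client interface and exception
name are illustrative only):

```python
class ReshapeFailed(Exception):
    pass


def reshape(client, inventories, allocations):
    """Invoke POST /reshaper at placement microversion 1.30.

    Atomically replaces the inventories of a set of resource
    providers together with the allocations against them; any
    non-success response is turned into an exception.
    """
    resp = client.post('/reshaper',
                       {'inventories': inventories,
                        'allocations': allocations},
                       version='1.30')
    if resp.status_code != 204:
        raise ReshapeFailed('reshape failed: %s' % resp.status_code)
    return resp
```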
Corrects the support matrix to indicate that PowerVM supports
shelve/unshelve. Shelve only requires a driver to support power_off
and snapshot, both of which the PowerVM driver has implemented.
Change-Id: Iec56a2a61e90d3d97468b1d7f72a0b28975e6cd1
The z/VM driver was added in the Rocky release and this patch adds
the CI information and its feature coverage for z/VM.
blueprint: add-zvm-driver-rocky
Change-Id: Ibf44bc81ab0281c95dd4add9e09df584d61bc460
The z/VM driver is in the Rocky release now and this patch adds
the z/VM support matrix update.
blueprint: add-zvm-driver-rocky
Change-Id: I58016140c7f556df91ce258733455647a26dd727
It's unclear which metadata service the section is referring to,
and moreover none of them seems to need an api_database section.
Change-Id: I77f7a0f3a1e4a3702ca330cbe3f54b6a77bb77b0
Closes-Bug: #1785237
All these links are currently invalid; they are being updated
with the best replacements that can be found, or removed if
there are none.
Change-Id: I26c183b7de1bcc08b903146897795148a2d57e6d
Partial-Bug: #1765737
The admin config resize doc was linking to a now non-existent
user guide doc which was deleted in pike. This change imports
the resize user guide from the openstack-manuals stable/ocata
branch, fixes the link, and updates the resize user doc to
(1) link to our internal shutdown_timeout config option reference
and (2) link to the image properties doc in glance for the
os_shutdown_timeout image property.
Change-Id: I9988abfd344d1d3b0b6eaf32b036369b51853965
Closes-Bug: #1784715
Per change I97215e94efdd8c05045872fb9ba7d2089dc6efb8, nova
does not perform version discovery or try to deal with backlevel
placement versions, and the upgrade docs say that placement needs
to be upgraded before upgrading any nova services. Because of
this assertion, we can remove the per-release nova-scheduler-specific
note in the placement upgrade notes for Rocky because while it's
accurate for the nova-scheduler service, it might not be accurate
for the nova-api, nova-conductor or nova-compute services which
also use placement. So the best guidance is to just globally say
that placement must be upgraded before *any* nova services, which
our upgrade doc already says.
Change-Id: I8bf6ab049f15ad24a5fbf0557bd0cd8652101901
Add a method for the libvirt driver to get CPU traits.
This is used by compute nodes to report CPU traits to Placement.
Change-Id: I9bd80adc244c64277d2d00e7d79c3002c8f9d57e
blueprint: report-cpu-features-as-traits
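Conceptually, the mapping from guest-visible CPU feature flags to
placement traits looks like this (only a tiny, assumed subset of the
os-traits names is shown; the function name is hypothetical):

```python
# A small subset of standard os-traits CPU trait names, keyed by
# the corresponding CPU feature flag (assumed subset for illustration).
FLAG_TO_TRAIT = {
    'avx': 'HW_CPU_X86_AVX',
    'avx2': 'HW_CPU_X86_AVX2',
    'aes': 'HW_CPU_X86_AESNI',
}


def cpu_traits(flags):
    """Return the placement traits for the given CPU feature flags."""
    return {FLAG_TO_TRAIT[f] for f in flags if f in FLAG_TO_TRAIT}
```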
This links a nice video from the Rocky summit into the AZ
docs, which covers what AZs are and aren't, use cases,
gotchas, and how they are implemented outside of nova
to compare and contrast. Overall it's a nice educational
presentation which is useful for those wishing to learn
more about AZs than what our documentation provides.
Change-Id: Ib67826620735f05edc987481af4c07b6d19f3c1d
This adds a link to a Rocky summit video from CERN about how
they performed their upgrade from cells v1 to cells v2.
Change-Id: I5c7dc5aca232a9d330968de29eeee6a55cb035ab
Changes in the SVG:
- the diagram with nova-network is removed and the one with Neutron is made the default
- the Placement service is added to the party
- titles and arrows are aligned
Change-Id: If7e4a0b92c8713dabcb16a5e7820fbf479d82917
The code to generate a support matrix has been pulled into a common
library. Use this instead of duplicating code in the various projects
that need it.
Change-Id: If5c0bf2b0dcd7dbb7d316139ecb62a936fd15439
Co-Authored-By: Stephen Finucane <stephenfin@redhat.com>
Change I8d426f2635232ffc4b510548a905794ca88d7f99 in Pike,
which ironically was meant to avoid up-calls (I think),
introduced an up-call during reschedules for server create and
resize to set the instance.availability_zone based on the
alternate host selected during the reschedule.
This adds the up-call to our list of known issues in the cells
v2 docs so we can track the issue and make people aware of it.
Change-Id: Id819f91477613a013b89b1fb0b2def3b0fd4b08c
Related-Bug: #1781286
This just links to the osc-placement plugin docs
for managing required and forbidden traits in the
flavor extra specs docs.
Change-Id: I8549dc404a62a05d327a2c7a4813e7cc505d6b06