This change does two things to the admin scheduler configuration
docs:
1. Notes the limitation from bug 1802111 for the older
AggregateMultiTenancyIsolation filter and mentions that
starting in Rocky, using tenant isolation with placement
is better.
2. Notes that when isolating tenants via placement, the metadata
key "filter_tenant_id" can be suffixed to overcome the limitation
in bug 1802111.
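As a purely illustrative sketch (the helper name and matching rule are assumptions, not nova's actual filter code), suffixed metadata keys could be collected like this:

```python
# Hypothetical sketch: gather tenant IDs from aggregate metadata where the
# "filter_tenant_id" key may carry an arbitrary suffix (filter_tenant_id,
# filter_tenant_id2, filter_tenant_id_x, ...) so that more tenants can be
# isolated than fit in a single metadata value (bug 1802111).
def tenant_ids_from_metadata(metadata):
    return {
        value
        for key, value in metadata.items()
        if key.startswith("filter_tenant_id")
    }


metadata = {
    "filter_tenant_id": "tenant-a",
    "filter_tenant_id2": "tenant-b",
    "other_key": "ignored",
}
```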
Change-Id: I792c5df01b7cbc46c8363e261bc7422b09180e56
Closes-Bug: #1802111
UEFI support in the VMware driver has been added with commit fc0c6d2.
This patch fixes the support matrix to reflect this.
Change-Id: I8b08e11ae4dd7f1101758b29ae3424d790b26ed1
After the change [1], the number of service components on the
controller node should be two rather than three.
[1]: https://review.openstack.org/#/c/604277/
Change-Id: Iada43eb7f36f946d1713b20a50ebdeb8c69d0545
Closes-Bug: #1801444
Small modification to correct a spelling mistake.
Change-Id: I4bf378e5316ecc48f66eefae4f45d5a505adc305
Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
In the Configuration Guide's section on KVM:
* expand on the implications of selecting a CPU mode and model
for live migration,
* explain the cpu_model_extra_flags option,
* discuss how to enable nested guests, and the implications and
limitations of doing so,
* bump the heading level of "Guest agent support".
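For instance, a nova.conf snippet along these lines (the model and flag are chosen for illustration) selects a custom CPU model and exposes an extra CPU flag to guests:

```ini
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid
```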
Closes-Bug: 1791678
Change-Id: I671acd16c7e5eca01b0bd633caf8e58287d0a913
The purpose of the RT._normalize_inventory_from_cn_obj method is
to set allocation_ratio and reserved amounts on standard resource
class inventory records that get sent to placement if the virt driver
did not specifically set a ratio or reserved value (which none but
the ironic driver does).
If the allocation_ratio or reserved amount is in the inventory
data dict from the virt driver, then the normalize method ignores
it and lets the virt driver take priority.
However, with change I6a706ec5966cdc85f97223617662fe15d3e6dc08,
any virt driver that implements the update_provider_tree() interface
is storing the inventory data on the ProviderTree object which gets
cached and re-used, meaning once allocation_ratio/reserved is set
from RT._normalize_inventory_from_cn_obj, it doesn't get unset and
the normalize method always assumes the driver provided a value which
should not be changed, even if the configuration value changes.
We can make the config option changes take effect by changing
the semantics between _normalize_inventory_from_cn_obj and
drivers that implement the update_provider_tree interface, like
for the libvirt driver. Effectively, with this change, a driver that
implements update_provider_tree() now controls setting the
allocation_ratio and reserved resource amounts for the inventory it
reports. The libvirt driver will use the same configuration option
values that _normalize_inventory_from_cn_obj used. The only difference
is that in update_provider_tree we don't have the ComputeNode facade to
get the "real" default values when the allocation_ratio is 0.0, so
we handle that like "CONF.cpu_allocation_ratio or 16.0". Eventually
that will get cleaned up with blueprint initial-allocation-ratios.
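A minimal sketch of that fallback (the function name is hypothetical; 16.0 is the historical cpu_allocation_ratio default mentioned above):

```python
# Hypothetical sketch: a configured ratio of 0.0 means "unset", so fall back
# to the historical default, mirroring the "CONF.cpu_allocation_ratio or 16.0"
# expression described above.
def effective_allocation_ratio(configured_ratio, historical_default):
    return configured_ratio or historical_default
```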
Change-Id: I72c83a95dabd581998470edb9543079acb6536a5
Closes-Bug: #1799727
This checks whether a deployment is currently using consoles and warns
the operator to set [workarounds]enable_consoleauth = True on their
console proxy host if they are performing a rolling upgrade which is
not yet complete.
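On the console proxy host, the setting the check refers to looks like this in nova.conf:

```ini
[workarounds]
enable_consoleauth = True
```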
Partial-Bug: #1798188
Change-Id: Idd6079ce4038d6f19966e98bcc61422b61b3636b
This is a follow up to commit c4c6dc736 to clarify some
confusing comments in the code, add more comments in
the actual runtime code, and also provide an example
in the CLI man page docs along with an explanation of
the output, specifically for the case that $found>0
but done=0 and what that means.
Change-Id: I0691caab2c44d3189504c54e51bb263ecdc5d1d2
Related-Bug: #1794364
The CachingScheduler has been deprecated since Pike [1].
It does not use the placement service and as more of nova
relies on placement for managing resource allocations,
maintaining compatibility for the CachingScheduler is
prohibitively expensive.
The release note in this change goes into much more detail
about why the FilterScheduler + Placement should be a
sufficient replacement for the original justification
for the CachingScheduler along with details on how to migrate
from the CachingScheduler to the FilterScheduler.
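Migrating is a one-line nova.conf change on the scheduler host (filter_scheduler is the default value, so the explicit line is optional):

```ini
[scheduler]
driver = filter_scheduler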
Since the [scheduler]/driver configuration option does allow
loading out-of-tree drivers and the scheduler driver interface
does have the USES_ALLOCATION_CANDIDATES variable, it is
possible that there are drivers being used which are also not
using the placement service. The release note also explains this
but warns against it. However, as a result some existing
functional tests, which were using the CachingScheduler, are
updated to still test scheduling without allocations being
created in the placement service.
Over time we will likely remove the USES_ALLOCATION_CANDIDATES
variable in the scheduler driver interface along with the
compatibility code associated with it, but that is left for
a later change.
[1] Ia7ff98ff28b7265058845e46b277317a2bfc96d2
Change-Id: I1832da2190be5ef2b04953938860a56a43e8cddf
When online_data_migrations raise exceptions, nova/cinder-manage catches
the exceptions, prints fairly useless "something didn't work" messages,
and moves on. Two issues:
1) The user(/admin) has no way to see what actually failed (exception
detail is not logged)
2) The command returns exit status 0, as if all possible migrations have
been completed successfully - this can cause failures to get missed,
especially if automated
This change adds logging of the exceptions, and introduces a new exit
status of 2, which indicates that no updates took effect in the last
batch attempt, but some are (still) failing, which requires intervention.
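A hypothetical sketch of the exit-status logic described above (the function name and arguments are illustrative, not nova-manage's actual code):

```python
# Hypothetical sketch of the nova-manage online_data_migrations exit codes:
# 2 when nothing progressed in the last batch but errors occurred (needs
# operator intervention), 1 when more migrations remain, 0 when complete.
def migrations_exit_status(found, done, saw_exception):
    if saw_exception and done == 0:
        return 2  # no updates took effect; failures require intervention
    if found > done:
        return 1  # more migrations remain; run the command again
    return 0  # all migrations completed
```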
Change-Id: Ib684091af0b19e62396f6becc78c656c49a60504
Closes-Bug: #1796192
This adds a few more things to the upgrade checks
reference docs after getting questions about it
from other project teams during the Stein
release goal implementation:
1. Notes the high level set of steps for upgrading
nova in grenade and where nova-status fits into
that sequence.
2. Links to the oslo.upgradecheck library which
didn't exist when the original document was
written.
3. Adds a FAQs section with Q&A for several things
that have come up during the Stein release goal.
Change-Id: I990e5dbe563fa342f7481c3720445b924447ad54
Story: 2003657
Add a new microversion 2.67 to support specifying ``volume_type``
when booting instances.
Part of bp boot-instance-specific-storage-backend
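As an illustrative request-body fragment (the image UUID and volume type name are placeholders), the new field sits in the block device mapping:

```json
{
  "server": {
    "name": "test-server",
    "flavorRef": "1",
    "block_device_mapping_v2": [{
      "boot_index": 0,
      "uuid": "<image-uuid>",
      "source_type": "image",
      "destination_type": "volume",
      "volume_size": 10,
      "volume_type": "lvm-thin"
    }]
  }
}
```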
Change-Id: I13102243f7ce36a5d44c1790f3a633703373ebf7
This patch implements live migration of instances across compute nodes.
Each compute node must be managing a cluster in the same vCenter and ESX
hosts must have vMotion enabled [1].
If the instance is located on a datastore shared between source
and destination cluster, then only the host is changed. Otherwise, we
select the most suitable datastore on the destination cluster and
migrate the instance there.
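The placement decision described above can be sketched as follows (names and the "most free space" heuristic are hypothetical, not the driver's actual code):

```python
from collections import namedtuple

# Hypothetical sketch: decide whether a vMotion can keep the instance's
# storage in place (shared datastore: only the host changes) or must also
# relocate the disk to a datastore on the destination cluster.
Datastore = namedtuple("Datastore", ["name", "free_space"])


def pick_target_datastore(instance_ds, dest_cluster_datastores):
    if instance_ds in dest_cluster_datastores:
        return instance_ds  # shared: host-only migration
    # otherwise pick a "most suitable" destination datastore; here,
    # simply the one with the most free space
    return max(dest_cluster_datastores, key=lambda ds: ds.free_space)
```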
[1] https://kb.vmware.com/s/article/2054994
Co-Authored-By: gkotton@vmware.com
blueprint vmware-live-migration
Change-Id: I640013383e684497b2d99a9e1d6817d68c4d0a4b
The cli/nova-idmapshift.html page has been removed
since Ibce28d20d166da154833376cf51f1877b829925e.
Remove the redirect to cli/nova-idmapshift.html
because it is now useless.
Change-Id: I57e6285475a31af49a3791c00d5d61deb64438bc
The scheduler_default_filters option is deprecated in favor of
the [scheduler]/enabled_filters option. This change updates
the docs to use the enabled_filters option over the deprecated
scheduler_default_filters option.
Change-Id: I6cc78056179e01752e48e51a4e3552d52d66074b
Closes-Bug: #1794306
Add a note to the documentation that the GPU vendor's VGPU
driver software needs to be installed and configured.
Change-Id: I8618a312818f6f26d358b40e723fecf74c0d2eb7
The placement API version 1.28 introduced consumer generation as a way
to make updating allocation safe even if it is done from multiple
places.
This patch changes delete_allocation_for_instance to use PUT
/allocations instead of DELETE /allocations to benefit from the consumer
generation handling.
In this patch the report client will GET the current allocation of the
instance including the consumer generation and then try to PUT an empty
allocation with that generation. If this fails due to a consumer
generation conflict, meaning something modified the allocation of the
instance in between GET and PUT then the report client will raise
AllocationDeleteFailed exception. This causes the instance
to go to the ERROR state.
This patch only detects a small portion of possible cases when
allocation is modified outside of the delete code path. The rest could
only be detected if nova cached at least the consumer generation
of the instance.
To be able to put the instance into the ERROR state, the instance.destroy()
call is moved to the end of the deletion call path. To keep the
instance.delete.end notification behavior consistent with this move
(e.g. deleted_at field is filled) the notification sending needed to
be moved too.
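A hedged sketch of the GET-then-PUT pattern described above (the client class and method names are stand-ins for illustration, not the report client's actual API):

```python
class AllocationDeleteFailed(Exception):
    pass


# Hypothetical sketch: read the allocation along with its consumer
# generation, then PUT an empty allocation carrying that generation so a
# concurrent modification surfaces as a generation conflict.
def delete_allocation_for_instance(client, consumer_uuid):
    current = client.get_allocations(consumer_uuid)
    generation = current["consumer_generation"]
    ok = client.put_allocations(
        consumer_uuid,
        {"allocations": {}, "consumer_generation": generation})
    if not ok:  # conflict: something changed the allocation between GET/PUT
        raise AllocationDeleteFailed(consumer_uuid)


class FakeClient:
    """Toy stand-in for the placement report client, for demonstration."""
    def __init__(self, conflict):
        self.conflict = conflict

    def get_allocations(self, consumer_uuid):
        return {"consumer_generation": 3, "allocations": {"rp-1": {}}}

    def put_allocations(self, consumer_uuid, body):
        return not self.conflict
```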
Blueprint: use-nested-allocation-candidates
Change-Id: I77f34788dd7ab8fdf60d668a4f76452e03cf9888
Because nova-consoleauth has been deprecated since version 18.0.0,
it is better not to reference this service in the verify-operation
documentation file.
Change-Id: I0c3b9cfe96bcc3d7b6106c3e972ee9e2f79e419b
This adds some background, guidelines and structural
notes on writing nova-status upgrade checks.
This is intentionally written with some potentially
redundant information for nova developers, as it's
also meant to be consumed outside nova as part of the
community-wide "upgrade-checkers" goal for Stein [1].
Story: 2003570
[1] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html
Change-Id: I340b25edeab3ac19c5d0bedfc69acd037d57bdd2
Some operators could be confused if they start conductor workers with an
incomplete setup. Just adding a clear note on the dependency.
Change-Id: I142de27f045ddb4c298ecae5a35bcb98ac863e3d
Mention that the image needs SSH password authentication configured
in order to allow SSH login with the admin password.
Change-Id: I65a94b266dbef9863acc07306cbe2bd81c95c893