Config option ``libvirt.live_migration_progress_timeout`` was
deprecated in Ocata, and can now be removed.
This patch removes live_migration_progress_timeout and also removes
the related migration progress timeout logic.
Change-Id: Ife89a705892ad96de6d5f8e68b6e4b99063a7512
blueprint: live-migration-force-after-timeout
This patch removes the automatic post-copy trigger and adds a new
libvirt configuration option, 'live_migration_completion_action'.
This option determines what action will be taken against a VM after
``live_migration_completion_timeout`` expires. The option defaults to
'abort', meaning the live migration operation will be aborted once the
completion timeout expires. If the option is set to 'force_complete',
the VM will either be paused or post-copy will be triggered, depending
on whether post-copy is enabled and available.
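For example, to abort a migration that has not completed in time (a
minimal sketch; the timeout value shown is just the default):

    [libvirt]
    live_migration_completion_timeout = 800
    live_migration_completion_action = abort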
Note that the progress-based post-copy triggering in the libvirt
driver will be removed in the next patch [1].
[1] Ife89a705892ad96de6d5f8e68b6e4b99063a7512
Change-Id: I0d286d12e588b431df3d94cf2e65d636bcdea2f8
blueprint: live-migration-force-after-timeout
This adds a new section to the admin scheduler configuration
docs devoted to allocation ratios to call out the differences
between the override config options and the initial ratio
options, and how they interplay with the resource provider
inventory allocation ratio override that can be performed
via the placement REST API directly.
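As a sketch of the distinction being documented (values are
illustrative):

    [DEFAULT]
    # Override: always enforced and re-applied, stomping on any
    # changes made to the resource provider inventory via the
    # placement API.
    cpu_allocation_ratio = 16.0
    # Initial: only used to seed the inventory when the compute node
    # resource provider is first created; later changes via the
    # placement API are preserved.
    initial_cpu_allocation_ratio = 16.0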
This moves the note about bug 1804125 into the new section
and also links to the docs from the initial allocation ratio
config option help text.
Part of blueprint initial-allocation-ratios
Related-Bug: #1804125
Change-Id: I7d8e822cd40dccaf5244e2cd95fa1af43fa9ed87
This borrows from the release note in change
I01f20f275bbd5451ace5c1e6f41ab38d488dae4e to document the
regression, introduced in Ocata, where allocation ratio settings
in the aggregate core/ram/disk filters are not honored because
of placement being used by the FilterScheduler.
While there is related work going on around this in
blueprint initial-allocation-ratios and
blueprint placement-aggregate-allocation-ratios, it is still
a limitation in the current code base and needs to be called
out in the docs.
Change-Id: Ifaf596a8572637f843f47daf5adce394b0365676
Related-Bug: #1804125
The installation of the nova-consoleauth service was erroneously
removed from the docs prematurely. The nova-consoleauth service
is still being used in Rocky, with the removal being possible in
Stein.
This should have been fixed as part of change
Ibbdc7c50c312da2acc59dfe64de95a519f87f123 but was missed.
This is also related to the release note update in Rocky
under change Ie637b4871df8b870193b5bc07eece15c03860c06.
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Closes-Bug: #1793255
Related-Bug: #1798188
Change-Id: Ied268da9e70bd2807c2dfe7a479181fbec52979d
This change does two things to the admin scheduler configuration
docs:
1. Notes the limitation from bug 1802111 for the older
AggregateMultiTenancyIsolation filter and mentions that
starting in Rocky, using tenant isolation with placement
is better.
2. Notes that when isolating tenants via placement, the metadata
key "filter_tenant_id" can be suffixed to overcome the limitation
in bug 1802111, as shown in the example below.
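For example (a sketch; the aggregate name and tenant IDs are
illustrative):

    $ openstack aggregate set --property filter_tenant_id=$TENANT_A myagg
    $ # Suffixed keys allow listing additional tenants beyond what
    $ # fits in a single metadata value:
    $ openstack aggregate set --property filter_tenant_id2=$TENANT_B myagg

with the placement-based isolation enabled in nova.conf:

    [scheduler]
    limit_tenants_to_placement_aggregate = True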
Change-Id: I792c5df01b7cbc46c8363e261bc7422b09180e56
Closes-Bug: #1802111
In the Configuration Guide's section on KVM:
* expand on the implications of selecting a CPU mode and model
for live migration,
* explain the cpu_model_extra_flags option (see the example below),
* discuss how to enable nested guests, and the implications and
limitations of doing so,
* bump the heading level of "Guest agent support".
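For reference, the option being documented is used like this (a
minimal sketch; the CPU model and flag are illustrative):

    [libvirt]
    cpu_mode = custom
    cpu_model = IvyBridge
    cpu_model_extra_flags = pcid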
Closes-Bug: 1791678
Change-Id: I671acd16c7e5eca01b0bd633caf8e58287d0a913
The CachingScheduler has been deprecated since Pike [1].
It does not use the placement service and as more of nova
relies on placement for managing resource allocations,
maintaining compatibility for the CachingScheduler is
exorbitant.
The release note in this change goes into much more detail
about why the FilterScheduler + Placement should be a
sufficient replacement for the original justification
for the CachingScheduler along with details on how to migrate
from the CachingScheduler to the FilterScheduler.
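The migration path sketched there boils down to switching the driver
and backfilling allocations (assuming the typical flow; the
``nova-manage placement heal_allocations`` command was added for this
case):

    [scheduler]
    driver = filter_scheduler

followed by:

    $ nova-manage placement heal_allocations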
Since the [scheduler]/driver configuration option does allow
loading out-of-tree drivers and the scheduler driver interface
does have the USES_ALLOCATION_CANDIDATES variable, it is
possible that there are drivers being used which are also not
using the placement service. The release note also explains this
but warns against it. However, as a result some existing
functional tests, which were using the CachingScheduler, are
updated to still test scheduling without allocations being
created in the placement service.
Over time we will likely remove the USES_ALLOCATION_CANDIDATES
variable in the scheduler driver interface along with the
compatibility code associated with it, but that is left for
a later change.
[1] Ia7ff98ff28b7265058845e46b277317a2bfc96d2
Change-Id: I1832da2190be5ef2b04953938860a56a43e8cddf
This is a relic that has long since been replaced by the noVNC proxy
service. Start preparing for its removal.
Change-Id: Icb225dec3ad291b751e475bd3703ce0eb30b44db
I did know this was a thing, but only barely. As with RDP, the
documentation is very minimal but it should contain enough pointers for
anyone playing with this stuff.
Change-Id: I0b62d42eae7c325566ee065dcdc0f73b7223d471
I didn't even know this was a thing. Call it out...and promptly link to
the Cloudbase documentation, which I don't want to reproduce here for
reasons of expediency.
Change-Id: I4416bf5c5c4e906bcfdeec5a7ae41f747029a292
The link between the various consoles was never well understood (by me,
at least). Clarify this by restructuring the document to highlight the
few differences between these services.
Change-Id: I08991796aaced2abc824f608108c0c786181eb65
This patch implements live migration of instances across compute nodes.
Each compute node must be managing a cluster in the same vCenter, and
the ESX hosts must have vMotion enabled [1].
If the instance is located on a datastore shared between source
and destination cluster, then only the host is changed. Otherwise, we
select the most suitable datastore on the destination cluster and
migrate the instance there.
[1] https://kb.vmware.com/s/article/2054994
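Once configured, the operation is driven like any other live
migration, e.g. (illustrative):

    $ nova live-migration <server> [<target-host>]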
Co-Authored-By: gkotton@vmware.com
blueprint vmware-live-migration
Change-Id: I640013383e684497b2d99a9e1d6817d68c4d0a4b
The scheduler_default_filters option is deprecated in favor of
the [filter_scheduler]/enabled_filters option. This change updates
the docs to use the enabled_filters option instead of the deprecated
scheduler_default_filters option.
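A sketch of the form now used in the docs (the filter list itself is
illustrative):

    [filter_scheduler]
    enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter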
Change-Id: I6cc78056179e01752e48e51a4e3552d52d66074b
Closes-Bug: #1794306
Add a note to the documentation: the GPU vendor's vGPU
driver software needs to be installed and configured.
Change-Id: I8618a312818f6f26d358b40e723fecf74c0d2eb7
Mention that the image needs SSH password authentication configured
in order to allow SSH login with the admin password.
Change-Id: I65a94b266dbef9863acc07306cbe2bd81c95c893
The docs for AggregateMultiTenancyIsolation were misleading in that
tenants are not restricted to hosts only in a tenant-isolated
aggregate. It's the opposite: hosts in the tenant-isolated aggregate
are only available for tenants configured for that aggregate.
This fixes the docs including an example for clarification, and also
adds a functional test to show the behavior of the filter.
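For example (a sketch; names and IDs are illustrative):

    $ openstack aggregate create --property filter_tenant_id=$TENANT_A myagg
    $ openstack aggregate add host myagg compute1

Instances from $TENANT_A may land on compute1 or on any host outside
the aggregate, while instances from other tenants will never land on
compute1.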
Change-Id: Ic55b88e7ad21ab5b7ad063eac743ff9406aae559
Related-Bug: #1771523
Change I1a1143ddf8da5fb9706cf53dbfd6cbe84e606ae1 in Ocata
deprecated the libvirt.live_migration_progress_timeout
and disabled it by default. This change updates the config
option help to refer to the bug so people don't have to hunt
for it via git history, and also touches up the admin docs.
In the one doc, mention of the option is removed altogether
because it basically says, "here is a loaded gun, but don't
use it!". It's better to just not mention the option at all.
Change-Id: I33f3d508a2af6c94435f86ac740cf24b97dba76e
Related-Bug: #1644248
Add a new paragraph on how to correlate OpenStack logs with vCenter logs
in order to find what went wrong.
Change-Id: I71069f61af99d1c0f8fda28e6ce0b2873f2042d8
In the "Networking with neutron" doc,
a description of a configuration file is broken.
So fix it.
Change-Id: I3927c858a54a09966478d0ecc2c62b76d0d4548d
Closes-Bug: #1789567
The time has come.
These filters haven't been necessary since Ocata [1]
when the filter scheduler started using placement
to filter on VCPU, DISK_GB and MEMORY_MB. The
only reason to use them with any in-tree scheduler
drivers is if using the CachingScheduler which doesn't
use placement, but the CachingScheduler itself has
been deprecated since Pike [2]. Furthermore, as of
change [3] in Stein, the ironic driver no longer
reports vcpu/ram/disk inventory for ironic nodes
which will make these filters filter out ironic nodes
thinking they don't have any inventory. Also, as
noted in [4], the DiskFilter does not account for
volume-backed instances and may incorrectly filter
out a host based on disk inventory when it would
otherwise be OK if the instance is not using local
disk.
The related aggregate filters are left intact for
now, see blueprint placement-aggregate-allocation-ratios.
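Deployments that still list these filters will need to drop them,
e.g. (the remaining filter list is illustrative):

    [filter_scheduler]
    # CoreFilter, RamFilter and DiskFilter removed from the list:
    enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter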
[1] Ie12acb76ec5affba536c3c45fbb6de35d64aea1b
[2] Ia7ff98ff28b7265058845e46b277317a2bfc96d2
[3] If2b8c1a76d7dbabbac7bb359c9e572cfed510800
[4] I9c2111f7377df65c1fc3c72323f85483b3295989
Change-Id: Id62136d293da55e4bb639635ea5421a33b6c3ea2
Related-Bug: #1787910
A guest must have a NUMA topology for numa-aware-vswitches to have any
effect. Call this out in the documentation.
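For example, the simplest way to give a guest a NUMA topology is via
a flavor extra spec (illustrative):

    $ openstack flavor set --property hw:numa_nodes=1 myflavor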
Change-Id: Id0a637bcd0cbce29811acd7e56419350695cd3fd
Attaching SR-IOV ports to existing instances is not supported
since the compute service does not perform any kind of PCI
device allocation, so we should fail fast with a clear error
if attempted. Note that the compute RPC API "attach_interface"
method is an RPC call from nova-api to nova-compute so the error
raised here will result in a 400 response to the user.
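For example, the now-rejected operation looks like this (a sketch;
names are illustrative):

    $ openstack port create --network sriov-net --vnic-type direct sriov-port
    $ openstack server add port myserver sriov-port  # now fails with a 400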
Blueprint sriov-interface-attach-detach would need to be
implemented to support this use case, and could arguably involve
a microversion to indicate when the feature was made available.
A related neutron docs patch, https://review.openstack.org/594325,
has been posted to mention the SR-IOV port attach limitation as well.
Change-Id: Ibbf2bd3cdd45bcd61eebff883c30ded525b2495d
Closes-Bug: #1708433
ChanceScheduler is deprecated in Pike [1] and will be removed in a
subsequent release.
[1] https://review.openstack.org/#/c/492210/
Change-Id: I44f9c1cabf9fc64b1a6903236bc88f5ed8619e9e
This document adds the hypervisor introduction and links for z/VM,
along with its related entry in doc/source/admin/configuration/hypervisors.rst.
Change-Id: I02b4c7ece38988e916a60cd1d91a5244bf91afa5
blueprint: add-zvm-driver-rocky
This patch does the following:
1. Mentions that the current doc is only relevant to cold migration.
2. Adds an additional live-migration reference.
3. Removes the inappropriate --live flag from the example.
4. Updates the policy violation message.
5. Replaces a nova command with the openstack command (see the
example below).
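For example (illustrative of the kind of replacement made):

    # before
    $ nova migrate <server>
    # after
    $ openstack server migrate <server>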
Change-Id: Idaa7915ea47d11e30da3f12318082a10a4e73b3b