Remove the 'os_compute_api:os-flavor-manage' policy.
The 'os_compute_api:os-flavor-manage' policy has been deprecated
since 16.0.0 Pike.
The policy has been replaced with the following policies.
- os_compute_api:os-flavor-manage:create
- os_compute_api:os-flavor-manage:delete
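Deployments that overrode the old rule will need to override both new
rules instead; a minimal policy file sketch (the rule value here is
illustrative, not taken from this change):

    "os_compute_api:os-flavor-manage:create": "rule:admin_api"
    "os_compute_api:os-flavor-manage:delete": "rule:admin_api"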
Change-Id: I856498dfcebfa330598a22dd7c660bd6f158351b
This is a follow-up to change I0f206d9db70465d8ce6b1404f546f3e00eeb6e23
where we changed the docs from using "nova flavor-update" to
"openstack flavor set --description". Unlike the nova CLI, which
negotiates the highest available microversion by default, OSC does
not and defaults to 2.1, which won't work when trying to set a flavor
description. This change adds the --os-compute-api-version option to
the command line example to make that command work.
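For example (assuming microversion 2.55, which introduced flavor
descriptions; the flavor name is illustrative):

    $ openstack --os-compute-api-version 2.55 flavor set \
        --description "tiny flavor" m1.tiny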
Change-Id: I7eacc30b4cf3a5ef89f90ec599f21eaa12bf2a10
Convert ``option`` to the shiny :oslo.config:option:`section.option`
format in admin/configuration/hypervisor-kvm.
Recognizing this could be done to a lot more files; I just happened to
be looking at this one today.
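An illustration of the conversion, using one option from that file:

    ``cpu_mode``

becomes:

    :oslo.config:option:`libvirt.cpu_mode`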
Change-Id: If1b02ce99152ffd00d4f461dc4539606db1bb13b
- This change updates the admin flavor docs
to reflect the use of OSC to update flavor
descriptions.
- This change documents that modifications to
flavor extra_specs are not reflected in an
instance's embedded flavor (see the example
below).
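For example, adding an extra spec to a flavor only affects instances
created afterwards (the property value is illustrative):

    $ openstack flavor set --property hw:cpu_policy=dedicated m1.tiny

Instances already running with that flavor keep the copy embedded in
their instance record, so the updated extra spec is not applied to
them.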
Change-Id: I0f206d9db70465d8ce6b1404f546f3e00eeb6e23
The dependent tempest change enables the volume multiattach tests in
the tempest-full and tempest-slow jobs, on which nova already gates.
This allows us to drop the special nova-multiattach job, which is
mostly redundant with the coverage of the other tempest.api.compute.*
tests, and lets us run one fewer job on nova/cinder/tempest changes
in Stein. The docs are updated to reflect where this is now tested.
Also depends on cinder dropping its usage of the nova-multiattach
job before we can drop the job definition from nova.
Depends-On: https://review.openstack.org/606978
Depends-On: https://review.openstack.org/606985
Change-Id: I744afa1df9a6ed8e0eba4310b20c33755ce1ba88
This took me a good hour to suss and while there were a couple of Google
hits for it, the top suggestion was to use TCP (rather than SSH) and
disable all security, which is rarely good advice.
Paste a sample error and link to the doc where you can find advice
on resolving the issue.
Change-Id: I3805361834f7d954ae6759a22f61f02db139bcc5
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This makes the following changes:
* re-orders the page to move the nova-network
specific information to the bottom
* creates two sections: one for CLI and one for
nova-network
* mentions at the top that by default neutron
manages security groups and their quota and
links to the neutron docs
* drops the mention of the 'nova' CLI since there
are no examples in this doc using that CLI
Change-Id: Ifd23424ac14bacf4bf7a0716c268f48ec869a41e
There is some important stuff in the admin/configuration
docs sub-tree like information about configuring hypervisor
drivers and scheduler filters/weighers but it wasn't easily
found since it wasn't in the admin toc tree. This adds it
to the overall admin home page and adds a TODO that we need
to organize that admin page into sections somehow.
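A sketch of the kind of toctree entry this adds to the admin index
(exact depth and placement may differ):

    .. toctree::
       :maxdepth: 2

       configuration/index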
Change-Id: I5952a2dd590407b1ce56805df6f90a472cc878bf
Link to the "Secure live migration with QEMU-native TLS" document from
other relevant guides, and small blurbs of text where appropriate.
Blueprint: support-qemu-native-tls-for-live-migration
Change-Id: I9c6676897d27254e2e16bf7e36a74bf9f3da3832
Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
This spec proposes to add the ability for users to use an
``Aggregate``'s ``metadata`` to override the global config options
for weights, achieving more fine-grained control over resource
weights.
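For example, an operator could override a multiplier on a single
aggregate rather than globally (the metadata key follows the
blueprint's naming pattern and is illustrative):

    $ openstack aggregate set --property cpu_weight_multiplier=2.0 agg1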
blueprint: per-aggregate-scheduling-weight
Change-Id: I6e15c6507d037ffe263a460441858ed454b02504
This resolves the TODO from Ocata change
I8871b628f0ab892830ceeede68db16948cb293c8
by adding a min=0.0 value to the soft affinity
weight multiplier configuration options.
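A sketch of the affected options with valid (>= 0.0) values, shown
here at their defaults:

    [filter_scheduler]
    soft_affinity_weight_multiplier = 1.0
    soft_anti_affinity_weight_multiplier = 1.0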
It also removes the deprecated [DEFAULT] group
alias from Ocata change:
I3f48e52815e80c99612bcd10cb53331a8c995fc3
Change-Id: I79e191010adbc0ec0ed02c9d589106debbf90ea8
Add a document about using the "native TLS" encryption feature of QEMU
and libvirt to secure live migration data transports — including disks
that are on non-shared storage ("block migration"). This ties into the
newly introduced Nova configuration attribute,
``[libvirt]/live_migration_with_native_tls``, to that end.
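A minimal sketch of opting in via nova.conf (the TLS certificate
setup for libvirt/QEMU is a prerequisite and not shown here):

    [libvirt]
    live_migration_with_native_tls = true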
Blueprint: support-qemu-native-tls-for-live-migration
Change-Id: Ic1af52bc3608f8f586244dd26dad1f47604e3278
Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
We typically use '-' for H2-style headers and '~' for H3-style. This
document was using the opposite, which was mighty confusing for your
dear editor. Simply switch them around and reduce that confusion.
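For reference, the convention is:

    Section (H2)
    ------------

    Subsection (H3)
    ~~~~~~~~~~~~~~~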
Change-Id: I69712bab7deeb75b3fe619c9d93a078f90b76dad
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Fix broken links in doc/source/user/cells.rst.
In addition, fix a format of a console code block
in doc/source/admin/pci-passthrough.rst.
Change-Id: I66a2adb3ff75da6e267536f25c2eda5925f2fa87
Closes-Bug: #1808906
A recent thread on the mailing list [1] reminded me that we
don't have any documentation for the service user token feature
added back in Ocata under blueprint use-service-tokens.
This change adds a troubleshooting entry for when using service
user tokens would be useful, and links to it from two known
trouble spots: live migration timeouts and creating images.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001130.html
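A hedged nova.conf sketch of enabling the feature (the keystone auth
values are placeholders):

    [service_user]
    send_service_user_token = true
    auth_type = password
    auth_url = http://keystone.example.com/identity
    username = nova
    password = secret
    user_domain_name = Default
    project_name = service
    project_domain_name = Default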
Change-Id: I1dda889038ffe67d53ceb35049aa1f2a9da39ae8
Closes-Bug: #1809165
- Move deprecated services to the end of the document
- Update incorrect information regarding nova-consoleauth
- Move configuration options that were specified for the wrong service
- Don't give the impression that the serial console is libvirt-only
Change-Id: Ie0fd987a1e5c130b8e31c84910814d5d051f2b31
This change does a few things:
* Links live_migration_completion_timeout to the config
option guide.
* Links the force complete API reference to the feature support
matrix to see which drivers support the operation.
* Fixes the server status mentioned in the troubleshooting for
the force complete API reference (a live migrating server
status is MIGRATING, not ACTIVE). The same text is copied to the
abort live migration API reference troubleshooting for
consistency (and since using the server status is more natural than
the task_state).
* Links to the admin guide for troubleshooting live migration
timeouts.
Change-Id: I496d3f4b99e3d7e978c7ecb13ab3b67023fcb919
Closes-Bug: #1808579
Config option ``libvirt.live_migration_progress_timeout`` was
deprecated in Ocata and can now be removed.
This patch removes live_migration_progress_timeout along with the
related migration progress timeout logic.
Change-Id: Ife89a705892ad96de6d5f8e68b6e4b99063a7512
blueprint: live-migration-force-after-timeout
This patch removes the automatic post-copy trigger and adds a new
libvirt configuration option, 'live_migration_completion_action'.
This option determines what action will be taken against a VM after
``live_migration_completion_timeout`` expires. The option defaults
to 'abort', meaning the live migration operation will be aborted
once the completion timeout expires. If the option is set to
'force_complete', the VM will either be paused or post-copy will be
triggered, depending on whether post-copy is enabled and available.
Note that the progress-based post-copy triggering will be removed
from the libvirt driver in the next patch [1].
[1] Ife89a705892ad96de6d5f8e68b6e4b99063a7512
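A sketch of opting into the new behavior (values are illustrative;
option names as proposed in this patch):

    [libvirt]
    live_migration_completion_timeout = 800
    live_migration_completion_action = force_complete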
Change-Id: I0d286d12e588b431df3d94cf2e65d636bcdea2f8
blueprint: live-migration-force-after-timeout
Live migration is currently totally broken if a NUMA topology is
present. This affects everything that's been regrettably stuffed in with
NUMA topology including CPU pinning, hugepage support and emulator
thread support. Side effects can range from simple unexpected
performance hits (due to instances running on the same cores) to
complete failures (due to instance cores or huge pages being mapped to
CPUs/NUMA nodes that don't exist on the destination host).
Until such a time as we resolve these issues, we should alert users to
the fact that such issues exist. A workaround option is provided for
operators that _really_ need the broken behavior, but it's defaulted to
False to highlight the brokenness of this feature to unsuspecting
operators.
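A sketch for operators who accept the risk (assuming the workaround
lands as a [workarounds] flag named enable_numa_live_migration,
defaulting to False):

    [workarounds]
    enable_numa_live_migration = true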
Change-Id: I217fba9138132b107e9d62895d699d238392e761
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Related-bug: #1289064
This adds a new section to the admin scheduler configuration
docs devoted to allocation ratios to call out the differences
between the override config options and the initial ratio
options, and how they interplay with the resource provider
inventory allocation ratio override that can be performed
via the placement REST API directly.
This moves the note about bug 1804125 into the new section
and also links to the docs from the initial allocation ratio
config option help text.
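A sketch contrasting the two families of options (values are
illustrative):

    [DEFAULT]
    # Always overrides the allocation ratio on the compute node
    # resource provider:
    cpu_allocation_ratio = 16.0
    # Only seeds the resource provider inventory when it is first
    # created; it can later be changed via the placement API:
    initial_cpu_allocation_ratio = 16.0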
Part of blueprint initial-allocation-ratios
Related-Bug: #1804125
Change-Id: I7d8e822cd40dccaf5244e2cd95fa1af43fa9ed87
This borrows from the release note in change
I01f20f275bbd5451ace5c1e6f41ab38d488dae4e to document the
regression, introduced in Ocata, where allocation ratio settings
in the aggregate core/ram/disk filters are not honored because
of placement being used by the FilterScheduler.
While there is related work going on around this in
blueprint initial-allocation-ratios and
blueprint placement-aggregate-allocation-ratios, it is still
a limitation in the current code base and needs to be called
out in the docs.
Change-Id: Ifaf596a8572637f843f47daf5adce394b0365676
Related-Bug: #1804125