This is a follow-up to [1]. The user docs home page has a link
to the admin AZ guide but not a direct link to the user AZ guide,
so that link is added here.
[1] https://review.opendev.org/#/c/667133/10/doc/source/user/index.rst@75
Change-Id: I4acb120d2e347a43abc584107c7a19bb422af384
This is a follow-up patch for https://review.opendev.org/676730.
In the TOC of the current PDF file [1], most content related to
the user and admin guides is located under the "For Contributors"
section. This is weird. It happens because the latex builder
constructs the document tree based on "toctree" directives even
when they are marked as "hidden".
This commit reorganizes the "toctree" directives per section.
The "toctree" directives must be placed at the end of
individual sections. Otherwise, the content of the last section
and the content just after the "toctree" directive are
concatenated into the same section in the rendered LaTeX document.
This commit also makes the following improvements:
* Specify "openany" in "extraclassoptions" to skip blank pages,
along with "oneside" to use the same page style for odd and
even pages.
* Set "tocdepth" and "secnumdepth" to 3. "tocdepth" controls the
depth of the TOC and "secnumdepth" controls the depth to which
sections are numbered in the document.
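A minimal sketch of the corresponding conf.py settings
(illustrative; key names per the Sphinx latex_elements
documentation):

    latex_elements = {
        # "openany" skips blank pages; "oneside" applies the same
        # page style to odd and even pages
        'extraclassoptions': 'openany,oneside',
        # show three TOC levels and number sections three deep
        'preamble': r'''
    \setcounter{tocdepth}{3}
    \setcounter{secnumdepth}{3}
    ''',
    }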
Note that this commit does not reorganize the file structure under doc/source.
I believe this should be done separately.
[1] https://docs.openstack.org/nova/latest/doc-nova.pdf
Change-Id: Ie9685e6a4798357d4979aa6b4ff8a03663a9c71c
Story: 2006100
Task: 35140
These closely related features are the source of a disproportionate
number of bugs and a large amount of confusion among users. The spread
of information across multiple docs probably doesn't help matters.
Do what we've already done for the metadata service and remote consoles
and clean these docs up. There are a number of important changes:
- All documentation related to host aggregates and availability zones is
placed in one of three documents, '/user/availability-zones',
'/admin/aggregates' and '/admin/availability-zones'. (note that there
is no '/user/aggregates' document since this is not user-facing)
- References to these features are updated to point to the new location
- A glossary is added. Currently this only contains definitions for host
aggregates and availability zones
- nova CLI commands are replaced with their openstack CLI counterparts
(see the example below)
- Some gaps in related documentation are closed
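For example, the aggregate commands change along these lines
(illustrative; exact syntax per the python-openstackclient docs):

    # before
    nova aggregate-create my-aggregate my-az
    # after
    openstack aggregate create --zone my-az my-aggregate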
Change-Id: If847b0085dbfb4c813d4a8d14d99346f8252bc19
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This information was mistakenly removed when references to the
nova-consoleauth service were removed from the docs in change
Ie96e18ea7762b93b4116b35d7ebcfcbe53c55527.
Closes-Bug: #1846401
Change-Id: I08fa4650d190114775993e8094efbf46b984dfdc
With the fix for bug 1781286 for reschedules during server
create and resize/migrate, we can update the cells v2 docs
to say the up-call issue for that bug is now fixed.
Change-Id: I9ff116de8b63c0fbfb880008718b1386178b1d1a
Related-Bug: #1781286
- Remove references to configuration options (which end-users will not
be able to set)
- Switch from the '--confirm' and '--revert' flags to the top-level
commands (see the example below)
- You can't revert a failed resize so don't suggest otherwise
- Fix some wrapping
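For example (illustrative):

    # previously documented
    openstack server resize --confirm my-server
    # now documented
    openstack server resize confirm my-server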
Change-Id: I934e7ca327ff4c9bb9b00d1df3323df05da956d9
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Depends-On: I733796d3bda6c3755a3d3548bbe695abb474a6a0
We don't need to do a whole lot here. The key things to note are that
some host level configuration is now necessary, that the 'isolate' CPU
thread policy behaves slightly differently, and that you can request
'PCPU' inventory explicitly instead of using 'hw:cpu_policy=dedicated'
or the image metadata equivalent.
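For example, the explicit form looks something like this
(illustrative; property syntax per the flavors docs):

    # request two dedicated host CPUs via the resources syntax
    openstack flavor set --property resources:PCPU=2 my-flavor
    # instead of the traditional policy syntax
    openstack flavor set --property hw:cpu_policy=dedicated my-flavor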
Part of blueprint cpu-resources
Change-Id: Ic1f98ea8a7f6bdc86f2d6b4734774fa380f8cc10
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
In the depends-on the upgrade notes in placement are moved to a
different URL. Because the link here in nova was to an anchor, not
a URL, a redirect on the placement side will not catch this, so
explicitly update the link.
Depends-On: https://review.opendev.org/683783
Change-Id: Ib07eacb9150bbb8b0726cfe06ae334c7a764955c
I forget this every darn time I'm working on something that requires DB
migrations.
Change-Id: I6ca988793cf2bfc2d5938acc158fd94e61e06a92
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Track compute node inventory for the new MEM_ENCRYPTION_CONTEXT
resource class (added in os-resource-classes 0.4.0) which represents
the number of guests a compute node can host concurrently with memory
encrypted at the hardware level.
This serves as a "master switch" for enabling SEV functionality, since
all the code which takes advantage of the presence of this inventory
in order to boot SEV-enabled guests is already in place, but none of
it gets used until the inventory is non-zero.
A discrete inventory is required because on AMD SEV-capable hardware,
the memory controller has a fixed number of slots for holding
encryption keys, one per guest. Typical early hardware only has 15
slots, thereby limiting the number of SEV guests which can be run
concurrently to 15. nova needs to track how many slots are available
and used in order to avoid attempting to exceed that limit in the
hardware.
Work is in progress to allow QEMU and libvirt to expose the number of
slots available on SEV hardware; however until this is finished and
released, it will not be possible for nova to programmatically detect
the correct value with which to populate the MEM_ENCRYPTION_CONTEXT
inventory. So as a stop-gap, populate the inventory using the value
manually provided by the cloud operator in a new configuration option
CONF.libvirt.num_memory_encrypted_guests.
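For example, on typical early hardware with 15 key slots, the
operator would set (illustrative):

    [libvirt]
    num_memory_encrypted_guests = 15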
Since this commit effectively enables SEV, also add all the relevant
documentation as planned in the AMD SEV spec[0]:
- Add operation.boot-encrypted-vm to the KVM hypervisor feature matrix.
- Update the KVM section of the Configuration Guide.
- Update the flavors section of the User Guide.
- Add a release note.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#documentation-impact
blueprint: amd-sev-libvirt-support
Change-Id: I659cb77f12a38a4d2fb118530ebb9de88d2ed30d
The conductor doc is not really end user material,
so this moves it under reference/, removes it from the
user page and adds it to the reference index for internals.
Also makes the contributor page link to the reference internals,
since it's kind of weird to have one contributor section that
only mentions one thing when the internals under reference have
a lot more of that kind of detail. Finally, a todo is added so
we don't forget to update the reference internals about versioned
objects at some point, since that's always a point of confusion
for people.
Change-Id: I8d3dbce5334afaa3e1ca309b2669eff9933a0104
This adds support, in a new microversion, for specifying an availability
zone to the unshelve server action when the server is shelved offloaded.
Note that the functional test changes are due to those tests using the
"latest" microversion, where an empty dict is not allowed for unshelve
with 2.77, so the value is changed from an empty dict to None.
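For example, with the new microversion the action body looks
something like (illustrative):

    POST /servers/{server_id}/action
    {"unshelve": {"availability_zone": "my-az"}}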
Implements: blueprint support-specifying-az-when-restore-shelved-server
Closes-Bug: #1723880
Change-Id: I4b13483eef42bed91d69eabf1f30762d6866f957
We get questions about whether it's possible to configure nova to
always use cinder for root volumes even if the user isn't
explicitly booting from volume. This change adds a FAQ entry
about that to the block device mappings docs and also suggests
a way to force users to use volume-backed servers via the
max_local_block_devices option. The config option help for that
option is also cleaned up, since --image is a CLI option and we
should stick to REST API parameters in descriptions, and the
default mentioned in the help is already rendered by oslo.config.
Finally, a simple functional test is added to assert the workaround
mentioned in the docs.
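The suggested workaround looks like this (illustrative;
disallowing local block devices forces volume-backed servers):

    [DEFAULT]
    max_local_block_devices = 0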
Change-Id: I3e77796b051e8d007fefe2eea06d38e8265e8272
This change adds the ability for a user or operator to control
the virtualisation of a performance monitoring unit within a VM.
This change introduces a new "hw:pmu" extra spec and a corresponding
image metadata property "hw_pmu".
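For example (illustrative):

    # via the flavor extra spec
    openstack flavor set --property hw:pmu=true my-flavor
    # or via the image metadata property
    openstack image set --property hw_pmu=true my-image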
The glance image metadata doc will be updated separately by:
https://review.opendev.org/#/c/675182
Change-Id: I5576fa2a67d2771614266022428b4a95487ab6d5
Implements: blueprint libvirt-pmu-configuration
These were deprecated during Stein [1] and can now be removed, lest they
cause hassle with the PCPU work. As noted in [1], the aggregate
equivalents are left untouched for now.
[1] https://review.opendev.org/#/c/596502/
Change-Id: I8a0d332877fbb9794700081e7954f2501b7e7c09
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The api documentation is now published on docs.openstack.org instead
of developer.openstack.org. Update all affected links to the
new location.
Note that Neutron now publishes to api-ref/network, not
api-ref/networking.
Note that redirects will be set up as well, but let's point to the
new location now.
For details, see:
http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007828.html
Change-Id: Id2cf3aa252df6db46575b5988e4937ecfc6792bb
If more than one numbered request group is present in the placement
allocation candidates (a_c) query, then the group_policy is
mandatory. Based on the PTG discussion [1], 'none' seems to be a
good default policy from the nova perspective. So this patch makes
sure that if the group_policy is not provided in the flavor
extra_spec, there is more than one numbered group in the request,
and the flavor provides only one or zero groups (so groups are
coming from other sources like neutron ports), then the
group_policy is defaulted to 'none'.
The reasoning behind this change: if more than one numbered request
group comes from the flavor extra_spec, then the creator of the
flavor is responsible for adding a group_policy to the flavor. So
in this case nova only warns but lets the request fail in
placement, to force the fixing of the flavor. However, when
numbered groups come from other sources (like neutron ports), the
creator of the flavor cannot know whether additional groups will be
included, so we don't want to force the flavor creator but simply
default the group_policy.
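A minimal sketch of the defaulting rule described above
(hypothetical helper and attribute names, not the actual code):

    # count numbered request groups from the flavor and the request
    flavor_groups = count_numbered_groups(flavor.extra_specs)    # hypothetical
    all_groups = count_numbered_groups_in_request(request_spec)  # hypothetical

    if 'group_policy' not in flavor.extra_specs and all_groups > 1:
        if flavor_groups <= 1:
            # groups come from other sources (e.g. neutron ports);
            # default the policy instead of failing in placement
            request_spec.group_policy = 'none'
        else:
            # the flavor creator should have set group_policy;
            # warn and let placement reject the request
            LOG.warning('group_policy is missing from the flavor')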
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005807.html
Change-Id: I0681de217ed9f5d77dae0d9555632b8d160bb179
Since cells v2 was introduced, nova operators must run two commands to
migrate the database schemas of nova's databases - nova-manage api_db
sync and nova-manage db sync. It is necessary to run them in this order,
since the db sync may depend on schema changes made to the api database
in the api_db sync. Executing the db sync first may fail, for example
with the following seen in a Queens to Rocky upgrade:
  nova-manage db sync
  ERROR: Could not access cell0.
  Has the nova_api database been created?
  Has the nova_cell0 database been created?
  Has "nova-manage api_db sync" been run?
  Has "nova-manage cell_v2 map_cell0" been run?
  Is [api_database]/connection set in nova.conf?
  Is the cell0 database connection URL correct?
  Error: (pymysql.err.InternalError) (1054, u"Unknown column
  'cell_mappings.disabled' in 'field list'") [SQL: u'SELECT
  cell_mappings.created_at AS cell_mappings_created_at,
  cell_mappings.updated_at AS cell_mappings_updated_at,
  cell_mappings.id AS cell_mappings_id, cell_mappings.uuid AS
  cell_mappings_uuid, cell_mappings.name AS cell_mappings_name,
  cell_mappings.transport_url AS cell_mappings_transport_url,
  cell_mappings.database_connection AS
  cell_mappings_database_connection, cell_mappings.disabled AS
  cell_mappings_disabled \nFROM cell_mappings \nWHERE
  cell_mappings.uuid = %(uuid_1)s \n LIMIT %(param_1)s'] [parameters:
  {u'uuid_1': '00000000-0000-0000-0000-000000000000', u'param_1': 1}]
  (Background on this error at: http://sqlalche.me/e/2j85)
Despite this error, the command actually exits zero, so deployment tools
are likely to continue with the upgrade, leading to issues down the
line.
This change modifies the command to exit 1 if the cell0 sync fails.
This change also clarifies this ordering in the upgrade and nova-manage
documentation, and adds information on exit codes for the command.
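That is, the two commands must always be run in this order:

    nova-manage api_db sync
    nova-manage db sync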
Change-Id: Iff2a23e09f2c5330b8fc0e9456860b65bd6ac149
Closes-Bug: #1832860
Turns out we've a *lot* of disparate metadata systems. Attempt to both
link them somewhat through extensive cross-referencing and extract out
deployment-specific stuff from user-facing docs. Lots of changes here,
but in summary:
- Split out admin-focused content from the metadata API, config drive,
user data and vendordata docs.
- Merge the config drive, metadata service, vendordata and user-data
  user docs, which are mostly talking about the same thing and are
  fairly barren without the deployment components.
- Make use of various oslo.config and Sphinx roles.
Side note: I miss when we had tech writers to do this stuff for us :(
Change-Id: I4fb2b628bd93358a752e2397ae353221758e2984
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This is a follow-up to b129511050,
which changes the code block in the docs to python and also
cleans up some of the NumInstancesFilter documentation, specifically
removing wording around "running" instances, because the filter
does not care about state, only the number of instances reported
against a host - they could be stopped/paused/etc.
Change-Id: I7cdf79f21415fbf799fa5ad586f1b8063afed272
Since blueprint return-alternate-hosts in Queens, the scheduler
returns a primary selected host and some alternate hosts based
on the max_attempts config option. The only reschedules we have
are during server create and resize/cold migrate. The list of
alternate hosts is passed down from conductor through compute
and back to conductor on reschedule, and if conductor gets a list
of alternate hosts on reschedule it will not call the scheduler
again. This means the RetryFilter is effectively useless now: it
shouldn't ever filter out hosts on the first schedule attempt,
and because we're using alternates for reschedules, we shouldn't
go back to the scheduler on a reschedule. As a result, this change
deprecates the RetryFilter and removes it from the default list
of enabled filters.
Change-Id: Ic0a03e89903bf925638fa26cca3dac7db710dca3
We're going to remove all the code, but first, remove the docs.
Part of blueprint remove-consoleauth
Change-Id: Ie96e18ea7762b93b4116b35d7ebcfcbe53c55527
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The RamFilter has been deprecated since the Stein release. We can
show another simple filter class here.
Change-Id: I15a935e80f6656c96c1e208746af1c89bf37b670
Replace 'is comprised of' with 'comprises',
because 'is comprised of' is disputed usage.
Change-Id: If66d2e4583b00102d52635457f3c3f8c2adee1be
Closes-Bug: #1831309
The description at the start of the docs actually describes
pre-Pike behavior, which was replaced by the first "future plans"
item, the counting quotas effort in Pike. This change re-words the
main description to talk about counting quotas at a high level,
links in more of the relevant APIs, links to the config options
correctly, drops the future plan about counting quotas (since it
is no longer the future) and fleshes out a little blurb about
hierarchical quotas by linking to the unified limits spec.
Change-Id: I559938f69c9b462a9b0baeb5d1d4d915f893273e
This removes the old reference to the nova-manage
command to refresh quota usage, which no longer
applies since we started counting quotas in the
Pike release.
It is replaced with a reference to the down-cell
known issue for counting quotas.
Change-Id: I2765f3ca3dc95345d4e4c4db43ac3dff4a509259