This adds a mention of the nova-scheduler service requiring
placement 1.17 and also links to the placement upgrade notes
from the more general upgrade notes, since we are now firmly
in a place where placement needs to be upgraded before nova.
Since we consider placement global, this also removes the 1.14
note about nova-compute: if you are going to upgrade placement to
get 1.17 for the scheduler, and control services should be upgraded
before computes, then the computes will automatically get a new
enough placement service.
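As a sanity check, operators can run the upgrade status command,
which verifies that a new enough placement API is available; a
minimal sketch (output wording varies by release):

    # Run before upgrading nova; the check reports a failure if the
    # minimum required placement API version (1.17 here) is missing.
    nova-status upgrade check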
Change-Id: I06937c7642dca4a1932cbbf46569acc9c58e44a6
This is a follow-up to Ie039322660fd0e2e0403843448379b78114c425b.
A few things are changed here:
* The note about using file injection is removed. File injection
was deprecated in the API in Queens and is not something that we
really want users to use.
* Mention that creating a flavor is typically admin-only.
* Link to the BDM docs for more details about BDM parameter values.
* Update the manage-ip-address docs to make the examples rely on
using the networking resource CLIs rather than any proxy APIs
that were available in nova (see the sketch below).
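For illustration, a hedged example of the networking-CLI style the
updated docs favor; the network name, server name, and IP address
are placeholders:

    # Allocate a floating IP from an external network and attach it
    # to a server with the neutron-backed CLI instead of nova's
    # deprecated network proxy APIs.
    openstack floating ip create public
    openstack server add floating ip my-server 203.0.113.10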
Change-Id: Ifa2e2bbb4c5f51f13d1a5832bd7dbf9f690fcad7
This imports the "launch instance" end user guide docs from
the openstack-manuals repo. As part of the docs migration
in Pike, these were forgotten. The copied contents come from
the stable/ocata branch of openstack-manuals, and therefore
likely need some updating, but that could be done in follow-up
changes. This is an initial import to (1) publish the content
again somewhere and (2) fix broken links in the cinder docs
for booting from volume.
Change-Id: Ie039322660fd0e2e0403843448379b78114c425b
Partial-Bug: #1714017
Related-Bug: #1711267
This takes most of the release note and adds it to the user
flavor docs, which are more discoverable for an end user.
Change-Id: Ia83af4dfcc0c040679b0d0cd5282830fca27bd63
This patch enables the flavor extra spec 'required:[traits]'. The
admin can specify a set of traits that a flavor requires. To enable
this, placement 1.17 is required, which added trait support to the
`GET /allocation_candidates` API. So bump the minimum placement
version requirement to 1.17 and update the check in the
`nova-status` command.
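For illustration, one way an admin might mark a trait as required on
a flavor, assuming the trait-style extra spec key (the exact syntax
follows the blueprint; the flavor name and trait are examples):

    # Only hosts exposing the AVX2 CPU trait can satisfy this flavor;
    # placement 1.17's GET /allocation_candidates honors the trait.
    openstack flavor set --property trait:HW_CPU_X86_AVX2=required m1.large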
Implement blueprint request-traits-in-nova
Change-Id: Ia51ace951e9a0402f873ce9751a8cd3c0283db08
The default behavior for the "nova-manage cell_v2 map_instances"
command is to map all instances in the cell in batches of 50.
This can be slow when there are several thousand instances in the
deployment and an operator may want to specify a higher --max-count
value and run the command until it completes.
This simply updates the command option description and man page to
point this out for consideration.
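For example, an operator might run the command with a larger batch
size in a loop until everything is mapped; a sketch, assuming the
command's convention of exiting 1 while unmapped instances remain:

    # Map instances in batches of 1000 and repeat until the command
    # exits 0 (done) instead of 1 (more instances left to map).
    while ! nova-manage cell_v2 map_instances \
        --cell_uuid $CELL_UUID --max-count 1000; do
        echo 'more instances to map, continuing...'
    done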
Change-Id: I59c2ed89fe02212977445f6825c6da8fedbb8ccf
Related-Bug: #1742649
This change introduces a new microversion which must be used
to create a server from a multiattach volume or attach a multiattach
volume to an existing server instance.
Attaching a multiattach volume to a shelved offloaded instance is not
supported since an instance in that state does not have a compute host
so we can't tell if the compute would support the multiattach volume
or not. This is consistent with the tagged attach validation in 2.49.
When creating a server from a multiattach volume, we'll check to see
if all computes in all cells are upgraded to the point of supporting
the compute-side changes; otherwise the server create request fails with
a 409. We do this because we don't know which compute node the scheduler
will pick and we don't have any compute capability filtering in the
scheduler for multiattach volumes (that may be a future improvement).
Similarly, when attaching a multiattach volume to an existing instance,
if the compute isn't new enough to support multiattach or the virt
driver simply doesn't support the capability, a 409 response is returned.
Presumably, operators will use AZs/aggregates to organize which hosts
support multiattach if they have a mixed hypervisor deployment, or will
simply disable multiattach support via Cinder policy.
The unit tests cover error conditions in the new flow. A new
functional scenario test is added for happy path testing of the new boot
from multiattach volume flow and attaching a multiattach volume to more
than one instance.
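For illustration, a hedged sketch of the new flow from the CLI,
assuming microversion 2.60 (where this support landed) and a volume
type that carries the multiattach property; all names are examples:

    # Create a multiattach-capable volume, then attach it to two
    # servers; the compute API microversion must opt in to the new
    # multiattach behavior.
    openstack volume create --type multiattach-type --size 10 vol1
    openstack --os-compute-api-version 2.60 server add volume server1 vol1
    openstack --os-compute-api-version 2.60 server add volume server2 vol1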
Tempest integration testing for multiattach is added in change
I80c20914c03d7371e798ca3567c37307a0d54aaa.
Devstack support for multiattach is added in change
I46b7eabf6a28f230666f6933a087f73cb4408348.
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Implements: blueprint multi-attach-volume
Change-Id: I02120ef8767c3f9c9497bff67101e57e204ed6f4
The nova noVNC proxy server has gained the ability to use the VeNCrypt
authentication scheme to secure network communications with the compute
node VNC servers. This documents how to configure the QEMU/KVM compute
nodes and the noVNC proxy server nodes.
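For illustration, a sketch of the proxy-side configuration, assuming
the [vnc] options introduced by this blueprint (crudini is used for
brevity and the certificate paths are placeholders):

    # On the noVNC proxy host, enable the VeNCrypt auth scheme and
    # point nova at the TLS client credentials used to talk to the
    # compute-node VNC servers.
    crudini --set /etc/nova/nova.conf vnc auth_schemes vencrypt,none
    crudini --set /etc/nova/nova.conf vnc vencrypt_client_key /etc/pki/nova-novncproxy/client-key.pem
    crudini --set /etc/nova/nova.conf vnc vencrypt_client_cert /etc/pki/nova-novncproxy/client-cert.pem
    crudini --set /etc/nova/nova.conf vnc vencrypt_ca_certs /etc/pki/nova-novncproxy/ca-cert.pem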
Change-Id: If3cea87568efff0874cd8851cabc6770812c545b
Blueprint: websocket-proxy-to-host-security
Co-Authored-By: Stephen Finucane <sfinucan@redhat.com>
This change set adds Open vSwitch VIF support for the PowerVM virt
driver.
Change-Id: If23aeb890c4365014a9f1262647611162f981f12
Partially-Implements: blueprint powervm-nova-it-compute-driver
When deleting a cell, if there are instance mappings to the cell,
the command fails with the following message:
* There are existing instances mapped to cell with uuid UUID.
The same message is shown even if all instances in the cell have
already been deleted. So in that case, add a warning that the
instance mappings have to be deleted with
'nova-manage db archive_deleted_rows' before deleting the cell.
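For example, the sequence the warning points operators at; a sketch
with a placeholder cell UUID:

    # Move deleted instance rows (and their mappings) to the shadow
    # tables, then delete the now-empty cell.
    nova-manage db archive_deleted_rows --until-complete
    nova-manage cell_v2 delete_cell --cell_uuid $CELL_UUID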
Change-Id: I2a163fb50a7e71ce9f463bc9ddeffe2ea47d1588
Closes-Bug: #1725331
The cells v2 layout documentation clearly states that there are no
upcalls from cells back to the central API services. This misled
me for some time, as I could not fathom how a compute node in a cell
was supposed to report its resource info.
It turns out nova looks up the placement service in the keystone
catalogue and contacts it directly, which to my mind is an upcall. I
wonder if the author of the note felt that the placement service is
not really part of nova?
Change-Id: If14be8b182f0af4e4e6641046fec638c07e26546
Closes-Bug: #1742421
Document the ``nova-manage cell_v2 list_hosts`` command for listing
hosts in one or all v2 cells.
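For example:

    # List hosts across all cells, or restrict the listing to one
    # cell with --cell_uuid (the UUID is a placeholder).
    nova-manage cell_v2 list_hosts
    nova-manage cell_v2 list_hosts --cell_uuid $CELL_UUID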
Change-Id: I46fece55f1647fe7a41906054ad0d6213315187b
Related-Bug: #1735687
This fills in the TODOs for the unit, functional and
docs part of the API contributor guide.
Since we don't rely on the DocImpact tag in the commit
message for API changes (that tag results in a nova bug
and was meant mostly for making changes to docs external
to the nova repo, which is no longer the case), this
changes that section to just talk about the in-tree docs
that should be updated for API changes.
Change-Id: I9ca423c09185d2e3733357fd47aaba82d716eea4
Not only the libvirt/KVM driver but also libvirt/QEMU works with the
CPU topology feature in nova, so update the document accordingly.
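For illustration, the kind of flavor properties the document covers;
a sketch that applies whether virt_type is kvm or qemu (the flavor
name is an example):

    # Expose a 2-socket, 2-cores-per-socket topology to the guest.
    openstack flavor set m1.large \
        --property hw:cpu_sockets=2 \
        --property hw:cpu_cores=2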
Change-Id: If8f0229072c8518c9301a872b98862687d93a044
In the comments to I8f0c3006d1bb97d228f73256c58a79235cd12670, a request
for clarification was made on when the last-modified header should
be "now". This adds an example to help things a bit more clear.
Change-Id: I301f17bc7aad9f0037d2b13aa6e493ac9a6abb80
Unlike nova-manage create_cell, nova-manage update_cell does not
check for an existing cell with the same combination of transport-url
and/or database_connection. Hence it allows a user to update a cell's
transport-url and/or database_connection to match another existing
cell's transport/db URLs.
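For reference, a hedged sketch of the command in question with
placeholder values; before this fix, nothing stopped these URLs from
matching another cell's:

    # Update a cell's connection info in place.
    nova-manage cell_v2 update_cell --cell_uuid $CELL_UUID \
        --transport-url rabbit://user:pass@otherhost:5672/ \
        --database_connection mysql+pymysql://user:pass@otherhost/nova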
Change-Id: Ia5d5566c535d6da3d215392590a2d362e1226424
Closes-Bug: #1729806
Deprecated in Pike:
I660e0316b11afcad65c0fe7bd167ddcec9239a8b
This filter relies on the flavor.id primary key, which changed when
(1) flavors were migrated to the API database and which changes
whenever (2) a flavor is updated by deleting and re-creating it.
Also, as noted in blueprint put-host-manager-instance-info-on-a-diet,
this is one step forward in getting us to a point where the only
thing that the in-tree filters care about in the HostState.instances
dict is the instance uuid (for the affinity filters), which means
we can eventually stop RPC casting all instance information from
all nova-compute services to the scheduler for every instance create,
delete, move, or periodic sync task; we would only need to send the
list of instance UUIDs. That should help with RPC traffic in a large
and busy deployment.
Change-Id: Icb43fe2ef5252d2838f6f8572c7497840a9797a1