These were deprecated during Stein [1] and can now be removed, lest they
cause hassle with the PCPU work. As noted in [1], the aggregate
equivalents of same are left untouched for now.
[1] https://review.opendev.org/#/c/596502/
Change-Id: I8a0d332877fbb9794700081e7954f2501b7e7c09
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The api documentation is now published on docs.openstack.org instead
of developer.openstack.org. Update all links that have changed to
point to the new location.
Note that Neutron now publishes to api-ref/network, not networking.
Note that redirects will be set up as well, but let's point to the new
location now.
For details, see:
http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007828.html
Change-Id: Id2cf3aa252df6db46575b5988e4937ecfc6792bb
Some options are now automatically configured by version 1.20:
- project
- html_last_updated_fmt
- latex_engine
- latex_elements
- version
- release
Change-Id: I3a5c7e115d0c4f52b015d0d55eb09c9836cd2fe7
If more than one numbered request group is present in the placement a_c
query then the group_policy is mandatory. Based on the PTG discussion [1]
'none' seems to be a good default policy from nova's perspective. So this
patch makes sure that if the group_policy is not provided in the flavor
extra_spec, there is more than one numbered group in the request, and the
flavor itself only provides one or zero groups (so additional groups are
coming from other sources like neutron ports), then the group_policy is
defaulted to 'none'.
The reasoning behind this change: If more than one numbered request
group is coming from the flavor extra_spec then the creator of the
flavor is responsible for adding a group_policy to the flavor. So in
this case nova only warns but lets the request fail in placement to
force the flavor to be fixed. However, when numbered groups are coming
from other sources (like neutron ports) the creator of the flavor
cannot know whether additional groups will be included, so we don't
want to burden the flavor creator and instead simply default the
group_policy.
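For illustration, a flavor that itself defines more than one numbered
group could set the policy explicitly like this (the flavor name and
the CUSTOM_FOO resource class are made-up placeholders):

  openstack flavor set my-flavor \
      --property group_policy=isolate \
      --property resources1:CUSTOM_FOO=1 \
      --property resources2:CUSTOM_FOO=1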
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005807.html
Change-Id: I0681de217ed9f5d77dae0d9555632b8d160bb179
Before I97f06d0ec34cbd75c182caaa686b8de5c777a576 it was possible to
create servers with neutron ports which had resource_request (e.g. a
port with QoS minimum bandwidth policy rule) without allocating the
requested resources in placement. So there could be servers for which
the allocation needs to be healed in placement.
This patch extends the nova-manage heal_allocations CLI to create the
missing port allocations in placement and update the port in neutron
with the resource provider uuid that is used for the allocation.
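For example, an admin could trigger the healing for all instances with
something like the following minimal invocation (further options such
as --max-count also exist):

  nova-manage placement heal_allocations --verbose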
There are known limitations of this patch. It does not try to
reimplement Placement's allocation candidate functionality. Therefore
it cannot handle the situation when there is more than one RP in the
compute tree which provides the required traits for a port. In this
situation deciding which RP to use would require 1) the in_tree
allocation candidate support from placement, which is not available
yet, and 2) information about which PCI PF the VF of an SRIOV port is
allocated from and which RP represents that PCI device in placement.
This information is only available on the compute hosts.
For the unsupported cases the command will fail gracefully. As soon as
migration support for such servers is implemented in the blueprint
support-move-ops-with-qos-ports the admin can heal the allocation of
such servers by migrating them.
During healing both placement and neutron need to be updated. If any of
those updates fail the code tries to roll back the previous updates for
the instance to make sure that the healing can be re-run later without
issue. However if the rollback fails then the script will terminate with
an error message pointing to documentation that describes how to
recover from such a partially healed situation manually.
Closes-Bug: #1819923
Change-Id: I4b2b1688822eb2f0174df0c8c6c16d554781af85
Change Ic857918b15496049b5ccacde9515f130cc0bd7e9 against
openstack-manuals updated the quotas document to use openstackclient
commands in place of novaclient commands. It missed the fact that you
need to pass the '--class' parameter if you wish to set a quota for a
class rather than a project. Correct this.
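For example, updating the instances quota on the 'default' quota class
(rather than on a project) would look something like this, where the
positional argument is the class name:

  openstack quota set --class --instances 20 default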
Change-Id: I5dc65924fee65f6340d1495a9b1b992001c30731
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Closes-Bug: #1834057
The 'binary' parameter has been changed to 'source'
since I95b5b0826190d396efe7bfc017f6081a6356da65,
but the notification document has not been updated yet.
Replace the 'binary' parameter with the 'source' parameter.
Change-Id: I141c90ac27d16f2e9c033bcd2f95ac08904a2f52
Closes-Bug: #1836005
Add links to the document describing how to add support for a new
microversion in python-novaclient.
Depends-On: https://review.opendev.org/667002
Change-Id: Ic58afe401464a0da2b19306e7cc6ce412f177b16
Replace the link to the NovaAPIRef wiki with
the link to the API reference guideline in the nova doc.
Change-Id: I211e828c54256391aea38e475171e92aac230e56
Mention the new way to specify the host where an instance is launched
and compare it with the two existing ways.
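For illustration, with the new microversion the target host can be
given directly at boot time, roughly as follows (server, image, flavor,
network and host names are placeholders, and the --host flag assumes
the openstackclient change in the Depends-On below):

  openstack --os-compute-api-version 2.74 server create \
      --image cirros --flavor m1.tiny --network private \
      --host compute-1 my-server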
Also fixed a few things in the existing docs:
1. Fixed the policy rule name for forced_host.
2. Updated the "openstack availability zone list" command to use the
--compute option.
3. Replaced the "openstack host list" example with
"openstack compute service list --service nova-compute"
since the os-hosts API was deprecated in 2.43.
Depends-On: https://review.opendev.org/#/c/669609/
Part of Blueprint: add-host-and-hypervisor-hostname-flag-to-create-server
Change-Id: Ic471c0621e15aa497d33ef010c7f87890508fbeb
Since cells v2 was introduced, nova operators must run two commands to
migrate the database schemas of nova's databases - nova-manage api_db
sync and nova-manage db sync. It is necessary to run them in this order,
since the db sync may depend on schema changes made to the api database
in the api_db sync. Executing the db sync first may fail, for example
with the following seen in a Queens to Rocky upgrade:
nova-manage db sync
ERROR: Could not access cell0.
Has the nova_api database been created?
Has the nova_cell0 database been created?
Has "nova-manage api_db sync" been run?
Has "nova-manage cell_v2 map_cell0" been run?
Is [api_database]/connection set in nova.conf?
Is the cell0 database connection URL correct?
Error: (pymysql.err.InternalError) (1054, u"Unknown column
'cell_mappings.disabled' in 'field list'") [SQL: u'SELECT
cell_mappings.created_at AS cell_mappings_created_at,
cell_mappings.updated_at AS cell_mappings_updated_at,
cell_mappings.id AS cell_mappings_id, cell_mappings.uuid AS
cell_mappings_uuid, cell_mappings.name AS cell_mappings_name,
cell_mappings.transport_url AS cell_mappings_transport_url,
cell_mappings.database_connection AS
cell_mappings_database_connection, cell_mappings.disabled AS
cell_mappings_disabled \nFROM cell_mappings \nWHERE
cell_mappings.uuid = %(uuid_1)s \n LIMIT %(param_1)s'] [parameters:
{u'uuid_1': '00000000-0000-0000-0000-000000000000', u'param_1': 1}]
(Background on this error at: http://sqlalche.me/e/2j85)
Despite this error, the command actually exits zero, so deployment tools
are likely to continue with the upgrade, leading to issues down the
line.
This change modifies the command to exit 1 if the cell0 sync fails.
This change also clarifies this ordering in the upgrade and nova-manage
documentation, and adds information on exit codes for the command.
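For reference, the expected order is simply:

  nova-manage api_db sync
  nova-manage db sync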
Change-Id: Iff2a23e09f2c5330b8fc0e9456860b65bd6ac149
Closes-Bug: #1832860
This adds a new mandatory placement request pre-filter
which is used to exclude compute node resource providers
with the COMPUTE_STATUS_DISABLED trait. The trait is
managed by the nova-compute service when the service's
disabled status changes.
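For example, the trait will eventually be reflected on the compute node
resource provider when the service is disabled in the usual way (the
host name is a placeholder):

  openstack compute service set --disable compute-1 nova-compute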
Change I3005b46221ac3c0e559e1072131a7e4846c9867c makes
the compute service sync the trait during the
update_available_resource flow (either on start of the
compute service or during the periodic task run).
Change Ifabbb543aab62b917394eefe48126231df7cd503 makes
the libvirt driver's _set_host_enabled callback reflect
the trait when the hypervisor goes up or down out of band.
Change If32bca070185937ef83f689b7163d965a89ec10a will add
the final piece which is the os-services API calling the
compute service to add/remove the trait when a compute
service is disabled or enabled.
Since this series technically functions without the API
change, the docs and release note are added here.
Part of blueprint pre-filter-disabled-computes
Change-Id: I317cabbe49a337848325f96df79d478fd65811d9
The migration guide for porting from old methods to upt
was missing the allocations kwarg in the code samples.
Change-Id: I43fd8d5eeb382d1e5472fa4e9a2f01bd0e4bf243
The output of 'nova-manage cell_v2 list_cells' command
has been modified since Idb306e4f854957fd2ded9ae061fc27f0ef768fc0 and
I96a6d5e59d33c65314fc187c0286ce3408d30bdc.
Fix the output of the command in the install documents.
Change-Id: I45d0e2ee1ff182347345af46f9a388f2884ba6cd
Closes-Bug: #1833647
Turns out we've a *lot* of disparate metadata systems. Attempt to both
link them somewhat through extensive cross-referencing and extract out
deployment-specific stuff from user-facing docs. Lots of changes here,
but in summary:
- Split out admin-focused content from the metadata API, config drive,
user data and vendordata docs.
- Merge the config drive, metadata service, vendordata and user-data
user docs, which are mostly talking about the same thing and are
fairly barren without the deployment components
- Make use of various oslo.config and Sphinx roles
Side note: I miss when we had tech writers to do this stuff for us :(
Change-Id: I4fb2b628bd93358a752e2397ae353221758e2984
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The --before option to nova-manage db purge and archive_deleted_rows
accepts a string to be parsed by dateutil.parser.parse() with
fuzzy=True. This is fairly forgiving, but doesn't handle e.g. "now - 1
day". This commit adds some clarification to the help strings, and some
examples to the docs.
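For illustration, accepted values look something like the following
(the dates are made up):

  nova-manage db archive_deleted_rows --before "2019-06-15"
  nova-manage db purge --before "June 15 2019 12:00"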
Change-Id: Ib218b971784573fce16b6be4b79e0bf948371954
This is a follow up to b129511050
which changes the code block in the docs to python and also
cleans up some of the NumInstancesFilter documentation, specifically
removing wording around "running" instances because the filter
does not care about state, only the number of instances reported
against a host - they could be stopped/paused/etc.
Change-Id: I7cdf79f21415fbf799fa5ad586f1b8063afed272
Since blueprint return-alternate-hosts in Queens, the scheduler
returns a primary selected host and some alternate hosts based
on the max_attempts config option. The only reschedules we have
are during server create and resize/cold migrate. The list of
alternative hosts is passed down from conductor through compute
and back to conductor on reschedule and if conductor gets a list
of alternate hosts on reschedule it will not call the scheduler
again. This means the RetryFilter is effectively useless now since
it shouldn't ever filter out hosts on the first schedule attempt
and because we're using alternates for reschedules, we shouldn't
go back to the scheduler on a reschedule. As a result this change
deprecates the RetryFilter and removes it from the default list
of enabled filters.
Change-Id: Ic0a03e89903bf925638fa26cca3dac7db710dca3
This fixes a couple of places in the admin scheduler config
docs that were listing out the enabled_filters default value
incorrectly because the ComputeFilter was missing. Rather than
try to keep the docs mirrored with the actual default value,
this change references the config option in one spot and avoids
listing the defaults in another.
Change-Id: I837aefcd37556a7b66b523529c5ca1f3dee8ac7f
Closes-Bug: #1833120
We're going to remove all the code, but first, remove the docs.
Part of blueprint remove-consoleauth
Change-Id: Ie96e18ea7762b93b4116b35d7ebcfcbe53c55527
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The RamFilter has been deprecated since the Stein release. We can show
another simple filter class here.
Change-Id: I15a935e80f6656c96c1e208746af1c89bf37b670
Starting in noVNC v1.1.0, the token query parameter is no longer
forwarded via cookie [1]. We must instead use the 'path' query
parameter to pass the token through to the websocketproxy [2].
This means that if someone deploys noVNC v1.1.0, VNC consoles will
break in nova because the code is relying on the cookie functionality
that v1.1.0 removed.
This modifies the ConsoleAuthToken.access_url property to include the
'path' query parameter as part of the returned access_url that the
client will use to call the console proxy service.
This change is backward compatible with noVNC < v1.1.0. The 'path' query
parameter is a long supported feature in noVNC.
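For illustration, the returned access_url now looks roughly like this,
where the host, page and token are made up and the token query is
URL-encoded inside the 'path' parameter:

  http://novnc.example.com:6080/vnc_lite.html?path=%3Ftoken%3D<token>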
Co-Authored-By: melanie witt <melwittt@gmail.com>
Closes-Bug: #1822676
[1] https://github.com/novnc/noVNC/commit/51f9f0098d306bbc67cc8e02ae547921b6f6585c
[2] https://github.com/novnc/noVNC/pull/1220
Change-Id: I2ddf0f4d768b698e980594dd67206464a9cea37b
When the 'nova-manage cell_v2 discover_hosts' command is run in
parallel during a deployment, it results in simultaneous attempts to
map the same compute or service hosts, resulting in tracebacks:
"DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u\"Duplicate
entry 'compute-0.localdomain' for key 'uniq_host_mappings0host'\")
[SQL: u'INSERT INTO host_mappings (created_at, updated_at, cell_id,
host) VALUES (%(created_at)s, %(updated_at)s, %(cell_id)s,
%(host)s)'] [parameters: {'host': u'compute-0.localdomain',
'cell_id': 5, 'created_at': datetime.datetime(2019, 4, 10, 15, 20,
50, 527925), 'updated_at': None}]
This adds more information to the command help and adds a warning
message when duplicate host mappings are detected with guidance about
how to run the command. The command will return 2 if a duplicate host
mapping is encountered and the documentation is updated to explain
this.
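For reference, a single (non-parallel) run is simply something like:

  nova-manage cell_v2 discover_hosts --verbose

and an exit code of 2 now indicates that another process mapped a host
first.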
This also adds a warning to the scheduler periodic task to recommend
enabling the periodic task on only one scheduler to prevent collisions.
We choose to warn and stop instead of ignoring DBDuplicateEntry because
there could potentially be a large number of parallel tasks competing
to insert duplicate records where only one can succeed. If we ignore
and continue to the next record, the large number of tasks will
repeatedly collide in a tight loop until all get through the entire
list of compute hosts that are being mapped. So we instead stop the
colliding task and emit a message.
Closes-Bug: #1824445
Change-Id: Ia7718ce099294e94309103feb9cc2397ff8f5188