The API documentation is now published on docs.openstack.org instead
of developer.openstack.org. Update all affected links to point to the
new location.
Note that Neutron now publishes to api-ref/network rather than
api-ref/networking.
Redirects will be set up as well, but let's point directly to the new
location now.
For details, see:
http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007828.html
Change-Id: Id2cf3aa252df6db46575b5988e4937ecfc6792bb
Some options are now automatically configured by version 1.20:
- project
- html_last_updated_fmt
- latex_engine
- latex_elements
- version
- release
Change-Id: I3a5c7e115d0c4f52b015d0d55eb09c9836cd2fe7
Ubuntu 12.04 is rather long in the tooth now. Remove the bindep markers
for it.
Change-Id: Ie5c2d7ab1e3e637a1d42712e22a7a6e6d6427020
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
When starting nova-compute for the first time with a new node,
the ResourceTracker will create a new ComputeNode record in
_init_compute_node but without all of the fields set on the
ComputeNode, for example "free_disk_gb".
Later, _update_usage_from_instances will set some fields, like
free_disk_gb, on the ComputeNode record (even if there are no
instances on the node; why, I don't know).
This will make the eventual call from _update() to _resource_change()
update the value in the old_resources dict and return True, after
which _update() will try to persist those ComputeNode changes to the
database.
If that update fails, for example due to a DBConnectionError, the
value in old_resources will reflect the current in-memory version of
the node rather than what is actually in the database.
Note that this failure does not result in the compute service failing
to start because ComputeManager._update_available_resource_for_node
traps the Exception and just logs it.
A subsequent trip through the RT._update() method - because of the
update_available_resource periodic task - will call _resource_change,
but because old_resources matches the current state of the node, it
returns False and the RT does not attempt to persist the changes to
the DB. _update() will then go on to call _update_to_placement,
which will create the resource provider in placement along with its
inventory, making it potentially a candidate for scheduling.
This can be a problem later in the scheduler because the
HostState._update_from_compute_node method may skip setting fields
on the HostState object if free_disk_gb is not set in the
ComputeNode record - which can then break filters and weighers
later in the scheduling process (see bug 1834691 and bug 1834694).
The fix proposed here is simple: if the ComputeNode.save() in
RT._update() fails, restore the previous value in old_resources
so that the subsequent run through _resource_change will compare the
correct state of the object and retry the update.
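A minimal sketch of the idea (a toy model of the caching behaviour
described above, not the actual nova code; names follow the
description):

    import copy

    class MiniTracker:
        def __init__(self):
            # nodename -> the node state we last believe was persisted
            self.old_resources = {}

        def _resource_change(self, nodename, node):
            if self.old_resources.get(nodename) == node:
                return False
            self.old_resources[nodename] = copy.deepcopy(node)
            return True

        def _update(self, nodename, node, save_to_db):
            # Capture the prior cached state *before* _resource_change
            # overwrites it with the new state.
            old = copy.deepcopy(self.old_resources.get(nodename))
            if not self._resource_change(nodename, node):
                return  # no diff, nothing to persist
            try:
                save_to_db(node)
            except Exception:
                # The fix: restore the cache so the next periodic run
                # sees a diff again and retries the DB update.
                self.old_resources[nodename] = old
                raise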
An alternative to this would be killing the compute service on startup
if there is a DB error but that could have unintended side effects,
especially if the DB error is transient and can be fixed on the next
try.
Obviously the scheduler code needs to be more robust also, but those
improvements are left for separate changes related to the other bugs
mentioned above.
Also, ComputeNode.update_from_virt_driver could be updated to set
free_disk_gb if possible to work around the tight coupling in the
HostState._update_from_compute_node code, but that's also sort of
a whack-a-mole type change best made separately.
Change-Id: Id3c847be32d8a1037722d08bf52e4b88dc5adc97
Closes-Bug: #1834712
If more than one numbered request group is present in the placement
allocation candidates (a_c) query then the group_policy is mandatory.
Based on the PTG discussion [1], 'none' seems to be a good default
policy from the nova perspective. So this patch makes sure that if the
group_policy is not provided in the flavor extra_spec, there is more
than one numbered group in the request, and the flavor itself provides
at most one group (so the extra groups are coming from other sources
like neutron ports), then the group_policy is defaulted to 'none'.
The reasoning behind this change: if more than one numbered request
group is coming from the flavor extra_spec then the creator of the
flavor is responsible for adding a group_policy to the flavor. So in
this case nova only warns but lets the request fail in placement to
force fixing of the flavor. However, when numbered groups are coming
from other sources (like neutron ports), the creator of the flavor
cannot know whether additional groups will be included, so we don't
force the flavor creator but simply default the group_policy.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005807.html
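Illustratively, the defaulting rule boils down to something like this
(a hypothetical helper, not the actual nova code):

    def default_group_policy(flavor_groups, total_groups, group_policy):
        # group_policy only matters when there is more than one
        # numbered request group in the whole a_c query.
        if group_policy is not None or total_groups <= 1:
            return group_policy
        if flavor_groups <= 1:
            # The extra groups come from other sources (e.g. neutron
            # ports); the flavor creator could not have known about
            # them, so default the policy.
            return 'none'
        # More than one group came from the flavor itself: warn and
        # let placement reject the request so the flavor gets fixed.
        return None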
Change-Id: I0681de217ed9f5d77dae0d9555632b8d160bb179
Release 3.15.0 of keystoneauth1 introduced the ability to pass
X-Openstack-Request-Id to request methods (get/put/etc) via a
global_request_id kwarg rather than having to put it in a headers dict.
This commit bumps the minimum ksa level to 3.15.0 and takes advantage of
the new kwarg to replace explicit header construction in
SchedulerReportClient (Placement) and neutronv2/api methods.
Also normalizes the way param lists were being passed from
SchedulerReportClient's REST primitives (get/put/post/delete) into the
Adapter equivalents. There was no reason for them to be different.
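For illustration, the change amounts to something like the following
(assuming a context object with a global_id attribute, as in nova;
the exact call sites differ):

    # Before: build the header by hand.
    resp = client.get(
        url, headers={'X-Openstack-Request-Id': context.global_id})

    # After: let keystoneauth1 (>= 3.15.0) construct the header.
    resp = client.get(url, global_request_id=context.global_id)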
Change-Id: I2f6eb50f4cb428179ec788de8b7bd6ef9bbeeaf9
Our minimum for libvirt is now 3.0.0 while our QEMU minimum is 2.8.0,
meaning some checks for older versions can be removed.
Change-Id: Ibecdfb1e903d3c1f711e1d61212be00176110a9b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
One TODO noted that a block could be removed once we bump to libvirt
1.3.8 or greater. We require 3.0.0 now, so that's resolved. Another one
looks like it should be resolved in 3.2.0, so the TODO is updated to
highlight this for future reviewers.
Change-Id: I5235751b1dbc77ecc919eec7f3e022cd70085051
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
'getattr' is really powerful and we make extensive use of it in nova.
However, the way we've used it for VIF lookups, where we use it to
retrieve functions by a key, seems to be a bit of an anti-pattern. Not
only does it completely break the static code analysers that we can
use to help us root out code that's not tested (or is tested but
never used in production) but, more importantly, it makes it so much
more difficult to figure out what on earth is going on in an already
complex part of the codebase.
Be verbose and, in the absence of a true switch statement in Python, use
simple if-else blocks to do the lookups. Due to how this is done, we're
able to remove a few previously no-op functions. Funnily enough, this
actually results in fewer LOC despite being more explicit. #winning?
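Roughly, the pattern changes from dynamic lookup to explicit dispatch,
as in this simplified, self-contained sketch (function names are
illustrative, not the actual nova ones):

    def _vif_bridge(vif):
        return 'bridge:%s' % vif

    def _vif_ovs(vif):
        return 'ovs:%s' % vif

    # Before: func = getattr(module, '_vif_%s' % vif_type, None),
    # which static analysers cannot follow.
    # After: explicit and greppable.
    def convert(vif, vif_type):
        if vif_type == 'bridge':
            return _vif_bridge(vif)
        elif vif_type == 'ovs':
            return _vif_ovs(vif)
        raise NotImplementedError('Unsupported VIF type %s' % vif_type)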
Change-Id: Idf08adca1e3a0d19e20bca2447c83f7372516cb7
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
These will never be reached since the '_nova_to_osvif_vif_vhostuser'
function in the 'nova.network.os_vif_util' module provides a
fallthrough case since change Ifab3006454708ab290b93f02d82b794c334c3946.
Change-Id: I14ab55178692ff13df114a4c628430561df1a55e
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The nova context request_id is not propagated to neutron for
port binding operations. Fix that.
Change-Id: I76163c46b1f01ba7ff592d162b106ea2e5bb34cb
Closes-Bug: #1829914
Since 4817165fc5, when reverting a
resized instance back to the source host, the libvirt driver waits for
vif-plugged events when spawning the instance. When called from
finish_revert_resize() in the source compute manager, libvirt's
finish_revert_migration() does not pass vifs_already_plugged to
_create_domain_and_network(), making the latter use the default False
value.
When the source compute manager calls
network_api.migrate_instance_finish() in finish_revert_resize(), this
updates the port binding back to the source host. If Neutron is
configured to use OVS hybrid plug, it will send the vif-plugged event
immediately after completing this request. This happens before the
virt driver's finish_revert_migration() method is called. This causes
the wait in the libvirt driver to time out because the event is
received before Nova starts waiting for it.
The neutron ovs l2 agent sends vif-plugged events when two conditions
are met. First the port must be bound to the host managed by the
l2 agent and second, the agent must have completed configuring the
port on ovs. This involves assigning the port a local VLAN for tenant
isolation, applying security group rules if required and applying
QoS policies or other agent extensions like service function chaining.
During the boot process, we first bind the port to the host and then
plug the interface into ovs, which triggers the l2 agent to configure
it, resulting in the emission of the vif-plugged event.
In the revert case, as noted above, since the vif is already plugged
on the source node when hybrid-plug is used, binding the port to the
source node fulfils the remaining condition to send the vif-plugged
event.
Events sent immediately after port binding update are hereafter known
as "bind-time" events. For ports that do not use OVS hybrid plug,
Neutron will continue to send vif-plugged events only when Nova
actually plugs the VIF. These types of events are hereafter known as
"plug-time" events. OVS hybrid plug is a per agent setting, so for
a particular host, bind-time events are an all-or-nothing thing for the
ovs backend: either all VIF_TYPE=ovs ports have them, or no ovs ports
have them. In general, a host will only have one network backend.
The only exception to this is SR-IOV. SR-IOV is commonly deployed on
the same host as other network backends such as OVS or linuxbridge.
SR-IOV ports with VNIC_TYPE=direct-physical will always have only
bind-time events. If an instance mixes OVS ports with hybrid-plug=False
with direct physical ports, it will have both kinds of events.
For same-host resize reverts we do not update the binding host, since
the host does not change, and as such we do not receive bind-time
events. We therefore do not wait for bind-time events in the compute
manager for same-host reverts.
This patch adds functions to the NetworkInfo model that return what
kinds of events each VIF has. These are then used in the migration
revert logic to decide when to wait for external events: in the
compute manager, when binding the port, for bind-time events,
and/or in libvirt, when plugging the VIFs, for plug-time events.
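A rough sketch of the helpers described above (a simplification; the
actual method names and VIF attributes in the nova code may differ):

    def get_bind_time_events(vifs):
        # Events sent by neutron when the port binding is updated.
        return [('network-vif-plugged', vif['id'])
                for vif in vifs if vif['is_hybrid_plug']]

    def get_plug_time_events(vifs):
        # Events sent by neutron when nova actually plugs the VIF.
        return [('network-vif-plugged', vif['id'])
                for vif in vifs if not vif['is_hybrid_plug']]

    def has_bind_time_events(vifs):
        return bool(get_bind_time_events(vifs))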
Closes-bug: #1832028
Closes-Bug: #1833902
Co-Authored-By: Sean Mooney <work@seanmooney.info>
Change-Id: I51673e58fc8d5f051df911630f6d7a928d123a5b
Change Ic857918b15496049b5ccacde9515f130cc0bd7e9 against
openstack-manuals updated the quotas document to use openstackclient
commands in place of novaclient commands. It missed the fact that you
need to pass the '--class' parameter if you wish to set a quota for a
class rather than a project. Correct this.
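For example (the quota value shown is illustrative):

    openstack quota set --class --instances 20 default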
Change-Id: I5dc65924fee65f6340d1495a9b1b992001c30731
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Closes-Bug: #1834057
The 'binary' parameter has been changed to 'source' since change
I95b5b0826190d396efe7bfc017f6081a6356da65, but the notification
document has not been updated yet.
Replace the 'binary' parameter with the 'source' parameter there.
Change-Id: I141c90ac27d16f2e9c033bcd2f95ac08904a2f52
Closes-Bug: #1836005
Add links to the document describing how to add support for a new
microversion in python-novaclient.
Depends-On: https://review.opendev.org/667002
Change-Id: Ic58afe401464a0da2b19306e7cc6ce412f177b16
Replace the link to the NovaAPIRef wiki with
the link to the API reference guideline in the nova doc.
Change-Id: I211e828c54256391aea38e475171e92aac230e56
ProviderTree used to keep track of root providers in a list. Since we
don't yet have sharing providers, this would always be a list of one for
non-ironic deployments, or N for ironic deployments of N nodes.
To find a provider (by name or UUID), we would iterate over this list,
an O(N) operation. For large ironic deployments, this added up fast -
see the referenced bug.
With this change, we store roots in two dicts: one keyed by UUID, one
keyed by name. To find a provider, we first check these dicts. If the
provider we're looking for is a root, this is now O(1). (If it's a
child, it would still be O(N), because we iterate over all the roots
looking for a descendant that matches. But ironic deployments don't have
child providers (yet?) (right?) so that should be n/a. For non-ironic
deployments it's unchanged: O(M) where M is the number of descendants,
which should be very small for the time being.)
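A sketch of the lookup described above (simplified; the real
attribute and method names may differ, and find_descendant is assumed
to walk a root's subtree):

    roots_by_uuid = {}  # uuid -> provider
    roots_by_name = {}  # name -> provider

    def find(name_or_uuid):
        # O(1) fast path when the sought provider is a root.
        provider = (roots_by_uuid.get(name_or_uuid)
                    or roots_by_name.get(name_or_uuid))
        if provider is not None:
            return provider
        # Otherwise fall back to walking descendants (O(M)).
        for root in roots_by_uuid.values():
            found = root.find_descendant(name_or_uuid)
            if found is not None:
                return found
        return None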
Test note: Existing tests in nova.tests.unit.compute.test_provider_tree
thoroughly cover all the affected code paths. There was one usage of
ProviderTree.roots that was untested and broken (even before this
change) which is now fixed.
Change-Id: Ibf430a8bc2a2af9353b8cdf875f8506377a1c9c2
Closes-Bug: #1816086
Change Iefd7a60139043929aee63a3660fabdded1622029 made these
mocks unnecessary so this change just removes them.
Change-Id: I6fbeef09d868ff7c179f8e791944cc6e8ae10802
As with Iea948bcc43315286e5c130485728152d4710bfcb for the
devstack-plugin-ceph-tempest job, this change disables ssh validation
in the nova-lvm job to avoid commonly seen failures:
http://status.openstack.org/elastic-recheck/#1808010
Related-Bug: #1802971
Change-Id: I566f9a630d06226252bde800d07aba34c6876857