These warnings come from oslo.db code (tracked with bug 1814199),
so there isn't much nova can do about that right now, short of
monkey patching oslo.db, which is a bad idea.
Let's ignore the warning until the bug in oslo.db is fixed, to
avoid blowing up our unit/functional test console output logs,
which in turn intermittently triggers subunit.parser failures.
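A minimal sketch of the kind of filter this adds (the module
pattern below is illustrative, not the exact one used):

    import warnings

    # Silence warnings emitted from oslo.db until bug 1814199 is
    # fixed upstream; the module regex here is a placeholder.
    warnings.filterwarnings('ignore', module='oslo_db')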
Change-Id: Ifdfeadae8b348d788de2cd665544015366271d66
Related-Bug: #1813147
This adds a new config option to control the maximum number of disk
devices allowed to attach to a single instance, which can be set per
compute host.
The configured maximum is enforced when device names are generated
during server create, rebuild, evacuate, unshelve, live migrate, and
attach volume. When the maximum is exceeded during server create,
rebuild, evacuate, unshelve, or live migrate, the server will go into
ERROR state and the server fault will contain the reason. When the
maximum is exceeded during an attach volume request, the request fails
fast in the API with a 403 error.
The configured maximum on the destination is not enforced before cold
migrate because the maximum is enforced in-place only (the destination
is not checked over RPC). The configured maximum is also not enforced
on shelved offloaded servers because they have no compute host, and the
option is implemented at the nova-compute level.
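For example, the cap could be set on a compute host in nova.conf
(a sketch; the default of -1 means unlimited):

    [compute]
    max_disk_devices_to_attach = 20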
Part of blueprint conf-max-attach-volumes
Change-Id: Ia9cc1c250483c31f44cdbba4f6342ac8d7fbe92b
If the instance info_cache is corrupted somehow, for example during
a host reboot when the ports aren't wired up properly, or when a
mistaken policy change in neutron results in nova resetting the
info_cache to an empty list, the _heal_instance_info_cache periodic
task is meant to fix it (once the current state of the ports for
the instance in neutron is corrected). However, the task is
currently only refreshing the cache *based* on the current contents
of the cache, which defeats the purpose of neutron being the source
of truth for the ports attached to the instance.
This change makes the _heal_instance_info_cache periodic task pass
a "force_refresh" kwarg. The kwarg defaults to False for backward
compatibility with other methods that refresh the cache after
operations like attach/detach interface; if True, it makes nova get
the current state of the instance's ports from neutron and fully
rebuild the info_cache.
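A minimal sketch of the two call paths (the kwarg threading is
simplified here):

    # Periodic task: rebuild the cache from neutron's view.
    self.network_api.get_instance_nw_info(
        context, instance, force_refresh=True)

    # Existing callers (e.g. after attach/detach) keep the default
    # cache-based refresh.
    self.network_api.get_instance_nw_info(context, instance)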
To avoid losing port order in the info_cache, this change takes the
original order from nova's historical data, which is stored as
VirtualInterfaceList objects. Ports that are not registered as
VirtualInterface objects are added at the end of the port order
list. Because of this, for instances older than Newton another
patch was introduced to fill in the missing VirtualInterface
objects in the DB [1].
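A sketch of the ordering rule (simplified; the real code works on
the VirtualInterfaceList and the neutron port list):

    # Known ports keep their historical VirtualInterface order;
    # unknown ports are appended at the end.
    vif_ids = [vif.uuid for vif in vif_list]
    ordered_ports = sorted(
        ports,
        key=lambda p: (vif_ids.index(p['id'])
                       if p['id'] in vif_ids else len(vif_ids)))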
Long-term we should be able to refactor some of the older refresh
code which leverages the cache to instead use the refresh_vif_id
kwarg so that we do targeted cache updates when we do things like
attach and detach ports, but that's a change for another day.
[1] https://review.openstack.org/#/c/614167
Co-Authored-By: Maciej Jozefczyk <maciej.jozefczyk@corp.ovh.com>
Change-Id: I629415236b2447128ae9a980d4ebe730a082c461
Closes-Bug: #1751923
In change [1] we modified the _heal_instance_info_cache periodic
task to use Neutron's point of view while rebuilding
InstanceInfoCache objects.
The crucial point was how to know the previous order of ports if
the cache was broken. We decided to use VirtualInterfaceList
objects as the source of port order.
For instances older than Newton, VirtualInterface objects don't
exist, so we need to introduce a way of creating them.
This script should be executed while upgrading to the Stein
release.
[1] https://review.openstack.org/#/c/591607
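Assuming the backfill is wired into nova's standard online data
migrations, it can be run in batches during the upgrade:

    $ nova-manage db online_data_migrations --max-count 1000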
Change-Id: Ic26d4ce3d071691a621d3c925dc5cd436b2005f1
Related-Bug: #1751923
When the external DNS service is enabled, use the user's context to
request the dns_name reset instead of using the admin context. The
DNS record needs to be found in the user's zone and recordset.
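A minimal sketch of the difference (client helper as in nova's
neutronv2 API; error handling omitted):

    # Before: the admin client cannot find the record in the
    # user's zone and recordset.
    client = get_client(context, admin=True)

    # After: use the caller's credentials.
    client = get_client(context)
    client.update_port(port_id, {'port': {'dns_name': ''}})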
Change-Id: I35335b501f8961b9ac8e5f92e0686e402b78617b
Closes-Bug: #1812110
When nova needs to create ports in Neutron in a network that has a
minimum bandwidth policy, Nova would need to create allocations for
the bandwidth resources. The port creation happens in the compute
manager after scheduling and resource claiming. Support for this is
considered out of scope for the first iteration of this feature.
To avoid resource allocation inconsistencies, this patch proposes
to reject such requests. This rejection does not break any existing
use case, as the minimum bandwidth policy rule is only supported by
the SRIOV Neutron backend, and Nova only supports booting with
SRIOV ports if those ports are pre-created in Neutron.
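A sketch of the rejection, assuming a helper that detects a minimum
bandwidth rule on the requested network's QoS policy (helper and
exception names here are hypothetical):

    # Hypothetical check before nova creates the port:
    if _has_min_bw_qos_rule(network):
        raise exception.NetworksWithQoSPolicyNotSupported(
            instance=instance.uuid, network_id=network['id'])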
Co-Authored-By: Elod Illes <elod.illes@ericsson.com>
Change-Id: I7e1edede827cf8469771c0496b1dce55c627cf5d
blueprint: bandwidth-resource-provider
Values less than 0 can currently be set in the config option
max_concurrent_live_migrations and are treated as 0.
In the next release, the config option will enforce a minimum
value of 0 and a ValueError will be raised if the value is less
than 0. So add a warning about this upcoming change.
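A minimal sketch of the warning (log text illustrative):

    if CONF.max_concurrent_live_migrations < 0:
        LOG.warning('The value of max_concurrent_live_migrations '
                    'is less than 0; it is treated as 0 now, but '
                    'a ValueError will be raised in the next '
                    'release.')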
Change-Id: Ib23e787cea2e0bfb4ae77e859502d723619cea7c
Before this change, the vrouter VIF type used legacy VIF plugging. This
changeset ports the plugging methods over to an external os-vif plugin,
simplifying the in-tree code.
Miscellaneous notes:
* There are two "vrouter" Neutron VIF types:
* "contrail_vrouter" supporting vhostuser plugging, and
* "vrouter", supporting kernel datapath plugging.
* The VIFGeneric os-vif type is used for the kernel TAP based
plugging when the vnic_type is 'normal' (see the sketch after
these notes).
* For multiqueue support, libvirt 1.3.1 is the minimum required
version. In that case, libvirt creates the TAP device rather than
the os-vif plugin. (This is the minimum version for Rocky and
later.)
ref: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1574957
* The corresponding commit on Tungsten Fabric / OpenContrail for this
work is at:
https://github.com/Juniper/contrail-nova-vif-driver/commit/ed01d315e5707b4f670468454729dc2031c5f780
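For reference, a sketch of what generic os-vif plugging looks like
from the caller's side (all values are illustrative):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()
    v = vif.VIFGeneric(
        id='89c33a9a-9a22-4d42-a47f-9b1b47e4b1f7',
        plugin='vrouter',
        vif_name='tap89c33a9a-9a')
    info = instance_info.InstanceInfo(
        uuid='6cda9d22-33b8-4b4e-9a32-4e61e8b40b2c',
        name='instance-00000001')
    os_vif.plug(v, info)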
Change-Id: I047856982251fddc631679fb2dbcea0f3b0db097
Signed-off-by: Jan Gutter <jan.gutter@netronome.com>
blueprint: vrouter-os-vif-conversion
The unit tests for ComputeNode.obj_make_compatible() made two
mistakes:
* they asserted that a field is not in the primitive, but the
primitive is a dict where the top level keys are nova_object.data,
nova_object.version, etc. So the assertNotIn call succeeded as a
false positive. (The corrected pattern is sketched after this
list.)
* they did not always initialize the tested field in the
ComputeNode object; if a field is not initialized then it never
appears in the primitive.
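A sketch of the corrected pattern (field name and target version
are illustrative):

    node = objects.ComputeNode(mapped=1)
    primitive = node.obj_to_primitive(target_version='1.14')
    # Check the payload, not the top level envelope keys.
    self.assertNotIn('mapped', primitive['nova_object.data'])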
This patch fixes the unit tests, and in the process it was
uncovered that some of the compatibility code was missing from the
ComputeNode ovo, so those pieces are also added now.
Change-Id: I2010f12b591dff381597c577920738712093e4ce
Remove the following configuration options in the 'quota' group
because they have not been used since
Ie01ab1c3a1219f1d123f0ecedc66a00dfb2eb2c1:
- reservation_expire
- until_refresh
- max_age
Change-Id: I56401daa8a2eee5e3aede336b26292f77cc0edd6
Currently, the libvirt driver limits the maximum number of disk
devices allowed to attach to a single instance to 26. If a user
attempts to attach a volume that would bring the total number of
attached disk devices above 26 for the instance, the user receives
a 500 error from the API.
This adds a new exception type, TooManyDiskDevices, raises it for
the "No free disk device names" condition instead of InternalError,
and handles it in the attach volume API. We raise
TooManyDiskDevices directly from the libvirt driver because
InternalError is ambiguous and can be raised for different error
reasons within the same method call.
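A sketch of the resulting flow (the wiring is simplified):

    # Libvirt driver, when no free device name is left:
    raise exception.TooManyDiskDevices(maximum=26)

    # Attach volume API, mapping the failure to a 403:
    except exception.TooManyDiskDevices as e:
        raise exc.HTTPForbidden(explanation=e.format_message())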
Closes-Bug: #1770527
Change-Id: I1b08ed6826d7eb41ecdfc7102e5e8fcf3d1eb2e1
Attaching a port with a minimum bandwidth policy would require
updating the allocation of the server. But for that, nova would
need to select the proper networking resource provider under the
compute resource provider the server is running on.
For the first iteration of the feature we consider this out of
scope. To avoid resource allocation inconsistencies, this patch
proposes to reject such attach interface requests. Rejecting such
an interface attach does not break existing functionality, as today
only the SRIOV Neutron backend supports the minimum bandwidth
policy, and Nova does not support interface attach with SRIOV
interfaces today.
A subsequent patch will handle attaching a network that has a QoS
policy.
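A sketch of the check, assuming the Neutron port carries its
bandwidth needs in a 'resource_request' attribute (the choice of
exception is illustrative):

    port = neutronclient.show_port(port_id)['port']
    if port.get('resource_request'):
        # Reject until allocation updates are supported.
        raise exception.InterfaceAttachFailed(
            instance_uuid=instance.uuid)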
Co-Authored-By: Elod Illes <elod.illes@ericsson.com>
Change-Id: Id8b5c48a6e8cf65dc0a7dc13a80a0a72684f70d9
blueprint: bandwidth-resource-provider
Nova skips detaching ovs dpdk interfaces, thinking they are already
detached, because get_interface_by_cfg() returns no interface.
This is due to _set_config_VIFVHostUser() not setting target_dev
in the configuration, while LibvirtConfigGuestInterface sets
target_dev if a "target" tag is found in the interface.
As target_dev is not a valid value for a vhostuser interface, it
will no longer be checked for the vhostuser type.
Change-Id: Iaf185b98c236df47e44cda0732ee0aed1fd6323d
Closes-Bug: #1807340
When shelving an instance that has volumes attached, with the new
attach/detach flow we delete the old attachment and create a new
attachment, leaving the volume status as 'reserved'.
If the user then tries to detach these volumes, it fails because
Cinder does not allow a begin_detaching() call on a 'reserved'
volume. For shelved instances we can just skip this step and
directly detach.
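A minimal sketch of the skip (vm_states from nova.compute; the
surrounding wiring is simplified):

    from nova.compute import vm_states

    if instance.vm_state not in (vm_states.SHELVED,
                                 vm_states.SHELVED_OFFLOADED):
        self.volume_api.begin_detaching(context, volume_id)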
Change-Id: Ib1799feebbd8f4b0f389168939df7e5e90c8add1
Closes-Bug: #1808089