Commit Graph

53236 Commits

Author SHA1 Message Date
Zuul aa89979e67 Merge "Reject networks with QoS policy" 2019-02-02 17:58:36 +00:00
Zuul 026ecda1d5 Merge "Turn off rp association refresh in nova-next" 2019-02-02 04:01:31 +00:00
Zuul bcecee9ac9 Merge "Commonize _update code path" 2019-02-01 22:04:35 +00:00
Matt Riedemann 1fa2e9c3a0 Ignore SAWarnings for "Evaluating non-mapped column expression"
These warnings come from oslo.db code (tracked with bug 1814199),
so there isn't much nova can do about them right now, outside of
monkey patching oslo.db, which is a bad idea.

Let's ignore the warning until the bug in oslo.db is fixed, to
avoid blowing up our unit/functional test console output logs,
which in turn intermittently triggers subunit.parser failures.

Change-Id: Ifdfeadae8b348d788de2cd665544015366271d66
Related-Bug: #1813147
2019-02-01 11:41:07 -05:00
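The suppression described above can be sketched with Python's stdlib warnings machinery (a minimal illustration, not the actual Nova test fixture; the warning text is abbreviated from the one the commit describes):

```python
import warnings

def ignore_nonmapped_column_warnings():
    # The message argument is a regex matched against the start of the
    # warning text, so this catches every variant of the oslo.db warning.
    warnings.filterwarnings(
        'ignore',
        message='Evaluating non-mapped column expression',
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')   # record everything by default
    ignore_nonmapped_column_warnings()
    warnings.warn('Evaluating non-mapped column expression "foo"')
    warnings.warn('some other warning')

# Only the unrelated warning survives the filter; the oslo.db noise is
# swallowed before it can pollute the test console output.
```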
Zuul 5a4863aa15 Merge "Make 'plugin' a required argument for '_get_vif_instance'" 2019-02-01 00:18:59 +00:00
Zuul 33aad0fe41 Merge "Force refresh instance info_cache during heal" 2019-01-31 14:01:54 +00:00
Zuul ab5a9bba31 Merge "Add fill_virtual_interface_list online_data_migration script" 2019-01-31 13:43:35 +00:00
Zuul 3c3608b171 Merge "Fix using template cell urls with nova-manage" 2019-01-31 13:34:29 +00:00
Zuul 5b5b5749e9 Merge "unused images are always deleted (add to in-tree hyper-v code)" 2019-01-31 13:07:12 +00:00
Zuul 1822e4f9d4 Merge "Fix config docs for handle_virt_lifecycle_events" 2019-01-31 10:38:28 +00:00
Zuul ea32c35cdc Merge "Add configuration of maximum disk devices to attach" 2019-01-31 10:33:56 +00:00
Matt Riedemann 24f0902e2d Fix config docs for handle_virt_lifecycle_events
* Fixes the "workarounds_group" typo.
* Fixes the formatting on the referenced bug.

Change-Id: I1439b76be3febbd89d933928d6419144eb8689ed
2019-01-30 17:52:37 -05:00
Zuul 16dda27748 Merge "Consolidate inventory refresh" 2019-01-30 22:21:07 +00:00
melanie witt bb0906f4f3 Add configuration of maximum disk devices to attach
This adds a new config option to control the maximum number of disk
devices allowed to attach to a single instance, which can be set per
compute host.

The configured maximum is enforced when device names are generated
during server create, rebuild, evacuate, unshelve, live migrate, and
attach volume. When the maximum is exceeded during server create,
rebuild, evacuate, unshelve, or live migrate, the server will go into
ERROR state and the server fault will contain the reason. When the
maximum is exceeded during an attach volume request, the request fails
fast in the API with a 403 error.

The configured maximum on the destination is not enforced before cold
migrate because the maximum is enforced in-place only (the destination
is not checked over RPC). The configured maximum is also not enforced
on shelved offloaded servers because they have no compute host, and the
option is implemented at the nova-compute level.

Part of blueprint conf-max-attach-volumes

Change-Id: Ia9cc1c250483c31f44cdbba4f6342ac8d7fbe92b
2019-01-30 15:47:10 +00:00
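The enforcement point described above happens during device name generation. A minimal sketch of that check, assuming -1 means "unlimited" as the option description implies (this is not the real Nova code, just the shape of the logic):

```python
def generate_device_name(used_names, maximum):
    """Allocate the next free virtio disk device name, honoring the
    configured per-compute-host cap (maximum == -1 means unlimited)."""
    if maximum != -1 and len(used_names) >= maximum:
        # In the real flow this surfaces as an instance fault (server
        # create/rebuild/...) or a fast 403 (attach volume).
        raise ValueError(
            'maximum number of disk devices (%d) exceeded' % maximum)
    for letter in 'abcdefghijklmnopqrstuvwxyz':
        name = '/dev/vd' + letter
        if name not in used_names:
            return name
    raise ValueError('no free disk device names')

print(generate_device_name({'/dev/vda'}, maximum=26))  # /dev/vdb
```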
Zuul 9fab7e73e3 Merge "Reject interface attach with QoS aware port" 2019-01-30 13:27:59 +00:00
Zuul 16dbf086eb Merge "Don't call begin_detaching when detaching volume from shelved vm" 2019-01-30 13:20:40 +00:00
Zuul a219f602f7 Merge "Convert vrouter legacy plugging to os-vif" 2019-01-30 10:40:02 +00:00
Zuul d21ac550c6 Merge "Fix string interpolations in logging calls" 2019-01-30 10:30:02 +00:00
Matt Riedemann ba44c155ce Force refresh instance info_cache during heal
If the instance info_cache is corrupted somehow, like during
a host reboot and the ports aren't wired up properly or
a mistaken policy change in neutron results in nova resetting
the info_cache to an empty list, the _heal_instance_info_cache
is meant to fix it (once the current state of the ports for
the instance in neutron is corrected). However, the task is
currently only refreshing the cache *based* on the current contents
of the cache, which defeats the purpose of neutron being the source
of truth for the ports attached to the instance.

This change makes the _heal_instance_info_cache periodic task
pass a "force_refresh" kwarg, which defaults to False for backward
compatibility with other methods that refresh the cache after
operations like attach/detach interface, and if True will make
nova get the current state of the ports for the instance from neutron
and fully rebuild the info_cache.

To avoid losing port order in the info_cache, this change takes the
original order from nova's historical data, stored as
VirtualInterfaceList objects. Ports that are not registered as
VirtualInterface objects are appended at the end of the port order
list. Because instances older than Newton have no VirtualInterface
records, another patch was introduced to fill in the missing
VirtualInterface objects in the DB [1].

Long-term we should be able to refactor some of the older refresh
code which leverages the cache to instead use the refresh_vif_id
kwarg so that we do targeted cache updates when we do things like
attach and detach ports, but that's a change for another day.

[1] https://review.openstack.org/#/c/614167

Co-Authored-By: Maciej Jozefczyk <maciej.jozefczyk@corp.ovh.com>
Change-Id: I629415236b2447128ae9a980d4ebe730a082c461
Closes-Bug: #1751923
2019-01-30 10:03:26 +00:00
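The ordering rule described above (historical VirtualInterface order first, unrecorded ports appended) can be sketched as a small pure function over port id lists (a simplification of the real cache-rebuild code):

```python
def order_ports(current_port_ids, historical_vif_order):
    """Rebuild the port order for a forced info_cache refresh: ports keep
    their historical VirtualInterface order, and ports with no historical
    record are appended at the end."""
    remembered = [p for p in historical_vif_order if p in current_port_ids]
    new = [p for p in current_port_ids if p not in historical_vif_order]
    return remembered + new

# Port 'c' was attached after the last VirtualInterface record was
# written, so it lands at the end of the rebuilt order.
print(order_ports(['c', 'a', 'b'], ['a', 'b']))  # ['a', 'b', 'c']
```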
Maciej Jozefczyk 3534471c57 Add fill_virtual_interface_list online_data_migration script
In change [1] we modified the _heal_instance_info_cache periodic task
to use Neutron's point of view while rebuilding InstanceInfoCache
objects.
The crucial point was knowing the previous order of ports when the
cache was broken. We decided to use VirtualInterfaceList objects as
the source of port order.
For instances older than Newton, VirtualInterface objects don't
exist, so we need to introduce a way of creating them.
This script should be executed while upgrading to the Stein release.

[1] https://review.openstack.org/#/c/591607

Change-Id: Ic26d4ce3d071691a621d3c925dc5cd436b2005f1
Related-Bug: 1751923
2019-01-30 10:03:19 +00:00
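Online data migrations in nova-manage follow a batching contract: each migration is called with a row limit and reports how many candidate rows it found and migrated, and the driver loops until nothing is left. A toy sketch of that contract (the function and data names here are illustrative, not the real migration):

```python
def run_online_migration(migrate_batch, max_count=50):
    """Sketch of the `nova-manage db online_data_migrations` loop: call
    the migration repeatedly with a batch limit; each call returns
    (found, migrated); stop once nothing more is found."""
    total_migrated = 0
    while True:
        found, migrated = migrate_batch(max_count)
        total_migrated += migrated
        if not found:
            return total_migrated

# Toy migration: create missing records for instances lacking one.
instances = ['i-1', 'i-2', 'i-3']
records = {'i-2'}

def fill_missing(max_count):
    missing = [i for i in instances if i not in records][:max_count]
    for i in missing:
        records.add(i)
    return len(missing), len(missing)

total = run_online_migration(fill_missing, max_count=1)
print(total)  # 2 (two instances were missing records)
```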
Zuul bdc8923101 Merge "Fix port dns_name reset" 2019-01-30 02:16:57 +00:00
Zuul 9c98d5c312 Merge "Raise 403 instead of 500 error from attach volume API" 2019-01-30 02:16:41 +00:00
Zuul 5be2638ebb Merge "docs: Update references to "QEMU-native TLS" document" 2019-01-29 17:36:21 +00:00
Zuul 9ecfe6a66b Merge "libvirt: A few miscellaneous items related to "native TLS"" 2019-01-29 17:36:14 +00:00
Takashi NATSUME 552213e79f Fix string interpolations in logging calls
String interpolation should be delayed to be handled
by the logging code, rather than being done
at the point of the logging call.

* https://docs.openstack.org/oslo.i18n/latest/user/guidelines.html#adding-variables-to-log-messages

A check rule for the string format method will be added
in openstack/hacking.

TrivialFix
Change-Id: I6ec56ec35bcb33d6627a47b66c4f7fc2c6f22658
2019-01-29 15:06:39 +09:00
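The difference between eager and lazy interpolation is easy to demonstrate: with `%`-formatting at the call site, the argument is rendered to a string even when the log level filters the record out, while passing the argument separately lets logging skip the work entirely. A small self-contained demonstration:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records are dropped
LOG = logging.getLogger(__name__)

class Expensive:
    """Counts how often it is actually rendered to a string."""
    renders = 0
    def __str__(self):
        Expensive.renders += 1
        return 'expensive value'

# Eager (discouraged): interpolation happens before the level check.
LOG.debug('value: %s' % Expensive())
eager = Expensive.renders           # rendered once, for nothing

# Lazy (preferred): logging interpolates only if the record is emitted,
# and this DEBUG record never is.
LOG.debug('value: %s', Expensive())
lazy = Expensive.renders - eager    # no additional render
```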
Hang Yang 1b797f6f7e Fix port dns_name reset
When an external DNS service is enabled, use the user's context to
request the dns_name reset instead of the admin context. The DNS
record needs to be found in the user's zone and recordset.

Change-Id: I35335b501f8961b9ac8e5f92e0686e402b78617b
Closes-Bug: #1812110
2019-01-28 14:54:02 -08:00
Balazs Gibizer 8364abecfa Reject networks with QoS policy
When nova needs to create ports in Neutron in a network that has a
minimum bandwidth policy, nova would need to create allocations for
the bandwidth resources. The port creation happens in the compute
manager after scheduling and resource claiming. Support for this is
considered out of scope for the first iteration of this feature.

To avoid resource allocation inconsistencies, this patch proposes to
reject such requests. This rejection does not break any existing use
case, as the minimum bandwidth policy rule is only supported by the
SRIOV Neutron backend, and nova only supports booting with SRIOV
ports if those ports are pre-created in Neutron.

Co-Authored-By: Elod Illes <elod.illes@ericsson.com>

Change-Id: I7e1edede827cf8469771c0496b1dce55c627cf5d
blueprint: bandwidth-resource-provider
2019-01-28 15:50:25 +01:00
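The rejection described above amounts to a validation pass over the requested networks before any port is created. A hedged sketch, with networks represented as plain dicts and the exception name chosen for illustration:

```python
class NetworksWithQoSPolicyNotSupported(Exception):
    """Illustrative exception for the rejection described above."""

def validate_requested_networks(requested_networks):
    """Refuse to boot when any requested network carries a QoS policy,
    since nova cannot yet create the bandwidth allocations for ports it
    creates itself."""
    for net in requested_networks:
        if net.get('qos_policy_id') is not None:
            raise NetworksWithQoSPolicyNotSupported(
                'Using networks with QoS policy is not supported for '
                'instance creation (network %s).' % net['id'])

# A network without a policy passes validation.
validate_requested_networks([{'id': 'net-1', 'qos_policy_id': None}])
```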
Zuul c134feda3d Merge "Skip checking of target_dev for vhostuser" 2019-01-28 14:25:32 +00:00
Zuul 907c7d2cfe Merge "Fix ComputeNode ovo compatibility code" 2019-01-26 23:27:24 +00:00
Zuul dfd3ad3214 Merge "Add a warning for max_concurrent_live_migrations" 2019-01-26 15:59:00 +00:00
Zuul 15703056ab Merge "Cleanup vendordata docs" 2019-01-26 07:14:15 +00:00
Zuul c8926feb26 Merge "Remove unused quota options" 2019-01-26 05:27:42 +00:00
Takashi NATSUME e607a1e564 Add a warning for max_concurrent_live_migrations
Values less than 0 can currently be set for the config option
max_concurrent_live_migrations and are treated as 0. The next release
will set a minimum value of 0 on the config option and raise a
ValueError if the value is less than 0.
So add a warning about the change coming in the next release.

Change-Id: Ib23e787cea2e0bfb4ae77e859502d723619cea7c
2019-01-25 20:18:55 +00:00
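The transitional behavior described above (accept the negative value for now, warn, and normalize it) can be sketched as follows; the function name is illustrative and "0 means unlimited" is assumed from the option's current semantics:

```python
import warnings

def normalize_max_concurrent_live_migrations(value):
    """Sketch: negative values are still accepted and treated as 0
    (unlimited), but emit a warning that the next release will enforce a
    minimum of 0 and raise ValueError instead."""
    if value < 0:
        warnings.warn(
            'Negative values for max_concurrent_live_migrations are '
            'deprecated and will raise ValueError in the next release; '
            'treating %d as 0 (unlimited).' % value, FutureWarning)
        return 0
    return value
```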
Jan Gutter 172855f293 Convert vrouter legacy plugging to os-vif
Before this change, the vrouter VIF type used legacy VIF plugging. This
changeset ports the plugging methods over to an external os-vif plugin,
simplifying the in-tree code.

Miscellaneous notes:

 * There are two "vrouter" Neutron VIF types:
    * "contrail_vrouter" supporting vhostuser plugging, and
    * "vrouter", supporting kernel datapath plugging.
 * The VIFGeneric os-vif type is used for the kernel TAP based
   plugging when the vnic_type is 'normal'.
 * For multiqueue support, libvirt 1.3.1 is the minimum required
   version. In that case, libvirt creates the TAP device rather than
   the os-vif plugin. (This is already the minimum version for Rocky
   and later.)
   ref: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1574957
 * The corresponding commit on Tungsten Fabric / OpenContrail for this
   work is at:
   https://github.com/Juniper/contrail-nova-vif-driver/commit/ed01d315e5707b4f670468454729dc2031c5f780

Change-Id: I047856982251fddc631679fb2dbcea0f3b0db097
Signed-off-by: Jan Gutter <jan.gutter@netronome.com>
blueprint: vrouter-os-vif-conversion
2019-01-25 17:17:55 +02:00
Zuul e3fc005e4e Merge "Add missing ws separator between words" 2019-01-25 14:22:29 +00:00
Balazs Gibizer 819961c2c0 Fix ComputeNode ovo compatibility code
The unit tests for ComputeNode.obj_make_compatible() made two mistakes:
* They asserted that a field is not in the primitive, but the
  primitive is a dict whose top-level keys are nova.object_data,
  nova.object_version, etc., so the assertNotIn call succeeded as a
  false positive.
* They did not always initialize the tested field in the ComputeNode
  object; if a field is not initialized, it is never in the
  primitive.

This patch fixes the unit tests, and in the process it was uncovered
that some of the compatibility code was missing from the ComputeNode
ovo. That code is also added now.

Change-Id: I2010f12b591dff381597c577920738712093e4ce
2019-01-25 14:17:59 +00:00
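The false positive above hinged on where a field lives in the serialized object: the ovo primitive is an envelope of metadata keys, and the fields sit one level down in the data payload. A sketch of the back-compat pattern obj_make_compatible implements, using a hypothetical envelope layout and field name (not the real ComputeNode schema):

```python
def obj_make_compatible(primitive, target_version, field_added_in):
    """Drop fields newer than target_version from the *data* payload --
    not from the outer envelope, whose keys are object metadata (which
    is why asserting against the envelope always passed)."""
    data = primitive['data']  # hypothetical envelope layout
    for field, added_in in field_added_in.items():
        if target_version < added_in and field in data:
            del data[field]
    return primitive

# Hypothetical object: 'extra_field' was added in version 1.18, so a
# 1.17 consumer must not see it.
node = {'name': 'ComputeNode', 'version': (1, 18),
        'data': {'uuid': 'abc', 'extra_field': 42}}
obj_make_compatible(node, target_version=(1, 17),
                    field_added_in={'extra_field': (1, 18)})
print(node['data'])  # {'uuid': 'abc'}
```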
Zuul f8c260864e Merge "Kill @safe_connect in _get_provider_traits" 2019-01-25 14:06:00 +00:00
Zuul 0a50198158 Merge "Move interface enabling to privsep." 2019-01-25 12:32:49 +00:00
Takashi NATSUME 4743b08f47 Remove unused quota options
Remove the following configuration options in the 'quota' group
because they have not been used since
Ie01ab1c3a1219f1d123f0ecedc66a00dfb2eb2c1.

- reservation_expire
- until_refresh
- max_age

Change-Id: I56401daa8a2eee5e3aede336b26292f77cc0edd6
2019-01-25 15:17:15 +09:00
melanie witt 6489f2d2b4 Raise 403 instead of 500 error from attach volume API
Currently, the libvirt driver limits the maximum number of disk
devices allowed to attach to a single instance to 26. If a user
attempts to attach a volume that would bring the total number of
attached disk devices for the instance above 26, the user receives a
500 error from the API.

This adds a new exception type TooManyDiskDevices and raises it for the
"No free disk devices names" condition, instead of InternalError, and
handles it in the attach volume API. We raise TooManyDiskDevices
directly from the libvirt driver because InternalError is ambiguous and
can be raised for different error reasons within the same method call.

Closes-Bug: #1770527

Change-Id: I1b08ed6826d7eb41ecdfc7102e5e8fcf3d1eb2e1
2019-01-25 01:21:41 +00:00
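The fix above is an exception-translation pattern: the driver raises a specific exception type, and the API layer maps it to a client error instead of letting it bubble up as a 500. A minimal sketch (status codes stand in for the real webob responses):

```python
class TooManyDiskDevices(Exception):
    """Raised by the driver when no free disk device name remains."""

def attach_volume(do_attach):
    """Sketch of the API-layer handling: translate the specific driver
    exception into a 403 client error instead of a generic 500."""
    try:
        do_attach()
        return 200
    except TooManyDiskDevices:
        return 403

def too_many():
    raise TooManyDiskDevices('no free disk device names')

print(attach_volume(too_many))       # 403
print(attach_volume(lambda: None))   # 200
```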
Zuul 9419c3e054 Merge "Per aggregate scheduling weight" 2019-01-24 19:58:52 +00:00
Balazs Gibizer bd6f33070b Reject interface attach with QoS aware port
Attaching a port with minimum bandwidth policy would require to update
the allocation of the server. But for that nova would need to select the
proper networking resource provider under the compute resource provider
the server is running on.

For the first iteration of the feature we consider this out of scope. To
avoid resource allocation inconsistencies this patch propose to reject
such attach interface request. Rejecting such interface attach does not
break existing functionality as today only the SRIOV Neutron backend
supports the minimum bandwidth policy but Nova does not support
interface attach with SRIOV interfaces today.

A subsequent patch will handle attaching a network that has QoS policy.

Co-Authored-By: Elod Illes <elod.illes@ericsson.com>

Change-Id: Id8b5c48a6e8cf65dc0a7dc13a80a0a72684f70d9
blueprint: bandwidth-resource-provider
2019-01-24 16:56:43 +01:00
arches a19c38a6ab Skip checking of target_dev for vhostuser
Nova skips detaching of OVS-DPDK interfaces,
thinking they are already detached, because
get_interface_by_cfg() returns no interface.
This is due to _set_config_VIFVHostUser()
not setting target_dev in the configuration, while
LibvirtConfigGuestInterface sets target_dev
if the tag "target" is found in the interface.

As target_dev is not a valid value for a
vhostuser interface, it will not be checked
for the vhostuser type.

Change-Id: Iaf185b98c236df47e44cda0732ee0aed1fd6323d
Closes-Bug: #1807340
2019-01-24 15:26:17 +00:00
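The mismatch above comes from comparing a live config (which carries target_dev, parsed from the guest XML) against an expected config (which never sets it for vhostuser). The fix amounts to excluding that field from the comparison for the vhostuser type; a sketch with configs as plain dicts:

```python
def configs_match(actual, expected, vif_type):
    """Compare interface configs field by field, but skip target_dev for
    vhostuser, where the expected config never carries it even though
    the parsed guest XML does."""
    skip = {'target_dev'} if vif_type == 'vhostuser' else set()
    keys = (set(actual) | set(expected)) - skip
    return all(actual.get(k) == expected.get(k) for k in keys)

live = {'mac': 'fa:16:3e:aa:bb:cc', 'target_dev': 'vhu123'}
wanted = {'mac': 'fa:16:3e:aa:bb:cc'}
print(configs_match(live, wanted, 'vhostuser'))  # True
```

For any other VIF type the stray target_dev still counts as a mismatch, so the original detach behavior is preserved.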
Stephen Finucane c43ff8ac4b Make 'plugin' a required argument for '_get_vif_instance'
'plugin' is the one argument for 'os_vif.objects.vif.VIFBase' (and
therefore all subclasses) that doesn't have a default and is therefore
required [1]. Enforce this on the nova side and prevent possible slip
ups like those seen in [2].

[1] https://github.com/openstack/os-vif/blob/1.11.1/os_vif/objects/vif.py#L27-L52
[2] https://review.openstack.org/#/c/565471/4/nova/network/os_vif_util.py@408

Change-Id: I9598008deff92fae704786b467ef622848f55cf9
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2019-01-23 17:11:30 +00:00
zhufl 9f04a0b37b Add missing ws separator between words
This is to add the missing whitespace separator between words.

Change-Id: I4e892e6b75aa5c222ec8154f2f4ad832b556ccbf
2019-01-23 15:42:32 +08:00
Zuul 56811efa35 Merge "Move simple execute call to processutils." 2019-01-23 02:09:13 +00:00
Zuul 4a5267f4c9 Merge "Convert port to str when validate console port" 2019-01-23 00:21:58 +00:00
Zuul 5025c74290 Merge "Extend NeutronFixture to return port with resource request" 2019-01-22 23:12:14 +00:00
Zuul cb4ea29266 Merge "Use X-Forwarded-Proto as origin protocol if present" 2019-01-22 23:12:08 +00:00
Kevin_Zheng 41b982c9fe Don't call begin_detaching when detaching volume from shelved vm
When shelving an instance that has volumes attached,
with the new attach/detach flow we delete the old attachment
and create a new attachment, leaving the volume status as ``reserved``.

If the user then tries to detach these volumes, the request fails
because Cinder does not allow a begin_detaching() call on a
``reserved`` volume.

For shelved instances we can simply skip this step and
detach the volume directly.

Change-Id: Ib1799feebbd8f4b0f389168939df7e5e90c8add1
closes-bug: #1808089
2019-01-22 17:24:17 -05:00
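The skip described above is a simple state check before the Cinder call. A sketch with a stub volume object standing in for the Cinder client (names are illustrative):

```python
def detach(vm_state, volume):
    """For shelved instances the attachment was recreated during shelve
    and the volume sits in 'reserved' state, so Cinder's
    begin_detaching() would fail; skip straight to the detach."""
    if vm_state not in ('shelved', 'shelved_offloaded'):
        volume.begin_detaching()
    volume.detach()

class FakeVolume:
    """Stub recording which Cinder calls were made."""
    def __init__(self):
        self.calls = []
    def begin_detaching(self):
        self.calls.append('begin_detaching')
    def detach(self):
        self.calls.append('detach')

vol = FakeVolume()
detach('shelved_offloaded', vol)
print(vol.calls)  # ['detach']
```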