In the absence of a specified fixed address with which to associate a
floating IP, the first IPv4 address on the port should be associated.
Without the check for IPv4, IPv6 ports can be associated with an (IPv4)
floating IP, which is not supported.
Change-Id: Ib66a9109cc1c7999474daca5970d0af1f70886e4
Closes-Bug: 1437855
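The selection rule described above can be sketched as follows (a minimal illustration, not Nova's actual code; `fixed_ips` mimics the list of IP dicts carried on a Neutron port):

```python
import ipaddress

def first_ipv4_fixed_ip(fixed_ips):
    """Return the first IPv4 address on a port, or None if the port
    only has IPv6 addresses (in which case association must fail)."""
    for entry in fixed_ips:
        ip = ipaddress.ip_address(entry['ip_address'])
        if ip.version == 4:
            return str(ip)
    return None
```

With the IPv4 check in place, a port carrying only IPv6 addresses yields no candidate, so the floating IP association is rejected instead of silently binding to an IPv6 address.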
This test was triggering an _instance_update() which waited for 60s on
a non-existent conductor service. Stubbing it out makes the test go
from taking 60s to (basically) 0s.
Change-Id: I412a1b47532be2450743c54aba52fc6e47de90c0
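The stubbing pattern these test fixes rely on can be sketched like this (a minimal illustration using unittest.mock; the class and method bodies are stand-ins, not Nova's actual code):

```python
from unittest import mock

class ComputeManager:
    def _instance_update(self, instance_uuid, **kwargs):
        # In the real code this makes an RPC call to conductor and
        # blocks up to 60s when no conductor service is running.
        raise RuntimeError("would block waiting for conductor")

    def stop_instance(self, instance_uuid):
        self._instance_update(instance_uuid, task_state=None)
        return "stopped"

# Stub out the conductor round trip so the test returns immediately.
with mock.patch.object(ComputeManager, '_instance_update') as stub:
    result = ComputeManager().stop_instance('fake-uuid')

stub.assert_called_once_with('fake-uuid', task_state=None)
```

The stub both removes the 60s wait and lets the test verify the update was attempted with the expected arguments.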
This test was triggering an _instance_update(), which waited for 60s on a
non-existent conductor service. Stubbing it out makes the test go from
taking 60s to (basically) 0s.
Change-Id: Iafc9ff73162d1b767e0ef4b694e187df1714f7e6
This test was triggering two attempts to revert the instance task state,
which was making a call to conductor that would never return because
there is no conductor running for this test. Since we were in a
save-and-reraise block, this just got ignored. Thus, this test used to take 120s
purely because it was waiting for two such attempts, at 60s each. Now it
takes (basically) zero time. Yay.
Change-Id: Ibe63cf4c47b3966dc95f70d5a0c9907ae5168264
When a compute host is rebooted, guest VMs can be shut down
automatically by the hypervisor, and the virt driver sends events to
the compute manager to handle them. If the compute service is still up
while this happens, it will try to call the stop API to power off the
instance and update the database to show the instance as stopped.
When the compute service comes back up and events come in from the virt
driver that the guest VMs are running, nova will see that the vm_state
on the instance in the nova database is STOPPED and shut down the
instance by calling the stop API (basically ignoring the guest VM
state that the virt driver / hypervisor reports to nova).
Alternatively, if the compute service shuts down after changing the
instance task_state to 'powering-off' but before the stop API cast
completes, the instance can be left in a strange vm_state/task_state
combination that requires the admin to manually reset the task_state to
recover the instance.
Let's just try to avoid some of this mess by disconnecting the event
handling when the compute service is shutting down, as we do for
neutron VIF plugging events. There could still be races here if the
compute service shuts down after the hypervisor (e.g. libvirtd), but
this is at least a best-effort attempt to mitigate the potential
damage.
Closes-Bug: #1444630
Related-Bug: #1293480
Related-Bug: #1408176
Change-Id: I1a321371dff7933cdd11d31d9f9c2a2f850fd8d9
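The gating behavior described above can be sketched as follows (a minimal illustration of the pattern, not Nova's actual code; class and method names are hypothetical):

```python
import threading

class LifecycleEventDispatcher:
    """Drop virt driver lifecycle events once host shutdown begins,
    mirroring what is already done for neutron VIF plugging events."""
    def __init__(self):
        self._shutting_down = threading.Event()
        self.handled = []

    def cleanup_host(self):
        # Called when the compute service starts shutting down.
        self._shutting_down.set()

    def handle_lifecycle_event(self, event):
        if self._shutting_down.is_set():
            return  # ignore events racing with host shutdown
        self.handled.append(event)

d = LifecycleEventDispatcher()
d.handle_lifecycle_event('STOPPED')
d.cleanup_host()
d.handle_lifecycle_event('STARTED')  # dropped during shutdown
```

Events arriving before shutdown are processed normally; anything received afterwards is discarded rather than triggering stop API calls against guests the hypervisor is taking down itself.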
As described in the api-microversions nova-spec, the versions API needs
to expose the minimum and maximum microversions, because clients need
to discover the available microversions through the API. That is very
important for interoperability.
This patch adds these versions as the nova-spec describes.
Note:
Following the v2 (not v2.1) API change convention, we have added a new
extension whenever changing the API. However, this patch does not add a
new extension even though it adds the new parameters "version" and
"min_version", because the versions API is independent of both the v2
and v2.1 APIs.
Closes-Bug: #1443375
Change-Id: Id464a07d624d0e228fe0aa66a04c8e51f292ba0c
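The resulting payload can be sketched like this (an illustrative shape only, with example microversion values; not a verbatim response from the API):

```python
import json

# Illustrative version entry after this change: "version" carries the
# maximum supported microversion and "min_version" the minimum, so a
# client can negotiate a microversion before issuing requests.
versions_response = {
    "version": {
        "id": "v2.1",
        "status": "CURRENT",
        "version": "2.3",      # example maximum microversion
        "min_version": "2.1",  # example minimum microversion
    }
}
print(json.dumps(versions_response, sort_keys=True))
```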
Commit ebfa09fa19 added an RPC proxy, but as part of that change it
passed migrate_data=None to pre_live_migration, which breaks live block
migration when not using shared storage.
Closes-Bug: #984996
Change-Id: I2a83f1fb0e4468f9a6c67a188af725c3406139d1
There are several conditions checked within pre_live_migration with
little logging, so add some debug logging to see what we're doing while
running through this method. This focuses mainly on non-shared storage
block migration since that's what we're currently testing in the aiopcpu
job.
Related-Bug: #984996
Change-Id: Ia331c967e46e7d1ade42afc1ee37e6de7a246631
Without this field, PciDevicePool.from_dict will treat the numa_node
key in the dict as a tag, which in turn means that the scheduler client
will drop it when converting stats to objects before reporting.
Converting back to dicts on the scheduler side thus loses the numa_node
information, which causes any request that looks for an exact match
between the device and instance NUMA nodes in the NUMATopologyFilter
to fail.
Change-Id: I7381f909620e8e787178c0be9a362f8d3eb9ff7d
Closes-Bug: #1441169
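The field-versus-tag split this fix addresses can be sketched as follows (names are illustrative, not Nova's exact code): keys listed as typed fields become object attributes, while everything else is folded into free-form tags and later discarded by the scheduler client.

```python
# With numa_node listed as a proper field, it survives the
# dict -> object -> dict round trip instead of being dropped as a tag.
FIELDS = {'product_id', 'vendor_id', 'count', 'numa_node'}

def from_dict(stats):
    pool = {k: v for k, v in stats.items() if k in FIELDS}
    pool['tags'] = {k: v for k, v in stats.items() if k not in FIELDS}
    return pool

pool = from_dict({'vendor_id': '8086', 'count': 2,
                  'numa_node': 1, 'physical_network': 'physnet0'})
```

Had numa_node not been in FIELDS, it would land in tags and be stripped before the NUMATopologyFilter could compare device and instance NUMA nodes.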
A previous change converted the one use of this method to objects, so
remove and deprecate the now-unused method in conductor.
Change-Id: I22681b6cac638d471519eecc0b1ebec84664a72f
The metadata class no longer makes use of the conductor parameter. Just
remove it from there and from calling methods.
Change-Id: I19e4d383913b8dc1584fa70f264ff77141906c62
Adds the ec2_ids attribute to expected_attrs in Instance queries where
the new attribute is needed, to avoid extra RPC round trips to fetch
the data.
Related to blueprint liberty-objects
Change-Id: I4d4f1c417e2a8c4eaac6010afb3291f40ecad2d9
Adds a new EC2Ids object to store ec2 ids associated with an instance.
The object structure resembles what's being returned from the current
conductor helper method get_ec2_ids().
This object does not correspond directly with any database table but
rather a collection of values gathered from issuing different db calls
and converting returned values to ec2 format.
Related to blueprint liberty-objects
Change-Id: I068acf687c116e3c75a352616b9555486a165423
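A rough sketch of the object's shape (field names follow what the conductor helper get_ec2_ids() returns; this is an illustration, not Nova's actual object definition):

```python
class EC2Ids:
    """ec2-format ids for one instance, gathered from several db
    lookups rather than read from a single backing table."""
    fields = ('instance_id', 'ami_id', 'kernel_id', 'ramdisk_id')

    def __init__(self, **kwargs):
        for name in self.fields:
            # Each value is derived by converting a db lookup result
            # to ec2 format; unset ids stay None.
            setattr(self, name, kwargs.get(name))

ids = EC2Ids(instance_id='i-00000001', ami_id='ami-00000002')
```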
The order of arguments passed to ComputeManager.live_migration()
differs between the ComputeManager and _ComputeV4Proxy classes.
Change-Id: I23c25d219e9cdd0673ae6a12250219680fb7bda9
Closes-Bug: #1442656
There was a mismatch in the V4 proxy in the call signatures of this
function. It was missed because the "destination" parameter is passed
in the rpcapi as the host to contact, which is consumed by the RPC
layer and not passed on. Since the parameter was not given one of the
standard names (host if it is not to be passed, or host_param if it
is), the mismatch went unnoticed.
Change-Id: Idf2160934dade650ed02b672f3b64cb26247f8e6
Closes-Bug: #1442602
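Why the routing argument hides such mismatches can be sketched like this (an illustrative simplification, not Nova's actual rpcapi code; function names are hypothetical):

```python
def rpc_call(host, method, **kwargs):
    # `host` only routes the message to a server; the remote method
    # never receives it as an argument.
    return host, method, kwargs

def live_migration_rpcapi(instance, dest, migration):
    # rpcapi side: `dest` is consumed as the routing host, so any
    # argument-order difference in the remote signature involving it
    # is never exercised by normal calls.
    return rpc_call(dest, 'live_migration',
                    instance=instance, migration=migration)

host, method, kwargs = live_migration_rpcapi('inst-1', 'dest-host', 'mig-1')
```

Because `dest` vanishes into the RPC layer, only a parameter named by the `host`/`host_param` convention would have been flagged, which is how the signature drift slipped through review.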
This patch creates compute RPC API version 4.0, while retaining
compatibility in rpcapi and manager for 3.x, allowing for
continuous deployment scenarios.
UpgradeImpact - Deployments doing continuous deployment should follow this
process to upgrade without any downtime:
1) Set [upgrade_levels] compute=kilo in your config.
2) Upgrade to this commit.
3) Once everything has been upgraded, remove the entry in
[upgrade_levels] so that all rpc clients to the nova-compute service
start sending the new 4.0 messages.
Change-Id: Id96e77c739e7473774e110646204520d6163d8a5
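Step 1 above corresponds to a nova.conf entry like the following on every node running an RPC client of nova-compute (remove it again in step 3):

```ini
[upgrade_levels]
compute = kilo
```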
Reserve 10 migrations so that we have the option of backporting fixes
that include a db migration to the stable/kilo branch. We did the same
thing to allow backports to stable/juno in
c2ce0a90e3.
Change-Id: I1e6be551b609d1250f0d9a1078f7c22298686003
The nova api for creating nova-network networks has an optional
request parameter "id" which maps to the string uuid for the
network to create. The nova-manage network create command represents
it as the option --uuid. The parameter is currently being ignored
by the nova-network manager. This change sets the uuid when creating
the network if it has been specified.
Closes-Bug: #1441931
Change-Id: Ib29e632b09905f557a7a6910d58207ed91cdc047
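The fix amounts to honoring the caller-supplied uuid instead of discarding it, roughly like this (a minimal sketch with hypothetical names, not the nova-network manager's exact code):

```python
import uuid as uuid_mod

def create_network(label, uuid=None):
    # Use the uuid given via --uuid / the "id" request parameter when
    # present; generate one only when the caller did not supply it.
    return {'label': label, 'uuid': uuid or str(uuid_mod.uuid4())}

net = create_network('net1', uuid='11111111-2222-3333-4444-555555555555')
```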