If a MaxRetriesExceeded exception is raised by
scheduler_utils.populate_retry, then request_spec will be empty in the
exception handler [1], and the _set_vm_state_and_notify method will
put an empty dict as the request_spec into the notification payload [2].
It would make more sense to report the actual value of request_spec
in the notification.
[1]https://github.com/openstack/nova/blob/13.0.0.0rc3/nova/conductor/manager.py#L382
[2]https://github.com/openstack/nova/blob/13.0.0.0rc3/nova/scheduler/utils.py#L109
Simply moving the initialization of request_spec up one line before the
call to populate_retry should fix the issue.
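A minimal sketch of the reordering in nova/conductor/manager.py
(surrounding code is abridged and simplified; names follow the file
at [1]):

    # Build request_spec before populate_retry so that the exception
    # handler sees the real value instead of an empty dict.
    request_spec = scheduler_utils.build_request_spec(
        context, image, instances)
    try:
        scheduler_utils.populate_retry(
            filter_properties, instances[0].uuid)
        hosts = self._schedule_instances(
            context, request_spec, filter_properties)
    except Exception as exc:
        updates = {'vm_state': vm_states.ERROR, 'task_state': None}
        for instance in instances:
            self._set_vm_state_and_notify(
                context, instance.uuid, 'build_instances',
                updates, exc, request_spec)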
Change-Id: I7c51f635d52f368c8df549f62024cbdf64a032b3
Closes-Bug: #1575998
context.get_admin_context is used in places where it's not necessary,
likely because there's no helper method to retrieve a non-admin
context. This adds such a helper method and adds a note to
get_admin_context warning that it's not usually the right choice.
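A minimal sketch of what such a helper could look like in
nova/context.py (the exact name, signature, and docstring here are
assumptions):

    def get_context():
        """A helper method to get a blank, non-admin context.

        Unlike get_admin_context, the returned context carries no
        user, project, or admin rights, which is all that many
        callers actually need.
        """
        return RequestContext(user_id=None,
                              project_id=None,
                              is_admin=False,
                              overwrite=False)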
Change-Id: I2e6a2efa4bcdf3f8688897972a6cf8a5af3f90d6
Upgrade to gabbi 1.26.1 to use the new inner_fixtures feature, which
captures logging and stdout/stderr per individual test request. The
existing nova fixtures are used for the capture.
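A hypothetical sketch of wiring the existing nova fixtures through
gabbi's inner_fixtures argument (the test directory and the exact
build_tests arguments are assumptions):

    from gabbi import driver

    from nova.tests import fixtures as nova_fixtures

    # Directory holding the gabbi YAML test files (assumed name).
    TESTS_DIR = 'gabbits'

    def load_tests(loader, tests, pattern):
        # inner_fixtures (new in gabbi 1.26.1) wrap each individual
        # test request, so logging and stdout/stderr are captured
        # per test rather than per suite.
        return driver.build_tests(
            TESTS_DIR, loader,
            inner_fixtures=[nova_fixtures.OutputStreamCapture,
                            nova_fixtures.StandardLogging])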
Depends-On: Ic13dc14f62334aefbcced93872ec564cab157898
Change-Id: Ic6f5a50df37b4680a60c4aa94f7587aec232c367
The image property os_type is only needed when the instance is
required to have UEFI Secure Boot enabled; it should not be mandatory
when that feature is not requested.
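A hypothetical sketch of the relaxed check (the standard
os_secure_boot and os_type image properties are assumed; the helper
itself and its error handling are illustrative):

    def check_os_type_for_secure_boot(image_meta):
        # os_secure_boot == 'required' is the image property value
        # that requests UEFI Secure Boot.
        secure_boot = (
            image_meta.properties.get('os_secure_boot') == 'required')
        # Only insist on os_type when Secure Boot was requested.
        if secure_boot and not image_meta.properties.get('os_type'):
            raise ValueError("The os_type image property is required "
                             "when UEFI Secure Boot is enabled")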
Closes-Bug: #1628854
Change-Id: Ib27ca25d8ee9fa82673943221aaa216ab274d4fe
- Fix indentation
- Add a 'min' parameter to the 'num_retries' configuration option;
  users have been warned about this since 2014 (sketched below).
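A minimal sketch of the bounded option using oslo.config (the module
where the option lives, its default, and its help text are abridged):

    from oslo_config import cfg

    num_retries = cfg.IntOpt(
        'num_retries',
        default=0,
        # Reject negative values outright; warnings about them have
        # been emitted since 2014.
        min=0,
        help='Number of retries when a request fails.')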
Change-Id: Icf1bdc2b9331cfb2f5a699f626c25ebb6d7648b2
Libvirt has a "domainSetUserPassword" callback and the virtuozzo
driver provides an implementation for it, so this patch allows that
functionality to be used with the virtuozzo hypervisor.
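For reference, a minimal sketch of the underlying libvirt-python call
that this exposes (the connection URI and domain name are
illustrative):

    import libvirt

    # 'vz:///system' is assumed to be the virtuozzo connection URI.
    conn = libvirt.open('vz:///system')
    dom = conn.lookupByName('instance-00000001')
    # domainSetUserPassword surfaces in the python binding as
    # setUserPassword(user, password, flags).
    dom.setUserPassword('root', 'new-password', 0)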
Change-Id: Ia398afadfd9fd9544c5d843338ab25c0930d9f74
Implements: blueprint virtuozzo-instance-admin-password
When booting an instance there is logic in the conductor to check
whether a delete has been issued. This is done by looking for a
BuildRequest object and discontinuing the build if it's not found.
However, the conductor then deletes the BuildRequest, so a reschedule
attempt will not find the BuildRequest object. This incorrectly stops
the reschedule.
The filter_properties dict is updated with the number of scheduling
attempts on each reschedule, so by looking at the value found there we
know whether a reschedule is being attempted. If that's the case, we
bypass the logic that checks for, and deletes, the BuildRequest
object, as in the sketch below.
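A minimal sketch of that bypass (key names follow the retry info that
populate_retry stores in filter_properties; the surrounding conductor
loop is abridged):

    # num_attempts is 1 on the initial build and is incremented by
    # populate_retry on every reschedule.
    retry = filter_properties.get('retry', {})
    is_reschedule = retry.get('num_attempts', 1) > 1
    if not is_reschedule:
        # Initial build: a missing BuildRequest means the instance
        # was deleted while scheduling, so stop this build.
        try:
            build_request = objects.BuildRequest.get_by_instance_uuid(
                context, instance.uuid)
            build_request.destroy()
        except exception.BuildRequestNotFound:
            continue
    # On a reschedule the BuildRequest was already deleted during
    # the first attempt, so skip the check instead of aborting.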
Change-Id: Ibf28d1d8f54703b465ccc497281419356cd0136e
Closes-Bug: 1628530
The cover job runs the unit tests, and stale pyc files can skew
those results, so remove the pyc files before running the tests with
--coverage.
Change-Id: I7393d2df36e715dbf53ba9ae6a077bdc8e79b5a5
The archive_deleted_rows command in nova-manage is often broken and
not well tested by us. We can test it to some degree in functional
tests, but running it against a real database with real deleted data
in it is a good idea. This adds a post-test hook that runs the archive
so that after a full test run in the gate we'll see the output. Later
we should make a failed run fatal, but for now just run it so we can
see how close we are to being able to gate on it.
Change-Id: I16b2e00eede6af455cb74ca4e6ca951d56fdbcbc
The check_img_metadata_properties_quota validation method not only
checks quota for image metadata but also validates the type of the
metadata object in the request (dict) and the key length of the
metadata items in the dict, ensuring keys are between 1 and 255
characters long.
The metadata schema property handles all of that validation for us,
so we don't need to do it in Python; it was probably a carryover from
the legacy v2 API, which didn't use JSON schema validation. Now that
the legacy v2 API code is gone, we can remove the explicit Python
checks in check_img_metadata_properties_quota and just let the schema
validator do its job, as sketched below.
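For reference, a sketch of the kind of schema fragment that enforces
those constraints declaratively (this mirrors nova's common metadata
parameter type; the exact key pattern is abridged):

    # JSON schema enforcing that metadata is a dict of strings whose
    # keys are between 1 and 255 characters long.
    metadata = {
        'type': 'object',
        'patternProperties': {
            '^[a-zA-Z0-9-_:. ]{1,255}$': {
                'type': 'string', 'maxLength': 255,
            },
        },
        'additionalProperties': False,
    }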
Change-Id: Ibec92e278887cd06e91687ca91e75f9b7b28098c
This is something I expect has been very broken for a long time. We
have rows in tables such as instance_extra, instance_faults, etc. that
pertain to a single instance and thus have a foreign key on their
instance_uuid column that points to the instance. If any of those
records exist, an instance cannot be archived out of the main
instances table.
The archive routine currently "handles" this by skipping over said
instances and eventually iterating over all the tables to pull out any
records that point to the instance, thus freeing up the instance
itself for archival. The problem is, this only happens if those extra
records are actually marked as deleted themselves. If we fail during a
cleanup routine and leave some of them not marked as deleted, while
the instance they reference *is* marked as deleted, we will never
archive them.
This patch adds another phase of the archival process for any table
that has an "instance_uuid" column, which attempts to archive records
that point to these deleted instances, as sketched below. With this,
using a very large real-world sample database, I was able to archive
my way down to zero deleted, un-archivable instances (from north of
100k).
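A simplified, hypothetical sketch of the new phase (SQLAlchemy Core
style; the table discovery, shadow-table naming, and row move are
assumptions, and the real routine lives in nova's DB API):

    from sqlalchemy import select

    def archive_rows_for_deleted_instances(conn, metadata, max_rows):
        instances = metadata.tables['instances']
        # Subquery of instances that are soft-deleted (deleted != 0).
        deleted_uuids = select([instances.c.uuid]).where(
            instances.c.deleted != 0)
        for table in metadata.sorted_tables:
            # Only tables that reference an instance are of interest.
            if ('instance_uuid' not in table.c
                    or table.name == 'instances'):
                continue
            shadow = metadata.tables['shadow_' + table.name]
            rows = conn.execute(
                select([table]).where(
                    table.c.instance_uuid.in_(deleted_uuids)
                ).limit(max_rows)).fetchall()
            if not rows:
                continue
            # Move the rows: copy them into the shadow table, then
            # delete them from the live table so that the instance
            # itself becomes archivable.
            conn.execute(shadow.insert(), [dict(r) for r in rows])
            conn.execute(table.delete().where(
                table.c.id.in_([r['id'] for r in rows])))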
Closes-Bug: #1622545
Change-Id: I77255c77780f0c2b99d59a9c20adecc85335bb18