While we wait for keystone to merge the patch to unblock oslo.db 10.0.0,
silence these warnings from newer versions of SQLAlchemy. TODOs are left
since we don't want to keep these around forever.
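As a rough sketch of the approach (the warning category filtered here
is an assumption, not necessarily the patch's exact filter):

    import warnings

    from sqlalchemy import exc as sqla_exc

    # TODO: remove once the keystone patch merges and oslo.db 10.0.0
    # is unblocked.
    warnings.filterwarnings(
        'ignore', category=sqla_exc.SADeprecationWarning)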
Change-Id: If48dd949ec4d69a09c87178f16d56a2517e21fd8
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Take advantage of the neutronclient bindings for the port binding APIs
added in neutronclient 7.1.0 to avoid having to vendor this stuff
ourselves.
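For illustration only (method names are those added in neutronclient
7.1.0; the session and arguments are assumptions):

    from neutronclient.v2_0 import client as clientv20

    neutron = clientv20.Client(session=sess)  # keystoneauth1 session
    port_id = 'PORT_UUID'  # placeholder
    body = {'binding': {'host': 'dest-host'}}
    neutron.create_port_binding(port_id, body)
    neutron.activate_port_binding(port_id, 'dest-host')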
Change-Id: Icc284203fb53658abe304f24a62705217f90b22b
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The 'nova-manage db archive_deleted_rows --task-log' functional tests
involve manipulating time to assert archive behaviors when the --before
flag is also used.
While timedelta was used, set_time_override was not, so depending on
the date the test ran on and the number of days in the current month
and the next two months, the test could fail. Task log audit periods
are one
calendar month by default and the compute manager calls
last_completed_audit_period() without specifying a unit.
This changes the tests to use a time override to ensure predictable
behavior with regard to the audit period boundaries. The tests were
moved into their own test case classes in order to override the time
before services were started, so that the "service up" calculations
work as expected.
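A minimal sketch of the technique (the date and surrounding setup are
illustrative):

    import datetime

    from oslo_utils import timeutils

    # Pin "now" before starting services so that
    # last_completed_audit_period() sees stable calendar-month
    # boundaries regardless of the real date.
    fake_now = datetime.datetime(2021, 7, 1, 12, 0, 0)
    timeutils.set_time_override(override_time=fake_now)
    try:
        pass  # start services, create task_log records, archive
    finally:
        timeutils.clear_time_override()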
Closes-Bug: #1934519
Change-Id: I9b16a3a849937aba5b90ed1ab9a80b7f0103f673
This is inspired by database query filtering that occurs when --before
is passed to nova-manage db archive_deleted_rows. There is a comparison
WHERE deleted_at < before and not all database records populate it.
This verifies that no records are missed due to that comparison when
archiving with --before.
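A minimal sketch of the comparison in question (the table definition
and cutoff date are illustrative):

    import datetime

    from sqlalchemy import (
        Column, DateTime, Integer, MetaData, Table, select)

    metadata = MetaData()
    instances = Table(
        'instances', metadata,
        Column('id', Integer, primary_key=True),
        Column('deleted', Integer),
        Column('deleted_at', DateTime))

    # 'NULL < before' is not true in SQL, so soft-deleted rows whose
    # deleted_at was never populated fall through this filter.
    before = datetime.datetime(2021, 7, 1)
    stmt = select(instances).where(instances.c.deleted_at < before)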
Related-Bug: #1751192
Change-Id: I44b79cfb236e94444740e32b0dd0a2344c29f340
Replace references to novaclient with OSC in the boot from volume guide.
This is essentially a revert of commit aa3964118, which was a revert of
an earlier attempt at doing this that fell down because it didn't
reflect the changes in CLI parameters between the different tools.
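For example, the equivalent boot-from-volume invocations differ
roughly as follows (flags abbreviated; illustrative only):

    # novaclient
    nova boot --flavor m1.small --boot-volume $VOLUME_ID my-server

    # openstackclient
    openstack server create --flavor m1.small --volume $VOLUME_ID \
        my-server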
Change-Id: Ic99440dd618243517f64506e3da88885fc2c44c9
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
If a user requests an invalid volume UUID when creating an instance,
a 'VolumeNotFound' exception will be raised. This is not currently
handled. Correct this.
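The handling is along these lines (a sketch; 'create_server' is a
hypothetical stand-in for the real call site):

    import webob.exc

    from nova import exception

    try:
        server = create_server(volume_id)  # hypothetical helper
    except exception.VolumeNotFound as error:
        # Return a 400 to the user rather than letting a 500 escape.
        raise webob.exc.HTTPBadRequest(
            explanation=error.format_message())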
Change-Id: I6137dc1b6b51321fee1c080bf4b85197b19bf223
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Closes-Bug: #1930448
Since 3.7.0, oslo.policy has raised a DeprecationWarning[1] if the
deprecated_reason and deprecated_since parameters are not passed to
DeprecatedRule, or if they are passed to a RuleDefault object.
[1] https://github.com/openstack/oslo.policy/blob/3.7.0/oslo_policy/policy.py#L1538
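Passing the metadata where oslo.policy now expects it looks roughly
like this (the policy name and values are illustrative):

    from oslo_policy import policy

    deprecated_rule = policy.DeprecatedRule(
        name='os_compute_api:example',
        check_str='rule:admin_api',
        deprecated_reason='Replaced by new defaults.',
        deprecated_since='21.0.0')

    rule = policy.DocumentedRuleDefault(
        name='os_compute_api:example',
        check_str='role:admin',
        description='Example policy.',
        operations=[{'path': '/example', 'method': 'GET'}],
        deprecated_rule=deprecated_rule)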
Change-Id: Idbbc203c6ae65aee29f9463a4911bae2bb541f41
This patch is based upon a downstream patch which came up in discussion
amongst the ironic community when some operators began discussing a case
where resource providers had disappeared from a running deployment with
several thousand baremetal nodes.
Discussion amongst operators and developers ensued and we were able
to determine that this was still an issue in the current upstream code
and that the time difference between collecting data and then
reconciling the records was a source of the issue. Per Arun, they have
been running this change downstream and have not seen any recurrences
of the issue since the patch was applied.
This patch was originally authored by Arun S A G, and below is his
original commit message.
An instance could be launched and scheduled to a compute node between
the get_uuids_by_host() call and the _get_node_list() call. If that
happens, the ironic node.instance_uuid may not be None but the
instance_uuid will be missing from the list returned by the
get_uuids_by_host() method. This is possible because _get_node_list()
takes several minutes to return in large baremetal clusters and a lot
can happen in that time.
This causes the compute node to be orphaned and the associated
resource provider to be deleted from placement. Once the resource
provider is deleted it is never created again until the service
restarts. Since the resource provider is deleted, subsequent
boots/rebuilds to the same host will fail.
This behaviour is visible on VMbooter nodes because they constantly
launch and delete instances, thereby increasing the likelihood of
this race condition happening in large ironic clusters.
To reduce the chance of this race condition, we call _get_node_list()
first, followed by the get_uuids_by_host() method.
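In outline (function names from the commit message; the bodies are
stubbed for illustration):

    def _get_node_list():
        return []  # slow in large baremetal clusters

    def get_uuids_by_host(context, host):
        return []

    # Fix: collect the node list first; an instance scheduled while
    # the node list is being gathered still shows up in the
    # subsequent UUID listing.
    node_list = _get_node_list()
    instance_uuids = get_uuids_by_host(None, 'compute-1')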
Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
Co-Authored-By: Arun S A G <saga@yahoo-inc.com>
Related-Bug: #1841481
This change is a partial revert of
Ibf8dca4bd57b3bddb39955b53cc03564506f5754
to reintroduce a try-except which is required for
some non-standard hardware.
On the Cavium ThunderX platform, it's possible to have
virtual functions which are netdevs that are not
associated with a PF. This causes the PF name lookup to fail.
Prior to Ibf8dca4bd57b3bddb39955b53cc03564506f5754
when the lookup failed it was caught and we skipped
populating the parent PF interface name.
This change restores that behavior.
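The restored behaviour is roughly the following (the PCI address and
call site are illustrative):

    from nova import exception
    from nova.pci import utils as pci_utils

    pci_addr = '0000:05:00.1'  # illustrative VF address
    try:
        pf_ifname = pci_utils.get_ifname_by_pci_address(
            pci_addr, pf_interface=True)
    except exception.PciDeviceNotFoundById:
        # VFs on Cavium ThunderX may have no parent PF netdev; skip
        # populating the parent PF interface name.
        pf_ifname = None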
Closes-Bug: #1915255
Change-Id: Ia10ccdd9fbed3870d0592e3cbbff17f292651dd2
The link of `TLS everywhere` should be 'https://docs.openstack.org/
project-deploy-guide/tripleo-docs/latest/features/tls-everywhere.html'.
Closes-Bug: #1933062
Change-Id: I468b82edeb899b0a780f8b545ad23ee0428a93ea
The nova-tox-functional-py36 job was replaced with the current py38
version during Victoria by I1d6a2986fcb0435cfabdd104d202b65329909d2b.
However, as clearly stated in both the Victoria [1] and Xena [2]
runtime reference documents, Python 3.6 remains supported through
CentOS 8 and later CentOS 8 Stream.
This change reintroduces functional test coverage for py36 using a
CentOS 8 Stream based job.
[1] https://governance.openstack.org/tc/reference/runtimes/victoria.html
[2] https://governance.openstack.org/tc/reference/runtimes/xena.html
Change-Id: I6ef77bd92f2595016a99d1953414d3f554f6b2eb
Only install mariadb on EL based hosts. Also, when using mariadb and
postgresql on EL based distros, we need to ensure each service is
configured and actually started before using either.
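On an EL host this amounts to something like the following (package,
unit, and setup command names are assumptions about the target
distro):

    sudo dnf install -y mariadb-server postgresql-server
    sudo systemctl enable --now mariadb
    sudo postgresql-setup --initdb
    sudo systemctl enable --now postgresql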
Co-Authored-By: Ade Lee <alee@redhat.com>
Change-Id: I7122933d85bd7d0333c2c35e0f1a8414c1baa6d5
Nova has never supported specifying per-NUMA-node CPU
topologies. Logically, the CPU topology of a guest is
independent of its NUMA topology and there is no way to
model different CPU topologies per NUMA node or implement
that in hardware.
This change removes the code in nova that allowed the generation
of these invalid configurations, as it broke the automatic
selection of CPU topologies based on the
hw:cpu_max_[sockets|cores|threads] flavor and image properties,
along with the related unit tests that asserted nova could
generate such invalid topologies.
Closes-Bug: #1910466
Change-Id: Ia81a0fdbd950b51dbcc70c65ba492549a224ce2b
This change reproduces bug #1910466
When hw:cpu_max_[sockets|cores|threads] is configured
in addition to an explicit NUMA topology and CPU pinning,
nova is currently incapable of generating the correct
virtual CPU topology, resulting in an index out of range
error as we attempt to retrieve the first topology
from an empty list.
This change reproduces the error via a new functional
test.
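The triggering configuration and the failure mode look roughly like
this (values are illustrative):

    # Flavor extra specs combining an explicit NUMA topology, CPU
    # pinning and a max-topology hint.
    extra_specs = {
        'hw:numa_nodes': '2',
        'hw:cpu_policy': 'dedicated',
        'hw:cpu_max_sockets': '2',
    }

    # The failure in miniature: the topology generator produced no
    # candidates, and indexing the empty list raises IndexError.
    possible_topologies = []
    chosen = possible_topologies[0]  # IndexError: list index out of range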
Related-Bug: #1910466
Change-Id: I333b3d85deed971678141307dd06545e308cf989
This was removed in change I9b964c8e68051a995635a3d5f5aa09af2b0dcb82 but
it's still a valid thing to do. Reintroduce it, cleaning up the test in
the process by removing references to dump tables (which were removed
entirely in change I17db7cdaad2c6368092b4fb00d5959711ad249f9) as well as
references to migrations that no longer exist as they've been squashed.
Change-Id: I70219a094e473da113c9855b610c11faea50f3b3
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>