This patch adds functionality to the scheduler "report client" to ensure
that the client calls the placement API to create a resource provider
record for the local compute host managed by the Nova resource tracker.
The report client keeps a cache of resource provider objects, keyed by
resource provider UUID and constructed from the results of placement
REST API calls to get information about a resource provider. If a
resource provider matching a UUID is not found via the placement REST
API, the report client automatically creates the resource provider
record through that same API. These resource provider objects will
be used in followup patches that add creation of inventory and
allocation records to the scheduler report client.
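The cache-plus-auto-create behavior amounts to a get-or-create pattern, sketched below with an in-memory dict standing in for the placement REST API (all names here are illustrative, not Nova's actual report client API):

```python
class ReportClient:
    """Minimal sketch of a resource provider cache keyed by UUID.

    `backend` is a plain dict standing in for the placement service;
    the real client issues REST calls instead.
    """

    def __init__(self, backend):
        self._cache = {}        # uuid -> resource provider record
        self._backend = backend

    def ensure_resource_provider(self, uuid, name):
        # Serve from the local cache when possible.
        if uuid in self._cache:
            return self._cache[uuid]
        # Ask "placement" for the record; create it if not found.
        provider = self._backend.get(uuid)
        if provider is None:
            provider = {'uuid': uuid, 'name': name}
            self._backend[uuid] = provider
        self._cache[uuid] = provider
        return provider
```

Later inventory and allocation calls can then reuse the cached record instead of re-fetching it.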
Included in this patch is a new [placement] nova.conf configuration
section with a single os_region_name configuration option that allows
Nova to grab the placement API endpoint URL for the particular OpenStack
region that it is in. We do not support endpoint URL overrides for the
placement API service. We only use the Keystone service catalog for
finding the endpoint for the placement service. We intentionally modeled
the determination of the placement endpoint URL after similar code that
determines the volume endpoint URL in /nova/volume/cinder.py.
This redoes the placement API using keystone session, and stubs out
where we can do more reasonable handling of errors. This works if we
fill out the right credentials in the placement section of the config
file.
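The catalog-only lookup can be illustrated with a small filter over a Keystone-style service catalog. The catalog structure and function below are simplified assumptions for illustration, not the actual cinder.py or keystoneauth code:

```python
def get_placement_endpoint(catalog, region_name):
    """Find the placement endpoint URL for a region from a service
    catalog; no endpoint URL override configuration is consulted.

    `catalog` is assumed to be a list of entries shaped like:
      {'type': 'placement', 'endpoints': [{'region': ..., 'url': ...}]}
    """
    for service in catalog:
        if service.get('type') != 'placement':
            continue
        for endpoint in service.get('endpoints', []):
            if endpoint.get('region') == region_name:
                return endpoint['url']
    raise ValueError('no placement endpoint in region %s' % region_name)
```

Raising when no endpoint matches mirrors the "catalog or nothing" stance: there is no configured fallback URL to try.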
Co-Authored-By: Sean Dague <sean@dague.net>
Change-Id: I9d28b51da25c523d22c373039e6d8b36fd96eba6
blueprint: generic-resource-pools
We want a wsgi_script as the entry point for our placement API
actually getting run; this is wrapped in the smooth and mellow pbr
patterns that make it sensible to also run just by starting it on the
command line.
This also actually initializes the logging subsystem for the placement
API, and does the standard pattern of dumping the configuration if
DEBUG is enabled. Pieces of this were cribbed/inspired by equivalent
keystone code.
The config directory is now settable via the environment, which may be
needed by folks with venvs.
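The environment-driven config directory lookup might look like the sketch below. The variable name OS_PLACEMENT_CONFIG_DIR and the default path are illustrative assumptions, not the actual names used:

```python
import os

def config_dir():
    # Let a venv user point at their own config directory via the
    # environment; fall back to the conventional system location.
    return os.environ.get('OS_PLACEMENT_CONFIG_DIR', '/etc/nova')
```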
Change-Id: I00d032554de273d7493cfb467f81687c08fd5389
We were checking a min qemu version (1.5.3) against the libvirt
version, because it wasn't being passed right. This was generating an
incorrect warning for all users.
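The bug pattern was a minimum-version check fed the wrong daemon's version; a minimal sketch with invented names (the real check lives in the libvirt driver):

```python
MIN_QEMU_VERSION = (1, 5, 3)

def qemu_too_old(qemu_version):
    """Return True when a 'qemu too old' warning should fire.

    The fix is to pass the QEMU version here; previously the libvirt
    version was being passed instead, producing spurious warnings.
    """
    return qemu_version < MIN_QEMU_VERSION
```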
Change-Id: Ib127f2183a4f67a25da483838ca65daf10b3cd9a
Add more unit tests for vendordata2, as requested on the initial
review. While doing this I realized that an HTTP status of "NO
CONTENT" is valid, but will result in nothing being added to the
config drive. We handle that case by just having an empty section.
Additionally, I've decided that thrown exceptions for REST service
requests shouldn't bubble up like they did initially, as that would
stop instance boot. Instead, log them and then add an empty
section to the config drive as well.
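The handling described above can be sketched as follows; the `fetch` callable, logger usage, and return shape are hypothetical stand-ins for the vendordata code:

```python
import logging

LOG = logging.getLogger(__name__)

def fetch_vendordata(fetch):
    """Call a vendordata REST service via `fetch` and return a dict.

    A 204 NO CONTENT response, or any exception raised by the
    request, yields an empty section rather than failing the boot.
    """
    try:
        status, body = fetch()
    except Exception:
        LOG.exception('vendordata request failed; using empty section')
        return {}
    if status == 204:  # valid response, but carries no data
        return {}
    return body
```

The key design point is that a misbehaving vendordata service degrades to an empty config drive section instead of blocking instance boot.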
Change-Id: If82312d9ca22a87929b947bcf7fed33a108cc720
Blueprint: vendordata-reboot
The openstackdocstheme includes a bug reference link, which defaults
to openstack manuals. We want to update this to be a Nova bug instead.
This also cleans up the pre-openstackdocstheme support code.
Change-Id: Iace4619c37b04b1504a7051e9e5274b2a3b77c24
Modified the type of the block_migration input parameter of the
os-migrateLive API based on API version 2.25.
Change-Id: I82c6537d137b462dbe6d05c07a9b3afb5a1501d5
Closes-Bug: #1551782
While there are some nova-manage commands to take an existing deployment
and migrate all of its hosts into a new cellsv2 environment, there was
no way to add more hosts to a cell. This command can be run at any time
after the initial migration and will map any hosts in a cell that have
not been seen before.
Nothing else changes about adding hosts to a deployment. Configure them
to use a nova database and start them up and they'll register themselves
with that database. This new command simply lets the API know how to
route requests to those hosts. Until this is done, instances cannot be
booted on those hosts.
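The discovery step reduces to mapping any cell hosts that have no host mapping yet; a sketch with in-memory stand-ins for the cell database and the API host mapping table:

```python
def discover_hosts(cell_hosts, host_mappings, cell_id):
    """Map any hosts registered in a cell's database that have no
    host mapping yet; already-mapped hosts are left alone.

    `cell_hosts` is the set of hostnames in the cell database and
    `host_mappings` maps hostname -> cell id (illustrative shapes).
    """
    newly_mapped = []
    for host in sorted(cell_hosts):
        if host not in host_mappings:
            host_mappings[host] = cell_id
            newly_mapped.append(host)
    return newly_mapped
```

Because already-mapped hosts are skipped, the command is safe to rerun whenever new computes register themselves.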
Partially-Implements: bp cells-scheduling-interaction
Change-Id: I8c044e5b480edddead28d8c3527d003da566ed1e
During the boot process there is a point where failures, most often
quota failures, cause the newly created instance to be deleted. In this
situation the BuildRequest and InstanceMapping object that were just
created should be deleted as well. If they are not then it's possible
that an instance list will return the BuildRequest.instance as a regular
instance though it should be deleted.
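The cleanup amounts to deleting the bookkeeping records alongside the instance; a sketch using dicts and names that mirror the commit's description, not Nova's actual object API:

```python
def delete_failed_build(instance, build_requests, instance_mappings):
    """On a failed boot (e.g. over quota), remove the instance
    together with its BuildRequest and InstanceMapping records so an
    instance list cannot surface the deleted instance through
    BuildRequest.instance.
    """
    uuid = instance['uuid']
    build_requests.pop(uuid, None)
    instance_mappings.pop(uuid, None)
    instance['deleted'] = True
    return instance
```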
Change-Id: Ic2dd3bb7db3ce563a358bed03adaa37ff12c30fd
Partially-implements: bp add-buildrequest-ob
StableObjectJsonFixture has been in o.vo since the 1.9.0
release and we require >=1.13.0 in g-r now, so we should
use the fixture from the library rather than keep a
duplicate in nova.
This is also needed to use o.vo 1.17.0 which has change
39a057becc10d1cfb5a5d5024bfcbbe6db1b56be that breaks the
fixture's erroneous unit test.
Change-Id: Idd0e02a1c19300c3ab7a57cbacb78d1f07037843
Closes-Bug: #1618115
Added functional api_sample_test for keypair-list command
for different users for microversion v2.10
Closes-Bug: #1599904
Change-Id: I92cd06efeafb00f5f4678e94185789026896be3a
As mentioned in [1], when filter() is needed on Python 3, replace
filter(lambda obj: test(obj), data) with:
[obj for obj in data if test(obj)].
[1] https://wiki.openstack.org/wiki/Python3
Change-Id: Ie484ccd7ef0428313a29e9ef6930ebb2646ee879