The example given in this section did not match the explanation
of how the filter works.
Update the example with the right properties and aggregate name.
Change-Id: Ieadeb0d736cc83a41093e6f4dfeb75d2396976ec
Closes-Bug: #1684261
The service account that is being used by Nova needs "Profile-driven
storage view" permission for SPBM[0] to work. It is located under the
"Profile-driven storage" node in the Privileges tree.
This patch fixes the doc to address this.
[0] https://blueprints.launchpad.net/nova/+spec/vmware-spbm-support
Change-Id: I026b2394e6aa2fef8b1990923f9dcf8b3945175c
Now that we have this information, we can use it to pre-filter
suitable hosts.
With this patch we complete the blueprint. As a result, documentation
and release notes are bundled in the patch and previously inactive tests
are now enabled.
Part of blueprint numa-aware-vswitches
Change-Id: Ide262733ffd7714fdc702b31c61bdd42dbf7acc3
Add a method for libvirt driver to get cpu traits.
This is used for compute nodes to report cpu traits to Placement.
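As a minimal sketch (not the driver's actual implementation), reporting CPU
features as traits amounts to translating libvirt feature names into
os-traits-style names; the helper below is illustrative:

```python
# Illustrative sketch, not the libvirt driver's actual code: translate
# libvirt CPU feature names (e.g. from the host capabilities XML) into
# placement trait names following the os-traits HW_CPU_X86_* convention.
def cpu_features_to_traits(features):
    """Map lowercase libvirt feature names to candidate trait names."""
    return {"HW_CPU_X86_" + feature.upper() for feature in features}

# These candidate names can then be reported to Placement for the
# compute node's resource provider.
traits = cpu_features_to_traits(["avx", "avx2", "sse2"])
```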
Change-Id: I9bd80adc244c64277d2d00e7d79c3002c8f9d57e
blueprint: report-cpu-features-as-traits
Split up the evacuate instance documentation into two parts. One
for the existing single instance evacuation and a second new part
for the nova host-evacuate procedure.
Change-Id: Ibcdc2bc3f08e2fab23b9821feae0f489fb64a8f7
Closes-Bug: #1763039
Replace nova commands with openstack commands.
Add an example to create a private flavor.
Repopulate the "Modify a flavor" section.
Replace 'extra_spec' with 'extra_specs'.
Fix a wrong link.
Remove rxtx-factor in descriptions and command examples.
Change-Id: I14295dddc302a603a71f71ccb6fcc5745ca7826c
Enhance the doc: remove the 'nova-api' daemon, which is deprecated
in favor of WSGI, and add some operations for the password
response.
Change-Id: I4cb7ac55683951aa5900699ba587da03c22fb0a1
When rescuing an instance that has a vGPU, we were not using the vGPU.
There would then be a race condition during the rescue where the vGPU
could be passed to another instance.
Instead, we should just make sure the vGPU is also in the rescued
instance.
Change-Id: I7150e15694bb149ae67da37b5e43b6ea7507fe82
Closes-bug: #1762688
Add the discard flag to libvirt XML when supported by libvirt and qemu,
and when using file backed memory.
The discard flag causes qemu to discard allocated memory via calling
madvise with MADV_REMOVE when using file backed memory, to prevent
writing out dirty instance memory. This is a significant performance
improvement for shutting down instances that have recently written to
significant portions of their memory.
As qemu and libvirt do not guarantee the discard is run, this cannot be
used for security purposes.
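As an illustrative libvirt domain XML fragment (a sketch, not output
generated by this patch), file-backed memory with the discard flag looks
roughly like:

```xml
<!-- Illustrative fragment: file-backed memory with discard enabled,
     so qemu can madvise(MADV_REMOVE) guest memory instead of writing
     dirty pages out on shutdown. -->
<memoryBacking>
  <source type="file"/>
  <discard/>
</memoryBacking>
```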
Change-Id: Ia7cf4414feb335b3c2e863b4c8b4ff559b275c34
Implements: blueprint libvirt-file-backed-memory
File backed memory is enabled per Nova compute host. When enabled, the
host will report 'file_backed_memory_capacity' for available memory,
and instances will create memory backing files in the directory
specified by the 'memory_backing_dir' config option in libvirt's
qemu.conf file.
This feature is not compatible with memory overcommit, and requires
'ram_allocation_ratio' to be set to 1.0.
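A minimal configuration sketch of the two settings mentioned above; the
libvirt directory path is a placeholder, not a value prescribed by this
patch:

```ini
# /etc/libvirt/qemu.conf -- directory holding the per-instance
# memory backing files (path is a placeholder)
memory_backing_dir = "/var/lib/libvirt/memory"

# nova.conf -- file backed memory is incompatible with overcommit
[DEFAULT]
ram_allocation_ratio = 1.0
```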
Change-Id: I676291ec0faa1dea0bd5050ef8e3426d171de4c6
Implements: blueprint libvirt-file-backed-memory
If the compute endpoint in the service catalog is configured
for /v2 legacy compat mode, microversions in the request are
silently ignored by the LegacyV2CompatibleWrapper. This
adds a troubleshooting entry for that situation.
At this point, we might want to consider deprecating or at
least logging warnings if microversions are requested and
LegacyV2CompatibleWrapper strips them out, but that's fodder
for a separate change.
Change-Id: Ia7ecbf95d0a3e14c7f82b6a93c2ac4c4cfb89549
Non-FS-based Storage Repositories will be supported once the VDI
streaming patches are finished. Remove the SR limitation from the
document.
Change-Id: Idaf461c849ac28b46e8971e5dd2f0e986a9a5c32
With the new image handler, an image proxy is created which
uses the VDI streaming function from os-xenapi to
remotely export a VHD from XenServer (image upload) or import
a VHD to XenServer (image download).
The existing GlanceStore uses custom functionality to directly
manipulate files on disk, so it has the restriction that the SR's
type must be file-system based, e.g. ext or nfs. The new
image handler invokes APIs formally supported by XenServer
to export/import VDIs remotely, so it can also support other
SR types, e.g. lvm, iscsi, etc.
Note:
VDI streaming is supported by XenServer 6.5 or above.
The image handler functionality depends on os-xenapi 0.3.3 or
above, so bump os-xenapi's version to 0.3.3 and also declare a
dependency on the patch which bumps the version in openstack/requirements.
Blueprint: xenapi-image-handler-option-improvement
Change-Id: I0ad8e34808401ace9b85e1b937a542f4c4e61690
Depends-On: Ib8bc0f837c55839dc85df1d1f0c76b320b9d97b8
The vif_driver option was deprecated in Ocata:
I599f3449f18d2821403961fb9d52e9a14dd3366b
And can now be removed. The only supported networking
backend is neutron + ovs.
Related to blueprint remove-nova-network
Co-Authored-By: Naichuan Sun <naichuan.sun@citrix.com>
Change-Id: Ia977f115335f00bc36249fa67437b4336d524251
1. The URL for the `Ceilometer` doc is not correct.
2. nova-cert has been removed
(change I2c78a0c6599b92040146cf9f0042cff8fd2509c3)
and should not appear in the example.
3. Phrasing issue in the explanation for "used_now" of host resource
usage.
4. The 'openstack server list' response is different from that of 'nova list'.
5. Add info about the diagnostic statistics format.
Change-Id: I6a2a7b396fee2a5cbae633d5c259f5f0961b9b60
There is concern over the ability of compute nodes to reasonably
determine which events should count against their consecutive build
failures. Since a compute may erroneously disable itself in
response to mundane or otherwise intentional user-triggered events,
this patch adds a scheduler weigher that considers the build failure
counter and can negatively weigh hosts with recent failures. This
avoids taking computes fully out of rotation, instead treating them as
less likely to be picked for a subsequent scheduling operation.
This introduces a new conf option to control this weight. The default
is set high to maintain the existing behavior of picking nodes that
are not experiencing high failure rates, with the counter reset as
soon as a single successful build occurs. This is a minimally visible
change from the existing behavior with the default configuration.
The rationale behind the default value for this weigher comes from the
values likely to be generated by its peer weighers. The RAM and Disk
weighers will increase the score by number of available megabytes of
memory (range in thousands) and disk (range in millions). The default
value of 1000000 for the build failure weigher will cause competing
nodes with similar amounts of available disk and a small (less than ten)
number of failures to become less desirable than those without, even
with many terabytes of available disk.
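The arithmetic above can be sketched as follows; this is an illustrative
model of competing additive weighers, not Nova's actual weigher classes,
and the numbers are examples:

```python
# Illustrative sketch of competing additive weighers, not Nova's
# weigher code. Free RAM and disk (in MB) raise a host's score, while
# each recent build failure subtracts the large default multiplier.
def weigh_host(free_ram_mb, free_disk_mb, failed_builds,
               build_failure_multiplier=1000000.0):
    score = free_ram_mb + free_disk_mb
    score -= build_failure_multiplier * failed_builds
    return score

# Two hosts with similar free disk (~2 TB): a single recent build
# failure makes the second host less desirable than the first.
healthy = weigh_host(64000, 2000000, failed_builds=0)
failing = weigh_host(64000, 2000000, failed_builds=1)
assert healthy > failing
```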
Change-Id: I71c56fe770f8c3f66db97fa542fdfdf2b9865fb8
Related-Bug: #1742102
This patch is the first step in syncing the nova host aggregate
information with the placement service. The scheduler report client gets
a couple new public methods -- aggregate_add_host() and
aggregate_remove_host(). Both of these methods do **NOT** impact the
provider tree cache that the scheduler reportclient keeps when
instantiated inside the compute resource tracker.
Instead, these two new reportclient methods look up a resource provider
by *name* (not UUID) since that is what is supplied by the
os-aggregates Compute API when adding or removing a "host" to/from a
nova host aggregate.
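A minimal sketch of the name-based lookup these methods perform; the data
shapes and helper below are illustrative, not the report client's real
signatures:

```python
# Illustrative sketch, not the scheduler report client's real code:
# mirror a host-aggregate membership into placement by looking up the
# resource provider by *name*, since that is what the os-aggregates
# Compute API supplies when adding or removing a host.
def aggregate_add_host(providers, host_name, agg_uuid):
    rp = next((p for p in providers if p["name"] == host_name), None)
    if rp is None:
        raise LookupError("no resource provider named %r" % host_name)
    rp.setdefault("aggregates", set()).add(agg_uuid)
    return rp

providers = [{"name": "compute1"}, {"name": "compute2"}]
aggregate_add_host(providers, "compute1", "agg-uuid-1")
```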
Change-Id: Ibd7aa4f8c4ea787774becece324d9051521c44b6
blueprint: placement-mirror-host-aggregates
This change adds vSCSI Fibre Channel volume support via Cinder for the
PowerVM virt driver. Attach, detach, and extend are the volume
operations supported by the PowerVM vSCSI FC adapter. PowerVM CI volume
tests are run on demand only, which can be done by leaving a comment
with "powervm:volume-check".
Blueprint: powervm-vscsi
Change-Id: I632993abe70f9f98a032a35891b690db15ded6a0
This adds a request filter that, if enabled, allows us to use placement
to select hosts in the desired availability zone by looking up the uuid
of the associated host aggregate and using that in our query for
allocation candidates. The deployer needs the same sort of mirrored
aggregate setup as the tenant filter, and documentation is added here to
make that clear.
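A configuration sketch for enabling the filter; the option name is the
one associated with the placement request filter work, shown here as an
assumption rather than prescribed by this commit message:

```ini
# nova.conf -- enable the availability-zone request filter
[scheduler]
query_placement_for_availability_zone = True
```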
Related to blueprint placement-req-filter
Change-Id: I7eb6de435e10793f5445724d847a8f1bf25ec6e3
1. Beginning with the Queens release, the keystone install guide
recommends running all interfaces on the same port. This patch
updates the install guide to reflect that change.
2. Update the deprecated neutron auth options.
Change-Id: I5c0a6389b759153bae06fa43846f03ac083c3db4