This change documents certain Hyper-V driver features that are not
included in the driver support matrix.
Change-Id: I29f6d816138bd31ad6bc8d327636b202d718bdff
The only ones remaining are some real crufty SVGs and references to
things that still exist because nova-network was once a thing.
Change-Id: I1aebf86c05c7b8c1562d0071d45de2fe53f4588b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Add a section to the support matrix for image caching
(``has_imagecache`` virt driver capability).
Change-Id: I9147c5ea6b276b4fe18a981f4360844009bd3d95
Partial-Bug: #1847302
Information on deleting (aborting) an in-progress live migration is
missing from the support matrix; it could be useful for users
considering this feature.
This patch adds it.
Change-Id: I2f917627fa451d20b1fd1ff35025481a4e525084
Closes-Bug: #1808902
Track compute node inventory for the new MEM_ENCRYPTION_CONTEXT
resource class (added in os-resource-classes 0.4.0) which represents
the number of guests a compute node can host concurrently with memory
encrypted at the hardware level.
This serves as a "master switch" for enabling SEV functionality, since
all the code which takes advantage of the presence of this inventory
in order to boot SEV-enabled guests is already in place, but none of
it gets used until the inventory is non-zero.
A discrete inventory is required because on AMD SEV-capable hardware,
the memory controller has a fixed number of slots for holding
encryption keys, one per guest. Typical early hardware only has 15
slots, thereby limiting the number of SEV guests which can be run
concurrently to 15. nova needs to track how many slots are available
and used in order to avoid attempting to exceed that limit in the
hardware.
Work is in progress to allow QEMU and libvirt to expose the number of
slots available on SEV hardware; however until this is finished and
released, it will not be possible for nova to programmatically detect
the correct value with which to populate the MEM_ENCRYPTION_CONTEXT
inventory. So as a stop-gap, populate the inventory using the value
manually provided by the cloud operator in a new configuration option
CONF.libvirt.num_memory_encrypted_guests.
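As a sketch, the stop-gap configuration could look like this in
nova.conf (the value 15 is illustrative, matching the typical
early-hardware slot count mentioned above):

```ini
[libvirt]
# Maximum number of concurrent SEV guests. Match this to the number of
# encryption-key slots the memory controller provides, until nova can
# detect the value programmatically.
num_memory_encrypted_guests = 15
```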
Since this commit effectively enables SEV, also add all the relevant
documentation as planned in the AMD SEV spec [0]:
- Add operation.boot-encrypted-vm to the KVM hypervisor feature matrix.
- Update the KVM section of the Configuration Guide.
- Update the flavors section of the User Guide.
- Add a release note.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#documentation-impact
blueprint: amd-sev-libvirt-support
Change-Id: I659cb77f12a38a4d2fb118530ebb9de88d2ed30d
UEFI support in the VMware driver has been added with commit fc0c6d2.
This patch fixes the support matrix to reflect this.
Change-Id: I8b08e11ae4dd7f1101758b29ae3424d790b26ed1
This patch implements live migration of instances across compute nodes.
Each compute node must be managing a cluster in the same vCenter and ESX
hosts must have vMotion enabled [1].
If the instance is located on a datastore shared between source
and destination cluster, then only the host is changed. Otherwise, we
select the most suitable datastore on the destination cluster and
migrate the instance there.
[1] https://kb.vmware.com/s/article/2054994
Co-Authored-By: gkotton@vmware.com
blueprint vmware-live-migration
Change-Id: I640013383e684497b2d99a9e1d6817d68c4d0a4b
Native QEMU LUKS decryption support was added for the
libvirt driver in Queens, but there are no docs in the
feature support matrix about encrypted volume support
at all, so this attempts to close that gap.
Change-Id: I035164a0c4222814784306381f9a11413c8de9e2
The z/VM driver lands in the Rocky release, and this patch adds the
corresponding z/VM support matrix update.
blueprint: add-zvm-driver-rocky
Change-Id: I58016140c7f556df91ce258733455647a26dd727
Add a method for libvirt driver to get cpu traits.
This is used for compute nodes to report cpu traits to Placement.
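Illustratively (this is a sketch, not the driver's actual code),
mapping libvirt-reported CPU feature flags to Placement trait names
follows the os-traits HW_CPU_X86_* naming convention:

```python
def cpu_flags_to_traits(flags):
    """Map libvirt-reported x86 CPU feature flags to Placement trait
    names using the HW_CPU_X86_* convention, e.g. 'avx' becomes
    'HW_CPU_X86_AVX'. The flag spellings here are illustrative.
    """
    return {'HW_CPU_X86_%s' % f.upper().replace('-', '_') for f in flags}
```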
Change-Id: I9bd80adc244c64277d2d00e7d79c3002c8f9d57e
blueprint: report-cpu-features-as-traits
The code to generate a support matrix has been pulled into a common
library. Use it instead of duplicating the code in the various
projects that need it.
Change-Id: If5c0bf2b0dcd7dbb7d316139ecb62a936fd15439
Co-Authored-By: Stephen Finucane <stephenfin@redhat.com>
File-backed memory is enabled per nova compute host. When enabled, the
host will report 'file_backed_memory_capacity' as its available memory.
When enabled, instances will create memory backing files in the
directory specified by the 'memory_backing_dir' config option in
libvirt's qemu.conf file.
This feature is not compatible with memory overcommit and requires
'ram_allocation_ratio' to be set to 1.0.
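As a sketch, the two pieces of configuration could look like the
following; the nova-side size option name ('file_backed_memory', in
MiB) and the values are assumptions here, not taken from this change:

```ini
# /etc/libvirt/qemu.conf
memory_backing_dir = "/dev/shm"

# nova.conf (option name assumed for illustration)
[libvirt]
file_backed_memory = 1048576

[DEFAULT]
# Required: this feature is incompatible with memory overcommit.
ram_allocation_ratio = 1.0
```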
Change-Id: I676291ec0faa1dea0bd5050ef8e3426d171de4c6
Implements: blueprint libvirt-file-backed-memory
Even though the feature is technically virt driver agnostic,
the plumbing happens through the virt drivers, so the feature
is only supported by certain virt drivers (libvirt only at
the time of this patch). So this adds a section to the feature
support matrix about the trusted certs validation feature.
Also updates the certificate validation user docs based on
the nova boot --trusted-image-certificate-id option name
in the dependent python-novaclient change.
Depends-On: https://review.openstack.org/500396/
Related to blueprint nova-validate-certificates
Change-Id: Ic5cb4a98c73cc404c7033cf183f25a97aba3c994
This change adds vSCSI Fibre Channel volume support via cinder for the
PowerVM virt driver. Attach, detach, and extend are the supported
volume operations by the PowerVM vSCSI FC adapter. PowerVM CI volume
tests are run on demand only, which can be done by leaving a comment
with "powervm:volume-check".
Blueprint: powervm-vscsi
Change-Id: I632993abe70f9f98a032a35891b690db15ded6a0
This adds the ability to hotplug network interfaces for the powervm
virt driver.
Blueprint: powervm-network-hotplug
Change-Id: I78b94c9731c35e3291d46b9bf9f5554e21c2429e
Nova's documentation has two pages listing features supported on
several architectures/hypervisors. This patch adds the initial state
of AArch64 to the support matrix.
Also document the minimum qemu/libvirt versions for AArch64. Version
3.6.0 was the first that worked for us with nova without extra patches.
Change-Id: I2ee7be9e88e20ed0f77be07fed4fdd800533b3c5
This change introduces a new microversion which must be used
to create a server from a multiattach volume or attach a multiattach
volume to an existing server instance.
Attaching a multiattach volume to a shelved offloaded instance is not
supported since an instance in that state does not have a compute host
so we can't tell if the compute would support the multiattach volume
or not. This is consistent with the tagged attach validation with 2.49.
When creating a server from a multiattach volume, we'll check to see
if all computes in all cells are upgraded to the point of even supporting
the compute side changes, otherwise the server create request fails with
a 409. We do this because we don't know which compute node the scheduler
will pick and we don't have any compute capability filtering in the
scheduler for multiattach volumes (that may be a future improvement).
Similarly, when attaching a multiattach volume to an existing instance,
if the compute isn't new enough to support multiattach or the virt
driver simply doesn't support the capability, a 409 response is returned.
Presumably, operators will use AZs/aggregates to organize which hosts
support multiattach if they have a mixed hypervisor deployment, or will
simply disable multiattach support via Cinder policy.
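The validation flow above can be sketched as follows;
MIN_COMPUTE_MULTIATTACH and the Conflict exception are illustrative
names, not nova's actual identifiers:

```python
MIN_COMPUTE_MULTIATTACH = 38  # hypothetical minimum service version


class Conflict(Exception):
    """Maps to an HTTP 409 response."""


def check_multiattach_support(volume, min_compute_version,
                              driver_supports_multiattach=True):
    """Reject multiattach requests the deployment cannot honor.

    Raises Conflict (i.e. a 409) if any compute is too old, or if the
    virt driver lacks the multiattach capability.
    """
    if not volume.get('multiattach'):
        return  # ordinary volume, nothing to check
    if min_compute_version < MIN_COMPUTE_MULTIATTACH:
        raise Conflict('computes are not upgraded for multiattach')
    if not driver_supports_multiattach:
        raise Conflict('virt driver does not support multiattach')
```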
The unit tests cover error conditions in the new flow. A new
functional scenario test is added for happy path testing of the new boot
from multiattach volume flow and attaching a multiattach volume to more
than one instance.
Tempest integration testing for multiattach is added in change
I80c20914c03d7371e798ca3567c37307a0d54aaa.
Devstack support for multiattach is added in change
I46b7eabf6a28f230666f6933a087f73cb4408348.
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Implements: blueprint multi-attach-volume
Change-Id: I02120ef8767c3f9c9497bff67101e57e204ed6f4
This change set adds Open vSwitch VIF support for the PowerVM virt
driver.
Change-Id: If23aeb890c4365014a9f1262647611162f981f12
Partially-Implements: blueprint powervm-nova-it-compute-driver
quiesce and unquiesce are virt driver operations supported by libvirt.
We need to document these functions in the support matrix so that
admins/users can refer to them.
Change-Id: If1277cde2aff44b5651154fc05c3cd4377237c60
This updates the config drive status to complete for PowerVM [1]. It
also updates the status for PowerVM features previously classified as
unknown.
[1] https://review.openstack.org/#/c/409404/
Change-Id: Idc5e40f2473d27c31c5a620ad9b93cce01dc7f85
Commit ace11d3 adds a serial port device to instances, so the serial
console output can be sent to a virtual serial port concentrator (VSPC).
This patch finishes the implementation by returning the output saved by
VSPC to the end user. The config option 'serial_log_dir' should have the
same value as in the VSPC configuration (i.e. it must point to the same
directory).
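A sketch of the matching configuration, assuming the option lives in
the driver's '[vmware]' group (the group name and path are assumptions
here):

```ini
[vmware]
# Must point to the same directory the VSPC writes to, so nova can
# read back the console output it records.
serial_log_dir = /opt/vmware/vspc
```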
The VSPC implementation is available at
https://github.com/openstack/vmware-vspc
blueprint vmware-console-log-complete
Change-Id: I7e40dc41b0354d414bc8eae331f8257959e1d123
The attach volume section has LXC listed as 'missing' and you
need to be able to attach a volume when performing a swap
volume operation, so swap volume must also be marked as missing
for LXC.
Change-Id: I97b024d3ff817a7152906b0a88b1b64db93d7d7d
The drivers that support creating a server with device
tags is different from the drivers that support attaching
volumes and interfaces with tags, and they are different
operations, so this adds separate actions to the feature
support matrix.
Change-Id: I00ad8be5520e30b2c240ae9f2697ce617aab3ac2
Closes-Bug: #1701421
Virtual device tagging support was added for several hypervisors, and
it's useful to document it in the support matrix to provide more info
to users.
Change-Id: Idab929904aaba924f9f1f4814ff959de01f72f83
Partial-Bug: #1701421
Auto disk config is enabled only via 'auto_disk_config' in instance
metadata and image metadata, and only the Xen driver appears to
implement this feature.
So update the doc to indicate the basic operation and remove Hyper-V
from the support list.
Change-Id: I447e96b59bc77be7c0bb66e1b3657a1d92741a5c
Saying something is "unclear" in documentation provided to users
doesn't make sense. This patch makes the description for getting host
info clearer and splits it into IP and uptime parts.
VMware manages a cluster, so host uptime is not applicable and not
implemented.
Change-Id: I95c7ecd85d556b8938ba0db127a04cf2a64feccc
nova console-log has been supported in novaclient since nova API 2.1
(2.0), and nova trigger-crash-dump was added in API 2.17 (commit
6cbb22583b94660cfd78d8ee0068778d5279ceca), so we can add those CLIs
for user reference.
Change-Id: I17bf421a7eb2ec9ff7e94704889ea22bebfa980b
nova.virt.hardware.InstanceInfo had several fields that have never been
used since their inception two and a half years ago [1]. This change set
removes them. They are (were):
max_mem_kb: (int) the maximum memory in KBytes allowed
mem_kb: (int) the memory in KBytes used by the instance
num_cpu: (int) the number of virtual CPUs for the instance
cpu_time_ns: (int) the CPU time used in nanoseconds
We also rename the 'id' field to 'internal_id' for two reasons: First,
because 'id' is a builtin; second, to emphasize that this is not
(necessarily) tied to the Instance's real id/uuid.
[1] https://review.openstack.org/#/c/133777
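A minimal sketch of the slimmed-down class after this change (the
field set is assumed from what survives the removal; this is
illustrative, not the actual nova code):

```python
class InstanceInfo(object):
    """Instance info reduced to the fields actually used: the power
    state, plus an internal_id that is not necessarily tied to the
    Instance's real id/uuid.
    """

    def __init__(self, state=None, internal_id=None):
        self.state = state
        self.internal_id = internal_id

    def __eq__(self, other):
        return (self.__class__ == other.__class__ and
                self.__dict__ == other.__dict__)
```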
Change-Id: I5fe5c8121800e2b8da0860d53d818b7bd83c9e9d
This enables Ironic to boot bare metal machines from a Cinder volume.
The Ironic virt driver needs to pass the remote volume connection
information down to Ironic when spawning a new bare metal instance
requested to boot from a Cinder volume.
This implements the get_volume_connector method for the Ironic driver.
It gets connector information from the Ironic service and passes it to
Cinder's initialize_connection method for attached volumes, then puts
the returned value back into Ironic.
This patch changes the required Ironic API version to 1.32 for using
new API for volume resources.
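Illustratively, the connector handed to Cinder's initialize_connection
is a dict of the node's connectivity details built from the volume
connector records Ironic stores; the helper below is a sketch with
hypothetical field and record names, not the driver's actual code:

```python
def get_volume_connector(node_uuid, volume_connectors):
    """Build a Cinder-style connector dict from the volume connector
    records stored for a bare metal node. Record shapes such as
    {'type': 'iqn', 'connector_id': ...} are illustrative.
    """
    connector = {'uuid': node_uuid, 'multipath': False}
    for vc in volume_connectors:
        if vc['type'] == 'iqn':
            connector['initiator'] = vc['connector_id']
        elif vc['type'] == 'ip':
            connector['ip'] = vc['connector_id']
    return connector
```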
Co-Authored-By: Satoru Moriya <satoru.moriya.br@hitachi.com>
Co-Authored-By: Hironori Shiina <shiina.hironori@jp.fujitsu.com>
Change-Id: I319779af265684715f0142577a217ab66632bf4f
Implements: blueprint ironic-boot-from-volume
Set the attach and detach interface features as complete.
Implements: blueprint ironic-hotplug-interfaces
Depends-On: I48c4706b3eb6e0a5105e463236870921d55dbd93
Change-Id: I8ed286d57ccaab9a6cb0eda62e30859e7a17e826
Per the spec [1]:
user/ – end-user content such as concept guides, advice, tutorials,
step-by-step instructions for using the CLI to perform specific tasks,
etc.
The remaining content all ends up in here.
[1] specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration
Change-Id: I480eee9cd7568efe2f76dd185004774588eb4a99