There's nothing of use in here. A section on creating extensions for the
API is removed since this is no longer a thing.
Change-Id: I18a6f642c046051cd6084ab920d78f27887ca13d
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This is a follow-up for change Ic8783053778cf4614742186e94059d5675121db1
and corrects the 'image_property set --property' arg format in the
hw_machine_type doc. Newline formatting in the nova-manage CLI doc is
cleaned up to be consistent throughout, and unnecessary parentheses are
removed from the ImagePropertyCommands class.
Related to blueprint libvirt-device-bus-model-update
Change-Id: I5b67e9ae5125f6dad68cae7ac0601ac5b02e74b3
host arch in libvirt driver support
This is split 2 of 3 for the architecture emulation feature.
This implements emulated multi-architecture support through QEMU
within OpenStack Nova.
An additional config variable check pulls the host architecture into the
hw_architecture field so that emulation checks can be made.
Adds a custom function that checks whether the
hw_emulation_architecture field is set, allowing core code to
function as normal while enabling emulated
architectures to follow the same path as the multi-arch support
already established for physical nodes, but leveraging QEMU
to provide the overall emulation.
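A minimal sketch of the kind of check involved (the helper name and
signature here are illustrative, not the actual libvirt driver code):

    def pick_guest_arch(image_props, host_arch):
        # Illustrative only: prefer an explicitly requested emulated
        # architecture, otherwise keep the host architecture as before.
        emulation_arch = image_props.get('hw_emulation_architecture')
        if emulation_arch:
            return emulation_arch
        return host_arch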
Added a check in the domain XML unit test to strip arch from the os tag,
as it is not required for UEFI checks and is only leveraged for emulation
checks.
Added additional test cases to test_driver validating emulation
functionality by checking hw_emulation_architecture against the
os_arch/hw_architecture field. Added the required os-traits and settings
for the scheduler request_filter.
Added RISCV64 to the architecture enum for better support in the driver.
Implements: blueprint pick-guest-arch-based-on-host-arch-in-libvirt-driver
Closes-Bug: 1863728
Change-Id: Ia070a29186c6123cf51e1b17373c2dc69676ae7c
Signed-off-by: Jonathan Race <jrace@augusta.edu>
After moving the nova API policies as per the new guidelines,
where a system-scoped token will only be allowed to access
system-level APIs and will not be allowed any operation
on project-level APIs, we no longer need the below
base rules (which have a hardcoded 'system_scope:all' check_str):
- system_admin_api
- system_reader_api
- system_admin_or_owner
- system_or_project_reader
At this stage (the phase-1 target), we allow the below roles as targeted
in phase-1 [1]:
1. ADMIN (this is the System Administrator with scope_type 'system'
when scope is enabled, otherwise the legacy admin)
2. PROJECT_ADMIN
3. PROJECT_MEMBER
4. PROJECT_READER
and the below one specific to nova:
5. PROJECT_READER_OR_ADMIN (to allow system admin and project reader
to list flavor extra specs)
This completes phase-1 of the RBAC community-wide goal [2] for nova.
Add release notes too.
[1] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html#how-operator
[2] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html#yoga-timeline-7th-mar-2022
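As a hedged illustration only (the actual check strings live in
nova/policies/base.py and may differ), a reader-or-admin style rule can
be expressed with oslo.policy roughly like this:

    from oslo_policy import policy

    # Illustrative names and check strings, not nova's exact definitions.
    PROJECT_READER_OR_ADMIN = policy.RuleDefault(
        name='project_reader_or_admin',
        check_str='rule:project_reader_api or role:admin',
        description='Project reader or admin access.')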
Partial implement blueprint policy-defaults-refresh-2
Change-Id: I075005d13ff6bfe048bbb21d80d71bf1602e4c02
This adds an image property show and image property set command to
nova-manage to allow users to update image properties stored for an
instance in system metadata without having to rebuild the instance.
This is intended to ease migration to new machine types, as updating
the machine type could potentially invalidate the existing image
properties of an instance.
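A hedged example of the new commands (see the nova-manage documentation
for the exact syntax):

    nova-manage image_property show <instance_uuid> hw_machine_type
    nova-manage image_property set <instance_uuid> \
        --property hw_machine_type=q35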
Co-Authored-By: melanie witt <melwittt@gmail.com>
Blueprint: libvirt-device-bus-model-update
Change-Id: Ic8783053778cf4614742186e94059d5675121db1
Previously, the definition of live_migration_downtime didn't explain
whether any exception or timeout occurs if the migration exceeds the
value. The value is just used as a reference by nova; if any problem
happens when the VM gets paused, there will be no abort or force-complete.
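For reference, the option lives in the [libvirt] section of nova.conf
and is expressed in milliseconds, for example:

    [libvirt]
    # Maximum permitted downtime, in milliseconds, during live migration.
    live_migration_downtime = 500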
Closes-Bug: #1960345
Signed-off-by: Pedro Almeida <pedro.monteiroazevedodemouraalmeida@windriver.com>
Change-Id: I336481d1801a367b5628fedcd2aa5f5cf763355a
While most of the SR-IOV related documentation resides in the Neutron
repository which is going to have a separate section on the topic of
supporting remote-managed ports and off-path networking backends, there
are still some things specific to Nova which are worth documenting in
Nova docs.
https://docs.openstack.org/neutron/latest/admin/config-sriov.html
Implements: blueprint integration-with-off-path-network-backends
Change-Id: I3c5fe8ec0539e10d07b1b4888e9833bc7ede1d04
This was eventually added in Yoga, not Xena.
Change-Id: I8afe755732c95d023b7c4bd99964507f54d324f1
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This was actually three documents in one:
- An admin doc detailing how to configure and use notifications
- A contributor doc describing how to extend the versioned notifications
- A reference doc listing available versioned notifications
Split the doc up to reflect this
Change-Id: I880f1c77387efcc3c1e147323b224e10156e0a52
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Mostly copy-paste from the spec, but at least this is in-tree and
updatable.
Change-Id: I4cad2111065fbc1840d44fc9f4bf6ac585e18db6
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
In the text below, 'cells' means NUMA cells.
By default, an instance's first cell is placed on the host's cell with
id 0, so that cell is exhausted first. Then the host's cell with id 1
is used and exhausted. This leads to an error when placing an instance
whose NUMA topology has as many cells as the host, if some single-cell
instances were already placed on cell id 0 before. The fix performs
several sorts to put less-used cells at the beginning of the host_cells
list based on PCI device, memory and CPU usage when
packing_host_numa_cells_allocation_strategy is set to False (the
so-called 'spread strategy'), or tries to place all of a VM's cells on
the same host cell until it is completely exhausted and only then starts
to use the next available host cell (the so-called 'pack strategy') when
the configuration option packing_host_numa_cells_allocation_strategy is
set to True.
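A hedged configuration example (assuming the option lives in the
[compute] section of nova.conf):

    [compute]
    # False: 'spread strategy', prefer the least used host NUMA cells.
    # True:  'pack strategy', fill one host NUMA cell before the next.
    packing_host_numa_cells_allocation_strategy = False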
Partial-Bug: #1940668
Change-Id: I03c4db3c36a780aac19841b750ff59acd3572ec6
Based on review feedback on [1] and [2].
[1] If39db50fd8b109a5a13dec70f8030f3663555065
[2] I518bb5d586b159b4796fb6139351ba423bc19639
Change-Id: I44920f20213462a3abe743ccd38b356d6490a7b4
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
As we already discussed at the PTG, the consensus was to allow contributors
to use this label to ask cores to review some changes.
This documents it first so a dependent patch can then modify Gerrit once
we agree.
Change-Id: I38e999954e2c91d049e1af5cda6dd0b4f8168a0e
When suspending a VM in OpenStack, Nova detaches all the mediated
devices from the guest machine, but does not reattach them on the resume
operation. This patch makes Nova reattach the mdevs that were detached
when the guest was suspended.
This behavior is due to libvirt not supporting the hot-unplug of
mediated devices at the time the feature was being developed. The
limitation has been lifted since then, and now we have to amend the
resume function so it will reattach the mediated devices that were
detached on suspension.
Closes-bug: #1948705
Signed-off-by: Gustavo Santos <gustavofaganello.santos@windriver.com>
Change-Id: I083929f36d9e78bf7713a87cae6d581e0d946867
The commit replaces the DefCore committee (a former name) with the
Interop Working Group (the current name) and updates a few
more old interop references.
Change-Id: I578a21d610b5b680b4549bf34e1857307a1b8e74
The nova-manage placement heal_allocations CLI is capable of healing
missing placement allocations due to port resource requests. To support
the new extended port resource request this code needs to be adapted
too.
When the heal_allocations command got port resource request support
in Train, the only way to figure out the missing allocations was
to dig into the placement RP tree directly. Since then, nova gained
support for interface attach with such ports and, to support that,
placement gained support for in_tree filtering in allocation candidate
queries. So now the healing logic can be generalized to the following:
For a given instance:
1) Find the ports that have a resource request but no allocation key in
the binding profile. These are the ports we need to heal.
2) Gather the RequestGroups from these ports and run an
allocation_candidates query restricted to the current compute of the
instance with in_tree filtering.
3) Extend the existing instance allocation with a returned allocation
candidate and update the instance allocation in placement.
4) Update the binding profile of these ports in neutron.
The main change compared to the existing implementation is in step 2);
the rest is mostly the same.
Note that support for the old resource request format is kept alongside
the new resource request format until Neutron makes the new format
mandatory.
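A hedged usage example (the exact flags are documented in the
nova-manage man page):

    # Heal the allocations, including port-based ones, for one instance
    nova-manage placement heal_allocations --instance <instance_uuid>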
blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I58869d2a5a4ed988fc786a6f1824be441dd48484
As with the cells v2 docs before this, we have a number of
architecture-focused documents in-tree. The 'user/architecture' guide is
relatively up-to-date but is quite shallow, while the 'admin/arch' guide
is in-depth but almost a decade out-of-date, with references to things
like nova's in-built block storage service. Replace most of the latter
with more up-to-date information and then merge the former into it,
before renaming the file to 'admin/architecture'.
Change-Id: I518bb5d586b159b4796fb6139351ba423bc19639
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
We currently have three cells v2 documents in-tree:
- A 'user/cellsv2-layout' document that details the structure or
architecture of a cells v2 deployment (which is to say, any modern
nova deployment)
- A 'user/cells' document, which is written from a pre-cells v2
viewpoint and details the changes that cells v2 *will* require and the
benefits it *would* bring. It also includes steps for upgrading from
pre-cells v2 (that is, pre-Pike) deployment or a deployment with cells
v1 (which we removed in Train and probably broke long before)
- An 'admin/cells' document, which doesn't contain much other than some
advice for handling down cells
Clearly there's a lot of cruft to be cleared out as well as some
centralization of information that's possible. As such, we combine all
of these documents into one document, 'admin/cells'. This is chosen over
'users/cells' since cells are not an end-user-facing feature. References
to cells v1 and details on upgrading from pre-cells v2 deployments are
mostly dropped, as are some duplicated installation/configuration steps.
Formatting is fixed and Sphinx-isms are used to cross-reference config
options where possible. Finally, redirects are added so that people can
continue to find the relevant resources. The result is (hopefully) a
one-stop shop for all things cells v2-related that operators can use to
configure and understand their deployments.
Change-Id: If39db50fd8b109a5a13dec70f8030f3663555065
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
A recent customer call highlighted some misunderstandings about the two
weighers in the nova tree. Firstly, the basis for the metrics used by
the 'IoOpsWeigher' was not well explained and required some spelunking
through the code to understand. Secondly, the 'BuildFailureWeigher'
multiplier, configured by '[scheduler] build_failure_weight_multiplier',
defaults to a very large value for reasons that are not apparent unless
you read the commit logs for that weigher (hint: it's because we wanted
to preserve the behavior of the older filter-based approach to handling
nodes with build failures). Expand the documentation to fill both gaps.
In the process, we also correct some small nits with this doc, mostly
centered around whitespace.
Change-Id: If2d329b86808bdc70619fbe057dd25a938eb79da
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
At the moment, oslo.reports is enabled when running nova-api
standalone, but not when using uWSGI.
We're now updating the uwsgi entry point as well to include the
oslo.reports hook, which is extremely helpful when debugging
deadlocks.
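The hook in question is the oslo.reports Guru Meditation Report autorun;
a minimal sketch of wiring it into a WSGI entry point, assuming the
standard oslo.reports helpers are used, looks roughly like this:

    from oslo_config import cfg
    from oslo_reports import guru_meditation_report as gmr
    from oslo_reports import opts as gmr_opts

    from nova import version

    CONF = cfg.CONF

    # Register the GMR options and enable the signal-triggered report
    # before the WSGI application starts serving requests.
    gmr_opts.set_defaults(CONF)
    gmr.TextGuruMeditation.setup_autorun(version, conf=CONF)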
Change-Id: I605f0e40417fe9b0a383cc8b3fefa1325f9690d9
The 'nova-manage placement audit' tool has functionality that can
delete orphaned allocations in placement. Add a section for it in the
doc for troubleshooting orphaned allocations.
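A hedged example invocation (check the nova-manage documentation for
the exact flags):

    # Report orphaned allocations without removing them
    nova-manage placement audit --verbose
    # Remove any orphaned allocations that are found
    nova-manage placement audit --verbose --delete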
Change-Id: I697de57cf7eb43c0993af2b1f5b3f5c4395ef097
This adds some basic documentation for the above command and also
includes some very generic osc commands to use when checking volume
attachments.
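A hedged sketch of the kind of checks involved (exact syntax may differ
between client and nova-manage versions):

    # Inspect the instance and the volume it is attached to
    openstack server show <server_uuid>
    openstack volume show <volume_id>
    # Refresh stale connection info for an instance/volume pair
    nova-manage volume_attachment refresh <instance_uuid> <volume_id> \
        <connector_file>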
Blueprint: nova-manage-refresh-connection-info
Change-Id: Ib3d680654fe0809c9e8341dffd3a63ab02945a38
This patch adjusts the nova documentation about the extended port
resource request support in nova, as the neutron API extension did not
land in Xena.
Change-Id: I3b961426745084bdb4a6d04468f5a3c762be4cfa
blueprint: qos-minimum-guaranteed-packet-rate
Currently, when 'nova-manage db archive_deleted_rows' is run with
the --until-complete option, the process will archive rows in batches
in a tight loop, which can cause problems in busy environments where
the aggressive archiving interferes with other requests trying to write
to the database.
This adds an option for users to specify an amount of time in seconds
to sleep between batches of rows while archiving with --until-complete,
allowing the process to be throttled.
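A hedged example, assuming the new option is named --sleep:

    # Archive in batches until done, pausing 1 second between batches
    nova-manage db archive_deleted_rows --until-complete --sleep 1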
Closes-Bug: #1912579
Change-Id: I638b2fa78b81919373e607458e6f68a7983a79aa
The interface attach and detach logic is now fully adapted to the new
extended resource request format, and supports more than one request
group in a single port.
blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I73e6acf5adfffa9203efa3374671ec18f4ea79eb