Ironic is adding support for VNC consoles tracked under the following
spec[1]. This change provides support for the Nova Ironic driver to
access the consoles created by this feature effort.
This supersedes an existing Nova spec[2] to add VNC console support to
the Ironic driver, so this change can be considered to implement that
spec as well. This change can be merged independently of the Ironic
work, as the Ironic driver handles the VNC console not being available.
The prerequisites for a graphical console being available for an Ironic
driver node are:
- Ironic is configured to enable graphical consoles
- The node ``console_interface`` is a graphical driver such as
``redfish-graphical`` or ``fake-graphical``
- ``nova-novncproxy`` can make network connections to the VNC servers
which run adjacent to ``ironic-conductor``
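As a rough illustration, a deployment meeting these prerequisites could
look like the following (the option values and the CLI invocation are
illustrative assumptions about a Redfish-based setup, not part of this
change):

# ironic.conf on the ironic-conductor host
[DEFAULT]
enabled_console_interfaces = redfish-graphical,no-console

# select the graphical console interface on the node (illustrative)
# baremetal node set <node> --console-interface redfish-graphical

# nova.conf on the host running nova-novncproxy
[vnc]
enabled = true
novncproxy_base_url = http://controller:6080/vnc_auto.html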
The associated Depends-On change adds the novnc validation check to the
baremetal basic ops test, which is run in the
ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa job.
In the support matrix, console.vnc support is set to partial for Ironic
due to the current lack of vencrypt support on the Ironic side.
[1] https://specs.openstack.org/openstack/ironic-specs/specs/approved/graphical-console.html
[2] https://specs.openstack.org/openstack/nova-specs/specs/2023.1/approved/ironic-vnc-console.html
Related-Bug: 2086715
Implements: blueprint ironic-vnc-console
Change-Id: Iec26c67e29f91954eafc6a5a81086e36798d3f26
Signed-off-by: Steve Baker <sbaker@redhat.com>
This changes the thread pool usage of the ComputeManager to go through
the concurrency mode aware util functions.
The concurrent live migration pool had a seemingly unlimited option
when configured with the value 0, but in reality GreenThreadPool has a
default worker size of 1000. In practice it is almost never right to
have more than one live migration running concurrently, and with
native threading having 1000 workers is just too costly. So we decided
to deprecate the value 0 and changed the implementation of "unlimited"
to mean 5 threads in native threading mode. We kept the 1000
greenthreads in eventlet mode for backward compatibility.
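For reference, the pool size here comes from the existing concurrency
limit in nova.conf (assumed below to be the
[DEFAULT]max_concurrent_live_migrations option; the option itself is
unchanged by this patch, only the meaning of 0 changes):

[DEFAULT]
# 1 is the default; 0 ("unlimited") is now deprecated
max_concurrent_live_migrations = 1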
The _sync_power_states periodic task also spawns tasks for each
instance to be synced. As it uses a shared data structure across these
tasks and the caller, a lock is needed to avoid race conditions.
Also, the default pool size for these tasks is 1000 in our
configuration, which would use a lot of memory on a busy host in
native threading mode, so we changed the default value from 1000 to 5.
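Schematically, the locking pattern looks like the following standalone
sketch (illustration only, not the actual ComputeManager code):

import threading
from concurrent.futures import ThreadPoolExecutor

def sync_power_states(instance_uuids, query_power_state, max_workers=5):
    # the results dict is shared between the caller and the spawned
    # tasks, so it is only updated while holding the lock
    results = {}
    lock = threading.Lock()

    def _sync_one(uuid):
        state = query_power_state(uuid)
        with lock:
            results[uuid] = state

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for uuid in instance_uuids:
            pool.submit(_sync_one, uuid)
    return results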
Change-Id: I9567d5fabdf086b5d0493103d9f6bde4f66af387
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This is a follow-up for the release notes added in commit 35207ee8b5,
which changed the default mode for the scheduler and the API services.
At that time we missed noting the upgrade impact of that change, so
this patch extends the reno with an upgrade note.
Change-Id: I280e7eb9c1da6eeaf50e96e8b19e296961f2651a
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This has not been supported for some time.
Change-Id: Ic7073740deb0bf9670eebe77f0f8b0daca100a5c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This patch switches the default concurrency mode to native threading
for the services that gained native threading support in Flamingo:
nova-scheduler, nova-api, and nova-metadata.
The OS_NOVA_DISABLE_EVENTLET_PATCHING environment variable can still
be used to explicitly switch the concurrency mode back to eventlet by
setting OS_NOVA_DISABLE_EVENTLET_PATCHING=false.
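For example, to force eventlet mode when launching the scheduler (the
exact way the service is started depends on the deployment):

OS_NOVA_DISABLE_EVENTLET_PATCHING=false nova-scheduler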
We also ensure that the cover, docs, py3xx and functional tox targets
are still running with eventlet, while the py312-threading target
keeps running with native threading.
Change-Id: I86c7f31f19ca3345218171f0abfa8ddd4f8fc7ea
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This is a technical dead end and not something we're going to be able
to support long-term in pbr. We need to push users away from this.
Doing so highlights quite a few places where our docs need some work,
particularly in light of the recent removal of the eventlet servers.
Change-Id: I2ffaed710fac2612f5337aca5192af15eab46861
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
When using the weigher, we need to target the right cell context for the
existing instances in the host.
fill_metadata also had an issue: we need to pass the value from the
updated dict keyed by the instance uuid, not the whole dict of updated
instances.
Change-Id: I18260095ed263da4204f21de27f866568843804e
Closes-Bug: #2125935
Signed-off-by: Sylvain Bauza <sbauza@redhat.com>
Previous patches removed direct eventlet usage from nova-conductor, so
now we can run it with native threading as well. This patch documents
the possibility and switches both nova-conductor processes to native
threading mode in the nova-next job.
Change-Id: If26c0c7199cbda157f24b99a419697ecb6618fa6
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
This change fixes duplicate consecutive words in docs as well as in
code.
Signed-off-by: Rajesh Tailor <ratailor@redhat.com>
Change-Id: I236ff41fccf831023b6f85840097148a30e84743
This is the last piece to allow users to request AMD SEV-ES for memory
encryption instead of AMD SEV. The CPU feature for memory encryption
can now be requested via the hw:mem_encryption_model flavor extra spec
or via the hw_mem_encryption_model image property.
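For example (the exact value strings accepted by the extra spec and
image property are defined by the feature, so treat the values below
as illustrative):

openstack flavor set sev-es-flavor \
  --property hw:mem_encryption=True \
  --property hw:mem_encryption_model=amd-sev-es

openstack image set sev-es-image \
  --property hw_mem_encryption=True \
  --property hw_mem_encryption_model=amd-sev-es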
Implements: blueprint amd-sev-es-libvirt-support
Change-Id: Ifc9b86ad7db887cc22b2cd252fe8adc81fdc29c6
Signed-off-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
This patch refines our logging, docs, and release notes about the
native threading mode of the scheduler, api, and metadata services to
ask for pre-production testing before enabling it in production.
Change-Id: I04bbb3d7e4664a0cab8b30f4c34ee71774536353
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
It turns out that nova-api and nova-metadata only depend on spawning
threads via scatter-gather. The scatter-gather logic already supports
both eventlet and threading modes, so we can switch these services.
Our WSGI services (nova-api, nova-metadata) are not relying on
oslo.service to fork worker processes, but expect the web server to
handle that (uwsgi, apache mod_wsgi). This means we don't need to handle
any forking issues as no nova code runs before the fork.
Change-Id: Id3a339c605dfc730bdb7994c3ca45baafeb5af80
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
At service startup nova needs to initialize either the eventlet or the
threading backend of oslo.service, so this patch reuses the existing
logic behind OS_NOVA_DISABLE_EVENTLET_PATCHING.
When the OS_NOVA_DISABLE_EVENTLET_PATCHING env variable is set to
true, the service will select the threading backend; otherwise it
selects the eventlet backend.
Also, to avoid later monkey patch calls invalidating the selection,
the monkey_patch code is poisoned when the threading backend is
selected.
This patch also makes sure that oslo.messaging is initialized with the
matching executor backend.
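A minimal sketch of the selection logic (the oslo.service backend API
names used below are an assumption for illustration, not a copy of the
nova code):

import os
# assumed oslo.service backend selection API
from oslo_service.backend import BackendType, init_backend

def init_service_backend():
    # "true" disables eventlet patching, which selects the threading
    # backend; anything else keeps the eventlet backend
    disable_eventlet = os.environ.get(
        'OS_NOVA_DISABLE_EVENTLET_PATCHING', '').lower() in ('1', 'true')
    if disable_eventlet:
        init_backend(BackendType.THREADING)
    else:
        init_backend(BackendType.EVENTLET)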
As this is the last step to make nova-scheduler run in threading mode,
this patch adds a release note as well.
Change-Id: I6e2e6a43df78d23580b5e7402352a5036100ab36
Signed-off-by: Balazs Gibizer <gibi@redhat.com>
The Keystone project manager role can be used for the project-level
management APIs. Nova introduced the manager role in its policy
defaults.
To introduce the manager role, we need to make the migration policies
more granular. Adding separate policies for host-related operations
allows us to open the migration operations to the project manager
role. The existing policy is checked when a migration is requested
without specifying a host, and the new policy is checked when a host
is specified. The same applies to listing migrations: the new policy
controls whether the host info is returned.
Also, add docs and release notes.
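Conceptually, the split could be expressed in policy.yaml like this
(the granular rule name with the :host suffix and the role aliases
shown are hypothetical placeholders for illustration, not necessarily
the names used by this change):

# hypothetical names, for illustration only
"os_compute_api:os-migrate-server:migrate_live": "rule:project_manager_api"
"os_compute_api:os-migrate-server:migrate_live:host": "rule:project_admin_api"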
Partially implements: blueprint policy-manager-role-default
Change-Id: Ie7d135e4d24ac6d53c46a4c69ade0b0bda554e71
Signed-off-by: Ghanshyam Mann <gmaan@ghanshyammann.com>
Signed-off-by: ghanshyam <gmaan@ghanshyammann.com>
Either both vendor_id and product_id need to be set, or resource_class
needs to be set, in each alias. This is now validated when the alias
is parsed to avoid a late failure during the placement
allocation_candidates query.
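For example, with this validation each alias needs to look like one of
the following (values are illustrative):

[pci]
alias = { "name": "a1", "product_id": "1572", "vendor_id": "8086", "device_type": "type-PF" }
alias = { "name": "a2", "resource_class": "CUSTOM_MY_DEVICE", "device_type": "type-PCI" }

An alias that sets neither vendor_id/product_id nor resource_class is
now rejected when the config is parsed.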
Closes-Bug: #2111440
Change-Id: I7fd43b3d6faac8c4098b0983e8adc596414823a1
Document the limitation of the PCI in Placement feature that it does
not support [pci]alias configuration where the name of the alias is
repeated. E.g.
[pci]
alias = { "name": "vf1", "product_id":"10ca", "vendor_id":"8086", "device_type":"type-VF"}
alias = { "name": "vf1", "product_id":"f000", "vendor_id":"8086", "device_type":"type-VF"}
This would mean the alias vf1 can be fulfilled from devices with
product id 10ca OR f000. However, this OR relationship cannot be
encoded in a single Placement allocation candidates query, as
Placement does not support requesting alternative resource classes for
a request[2].
This limitation was encoded in the original PCI in Placement
implementation[1], but we missed mentioning it in the doc. This is now
fixed.
[1]https://github.com/openstack/nova/blob/0d484ce37d86e989c8abdf57aec5e334f68206ef/nova/objects/request_spec.py#L504-L528
[2]https://docs.openstack.org/api-ref/placement/#list-allocation-candidates
Related-Bug: #2102038
Change-Id: I9dd78b1498f870a4e4c3f26c23d42d105aec0350
In c12eebd4c6 we missed that there is another set of config options
that became unused. So this is a follow-up patch to remove those as
well.
Change-Id: Ie00805b5f72b118db134aeb8399ef4c72f434966
The doc now clarifies that [filter_scheduler]pci_in_placement needs to
be set in the nova-api, nova-scheduler, and nova-conductor config as
well.
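I.e. the same setting should be present in the config of each of those
services:

[filter_scheduler]
pci_in_placement = True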
Closes-Bug: #2112303
Change-Id: I3c7be2f109a97ef5cc4b2dc76cb7c58ef8c68afa