setup: Remove pbr's wsgi_scripts

This is a technical dead end and not something we're going to be able to
support long-term in pbr. We need to push users away from this. Doing so
highlights quite a few places where our docs need some work, particularly
in light of the recent removal of the eventlet servers.

Change-Id: I2ffaed710fac2612f5337aca5192af15eab46861
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This commit is contained in:
Stephen Finucane
2025-05-06 13:22:44 +01:00
parent 32f58e8ad6
commit 5da2dc2060
20 changed files with 113 additions and 142 deletions
+2 -1
@@ -75,9 +75,10 @@ redirectmatch 301 ^/nova/([^/]+)/user/placement.html$ /placement/$1/
redirectmatch 301 ^/nova/([^/]+)/user/upgrade.html$ /nova/$1/admin/upgrades.html
redirectmatch 301 ^/nova/([^/]+)/user/user-data.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/user/vendordata.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/user/wsgi.html$ /nova/$1/admin/wsgi.html
redirectmatch 301 ^/nova/([^/]+)/vendordata.html$ /nova/$1/user/metadata.html
redirectmatch 301 ^/nova/([^/]+)/vmstates.html$ /nova/$1/reference/vm-states.html
redirectmatch 301 ^/nova/([^/]+)/wsgi.html$ /nova/$1/user/wsgi.html
redirectmatch 301 ^/nova/([^/]+)/wsgi.html$ /nova/$1/admin/wsgi.html
redirectmatch 301 ^/nova/([^/]+)/admin/arch.html$ /nova/$1/admin/architecture.html
redirectmatch 301 ^/nova/([^/]+)/admin/adv-config.html$ /nova/$1/admin/index.html
redirectmatch 301 ^/nova/([^/]+)/admin/configuration/schedulers.html$ /nova/$1/admin/scheduling.html
+13 -15
@@ -19,7 +19,7 @@ Laski gave at the Austin (Newton) summit which may be worth watching.
.. note::
Cells v2 is different to the cells feature found in earlier versions of
nova, also known as cells v1. Cells v1 was deprecated in 16.0.0 (Pike) and
nova, also known as Cells v1. Cells v1 was deprecated in 16.0.0 (Pike) and
removed entirely in Train (20.0.0).
@@ -34,14 +34,13 @@ This means a multi-cell deployment will not be radically different from a
Consider such a deployment. It will consist of the following components:
- The :program:`nova-api-wsgi` service which provides the external REST API to
users.
- The Compute API which provides the external REST API to users.
- The :program:`nova-scheduler` and ``placement`` services which are
responsible for tracking resources and deciding which compute node instances
should be on.
- An "API database" that is used primarily by :program:`nova-api-wsgi` and
- An "API database" that is used primarily by the Compute API and
:program:`nova-scheduler` (called *API-level services* below) to track
location information about instances, as well as a temporary location for
instances being built but not yet scheduled.
@@ -268,8 +267,8 @@ database schemas, respectively.
API database
~~~~~~~~~~~~
The API database is the database used for API-level services, such as
:program:`nova-api-wsgi` and, in a multi-cell deployment, the superconductor.
The API database is the database used for API-level services, such as the
Compute API and, in a multi-cell deployment, the superconductor.
The models and migrations related to this database can be found in
``nova.db.api``, and the database can be managed using the
:program:`nova-manage api_db` commands.
@@ -797,22 +796,21 @@ Starting from the 19.0.0 (Stein) release, the :doc:`nova metadata API service
.. rubric:: Global
If you have networks that span cells, you might need to run Nova metadata API
globally. When running globally, it should be configured as an API-level
globally by setting :oslo.config:option:`api.local_metadata_per_cell` to
``false``. When running globally, it should be configured as an API-level
service with access to the :oslo.config:option:`api_database.connection`
information. The nova metadata API service **must not** be run as a standalone
service, using the :program:`nova-metadata-wsgi` service, in this case.
information.
.. rubric:: Local per cell
Running Nova metadata API per cell can have better performance and data
isolation in a multi-cell deployment. If your networks are segmented along
cell boundaries, then you can run Nova metadata API service per cell. If you
choose to run it per cell, you should also configure each
cell boundaries, then you can run Nova metadata API service per cell by setting
:oslo.config:option:`api.local_metadata_per_cell` to ``true``. If you choose to
run it per cell, you should also configure each
:neutron-doc:`neutron-metadata-agent
<configuration/metadata-agent.html?#DEFAULT.nova_metadata_host>` service to
point to the corresponding :program:`nova-metadata-wsgi`. The nova metadata API
service **must** be run as a standalone service, using the
:program:`nova-metadata-wsgi` service, in this case.
point to the hostname/IP address of the corresponding Compute Metadata API.
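As a sketch, the per-cell mode described above corresponds to a single option
in each cell's ``nova.conf`` (the option name comes from the text; the layout
is illustrative):

```ini
[api]
# Serve metadata from this cell's database rather than the API database
local_metadata_per_cell = true
```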
Console proxies
~~~~~~~~~~~~~~~
@@ -1024,7 +1022,7 @@ FAQs
using the ``nova-manage cell_v2 update_cell`` command but the API is still
trying to use the old settings.
The cell mappings are cached in the :program:`nova-api-wsgi` service worker so you
The cell mappings are cached in the compute API service worker so you
will need to restart the worker process to rebuild the cache. Note that there
is another global cache tied to request contexts, which is used in the
nova-conductor and nova-scheduler services, so you might need to do the same
+6 -6
@@ -28,12 +28,12 @@ responsibilities of services and drivers are:
.. rubric:: Services
:doc:`nova-metadata-wsgi </user/wsgi>`
A WSGI application that serves the Nova Metadata API.
:doc:`nova-api-wsgi </user/wsgi>`
:doc:`Compute API </admin/wsgi>`
A WSGI application that serves the Nova OpenStack Compute API.
:doc:`Metadata API </admin/metadata-service>`
A WSGI application that serves the Nova Metadata API.
:doc:`nova-compute </cli/nova-compute>`
Manages virtual machines. Loads a Service object, and exposes the public
methods on ComputeManager through a Remote Procedure Call (RPC).
@@ -102,8 +102,8 @@ the defaults from the :doc:`install guide </install/index>` will be sufficient.
* :placement-doc:`Placement service <>`: Overview of the placement
service, including how it fits in with the rest of nova.
* :doc:`Running nova-api on wsgi </user/wsgi>`: Considerations for using a real
WSGI container instead of the baked-in eventlet web server.
* :doc:`Running nova-api on wsgi </admin/wsgi>`: Considerations for deploying
the APIs.
* :doc:`Nova service concurrency </admin/concurrency>`: Considerations on how
to use and tune Nova services in threading mode.
+1 -1
@@ -291,7 +291,7 @@ Refer to :oslo.config:option:`pci.alias` for syntax information.
Refer to :ref:`Affinity <pci-numa-affinity-policy>` for ``numa_policy``
information.
Once configured, restart the :program:`nova-api-wsgi` service.
Once configured, restart the Compute API service.
Configuring a flavor or image
+2 -2
@@ -52,7 +52,7 @@ following components:
- One or more :program:`nova-novncproxy` service. Supports browser-based noVNC
clients. For simple deployments, this service typically runs on the same
machine as :program:`nova-api-wsgi` because it operates as a proxy between
machine as the Compute API because it operates as a proxy between
the public network and the private compute host network.
- One or more :program:`nova-compute` services. Hosts the instances for which
@@ -427,7 +427,7 @@ Here's the general flow of actions:
1. The user requests a serial console connection string for an instance
from the REST API.
2. The :program:`nova-api-wsgi` service asks the :program:`nova-compute`
2. The Compute API service asks the :program:`nova-compute`
service, which manages that instance, to fulfill that request.
3. That connection string gets used by the user to connect to the
:program:`nova-serialproxy` service.
+1 -1
@@ -1373,7 +1373,7 @@ via the ``nova.api.extra_spec_validator`` `entrypoint`__.
The module containing your custom filter(s) must be packaged and available in
the same environment(s) that the nova controllers, or specifically the
:program:`nova-scheduler` and :program:`nova-api-wsgi` services, are available in.
:program:`nova-scheduler` and Compute API services, are available in.
As an example, consider the following sample package, which is the `minimal
structure`__ for a standard, setuptools-based Python package:
+4 -1
@@ -61,13 +61,14 @@ process ID 8675, you can then run:
This command triggers the Guru Meditation report to be printed to
``/var/log/nova/nova-compute-err.log``.
For WSGI based services like ``nova-api-wsgi`` and ``nova-metadata-wsgi`` using
For the WSGI-based compute API and metadata API services, using
signals is not trivial due to the web server's own signal handling. An
alternative to the signal-based approach that works equally well for
freestanding and hosted entry points is to use a file-based trigger.
Configure the service to trigger the GMR by the modification time changes of
a file or directory.
.. code-block::
[oslo_reports]
@@ -75,6 +76,7 @@ a file or directory.
Then the report can be triggered by touching the file or directory. The GMR
will be emitted in the same place where the service logs normally.
.. code-block:: console
touch /var/lib/nova
@@ -83,6 +85,7 @@ Note that some web servers freeze the request handler processes when there is
no HTTP request to be handled. This prevents the file system monitoring loop
from detecting the change. So after touching the file, make an HTTP request to
the given WSGI application.
.. code-block:: console
openstack compute service list
+3 -4
@@ -111,9 +111,8 @@ scopes in order to perform actions with unified limits.
Configuration
-------------
To enable unified limits quotas, some Nova configuration of
the :program:`nova-api-wsgi` and :program:`nova-conductor` services is
necessary.
To enable unified limits quotas, some Nova configuration of the Compute API and
:program:`nova-conductor` services is necessary.
Set the quota driver to the ``nova.quota.UnifiedLimitsDriver``:
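The directive above corresponds to a ``nova.conf`` fragment along these lines
(a sketch assuming the standard ``[quota]`` section; the hunk above is
truncated by the diff view, so this is illustrative, not the exact snippet):

```ini
[quota]
driver = nova.quota.UnifiedLimitsDriver
```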
@@ -417,7 +416,7 @@ the quota limit for that resource will be considered to be unlimited and all
requests to allocate that resource will be accepted. Any resource not in the
list will be considered to have 0 quota.
The options should be configured for the :program:`nova-api-wsgi` and
The options should be configured for the Compute API and
:program:`nova-conductor` services. The :program:`nova-conductor` service
performs quota enforcement when :oslo.config:option:`quota.recheck_quota` is
``True`` (the default).
+8 -9
@@ -29,10 +29,10 @@ Configuration
The service you must configure to enable the ``StaticJSON`` vendordata module
depends on how guests are accessing vendordata. If using the metadata service,
configuration applies to either :program:`nova-api-wsgi` or
:program:`nova-metadata-wsgi`, depending on the deployment, while if using
config drives, configuration applies to :program:`nova-compute`. However,
configuration is otherwise the same and the following options apply:
configuration applies to either the compute API or metadata API depending on
the deployment, while if using config drives, configuration applies to
:program:`nova-compute`. However, configuration is otherwise the same and the
following options apply:
- :oslo.config:option:`api.vendordata_providers`
- :oslo.config:option:`api.vendordata_jsonfile_path`
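Putting the two options above together, a ``StaticJSON`` configuration might
look like the following sketch (the option names are from the list above; the
file path is illustrative):

```ini
[api]
vendordata_providers = StaticJSON
# Illustrative path; point this at your vendordata JSON file
vendordata_jsonfile_path = /etc/nova/vendor_data.json
```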
@@ -114,11 +114,10 @@ Configuration
As with ``StaticJSON``, the service you must configure to enable the
``DynamicJSON`` vendordata module depends on how guests are accessing
vendordata. If using the metadata service, configuration applies to either
:program:`nova-api-wsgi` or :program:`nova-metadata-wsgi`, depending on the
deployment, while if using config drives, configuration applies to
:program:`nova-compute`. However, configuration is otherwise the same and the
following options apply:
vendordata. If using the metadata service, configuration applies to either the
compute API or metadata API depending on the deployment, while if using config
drives, configuration applies to :program:`nova-compute`. However,
configuration is otherwise the same and the following options apply:
- :oslo.config:option:`api.vendordata_providers`
- :oslo.config:option:`api.vendordata_dynamic_ssl_certfile`
+49
@@ -0,0 +1,49 @@
Using WSGI with Nova
====================
.. versionchanged:: 33.0.0
Removed support for the eventlet server scripts, ``nova-api``,
``nova-api-metadata`` and ``nova-api-os-compute``. Only WSGI-based
deployments are supported going forward and it is no longer possible to run
the compute API and metadata APIs in the same process, as it was with the
eventlet scripts.
.. versionchanged:: 33.0.0
Removed the ``nova-api-wsgi`` and ``nova-metadata-wsgi`` WSGI scripts
previously provided by Nova. Deployment tooling should instead reference
the Python module paths for these services, ``nova.wsgi.osapi_compute`` and
``nova.wsgi.metadata``, if their chosen WSGI server supports this
(gunicorn, uWSGI) or implement a ``.wsgi`` script themselves if not
(mod_wsgi).
Nova provides two APIs: a compute API (a.k.a. the REST API) and a
:doc:`metadata API </user/metadata>`. Both of these APIs are implemented as
generic Python HTTP servers that implement WSGI_ and are expected to be
deployed using a server with WSGI support.
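For background, the ``application`` objects referred to here follow the WSGI
callable convention from PEP 3333. A minimal, self-contained illustration (not
Nova code) of what a WSGI server expects:

```python
def application(environ, start_response):
    """A minimal WSGI application: any callable with this signature."""
    body = b"Hello from a WSGI app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]


def invoke(app, path="/"):
    """Exercise the app without a real server, as a WSGI server would."""
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path}
    body = b"".join(app(environ, start_response))
    return captured["status"], body
```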
To facilitate this, Nova provides a WSGI module for each API that provides the
``application`` object that most WSGI servers require. These can be found at
``nova.wsgi.osapi_compute`` and ``nova.wsgi.metadata`` for the compute API and
the metadata API, respectively. The ``application`` objects are automatically
configured with configuration from ``nova.conf`` and ``api-paste.ini`` by
default, and the config files and config directory can be overridden via
the ``OS_NOVA_CONFIG_FILES`` and ``OS_NOVA_CONFIG_DIR`` environment variables.
.. note::
File paths listed in ``OS_NOVA_CONFIG_FILES`` are relative to
``OS_NOVA_CONFIG_DIR`` and delimited by ``;``.
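The resolution described in the note amounts to roughly the following (a
sketch of the semantics, not Nova's actual code):

```python
import os.path


def resolve_config_files(config_files, config_dir):
    """Resolve ';'-delimited OS_NOVA_CONFIG_FILES entries relative to
    OS_NOVA_CONFIG_DIR (illustrative, not Nova's actual implementation)."""
    return [
        os.path.join(config_dir, name)
        for name in config_files.split(";")
        if name
    ]
```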
DevStack deploys the compute and metadata APIs behind Apache using uwsgi_ via
mod_proxy_uwsgi_. Inspecting the configuration created there can provide some
guidance on one option for managing the WSGI scripts. It is important to
remember, however, that one of the major features of using WSGI is that there
are many different ways to host a WSGI application. Different servers make
different choices about performance and configurability. It is up to you, as a
deployer, to choose an appropriate server for your deployment.
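As one example of the many hosting choices, a uwsgi configuration for the
compute API might look like the sketch below, relying on uwsgi's ``module``
option loading the ``application`` callable. The socket address, process
count, and config paths are illustrative assumptions, not recommendations:

```ini
[uwsgi]
# Load the 'application' object from Nova's compute API WSGI module
module = nova.wsgi.osapi_compute
# Illustrative socket for mod_proxy_uwsgi to connect to
socket = 127.0.0.1:60999
processes = 4
threads = 1
# Illustrative config location override
env = OS_NOVA_CONFIG_DIR=/etc/nova
```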
.. _WSGI: https://www.python.org/dev/peps/pep-3333/
.. _uwsgi: https://uwsgi-docs.readthedocs.io/
.. _mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html#mod-proxy-uwsgi
+3 -6
@@ -54,12 +54,9 @@ daemonize correctly after starting up.
WSGI Services
-------------
Starting in the 2025.2 release, the only way to deploy the nova api is in a
wsgi container (uwsgi or apache/mod_wsgi). These are the wsgi entry points to
do that:
* :doc:`nova-api-wsgi </user/wsgi>`
* :doc:`nova-metadata-wsgi </user/wsgi>`
Starting in the 2025.2 release, the only way to deploy the compute and metadata
APIs is via WSGI (uwsgi or apache/mod_wsgi). Refer to :doc:`/admin/wsgi` for
more information.
Additional Tools
----------------
+1 -1
@@ -28,7 +28,7 @@ You also need to let the nova user run :program:`nova-rootwrap` as root in
To make allowed commands node-specific, your packaging should only install
``{compute,network}.filters`` respectively on compute and network nodes, i.e.
:program:`nova-api-wsgi` nodes should not have any of those files installed.
API nodes should not have any of those files installed.
.. note::
+3 -3
@@ -157,8 +157,8 @@ the defaults from the :doc:`install guide </install/index>` will be sufficient.
cells allow sharding of your compute environment. Upfront planning is key to
a successful cells v2 layout.
* :doc:`Running nova-api on wsgi <user/wsgi>`: Considerations for using a real
WSGI container.
* :doc:`Running nova-api on wsgi <admin/wsgi>`: Considerations for deploying
under WSGI.
.. # NOTE(amotoki): toctree needs to be placed at the end of the section to
# keep the document structure in the PDF doc.
@@ -168,7 +168,7 @@ the defaults from the :doc:`install guide </install/index>` will be sufficient.
user/feature-classification
user/support-matrix
admin/cells
user/wsgi
admin/wsgi
Maintenance
-----------
+6 -8
@@ -16,24 +16,22 @@ users; quotas are limited per project (the number of instances, for example).
OpenStack Compute can scale horizontally on standard hardware, and download
images to launch instances.
OpenStack Compute consists of the following areas and their components:
OpenStack Compute consists of two :doc:`WSGI </admin/wsgi>` APIs and a number
of services:
``nova-api-wsgi`` service
Compute API
Accepts and responds to end user compute API calls. The service supports the
OpenStack Compute API. It enforces some policies and initiates most
orchestration activities, such as running an instance.
``nova-metadata-wsgi`` service
Metadata API
Accepts metadata requests from instances. For more information, refer to
:doc:`/admin/metadata-service`.
``nova-compute`` service
A worker daemon that creates and terminates virtual machine instances through
hypervisor APIs. For example:
- libvirt for KVM or QEMU
- VMwareAPI for VMware
hypervisor APIs. The default hypervisor is libvirt with KVM or QEMU, but
other hypervisors are supported.
Processing is fairly complex. Basically, the daemon accepts actions from the
queue and performs a series of system commands such as launching a KVM
-40
@@ -1,40 +0,0 @@
Using WSGI with Nova
====================
Since the version 2025.2 the only way to run the compute API and metadata API
is using a generic HTTP server that supports WSGI_ (such as Apache_ or nginx_).
The nova project provides two automatically generated entry points that
support this: ``nova-api-wsgi`` and ``nova-metadata-wsgi``. These read
``nova.conf`` and ``api-paste.ini`` by default and generate the required
module-level ``application`` that most WSGI servers require.
If nova is installed using pip, these two scripts will be installed into
whatever the expected ``bin`` directory is for the environment.
The config files and config directory can be overridden via the
``OS_NOVA_CONFIG_FILES`` and ``OS_NOVA_CONFIG_DIR`` environment variables.
File paths listed in ``OS_NOVA_CONFIG_FILES`` are relative to
``OS_NOVA_CONFIG_DIR`` and delimited by ``;``.
The new scripts replace older experimental scripts that could be found in the
``nova/wsgi`` directory of the code repository. The new scripts are *not*
experimental.
When running the compute and metadata services with WSGI, sharing the compute
and metadata service in the same process is not supported (as it is in the
eventlet-based scripts).
In devstack as of May 2017, the compute and metadata APIs are hosted by a
Apache communicating with uwsgi_ via mod_proxy_uwsgi_. Inspecting the
configuration created there can provide some guidance on one option for
managing the WSGI scripts. It is important to remember, however, that one of
the major features of using WSGI is that there are many different ways to host
a WSGI application. Different servers make different choices about performance
and configurability.
.. _WSGI: https://www.python.org/dev/peps/pep-3333/
.. _apache: http://httpd.apache.org/
.. _nginx: http://nginx.org/en/
.. _uwsgi: https://uwsgi-docs.readthedocs.io/
.. _mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html#mod-proxy-uwsgi
+2 -1
@@ -76,9 +76,10 @@
/nova/latest/user/user-data.html 301 /nova/latest/user/metadata.html
/nova/latest/user/upgrade.html 301 /nova/latest/admin/upgrades.html
/nova/latest/user/vendordata.html 301 /nova/latest/user/metadata.html
/nova/latest/user/wsgi.html 301 /nova/latest/admin/wsgi.html
/nova/latest/vendordata.html 301 /nova/latest/user/metadata.html
/nova/latest/vmstates.html 301 /nova/latest/reference/vm-states.html
/nova/latest/wsgi.html 301 /nova/latest/user/wsgi.html
/nova/latest/wsgi.html 301 /nova/latest/admin/wsgi.html
/nova/latest/admin/arch.html 301 /nova/latest/admin/architecture.html
/nova/latest/admin/adv-config.html 301 /nova/latest/admin/index.html
/nova/latest/admin/configuration/schedulers.html 301 /nova/latest/admin/scheduling.html