From 164c3f0f6164cdc88a439a393c53b5712f17d982 Mon Sep 17 00:00:00 2001 From: Stephen Finucane Date: Fri, 9 May 2025 11:18:55 +0100 Subject: [PATCH] Remove openSUSE/SLES from install guide This has not been supported for some time. Change-Id: Ic7073740deb0bf9670eebe77f0f8b0daca100a5c Signed-off-by: Stephen Finucane --- doc/source/admin/configuring-migrations.rst | 6 +- doc/source/install/compute-install-obs.rst | 255 ------------ doc/source/install/compute-install.rst | 4 +- doc/source/install/controller-install-obs.rst | 392 ------------------ doc/source/install/controller-install.rst | 6 +- 5 files changed, 5 insertions(+), 658 deletions(-) delete mode 100644 doc/source/install/compute-install-obs.rst delete mode 100644 doc/source/install/controller-install-obs.rst diff --git a/doc/source/admin/configuring-migrations.rst b/doc/source/admin/configuring-migrations.rst index 79a38cf24b..e7c5d32f72 100644 --- a/doc/source/admin/configuring-migrations.rst +++ b/doc/source/admin/configuring-migrations.rst @@ -167,12 +167,10 @@ disk array LUNs, Ceph or GlusterFS. The next steps show how a regular Linux system might be configured as an NFS v4 server for live migration. For detailed information and alternative ways to -configure NFS on Linux, see instructions for `Ubuntu`_, `RHEL and derivatives`_ -or `SLES and OpenSUSE`_. +configure NFS on Linux, see instructions for `Ubuntu`_ and `RHEL`_. .. _`Ubuntu`: https://help.ubuntu.com/community/SettingUpNFSHowTo -.. _`RHEL and derivatives`: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/nfs-serverconfig.html -.. _`SLES and OpenSUSE`: https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_nfs_configuring-nfs-server.html +.. _`RHEL`: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_network_file_services/deploying-an-nfs-server_configuring-and-using-network-file-services #. 
Ensure that UID and GID of the nova user are identical on the compute hosts and the NFS server. diff --git a/doc/source/install/compute-install-obs.rst b/doc/source/install/compute-install-obs.rst deleted file mode 100644 index c52635cc27..0000000000 --- a/doc/source/install/compute-install-obs.rst +++ /dev/null @@ -1,255 +0,0 @@ -Install and configure a compute node for openSUSE and SUSE Linux Enterprise -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute service on a -compute node. The service supports several hypervisors to deploy instances or -virtual machines (VMs). For simplicity, this configuration uses the Quick -EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute -nodes that support hardware acceleration for virtual machines. On legacy -hardware, this configuration uses the generic QEMU hypervisor. You can follow -these instructions with minor modifications to horizontally scale your -environment with additional compute nodes. - -.. note:: - - This section assumes that you are following the instructions in this guide - step-by-step to configure the first compute node. If you want to configure - additional compute nodes, prepare them in a similar fashion to the first - compute node in the :ref:`example architectures - ` section. Each additional compute node - requires a unique IP address. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-nova-compute genisoimage qemu-kvm libvirt - -#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: - - * In the ``[DEFAULT]`` section, set the ``compute_driver``: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... 
- compute_driver = libvirt.LibvirtDriver - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` - account in ``RabbitMQ``. - - * In the ``[service_user]`` section, configure :ref:`service user - tokens `: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [service_user] - send_service_user_token = true - auth_url = https://controller/identity - auth_type = password - project_domain_name = Default - project_name = service - user_domain_name = Default - username = nova - password = NOVA_PASS - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in - the Identity service. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the - management network interface on your compute node, typically ``10.0.0.31`` - for the first node in the :ref:`example architecture - `. - - * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to - the :neutron-doc:`Networking service install guide - ` for more details. - - * In the ``[vnc]`` section, enable and configure remote console access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - # ... - enabled = true - server_listen = 0.0.0.0 - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - - The server component listens on all IP addresses and the proxy - component only listens on the management interface IP address of - the compute node. The base URL indicates the location where you - can use a web browser to access remote consoles of instances - on this compute node. - - .. 
note:: - - If the web browser to access remote consoles resides on - a host that cannot resolve the ``controller`` hostname, - you must replace ``controller`` with the management - interface IP address of the controller node. - - * In the ``[glance]`` section, configure the location of the Image service - API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - * In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/run/nova - - * In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options - in the ``[placement]`` section. - -#. Ensure the kernel module ``nbd`` is loaded. - - .. code-block:: console - - # modprobe nbd - -#. Ensure the module loads on every boot by adding ``nbd`` to the - ``/etc/modules-load.d/nbd.conf`` file. - -Finalize installation ---------------------- - -#. Determine whether your compute node supports hardware acceleration for - virtual machines: - - .. code-block:: console - - $ egrep -c '(vmx|svm)' /proc/cpuinfo - - If this command returns a value of ``one or greater``, your compute node - supports hardware acceleration which typically requires no additional - configuration. - - If this command returns a value of ``zero``, your compute node does not - support hardware acceleration and you must configure ``libvirt`` to use QEMU - instead of KVM. 
- - * Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as - follows: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [libvirt] - # ... - virt_type = qemu - -#. Start the Compute service including its dependencies and configure them to - start automatically when the system boots: - - .. code-block:: console - - # systemctl enable libvirtd.service openstack-nova-compute.service - # systemctl start libvirtd.service openstack-nova-compute.service - -.. note:: - - If the ``nova-compute`` service fails to start, check - ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on - controller:5672 is unreachable`` likely indicates that the firewall on the - controller node is preventing access to port 5672. Configure the firewall - to open port 5672 on the controller node and restart ``nova-compute`` - service on the compute node. - -Add the compute node to the cell database ------------------------------------------ - -.. important:: - - Run the following commands on the **controller** node. - -#. Source the admin credentials to enable admin-only CLI commands, then confirm - there are compute hosts in the database: - - .. code-block:: console - - $ . admin-openrc - - $ openstack compute service list --service nova-compute - +----+-------+--------------+------+-------+---------+----------------------------+ - | ID | Host | Binary | Zone | State | Status | Updated At | - +----+-------+--------------+------+-------+---------+----------------------------+ - | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | - +----+-------+--------------+------+-------+---------+----------------------------+ - -#. Discover compute hosts: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - - Found 2 cell mappings. - Skipping cell0 since it does not contain hosts. 
- Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc - Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc - Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - - .. note:: - - When you add new compute nodes, you must run ``nova-manage cell_v2 - discover_hosts`` on the controller node to register those new compute - nodes. Alternatively, you can set an appropriate interval in - ``/etc/nova/nova.conf``: - - .. code-block:: ini - - [scheduler] - discover_hosts_in_cells_interval = 300 diff --git a/doc/source/install/compute-install.rst b/doc/source/install/compute-install.rst index 2470ab786c..e943e1c0af 100644 --- a/doc/source/install/compute-install.rst +++ b/doc/source/install/compute-install.rst @@ -2,8 +2,7 @@ Install and configure a compute node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Compute service on a -compute node for Ubuntu, openSUSE and SUSE Linux Enterprise, -and Red Hat Enterprise Linux and CentOS. +compute node for Ubuntu, Red Hat Enterprise Linux and CentOS Stream. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick @@ -27,4 +26,3 @@ environment with additional compute nodes. compute-install-ubuntu compute-install-rdo - compute-install-obs diff --git a/doc/source/install/controller-install-obs.rst b/doc/source/install/controller-install-obs.rst deleted file mode 100644 index 311e2d2ddf..0000000000 --- a/doc/source/install/controller-install-obs.rst +++ /dev/null @@ -1,392 +0,0 @@ -Install and configure controller node for openSUSE and SUSE Linux Enterprise -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute service, -code-named nova, on the controller node. 
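The prerequisite database steps in the deleted guide create three databases (`nova_api`, `nova`, `nova_cell0`) and grant the `nova` user access to each from both `localhost` and `%`. Since the nine SQL statements differ only in database name and host, they can be generated in a loop. This is a hedged dry-run sketch: it only prints the SQL, it does not touch MariaDB, and `NOVA_DBPASS` is the placeholder password from the guide.

```shell
# Dry-run sketch: generate the CREATE DATABASE and GRANT statements from
# the Prerequisites section. Nothing is executed against the database;
# the statements are only printed. NOVA_DBPASS is a placeholder.
out=$(
  for db in nova_api nova nova_cell0; do
    echo "CREATE DATABASE ${db};"
    for host in localhost '%'; do
      echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'NOVA_DBPASS';"
    done
  done
)
echo "$out"
```

The printed statements could be piped into `mysql -u root -p` instead of being typed one by one, at the cost of losing the interactive feedback the guide's step-by-step approach gives.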
- -Prerequisites -------------- - -Before you install and configure the Compute service, you must create -databases, service credentials, and API endpoints. - -#. To create the databases, complete these steps: - - * Use the database access client to connect to the database server - as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - - * Grant proper access to the databases: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - Replace ``NOVA_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - -#. Create the Compute service credentials: - - * Create the ``nova`` user: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt nova - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 8a7dbf5279404537b1c7b86c033620fe | - | name | nova | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - * Add the ``admin`` role to the ``nova`` user: - - .. code-block:: console - - $ openstack role add --project service --user nova admin - - .. note:: - - This command provides no output. - - * Create the ``nova`` service entity: - - .. code-block:: console - - $ openstack service create --name nova \ - --description "OpenStack Compute" compute - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Compute | - | enabled | True | - | id | 060d59eac51b4594815603d75a00aba2 | - | name | nova | - | type | compute | - +-------------+----------------------------------+ - -#. Create the Compute API service endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - compute public http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 3c1caa473bfe4390a11e7177894bcc7b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute internal http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | e3c918de680746a586eac1f2d9bc10ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute admin http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 38f7af91666a47cfb97b4dc790b94424 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - -#. Install Placement service and configure user and endpoints: - - * Refer to the :placement-doc:`Placement service install guide - ` - for more information. 
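The three `openstack endpoint create` invocations above are identical except for the interface name (`public`, `internal`, `admin`), so they can be expressed as one loop. This sketch only echoes the commands rather than running them, since it assumes the `admin-openrc` credentials and `RegionOne` region from the guide are in place.

```shell
# Dry-run sketch: the three Compute API endpoint-create calls differ only
# in the interface name. Commands are echoed, not executed; the region
# and URL are the values used throughout the guide.
out=$(
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne compute $iface http://controller:8774/v2.1"
  done
)
echo "$out"
```

Removing the `echo` would execute the commands for real; keeping the dry run first is a cheap way to review the exact calls before creating the endpoints.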
- -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - -.. note:: - - As of the Newton release, SUSE OpenStack packages are shipped with the - upstream default configuration files. For example, ``/etc/nova/nova.conf`` - has customizations in ``/etc/nova/nova.conf.d/010-nova.conf``. While the - following instructions modify the default configuration file, adding a new - file in ``/etc/nova/nova.conf.d`` achieves the same result. - -#. Install the packages: - - .. code-block:: console - - # zypper install \ - openstack-nova-api \ - openstack-nova-scheduler \ - openstack-nova-conductor \ - openstack-nova-novncproxy \ - iptables - -#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: - - * In the ``[api_database]`` and ``[database]`` sections, configure database - access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - Replace ``NOVA_DBPASS`` with the password you chose for the Compute - databases. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` - account in ``RabbitMQ``. - - * In the ``[keystone_authtoken]`` section, configure Identity service - access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [keystone_authtoken] - # ... 
- www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in - the Identity service. - - .. note:: - - Comment out or remove any other options in the ``[keystone_authtoken]`` - section. - - * In the ``[service_user]`` section, configure :ref:`service user - tokens `: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [service_user] - send_service_user_token = true - auth_url = https://controller/identity - auth_type = password - project_domain_name = Default - project_name = service - user_domain_name = Default - username = nova - password = NOVA_PASS - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in - the Identity service. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the - management interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - * Configure the ``[neutron]`` section of **/etc/nova/nova.conf**. Refer to - the :neutron-doc:`Networking service install guide - ` - for more details. - - * In the ``[vnc]`` section, configure the VNC proxy to use the management - interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - enabled = true - # ... - server_listen = $my_ip - server_proxyclient_address = $my_ip - - * In the ``[glance]`` section, configure the location of the Image service - API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - * In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... 
- lock_path = /var/run/nova - - * In the ``[placement]`` section, configure access to the Placement - service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` service user created when installing - :placement-doc:`Placement `. Comment out or remove any other - options in the ``[placement]`` section. - -#. Populate the ``nova-api`` database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage api_db sync" nova - - .. note:: - - Ignore any deprecation messages in this output. - -#. Register the ``cell0`` database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - -#. Create the ``cell1`` cell: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - -#. Populate the nova database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage db sync" nova - -#. Verify nova cell0 and cell1 are registered correctly: - - .. 
code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova - +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ - | Name | UUID | Transport URL | Database Connection | Disabled | - +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ - | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False | - | cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False | - +-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+ - -Finalize installation ---------------------- - -* Start the Compute services and configure them to start when the system boots: - - .. code-block:: console - - # systemctl enable \ - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - # systemctl start \ - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service diff --git a/doc/source/install/controller-install.rst b/doc/source/install/controller-install.rst index fd97c98574..7835086c33 100644 --- a/doc/source/install/controller-install.rst +++ b/doc/source/install/controller-install.rst @@ -1,12 +1,10 @@ Install and configure controller node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This section describes how to install and configure the Compute service -on the controller node for Ubuntu, openSUSE and SUSE Linux Enterprise, -and Red Hat Enterprise Linux and CentOS. 
+This section describes how to install and configure the Compute service on the +controller node for Ubuntu, Red Hat Enterprise Linux and CentOS Stream. .. toctree:: controller-install-ubuntu - controller-install-obs controller-install-rdo
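The deleted controller guide above runs four `nova-manage` steps in a strict order: sync the API database, map `cell0`, create `cell1`, and only then sync the main database. As a hedged sketch of that ordering, the loop below echoes the commands (it does not execute them, and assumes the `nova` system user from the guide):

```shell
# Dry-run sketch of the cell-bootstrap order from the removed controller
# guide: api_db sync -> map_cell0 -> create_cell -> db sync. The commands
# are printed only; running them requires a configured nova installation.
out=$(
  for step in 'api_db sync' 'cell_v2 map_cell0' 'cell_v2 create_cell --name=cell1 --verbose' 'db sync'; do
    echo "su -s /bin/sh -c \"nova-manage $step\" nova"
  done
)
echo "$out"
```

The order matters because `cell0` and `cell1` records live in the API database, which must exist and be schema-synced before the cell mappings are written.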