Disconnecting volume from the compute host
cmd: nova-manage volume_attachment refresh vm-id vol-id connector

There were cases where the instance was said to live on compute#1 but the
connection_info in the BDM record was for compute#2. When the script called
`remote_volume_connection`, nova would call os-brick on compute#1 (the
wrong node) and try to detach the volume there. In some cases os-brick
would mistakenly think that the volume was attached (because the target
and LUN matched an existing volume on the host) and would try to
disconnect it, resulting in errors in the compute logs.

- Added HostConflict exception
- Fixes dedent in cmd/manage.py
- Updates nova-manage doc

Closes-Bug: #2012365
Change-Id: I21109752ff1c56d3cefa58fcd36c68bf468e0a73
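A minimal sketch of the host-consistency check the commit describes. This is not Nova's actual implementation; the `validate_connector_host` helper and the dict keys are hypothetical, illustrating only the idea that the refresh should abort with `HostConflict` when the connector was generated on a host other than the one the instance runs on (reported as return code 7, "Connector host is not correct").

```python
class HostConflict(Exception):
    """Raised when the connector host does not match the instance host."""

    def __init__(self, instance_host, connector_host):
        super().__init__(
            f"Instance runs on {instance_host!r} but the connector "
            f"was generated on {connector_host!r}"
        )


def validate_connector_host(instance, connector):
    # Hypothetical helper: os-brick connectors carry the host that
    # generated them, so comparing it against the instance's host
    # prevents calling os-brick on the wrong compute node.
    connector_host = connector.get("host")
    if instance["host"] != connector_host:
        raise HostConflict(instance["host"], connector_host)


# Refresh would proceed only when the hosts match:
validate_connector_host({"host": "compute1"}, {"host": "compute1"})
```

With a check like this in place, the mismatched-BDM scenario above fails fast instead of attempting a detach on the wrong node.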
@@ -1572,7 +1572,9 @@ command.
   * - 5
     - Instance state invalid (must be stopped and unlocked)
   * - 6
-    - Instance is not attached to volume
+    - Volume is not attached to the instance
+  * - 7
+    - Connector host is not correct
 
 Libvirt Commands