Issue:
ESXi will immediately report the datastore as inactive if the backing LUN is removed on the storage array before the datastore has been unmounted. Trying to remove or unmount the datastore afterwards in the vSphere Client fails with an error.
Trying to delete:
Call "HostDatastoreSystem.RemoveDatastore" for object "ha-datastoresystem" on ESXi "IP address" failed.
Operation failed, diagnostics report: Unable to query live VMFS state of volume.: No such file or directory
Trying to unmount:
Call "HostStorageSystem.UnmountVmfsVolume" for object "storageSystem" on ESXi "IP address" failed.
Ideally, before you remove LUNs on the backend storage, you should unmount and delete the datastore from ESXi.
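For reference, the graceful removal order would look roughly like the sketch below. This is only a sketch: the datastore label and the naa device ID are placeholders, not values from this environment.
- unmount the VMFS volume
# esxcli storage filesystem unmount -l <datastore-label>
- detach the device so ESXi marks it as intentionally offline
# esxcli storage core device set --state=off -d naa.<device-id>
- unpresent the LUN on the array, then rescan
# esxcli storage core adapter rescan --all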
If that step is skipped, in most cases the resulting dead datastores can only be removed by rebooting the ESXi server.
Even VMware support couldn't help with this issue. This article describes one case in which I was able to remove such a datastore without a reboot.
Check out this VMware KB as well.
Running 'esxcli storage core device list' will show that the LUN is still being handled by ESXi.
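You can narrow that output down to the affected device; the naa ID below is the one from this example, so substitute your own:
~ # esxcli storage core device list -d naa.60000000000000000e00000000010001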
If you rescan the adapters from the CLI, you will get a hint:
~ # esxcli storage core adapter rescan --all
Rescan complete, however some dead paths were not removed because they were in use by the system. Please use the 'storage core device world list' command to see the VMkernel worlds still using these paths.
Error while scanning interfaces, unable to continue. Error was Not all VMFS volumes were updated; the error encountered was 'No connection'.
~ # esxcli storage core device world list | grep naa.60000000000000000e00000000010001
where naa.60000000000000000e00000000010001 is the NAA ID of the device that backed the removed datastore.
~ # esxcli storage core device world list | grep naa.60000000000000000e00000000010001
naa.60000000000000000e00000000010001 32789 1 idle0
naa.60000000000000000e00000000010001 32823 1 helper3-0
naa.60000000000000000e00000000010001 32825 1 helper3-2
naa.60000000000000000e00000000010001 32884 1 helper24-0
naa.60000000000000000e00000000010001 32900 1 OCFlush
naa.60000000000000000e00000000010001 32902 1 BCFlush-0
naa.60000000000000000e00000000010001 33611 1 helper48-0
naa.60000000000000000e00000000010001 33612 1 helper48-1
naa.60000000000000000e00000000010001 33613 1 helper48-2
naa.60000000000000000e00000000010001 33614 1 helper48-3
naa.60000000000000000e00000000010001 33615 1 helper48-4
naa.60000000000000000e00000000010001 33616 1 helper48-5
naa.60000000000000000e00000000010001 33617 1 helper48-6
naa.60000000000000000e00000000010001 33618 1 helper48-7
naa.60000000000000000e00000000010001 33619 1 helper48-8
naa.60000000000000000e00000000010001 33620 1 helper48-9
naa.60000000000000000e00000000010001 33621 1 helper48-10
naa.60000000000000000e00000000010001 33622 1 helper48-11
naa.60000000000000000e00000000010001 33623 1 helper48-12
naa.60000000000000000e00000000010001 33624 1 helper48-13
naa.60000000000000000e00000000010001 33625 1 helper48-14
naa.60000000000000000e00000000010001 33626 1 helper48-15
naa.60000000000000000e00000000010001 922045 1 hostd-worker
naa.60000000000000000e00000000010001 922048 1 hostd-worker
naa.60000000000000000e00000000010001 922094 1 hostd-worker
naa.60000000000000000e00000000010001 922095 1 hostd-worker
naa.60000000000000000e00000000010001 922406 1 hostd-vix-worke
naa.60000000000000000e00000000010001 922465 1 hostd-worker
naa.60000000000000000e00000000010001 922466 1 hostd-worker
naa.60000000000000000e00000000010001 922467 1 hostd-worker
naa.60000000000000000e00000000010001 922468 1 hostd-worker
naa.60000000000000000e00000000010001 922469 1 hostd-worker
naa.60000000000000000e00000000010001 934019 1 hostd-worker
naa.60000000000000000e00000000010001 934020 1 hostd-worker
naa.60000000000000000e00000000010001 934021 1 hostd-worker
naa.60000000000000000e00000000010001 958872 1 hostd-worker
naa.60000000000000000e00000000010001 2771299 1 sh
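As a side note, the mapping between a datastore name and its backing naa device ID can normally be checked with the command below. Treat it as a general pointer: for a dead volume like this one the entry may no longer show up.
~ # esxcli storage vmfs extent list
The output lists each VMFS volume name together with its VMFS UUID and the backing device name.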
Tried:
- restarting the management services
# services.sh restart
- stopping / starting the vSAN trace daemon
# /etc/init.d/vsantraced stop
# /etc/init.d/vsantraced start
- stopping / starting Storage I/O Control
# /etc/init.d/storageRM stop
# /etc/init.d/storageRM start
None of these helped to remove the old datastore.
It turned out in the end that a SCSI UNMAP operation was running against this datastore at the time I removed the LUN.
I disabled UNMAP on the host, issued a rescan / refresh, and the datastore was finally removed.
To disable UNMAP on the host:
# esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
Set --int-value to 1 to re-enable it afterwards.
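To double-check the current value and confirm the datastore is really gone after the rescan, something along these lines should work (output will vary per host):
# esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
# esxcli storage core adapter rescan --all
# esxcli storage filesystem list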