Event Log: System
Event ID: 1793
Error: 0x16
Hint: SyncPoolFailure
Applies to: Azure Local 22H2, Azure Local 23H2, Azure Local 24H2, Azure Local (future releases), Windows Server Failover Clustering, Windows Server with Storage Spaces Direct.
I was working with a customer on an Azure Local 22H2 stack that we were preparing to upgrade to 23H2. Before the upgrade, we wanted to make some hardware changes. This included replacing the old RAID controller, which used 2 SSD disks in one of the front disk bays, with a ThinkSystem M.2 with Mirroring Enablement Kit (2x M.2 NVMe SSD sticks on a PCI adapter card).
I also want to write an article about the process of removing a node, reinstalling it, and adding it back to the cluster. I hope to have that article ready soon.
This article, however, is about a very generic error I came across after I had reinstalled the 22H2 node: I gave it the same name as before, joined it to the domain, installed roles and features, configured networking, and joined it back to the cluster.
I could live migrate virtual machines to this node, but virtual disks could not be moved from the running node to the reinstalled node. The error in Failover Cluster Manager was: “The device does not recognize the command”.
At first I thought I had missed some features. I had, but installing them did not resolve the issue. I then ran Test-Cluster and found a lot of network-related issues, but I did not believe any of them to be the root cause.
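For reference, a validation run can be scoped to a few categories like this (the node names are placeholders for your own):

# Validate only the categories relevant here instead of the full test suite
Test-Cluster -Node "Node01", "Node02" -Include "Inventory", "Network", "System Configuration", "Storage Spaces Direct"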
Once I ran the PowerShell command “Get-PhysicalDisk”, I could see that the storage pool had picked up the new M.2 disk we had installed the OS on. Of course pooling it failed because the disk was in use, but nevertheless, the disk was listed in the storage pool. I could also see the M.2 disk in the disk inventory overview in Windows Admin Center.
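If you want to reproduce that check, listing the members of the (non-primordial) pool is enough to spot a boot disk that should not be there. A minimal sketch; the column selection is just my preference:

# List disks that are members of the S2D pool; the M.2 boot disk appearing here was the giveaway
Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk |
    Select-Object FriendlyName, OperationalStatus, HealthStatus, Usage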
This happened because “Always add new drives” was enabled in the S2D configuration.
We can disable this using PowerShell:
Get-StorageSubSystem Cluster* | Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled" -Value False
or use WAC (Settings > Storage Spaces and pools > Always add new drives).
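If you want to confirm the change took effect, the setting can be read back with the matching Get cmdlet (assuming it is available on your build):

# Should return the value we just set
Get-StorageSubSystem Cluster* | Get-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled"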

We do not have to disable this feature permanently. But since there are other nodes in the cluster where we need to replace the same hardware, we want to pause the feature until the hardware replacement is finished, and then enable it again.
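Once the last node has its new hardware, re-enabling is the same command with the value flipped:

Get-StorageSubSystem Cluster* | Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled" -Value True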
To fix the issue, we need to remove the disk from the storage pool. We can use these PowerShell commands to accomplish this:
# Grab the auto-pooled M.2 disk by its friendly name (name from my environment)
$PDToRemove = Get-PhysicalDisk -FriendlyName "Lenovo ThinkSystem M.2"
# Remove it from the storage pool ("DemoPool" is the pool name in this example)
Remove-PhysicalDisk -PhysicalDisks $PDToRemove -StoragePoolFriendlyName "DemoPool"
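To confirm the removal worked, list the pool members again; the M.2 disk should no longer appear (pool name as in the example above):

Get-StoragePool -FriendlyName "DemoPool" | Get-PhysicalDisk |
    Select-Object FriendlyName, OperationalStatus, HealthStatus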
After this, I could move virtual disks in the cluster to this reinstalled node without errors. No reboot was required.