We are running four VMware ESXi 6.5.0 hosts, managed via the vSphere web client.
Recently I added a new VMFS 6 volume on our storage, originally created with 2 TB capacity but quickly extended to 3 TB before actually taking it into production.
Ever since, I do not get consistent volume capacity readings.
For example: a guest system originally had a 1700 GB disk on a different volume. I migrated it to the new volume and then tried to enlarge it. In the VM's edit-settings dialog, the disk showed as 1700 GB with a maximum size of 2.77 TB – but when I attempted to set it to 2000 GB, I got an “insufficient disk capacity” alarm, and – lo and behold – the maximum size had dropped to 1.86 TB! With that, I only managed to increase the disk from 1700 to 1900 GB …
When checking the volume info under “configure” – “general” – “capacity”, it sometimes shows 1.86 TB as capacity (and as allocated), but at other times, without anything special happening in between, it shows 2.77 TB (with 1.86 TB allocated). When the lower capacity is shown, clicking “update capacity” changes nothing.
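To take the web client's caching out of the picture, I have started cross-checking what the hosts themselves report over SSH. A sketch of the commands I use (the datastore name `NewDatastore` is a placeholder for the real one):

```shell
# VMFS filesystem view as the host sees it: capacity and free space
esxcli storage filesystem list

# Detailed VMFS metadata for the volume, including its capacity
vmkfstools -Ph /vmfs/volumes/NewDatastore

# The extent(s) backing the VMFS volume and their device identifiers
esxcli storage vmfs extent list
```
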
Under “device backing”, it shows 2.77 TB (with no fluctuation, as far as I can tell).
Under “monitor” – “performance” – “overview”, I sometimes see 2.77 TB and sometimes 1.86 TB as well.
And so on …
Each of the four hosts shows the volume correctly as LUN 10 = 2.77TB.
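Since the volume was extended after creation, I also tried to verify on the hosts that the VMFS partition actually spans the full, extended LUN. A sketch, assuming the device ID `naa.xxxxxxxx` is taken from the extent list (it is a placeholder here):

```shell
# Print the GPT partition table of the backing device (in sectors),
# to compare the partition end against the device size
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx

# The device size as the host currently sees it
esxcli storage core device list -d naa.xxxxxxxx

# Rescan all adapters so every host picks up the current LUN size
esxcli storage core adapter rescan --all
```
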
Strangely, the event log for the volume shows several entries “Capacity of … enlarged from 2047894093824 bytes to 3047816167424 bytes”. More specifically, I have the “volume created” event at 2019-07-05 11:44:16 and those “capacity extended” events starting at 2019-07-10 09:58:01. That is eight of them so far, far more than, say, once per host, and mostly (but not completely) correlated with times when I have vSphere open. All in all, this is not really explicable to me.
- What can be the cause of these observed fluctuations?
- How can it be mended?
- Once mended, how can I be sure that it has been mended for good? (I do not want to extend a disk and then be informed at some random moment that part of the disk does not “exist”.)