Hello, it’s definitely been a while. Things in the land of VxRail have been crazy, and I need to challenge myself to keep up to date with the blog.
Before you try to upgrade your cluster, you MUST go to SolVe Online to generate a procedure, which will contain the link to the package (if SolVe hasn’t been updated yet, it will be shortly). This upgrade won’t appear in a search on the support page, nor in VxRail Manager as an “internet upgrade”.
As some may be aware, this is the first time we have adopted a .0 release. In the past, for major upgrades, we, along with many customers, would wait for U1. Was that a bad thing? Not really, as it allowed us to follow our customers’ release paths, since most would wait for U1 over .0 (unless they were in a dev/test or lab environment). Another change we made is the versioning. To better align with the vSphere versions, VxRail will use “7.0” for this release; that way it’s easier to correlate VxRail and vSphere versions.
A few highlights for VxRail 7.0 and what will be in this release:
External Storage Verification
During an install or an upgrade, VxRail Manager will call out which hosts have an external FC storage array connected. The storage array could be connected to the VxRail before or after the install, and the system will flag the external array during the upgrade. In addition, it will be called out during node add. This may seem minor, but we do have some customers that leverage FC storage, and reminding them of this during the upgrade may help it go smoother, as they need to make sure VMs can vMotion to another node with an FC adapter. Also, a friendly reminder: VxRail LCM will not lifecycle the external storage array.
Two PNIC Support for NSX-T
This is going to be a key feature for our customers with NSX-T. With vSphere 7.0, there’s now a single VDS for both NSX-V and NSX-T. In previous versions, NSX-V would use the VDS and NSX-T would use an nVDS. This is key for our customers running VCF (and VCF 4.0 on VxRail is coming). Since VxRail always created its own VDS for system traffic, if a customer wanted to run NSX-T as well, we would need to add additional network ports to the config. Now, we can run NSX-T using as few as 2 total network ports on a single node.
With vSphere 7.0 and the new VDS 7.0, the VDS is capable of having the NSX-T VIBs installed, with management happening at the port group level. The VDS itself is managed by vCenter, but the same VDS can also host NSX-T port groups (called overlay segments or VLAN segments). These segments can enforce DFW rules, and the VDS is able to terminate the GENEVE tunnel.
PSC is now embedded with internal vCenter
This is something that’s been asked for a while now: “When can we embed the PSC with the VxRail-created vCenter?” Well, we can now do that. It will happen automatically as part of the upgrade (blog coming) and will be handled by simply providing a temp IP (as we have always done for major upgrades). For new installs, there will no longer be a requirement to specify the PSC info.
Avoid Accidental use of vLCM
We know there are many great tools out there to help keep your clusters up to date. In the past, ESXi could be updated on a VxRail cluster via VUM (VMware Update Manager) and it wouldn’t cause too much harm (some pesky alerts is all). In that scenario, all a customer would need to do is upgrade via the VxRail LCM process to the package that contained that ESXi version. With VxRail, we want to make sure our customers are safe and only apply the tested/validated versions of our continuously validated state (or composite packages). To help our customers, we have engineered a solution to prevent them from going into an unsupported state. When you try to use vLCM, the system will tell you that it’s a VxRail system and that using vLCM is unsupported.
Greater Flexibility with Node Add
This is a greatly welcomed feature. Long ago, a node being added to a cluster needed to be running the same version as the cluster. Then, a while ago, we added the ability to update a node with older code to the newer version running on the cluster. That feature required the cluster and the new node to be at the same major version (say 4.7) to work. Now, as long as the node is running 4.7.300 (which isn’t even the version currently shipping on new nodes) or greater, it can be added to a VxRail 7.0 cluster. So if you upgrade your cluster to VxRail 7.0 and then order a node with 4.7 on it (or your account team did), you can easily add that node and have VxRail run the upgrade to VxRail 7.0 as part of the node add process.
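To make the rule concrete, here’s a minimal sketch of the version check described above. This is purely my own illustration, not actual VxRail code; the function names and the simplified dotted-version parsing are assumptions for the example.

```python
# Hypothetical illustration of the node-add compatibility rule.
# NOT actual VxRail logic; version parsing is simplified for clarity.

MIN_NODE_VERSION = (4, 7, 300)  # minimum node version to join a 7.0 cluster

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '4.7.300' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def can_join_cluster(node_version: str) -> bool:
    """A node can be added (and upgraded during node add) if it is
    running 4.7.300 or greater."""
    return parse_version(node_version) >= MIN_NODE_VERSION

print(can_join_cluster("4.7.300"))  # True  - minimum supported version
print(can_join_cluster("4.7.210"))  # False - too old, can't join a 7.0 cluster
print(can_join_cluster("7.0.000"))  # True  - already at the cluster version
```

Python compares tuples element by element, which is why turning the dotted string into a tuple of integers gives the right ordering (string comparison would incorrectly rank "4.7.90" above "4.7.300").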
So, with all the great stuff here, there have to be some warnings, right? Yes! As I mentioned at the beginning, we won’t broadcast the upgrade in VxRail Manager – you’ll need to go to SolVe to get the procedure (and the link) for the upgrade package. Also, due to a lack of available drivers, the Quanta-based nodes can’t run vSphere 7.0 and therefore can’t upgrade to VxRail 7.0.
All in all, this is a big release for VxRail customers, but the reality is it’s really just a stepping stone for some major features coming in the 7.x release train. 2020 is going to be a big year for new features inside VxRail and I’m definitely looking forward to it.