VxRail 7.0.0 is out!

Hello, it’s definitely been a while.  Things in the land of VxRail have been crazy, and I need to challenge myself to keep up to date with the blog.

Before you try to upgrade your cluster, in order to get the package, you MUST go to SolVe Online to generate a procedure which will contain the link (if SolVe hasn’t been updated yet, it will be shortly).  This upgrade won’t be on the support page via search, nor in VxRail Manager as an “internet upgrade”.

As some may be aware, this is the first time we have adopted a .0 release.  In the past, for major upgrades, we, along with many customers, would wait for U1.  Was that a bad thing?  Not really, as it allowed us to follow our customers' release paths, since most would wait for U1 over .0 (unless they were in a dev/test or lab environment).  Another change we made is the versioning.  To better align with the vSphere versions, VxRail will use "7.0" for this release, making it easier to correlate VxRail and vSphere versions.

A few highlights for VxRail 7.0 and what will be in this release:

External Storage Verification
During an install or an upgrade, VxRail Manager will call out which hosts have an external FC storage array connected.  The array could be connected to the VxRail before or after the install, and the system will flag the external array during the upgrade.  In addition, it will be called out during node add.  This may seem minor, but we do have customers that leverage FC storage, and reminding them during the upgrade may help it go smoother, since they need to make sure VMs can vMotion to another node with an FC adapter.  Also, friendly reminder: VxRail LCM will not lifecycle-manage the external storage array.
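To make the idea concrete, here is a minimal sketch of that kind of pre-upgrade check.  This is purely illustrative logic, not VxRail Manager's actual implementation; the inventory structure and the `external_fc` flag are assumptions for the example.

```python
# Illustrative sketch only -- not VxRail Manager's actual code.
# Flags hosts that report an external FC storage array before an upgrade,
# so the operator remembers to plan vMotion to nodes with an FC adapter.

def hosts_with_external_fc(inventory):
    """Return names of hosts that have an external FC array attached.

    `inventory` maps host name -> dict with a boolean 'external_fc' flag
    (a hypothetical shape chosen for this example).
    """
    return [host for host, info in inventory.items() if info.get("external_fc")]

cluster = {
    "vxrail-node-01": {"external_fc": True},
    "vxrail-node-02": {"external_fc": False},
    "vxrail-node-03": {"external_fc": True},
}

for host in hosts_with_external_fc(cluster):
    print(f"Warning: {host} has an external FC array; VxRail LCM will not update it.")
```

The point of surfacing this during upgrade and node add is exactly what the loop above does: it is a reminder, not a remediation, because the external array stays outside VxRail LCM's scope.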

Two PNIC Support for NSX-T
This is going to be a key feature for our customers with NSX-T.  With vSphere 7.0 and VDS 7.0, NSX-T can now run directly on the VDS instead of requiring its own N-VDS (in previous versions, NSX-V would use the VDS while NSX-T needed a separate N-VDS).  This is key for our customers that are running VCF (and VCF 4.0 on VxRail is coming).  Since VxRail always created its own VDS for the system traffic, if a customer wanted to run NSX-T as well, we would need to add additional network ports to the config.  Now, we can run NSX-T using as few as 2 total network ports on a single node.

With vSphere 7.0 and the new VDS 7.0, the VDS can have the NSX-T VIBs installed, and it is managed at the port-group level.  The VDS itself is managed by vCenter, but the same VDS can also carry NSX-T port groups (called overlay segments or VLAN segments).  These segments can enforce DFW rules, and the VDS can terminate the GENEVE tunnel.

PSC is now embedded with internal vCenter
This has been something customers have asked about for a little while: "When can we embed the PSC with the VxRail-created vCenter?"  Well, we can now do that.  This will be done automatically as part of the upgrade (blog coming) and will be handled by simply providing a temporary IP (as we have always done for major upgrades).  For new installs, there will obviously no longer be a requirement to specify the PSC info.

Avoid Accidental use of vLCM
We know there are many great tools out there to help keep your clusters up to date.  In the past, ESXi could be updated on a VxRail cluster via VUM (VMware Update Manager) and it wouldn't cause too much harm (some pesky alerts is all).  In that scenario, all a customer would need to do is upgrade via the VxRail LCM process to the package containing that ESXi version.  With VxRail, we want to make sure our customers are safe and only apply the tested/validated versions of our continuously validated state (or composite packages).  To help our customers, we have engineered a solution to prevent them from entering an unsupported state.  When you try to use vLCM, the system will tell you that it's a VxRail system and that using vLCM is unsupported.

Greater Flexibility with Node Add
This is a greatly welcomed feature.  Long ago, a node being added to a cluster needed to run the same version as the cluster.  Then a while ago, we added the ability to update a node running older code to the newer version on the cluster; that required the cluster and new node to be at the same major version (say 4.7).  Now, as long as the node is running 4.7.300 (which isn't even the version currently shipping on new nodes) or greater, it can be added to a VxRail 7.0 cluster.  So if you upgrade your cluster to VxRail 7.0 and then order a node with 4.7 on it (or your account team did), you can easily add that node and have VxRail run the upgrade to VxRail 7.0 as part of the node-add process.
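The compatibility rule described above is simple enough to sketch.  This is an illustrative version of the check only, assuming plain dotted version strings; it is not VxRail's actual code.

```python
# Illustrative sketch of the node-add rule described above (not VxRail's
# actual implementation): a node can join a VxRail 7.0 cluster as long as
# it runs 4.7.300 or later; VxRail upgrades the node during node add.

MIN_NODE_VERSION = (4, 7, 300)

def parse_version(v):
    """Turn a version string like '4.7.300' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def can_add_node(node_version, min_version=MIN_NODE_VERSION):
    """True if the node's version meets the minimum for joining the cluster."""
    return parse_version(node_version) >= min_version

print(can_add_node("4.7.300"))  # True - the minimum supported version
print(can_add_node("4.7.410"))  # True - node is upgraded during node add
print(can_add_node("4.7.211"))  # False - too old to join a 7.0 cluster
```

Tuple comparison gives the right ordering here because each dotted component is compared numerically, left to right, just as the version scheme intends.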

So, with all the great stuff here, there has to be some warnings, right?  Yes!  As I mentioned at the beginning, we won't broadcast the upgrade in VxRail Manager – you'll need to go to SolVe to get the procedure (and the link) for the upgrade package.  Also, due to a lack of available drivers, the Quanta-based nodes can't run vSphere 7.0 and therefore can't upgrade to VxRail 7.0.

All in all, this is a big release for VxRail customers, but the reality is it’s really just a stepping stone for some major features coming in the 7.x release train.  2020 is going to be a big year for new features inside VxRail and I’m definitely looking forward to it.

17 thoughts on "VxRail 7.0.0 is out!"

  1. Thanks for the heads up. It doesn’t look like SolVe has been updated.

    Question: what happens to the existing PSC? Does the upgrade process “clean up after itself?”


      1. Thanks. I did notice that earlier. This certainly got my attention:

        IMPORTANT: vSphere 7.0 license keys are required for a cluster to run on vSphere 7.0. The LCM
        precheck will not stop a user from upgrading without 7.0 license keys. Without them, the nodes will be
        running on 60-day evaluation license keys. User needs to go my.vmware.com to determine whether their
        clusters need 7.0 license keys and acquire them there if applicable.


    1. Great article Jeremy! I’m a big fan of the VxRail concept for more than 3 years. VxRail 7.0.0 with VMware VCF 4.0 sounds like rocket power! One question: in the past, VxRail released a new version when the first update of vSphere was released. What is the reason to do it this time, a month after the GA release of vSphere 7?


      1. Vincent, Great question. You are correct, in the past, we would always wait for the U1 release before publishing the corresponding VxRail package. With 7.0, this is the first time we released the .0 release. Also, with VxRail, we are part of the “SimShip” program, which means we have an SLA to provide the VxRail update within 30 days of VMware going GA – this is for all releases.


  2. Hi Jeremy, thanks for the update. Do you know why upgrading from 4.7.411 isn’t supported and when we can expect vGPU support?


    1. Marc, apologies for the delay. 4.7.411 isn’t supported because it’s actually a newer vSphere build than the 7.0.000 build. I know it doesn’t make a ton of sense, but the ESXi version in 4.7.411 was released after 7.0. We will have another update in a month or so that will allow 4.7.411 customers (and really any VMware customer that’s on the latest 6.7 build) to update to 7.0.
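The reply above comes down to build ordering: the ESXi build shipped in 4.7.411 was released after the build in 7.0.000, so moving to 7.0.000 would effectively downgrade ESXi.  A hedged sketch of that rule, using placeholder build numbers (not the real ESXi build numbers):

```python
# Hypothetical illustration of why 4.7.411 -> 7.0.000 is blocked.
# Build numbers below are placeholders, NOT real ESXi build numbers.

def upgrade_allowed(source_esxi_build, target_esxi_build):
    """Allow the upgrade only if the target ESXi build is not older
    than the build the cluster is already running."""
    return target_esxi_build >= source_esxi_build

# Placeholder values: 4.7.411 carries a later ESXi build than 7.0.000.
build_4_7_411 = 20002   # hypothetical
build_7_0_000 = 20001   # hypothetical

print(upgrade_allowed(build_4_7_411, build_7_0_000))  # False - blocked
```

This also explains the planned fix: a later 7.x package carrying a newer ESXi build than 4.7.411's would make the comparison pass again.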


  3. Any idea how it will be possible to migrate from a “standard Vxrail stretched” cluster to a Vxrail stretched cluster with VCF easily? The goal is to have the ability to run containers of course 🙂


  4. Will it be possible to migrate smoothly from “standard stretched VxRail cluster” to stretched VxRail cluster with VCF 4.0? Mainly for kubernetes stuff.


  5. Hi Jeremy, what you have mentioned is for sure not correct.
    You said: With a single VDS, you will now have different port groups to support both V and T on a single VDS.

    An ESXi host CANNOT have NSX-V and NSX-T installed and running at the same time.

    What you most likely tried to explain is: in the past, we had to have a host with 2 virtual switches – one VDS with 2 pNICs, managed by vCenter, used for the typical ESXi hypervisor vmkernel traffic like vMotion or vSAN, and a second virtual switch with two additional pNICs, the N-VDS, managed by NSX-T to enforce DFW and/or to terminate the GENEVE overlay tunnel traffic. For sure, for hosts with 2 pNICs only, we could run all port groups (with NSX-T, called VLAN segments) on the N-VDS, but not every customer liked this setup, for a few good reasons.
    With vSphere 7.0 and the new VDS 7.0, the VDS can have the NSX-T VIBs installed, and it is managed at the port-group level. The VDS itself is managed by vCenter, but the same VDS can also carry NSX-T port groups (called overlay segments or VLAN segments). These segments can enforce DFW rules, and the VDS can terminate the GENEVE tunnel.
    Best Regards

