As mentioned in the previous post discussing the licensing flexibility brought to VxRail 4.7, VxRail 4.7.000 is now in the "RTS" (Release To Ship) phase, meaning customers can now ask their Dell EMC account teams to order and ship new nodes with the VxRail 4.7 software preloaded. As part of that phase, we also had to publish the RASR (Rapid Appliance Self Recovery) package. RASR allows the field to upgrade the factory image on the nodes as well – this is done most often for node replacements and installs.
Given that the RASR package is now GA, I decided to take one of our P470 hybrid clusters and reset it with the 4.7.000 VxRail software. For a recap of some of the VxRail 4.7 features, you can read about that here.
There are a few things that are different with VxRail 4.7 (besides moving to vSphere 6.7 EP5 as the base vSphere version). One of them is that we've moved the loudmouth process to its own private VLAN. We use loudmouth over IPv6 multicast for node discovery during cluster builds as well as node adds. By putting that IPv6 traffic on a dedicated VLAN, we can now keep it running east-west on the TOR switch instead of having it run on the public management network(s). As you can see, VxRail 4.7 now has two more networks whose names begin with "Private". Those, by default, run on VLAN 3939 – but that can be changed.
To accommodate the loudmouth process running on its own VLAN in my lab, I changed the new networks to use a VLAN ID of 0:
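For reference, a per-portgroup VLAN change like this is a one-liner run on each ESXi host. The sketch below is a hedged example – the portgroup names shown are assumptions, so list the portgroups first to confirm what's actually on the node:

```shell
# List the standard-switch portgroups to find the exact names on the node:
esxcli network vswitch standard portgroup list

# Reassign a private portgroup from VLAN 3939 to VLAN 0 (untagged).
# "Private Management Network" is an illustrative name, not necessarily
# the exact portgroup name on your nodes.
esxcli network vswitch standard portgroup set \
    --portgroup-name "Private Management Network" --vlan-id 0
```

The same `portgroup set` command works for any of the new private networks; repeat it per portgroup, per host.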
It’s a very simple change – but again, it allows more flexibility in the deployment. In the past, only the Management and VM Networks would use a VLAN ID of 0. If customers were tagging their management VLAN, the PS install team would have to run similar commands on each node. Now, if customers define VLAN 3939 on their TOR ahead of time, the PS install team won’t have to do anything extra. Also, to make things easier, this is completely aligned with our newly released SmartFabric and VxRail integration.
As for the first run, there are only a few minor changes. The first you’ll notice is on the “Networks” screen.
At the end of the Networks screen, you’ll see that we can now define the “Management Network VLAN ID”. As mentioned above, this allows us to assign the management VLAN tag during the first run, instead of having to run commands against each node beforehand.
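For context, the per-node command this field replaces looked roughly like the sketch below – the portgroup name and VLAN ID are illustrative examples, not values from the original post:

```shell
# Pre-4.7: tag the management portgroup by hand on every node
# before the first run. "Management Network" and VLAN 110 are
# placeholder values; use your environment's actual names.
esxcli network vswitch standard portgroup set \
    --portgroup-name "Management Network" --vlan-id 110
```

With the new first-run field, that step goes away entirely for tagged management networks.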
The next change is around one of the user accounts, called “mystic”. The VxRail Manager VM is a SUSE Linux-based VM. By default, the root account is locked out of ssh, so support would connect using the “mystic” account, which had a default password. Now, on the final password screen, we define the password for that account.
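With root locked out of ssh, a support session against VxRail Manager typically looks like the sketch below – the hostname is a placeholder:

```shell
# Connect to the VxRail Manager VM as mystic (direct root ssh is
# disabled by default), then elevate to root once logged in.
ssh mystic@vxrail-manager.example.local
su -    # prompts for the root password after login
```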
A few things to note about the mystic account and its password. As you can see in the warning message, that account can’t use the same password as the one we use for non-ESXi accounts. This is something you’ll definitely want to make note of.
Overall, those were the only changes made during the first run, but they’re all positive changes and something customers have been asking for. After my system validated, I started the build:
The build itself is mostly the same, but I did notice there are a few more tasks that have been added (in 4.5 code there were 69 tasks). The build takes about the same amount of time, but on a properly configured system that has validated, you can just let it run in the background.
After the install is done, you’ll see the familiar “Hooray” screen that we all know and love:
There is one difference though. VxRail will now install with a 60-day vSAN Eval license (I discussed the licensing changes here). This is something PS will handle, but I thought it was worth showing the difference here.
After that, you’re ready to use your VxRail cluster with vSphere 6.7 EP5 as the hypervisor version. One of the big benefits here is that new clusters can start on VxRail 4.7 code and skip the upgrade from 4.5. While checking out the vCenter HTML5 client (in dark mode), I did make note of a couple of other changes.
First, you’ll see that we’ve now increased the EVC setting on VxRail clusters (this was also the case in some of the newer versions of VxRail 4.5):
On previous versions of VxRail, we would set EVC mode to Ivy Bridge, as that was the Intel processor generation supported in the first version of VxRail.
The last thing I noticed while quickly playing around is that vSAN 6.7 added some additional information to the capacity reporting functionality. Now you can see how much free space to expect under a given storage policy. In addition, you have the option to display “Capacity History” – since this was a freshly installed system, there was no history to report yet.
Overall, there aren’t many changes to the first run in VxRail 4.7, but the minor changes we did make are another step toward making the product even simpler than previous releases. I’m really excited to start seeing VxRail 4.7 at customer sites and getting their feedback.