In a previous post about the changes made to the first run/install of a VxRail cluster running our 4.7 code, I mentioned that we've now moved the IPv6 multicast traffic (also known as "Loudmouth") onto a private network. This helps us in a few ways, but one of the main drivers for the change was customer requests.
One of the things we (the VxSEALs) often get called in to talk about is the networking requirements to install VxRail. Most of the time, the conversation goes smoothly and doesn't take long. Every once in a while, though, when we mention "IPv6 multicast," some customers do want to dive a little deeper. It's not that this is "bad"; it's just different from what customers are used to. Quick note – using IPv6 multicast or similar technology isn't uncommon in the HCI space.
When we have this conversation, customers really just want to understand the "why" more than anything. The "why" for using IPv6 multicast/Loudmouth with VxRail is node discovery: when the nodes are first powered on, they don't have an IP address that's routable on the customer's existing network. In this scenario, we use the link-local IPv6 address along with multicast so the nodes can discover one another. Until VxRail 4.7, this traffic simply piggybacked on the public Management Network.
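To give a rough feel for why the nodes always have *some* IPv6 address to discover each other with, even before any configuration: link-local addresses are commonly derived from the NIC's MAC address via the modified EUI-64 method (RFC 4291). The sketch below isn't VxRail code – it's just the standard derivation, so you can see how a usable fe80:: address exists out of the box:

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Derive a link-local IPv6 address from a MAC using modified EUI-64."""
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    # Insert ff:fe between the OUI and the NIC-specific halves of the MAC
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    # Prepend the fe80::/64 link-local prefix (2 bytes + 6 zero bytes)
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

print(mac_to_link_local("00:50:56:9a:bc:de"))  # fe80::250:56ff:fe9a:bcde
```

Multicast to a link-local group then lets every node on the same L2 segment hear the discovery announcements – no routing, DHCP, or manual addressing required.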
For most customers this wasn't an issue, but we did run into a few who wanted the traffic more isolated. Their reasoning was sound – by riding on the public Management Network, that traffic could also travel North/South into their core switch instead of staying East/West on their TOR switch. This is completely reasonable, and it's something we've been able to solve by adding a "Private Management Network" and a "Private VM Network" during first run.
On an unconfigured node, you can see the default configuration.
With VxRail 4.7, we create two new networks, each with a default VLAN ID of 3939. This is just the default and it can be changed. Another benefit is that if a customer configures their switches with VLAN 3939, the PS install team(s) no longer need to change the Management VLAN ID prior to the installation when it's a tagged network; that is now handled during first run. You can set the private networks to the same VLAN ID, if needed. Having a dedicated VLAN for the Loudmouth traffic isn't a requirement, but rather an option.
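For illustration only – switch syntax varies by vendor and OS, and the interface names and descriptions here are hypothetical – allowing the default VLAN 3939 on the node-facing TOR ports might look something like this:

```
! hypothetical TOR switch snippet -- adapt to your vendor's CLI
vlan 3939
 name VxRail-Private-Mgmt
!
interface ethernet1/1/1
 description vxrail-node-01
 switchport mode trunk
 switchport trunk allowed vlan add 3939
```

Because the Loudmouth traffic only needs to pass East/West between nodes, this VLAN does not need to be carried on the uplinks toward the core.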
For our existing customers who would like to change where their Loudmouth traffic resides, you can also make the change after upgrading to VxRail 4.7.001 or later code. The change is simple and can be done through vCenter by editing the "VxRail Management-UUID" distributed port group.
One consideration: during a node add event, you will need to modify the VLAN IDs for the two private networks if you're not using the default of 3939. Today, node adds are handled by Professional Services (either Dell EMC or a certified partner), but in the future they will be something a customer can do themselves (stay tuned).
It's a fairly simple/minor change that we've made, but I do believe it'll continue to drive simplicity into the installation process. This change also allows us to integrate VxRail into the SmartFabric framework, which lets us automatically configure Dell EMC TOR switches for VxRail (we're the first vendor to do so).
If you have any questions, please feel free to comment and I’ll reply (or you can reach me on email or Twitter).