SDDC Architectures: Workload Mobility & Recovery with NSX 6.2, vSphere 6.x & SRM 6.x

In this blog post I am going to talk about one of the subjects I have always been passionate about at VMware. Application continuity is gaining incredible traction and interest from customers today, and with the groundbreaking technologies VMware has introduced in vSphere 6.0 and NSX 6.2, it has become a reality in the software-defined datacenter era.

I have everything you would expect to come with such a rich topic: a business case, a real customer deployment, a detailed architecture, and even a recorded video demoing parts of the technologies explained here. So, without further ado, let's get to the details.

The Business Case: Workload mobility in a Telco-over-cloud environment

Right before my recent transition to the VMware SDDC R&D organization, my last project in the Professional Services organization was probably the most interesting of my five years as a field architect. I was tasked with designing a state-of-the-art Telco-over-Cloud platform for one of the largest Telcos in the region. As you would expect from a carrier-grade design, application and service continuity was one of the top priorities in that project.

Apart from the traditional disaster recovery requirements for this Telco, there was also a requirement for disaster avoidance at the datacenter/location level: the ability to perform rapid workload mobility across two sites should a foreseeable outage be about to strike one of the two active/active sites. Network Functions Virtualization (NFV) management workloads were the first aspect we looked at here. The Telco had multiple Network Equipment Providers (NEPs) involved in the project, and the customer wanted a unified platform to enable management application continuity, regardless of each NEP-specific application's ability to tolerate failures. In fact, the two NEPs involved in this project had a clustering mechanism built in to their Virtualized Network Functions Manager (VNFM) applications. This, however, was a traditional Active/Standby clustering mechanism that required downtime to switch over, and in carrier-grade environments even one second of downtime has a big impact on the business.

Like the VNFMs, the Cloud Management Platform (CMP) used here, vCloud Director – Service Provider Edition (vCD-SP), was no exception to the availability requirement, since it provides the gateway (through APIs) between the VNFMs and the actual VNFs being provisioned. Although vCD-SP is an active/active stateless application, it still requires a database tier that has to be abstracted. The same holds true for other operational workloads such as the NEPs' Element Management Systems (EMS), as well as VMware applications like vRealize Operations, Log Insight and so forth. Those workloads play a critical role in monitoring and managing an environment that operates in real time.

So how did we achieve that application continuity in such a mission-critical environment? The short answer is: through the combination of vSphere's Long Distance vMotion and the Cross-vCenter Networking & Security capabilities in NSX 6.2. The long answer is what we will discuss and examine in detail in the rest of this article.

The Architecture

[Figure: The architecture]

Breaking the physical and logical boundaries

In vSphere 6.0, VMware introduced a groundbreaking new feature called Long Distance vMotion (LDvMotion). I was more excited about this feature than about any other technology since the invention of the original vMotion itself. As a matter of fact, I created the very first prototype inside VMware, using early builds of vSphere and NSX, to showcase how you can live migrate a VM across two datacenters using LDvMotion and NSX's L2VPN network extension. All in software, with no network or storage extension across sites.

That earlier design with the NSX Edge Services Gateway (ESG) and L2VPN is still valid and can be implemented in many customer use cases; however, with the introduction of NSX 6.2, VMware has taken this to a whole new level. The new Cross-vCenter NSX feature in the 6.2 release allows you to stretch your Logical Switches (aka VXLANs) across sites for L2 networking, and also to create a brand new Universal Distributed Logical Router (UDLR) for L3 routing. Sounds cool? Not as cool as adding Local Egress capabilities into the mix. But more on that in a bit.

Stretching your networking and security constructs

If you speak to a network engineer about Layer 2 extensions across sites, he or she will probably push back and start explaining what a bad idea that is, and how a misconfiguration of the infamous Spanning Tree Protocol could bring down both of your sites. That is probably right, except that we are not doing any of that in this architecture. We are building a software overlay on top of existing, traditional Layer 3 physical networks. Think of it the same way VMware first introduced VXLAN to abstract L2 networks across racks, over existing Layer 3 networks, within one datacenter. What is different here is that this capability is now possible across vCenter Servers and, consequently, in the context of our design, across datacenters.

VMware has introduced a new technique to peer two (or more) NSX Managers together and to unify the controllers into one universal cluster. The outcome is simply amazing: you can now create universal transport zones across compute clusters that are managed by different, independent vCenter Servers sitting in different datacenters/locations, as long as you have basic L3 connectivity between them, which is the case in the vast majority of today's customer environments. You are probably already doing something similar with vCenter Enhanced Linked Mode to manage your two sites from a single Web Client. If you are, congratulations, you already have most of this architecture in place in your environment!
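To make the "universal" part a bit more concrete, below is a minimal sketch (in Python, using the requests library) of how a universal logical switch could be created against the primary NSX Manager's REST API. The manager address, credentials and the universal transport zone ID are placeholders for illustration, so verify the exact endpoint and XML schema against the NSX 6.2 API guide for your build.

```python
# Minimal sketch: create a universal logical switch on the PRIMARY NSX Manager.
# Hostname, credentials and the universal transport zone (scope) ID below are
# illustrative placeholders; verify the XML schema against the NSX 6.2 API guide.
import requests

NSX_MGR = "https://nsxmgr-a.corp.local"   # primary NSX Manager (placeholder)
AUTH = ("admin", "VMware1!")              # illustrative credentials
UNIVERSAL_TZ = "universalvdnscope"        # universal transport zone scope ID (assumption)

payload = """
<virtualWireCreateSpec>
  <name>Universal-vRA-Tier</name>
  <tenantId>vra</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

# Logical switches (virtual wires) are created under a transport zone scope;
# pointing this call at the universal transport zone makes the switch universal.
resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/scopes/{UNIVERSAL_TZ}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Created universal logical switch, virtualwire ID:", resp.text)
```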

[Figure: Cross-vCenter NSX – NSX peering and vCenter Server Enhanced Linked Mode]

Local Egress Optimization

Now that we have explained (at a very high level) how the L2 networks are extended across the two sites with the new Cross-vCenter NSX 6.2 capabilities, let's have a look at the L3 aspect, or north/south traffic. The first question customers ask at this point, regardless of the great L2 application adjacency benefits they recognize, is this: what about my L3 traffic, does it need to traverse datacenters to exit to the DC core, the enterprise edge or the internet? The answer is simply no, but it depends on how you architect your logical L2 and L3 networking. If you want all your traffic to exit from one site, you can. If you don't, you don't have to; in that case you enable Local Egress on your Universal DLR. The way you do this is that when you deploy your UDLR for the first time, you enable the "Local Egress" option in the configuration wizard. Once you do, your UDLR will be deployed and extended across the two datacenters. The next steps are as follows:
1) You deploy two separate Control VMs, one in each datacenter. This is not mandatory in general, but it is required in our design here because you need to establish an OSPF routing adjacency between your UDLR and the upstream ESGs.
2) You then create two uplinks (yes, that is possible with the UDLR), each going to a universal logical switch that is, in turn, connected to the ESGs sitting at each site. In the diagram you can see two ESGs with internal interfaces on those VXLANs, while their uplinks go upstream to a VLAN that is unique to each site. Over the latter, you establish another OSPF adjacency with your datacenter's physical L3 device. We are also leveraging equal-cost multi-path (ECMP) routing here for better performance (more north/south bandwidth) and better availability (faster convergence within the ECMP node cluster); a small configuration sketch follows below. In the next section I will explain how your east/west and north/south traffic flows.
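Here is the small configuration sketch referenced in step 2: a read-modify-write of an edge's global routing configuration to turn on ECMP through the NSX REST API. The NSX Manager address, credentials and edge ID are illustrative placeholders, and you should confirm the element names against the NSX 6.2 API guide for your build.

```python
# Minimal sketch: enable ECMP on an NSX edge (read-modify-write of the global routing config).
# The NSX Manager address, credentials and edge ID are illustrative placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "https://nsxmgr-a.corp.local"
AUTH = ("admin", "VMware1!")
EDGE_ID = "edge-1"  # the UDLR or an ESG on which ECMP should be enabled (placeholder)
URL = f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/routing/config/global"

# 1. Read the current global routing configuration.
resp = requests.get(URL, auth=AUTH, verify=False)
resp.raise_for_status()
config = ET.fromstring(resp.text)

# 2. Flip the <ecmp> element to true (create it if it has never been set).
ecmp = config.find("ecmp")
if ecmp is None:
    ecmp = ET.SubElement(config, "ecmp")
ecmp.text = "true"

# 3. Write the modified configuration back.
put = requests.put(
    URL,
    data=ET.tostring(config),
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,
)
put.raise_for_status()
print(f"ECMP enabled on {EDGE_ID}")
```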

Your East/West and North/South application traffic

In our design, I used a practical multi-tier application example to make things realistic. As you can see, we have the vRealize Suite (vRealize Automation, Log Insight, Operations, etc.). If we take vRA as an example, it is a typical three-tier application consisting of a web tier (appliances and IIS), an application tier (Manager Service) and a database tier (MS SQL). There are also some service dependencies, such as single sign-on, represented here by the vRealize Identity Appliance. All the east/west traffic between these nodes happens over L2, since they share the same logical switch. The north/south traffic that needs to go in and out flows through the Edges that the UDLR is uplinked to. In our case, the UDLR has one internal interface acting as the default gateway for those application nodes. Keep in mind that if a node is sitting in site A, its default gateway is local to that datacenter. The same holds true for a node sitting in site B: its default gateway (in our case 172.16.10.1) is also local to site B.

Now what about the routed traffic all the way to the DC core? It flows exactly the same way, thanks to the Local Egress optimization we enabled on the UDLR. The node in site A is routed to the upstream ESGs in the same site and from there to the DC core, which is the Layer 3 switch (192.168.110.1). The exact same thing happens in site B: the node sitting there is routed to the ESGs in that site and then to its Layer 3 switch (192.168.210.1). If you are interested in exactly how this works, you can read about the locale-id concept in NSX. That is largely out of scope for this article, but I am planning a technical deep dive into it in a future blog post.

The Dynamic Routing – upstream & downstream

I briefly mentioned earlier that we are establishing an OSPF adjacency between the ESGs and the upstream switches. Let's take a closer look at that.
If we start from the virtualized application itself (vRA in our case), we said that its default gateway is the UDLR's internal logical interface, 172.16.10.1.

The UDLR has an OSPF peering with the ESGs in an ECMP topology. The ESGs, in turn, have another OSPF peering with their upstream physical L3 switches.
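If you want to sanity-check that setup programmatically, a minimal sketch like the following can pull each edge's OSPF configuration (whether it is enabled and which areas are defined) from the routing API. The manager address, credentials and edge IDs are placeholders, and the exact element names should be verified against the NSX 6.2 API guide.

```python
# Minimal sketch: dump each edge's OSPF status and areas for a quick review.
# Manager address, credentials and edge IDs are illustrative placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "https://nsxmgr-a.corp.local"
AUTH = ("admin", "VMware1!")

for edge_id in ("edge-1", "edge-2"):  # e.g. the two ECMP ESGs at one site (placeholders)
    resp = requests.get(
        f"{NSX_MGR}/api/4.0/edges/{edge_id}/routing/config/ospf",
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()
    ospf = ET.fromstring(resp.text)
    enabled = ospf.findtext("enabled")
    areas = [a.findtext("areaId") for a in ospf.iter("ospfArea")]
    print(f"{edge_id}: OSPF enabled={enabled}, areas={areas}")
```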

[Figure: NSX routing – upstream and downstream routing adjacencies]

vCenter Enhanced-Linked Mode and PSCs

One of the important subjects I don't want to miss here is vCenter Enhanced Linked Mode. You have everything to win by enabling it in your vSphere virtual infrastructure, from the way you manage your inventory and licensing all the way to performing cross-datacenter migrations (vMotion and Storage vMotion) right from the comfort of your Web Client. Although you can do the same through the APIs, who would want to when you can simply drag and drop objects in the UI? We will examine that in the following demo.
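For completeness, here is roughly what that API route looks like: a minimal pyVmomi sketch of a cross-vCenter relocation, where the RelocateSpec carries a ServiceLocator pointing at the destination vCenter. All hostnames, credentials, object names and the SSL thumbprint are placeholders, and error handling is omitted.

```python
# Minimal sketch: cross-vCenter (Long Distance) vMotion through the vSphere API with pyVmomi.
# All hostnames, credentials, object names and the SSL thumbprint below are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use valid certificates in production
si_src = SmartConnect(host="vcsa-a.corp.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
si_dst = SmartConnect(host="vcsa-b.corp.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)

def find_obj(si, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_obj(si_src, vim.VirtualMachine, "vra-appliance-01")        # VM to migrate (placeholder)
dest_host = find_obj(si_dst, vim.HostSystem, "esx-b-01.corp.local")  # destination host (placeholder)
dest_ds = find_obj(si_dst, vim.Datastore, "siteB-datastore-01")      # destination datastore (placeholder)
dest_pool = dest_host.parent.resourcePool                            # destination cluster's root resource pool
dest_folder = si_dst.content.rootFolder.childEntity[0].vmFolder      # first datacenter's VM folder (assumption)

# The ServiceLocator tells the source vCenter how to reach and authenticate to the destination vCenter.
locator = vim.ServiceLocator(
    instanceUuid=si_dst.content.about.instanceUuid,
    url="https://vcsa-b.corp.local",
    credential=vim.ServiceLocatorNamePassword(username="administrator@vsphere.local",
                                              password="VMware1!"),
    sslThumbprint="AA:BB:CC:...",  # destination vCenter SHA1 thumbprint (placeholder)
)

# Because the VM sits on a universal logical switch, the same segment exists on the destination
# side, so no NIC re-mapping (deviceChange) is included in this sketch.
spec = vim.vm.RelocateSpec(host=dest_host, pool=dest_pool, folder=dest_folder,
                           datastore=dest_ds, service=locator)
task = vm.RelocateVM_Task(spec=spec)
print("Started cross-vCenter relocation task:", task.info.key)
```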

Your First Long Distance vMotion

So this is it. This is where we test the beauty of these new vSphere 6.0 + NSX 6.2 capabilities by live migrating VMs across datacenters. In the past, that was possible only with physically stretched L2 networks and storage. Today, we do all of that in software, thanks to the universal logical objects. When you right-click your VM and choose Migrate, the familiar wizard asks you to choose between three options: compute only, storage only, or both. Since we are in the brave new world of all things software, we choose the third option and migrate both compute and storage to a completely different, independent datacenter. Next, you choose your compute cluster and storage destinations, both of which are in the second datacenter. When you reach the point of choosing your network, you will find that vCenter already displays the associated VXLAN on the other side with the same segment ID. That is simply because this VXLAN segment is a universal object. Now let's play the demo.

Demo: VM migration across three datacenters

To make things even more interesting, I am not going to demo a VM migration across two datacenters. That was probably cool prior to VMworld 2015, when I first prototyped this with NSX. We are going to demo two consecutive VM migrations across three datacenters. This also shows you how flexible the design is and how far you can scale it.

Security

The last subject, but definitely not the least in importance, is security. You may already be wondering at this point: what about the VM's security constructs? Do I lose them when I migrate the VM? Of course the answer is no. You keep and maintain those settings via the universal DFW. Since we are using the vRA example here as an application, let me talk in a bit more detail about how you would secure this application in your datacenter with this design.

1) L2 micro-segmentation: First, you can restrict the traffic between the vRA nodes within their own L2 network. For example, the DB node talks only to the Manager Service and the web tier, over the required ports. No other ingress or egress access is allowed.
2) L3 end-user access: vRA, like any application with a web tier, requires end-user access to the portal. Since we already know this happens over HTTPS and VMRC, we simply open those ports, and only to the end-user networks. Those networks are blocked from accessing anything else on the vRA VXLAN network (see the sketch after this list).
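To make those two rule sets a bit more tangible, here is a minimal sketch that expresses the intended vRA rules as plain data and compares them against the rule names currently present in the DFW configuration, read from the NSX Manager REST API. The manager address, credentials, security group names and ports are illustrative placeholders; creating the universal section and rules themselves has its own API calls, which I will leave to the API guide.

```python
# Minimal sketch: the intended vRA micro-segmentation rule set expressed as data,
# then compared against the rule names already present in the DFW configuration.
# Manager address, credentials, group names and ports are illustrative placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "https://nsxmgr-a.corp.local"
AUTH = ("admin", "VMware1!")

# 1) L2 micro-segmentation and 2) L3 end-user access, as described above.
intended_rules = [
    {"name": "vRA-Web-to-Manager",  "src": "vRA-Web-SG",   "dst": "vRA-Mgr-SG", "port": "443/tcp",  "action": "allow"},
    {"name": "vRA-Mgr-to-DB",       "src": "vRA-Mgr-SG",   "dst": "vRA-DB-SG",  "port": "1433/tcp", "action": "allow"},
    {"name": "Users-to-vRA-Portal", "src": "EndUser-Nets", "dst": "vRA-Web-SG", "port": "443/tcp",  "action": "allow"},
    {"name": "vRA-Default-Deny",    "src": "any",          "dst": "vRA-LS",     "port": "any",      "action": "block"},
]

# Read the current DFW configuration and collect the existing rule names.
resp = requests.get(f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config",
                    auth=AUTH, verify=False)
resp.raise_for_status()
existing = {r.findtext("name") for r in ET.fromstring(resp.text).iter("rule")}

for rule in intended_rules:
    status = "present" if rule["name"] in existing else "MISSING"
    print(f'{rule["name"]}: {rule["src"]} -> {rule["dst"]} ({rule["port"]}, {rule["action"]}) [{status}]')
```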

One might ask here: why apply this security enforcement in the NSX DFW rather than in my traditional ACLs or datacenter firewalls? The answer is simple. While you can still do it the same old way you have always secured your applications (virtual or physical), you want to do it in the NSX DFW so that those security settings migrate with the VM across datacenters (or follow it after a failover with SRM, as I will show in part two). NSX lets you avoid any need to reconfigure (or preconfigure) those security rules; it is a one-time configuration that is maintained throughout your application's lifecycle.

Of course, there are even more advanced techniques you can apply here, assuming you are in a highly secure or multi-tenant environment, such as deep packet inspection. That can be done simply by redirecting your traffic to any of VMware's security partners (Palo Alto Networks or Check Point, to name a few), as long as they are certified with NSX.

[Figure: NSX Universal DFW security rules]

Conclusion

Let me get back to the original business use case. In our design, we enabled the Telco to provide a unified platform, regardless of the applications and their vendors, to seamlessly live migrate those applications across datacenters. Those migrations are independent of location or network connectivity as long as the Telco can maintain the LDvMotion requirement of 150 ms round-trip latency. No stretched networking, storage or compute clusters were required, and no vendor-specific hardware solutions were used either. This is all software-defined, and configurable in a matter of minutes (not even hours) through your vSphere Web Client. This design will also work in the vast majority of today's datacenters, since it requires no special cross-datacenter solutions whatsoever.

What’s Next?

Next is disaster recovery. We have seen here how disaster avoidance can be achieved; next I will show you how NSX and SRM work beautifully together, in fact like a match made in heaven. Forget everything you know about application failover, IP addressing/DNS changes, scripting and manual routing convergence. I am going to demonstrate (with this very same design) how you can recover your applications with a zero-touch infrastructure right after a major disaster takes out your entire datacenter. Stay tuned for part two.

Postscript:
– The content of this blog post (write-up, architecture and video) was produced last year in my previous role in the ISBU. I never had the chance to complete and publish it until this month, but everything mentioned here is still up to date, and the customer referenced here is already running it in production.
– This solution was prototyped using my NSX vLab, from proof of concept all the way to the customer deployment. I would highly recommend that you check it out. Whether your role is architect, consultant or admin, it can help you in your planning, design, deployment and validation phases.
