This is a two-part blog post showing you two different solution architectures for VMware NSX 6.0 and vCloud Director. These architectures focus primarily on securing, load-balancing and publishing the vCD portal through NSX. Although this blog post is primarily targeted at Service Providers, I do not see why it would not fit the bill just as well for an enterprise interested in publishing its cloud service for external consumption (by partners, subsidiaries, contractors, etc.).
Before we go any further into the details, please note that this solution is applicable to *any* vCD release, since we are not touching on interoperability features specific to the two products. In fact, the very same concept I am presenting here could be applied to almost any application that needs to be exposed to the internet; I have simply chosen vCD as a case in point. I have been involved in many public cloud projects, and I know first hand what a challenge it is to achieve the same results using traditional solutions. I can't remember how many hours, days (and sometimes even weeks!) I had to wait for customers to get a simple load-balancing configuration done correctly, not to mention the IP allocations from the network teams or, even worse, the security part and what needs to be opened, closed or monitored. In this blog post, you will see how all of that can now be done in a matter of minutes with minimal network/security intervention. Welcome to the NSX world.
Preparing the layout components
We will need to have our two vCD cells ready and configured with three network cards. I blogged about a similar solution (using the traditional, hard way) a long time ago here. To recap, the first two NICs will be assigned to the upstream HTTP and VMRC services. The third NIC will be assigned to interface with the downstream management services (e.g. vCenter Server, DB, DNS, etc.).
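To make the layout concrete, here is a minimal sketch of the per-cell NIC assignment expressed as Python data. The interface names (eth0-eth2) are assumptions for illustration only; map them to whatever your cells actually use.

```python
# Hypothetical sketch of the per-cell NIC layout described above.
# Interface names are assumptions; the service/switch mapping follows the text.
CELL_NICS = {
    "eth0": {"service": "HTTP", "logical_switch": "External Perimeter"},
    "eth1": {"service": "VMRC", "logical_switch": "External Perimeter"},
    "eth2": {"service": "Management (vCenter, DB, DNS)",
             "logical_switch": "Internal Perimeter"},
}

def nics_on(switch):
    """Return the cell NICs attached to a given Logical Switch."""
    return sorted(n for n, cfg in CELL_NICS.items()
                  if cfg["logical_switch"] == switch)

print(nics_on("External Perimeter"))  # the two upstream NICs
print(nics_on("Internal Perimeter"))  # the single downstream NIC
```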
From the NSX side, we will need to create two Logical Switches, identified as "External Perimeter" and "Internal Perimeter". The former will be connected to the first two NICs of each cell, and the latter to the third NIC. Next, we will need to provision a couple of NSX Edges. The first will be identified as the "External Edge" and the second as the "Internal Edge". The latter Edge could be either an NSX Edge Services Gateway or an NSX Logical Router/Bridge, depending on your use case and how you need to route/firewall your internal perimeter zone. More on that in a future post.
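If you prefer automating this over clicking through the vSphere Web Client, NSX Manager also exposes a REST API. Below is a rough Python sketch that only builds the XML payloads for creating the two Logical Switches; the endpoint path (POST /api/2.0/vdn/scopes/{scope-id}/virtualwires in NSX-v) and element names should be verified against the API guide for your exact NSX release before use.

```python
# Sketch: building the payloads to create the two Logical Switches through
# the NSX Manager REST API. Verify the schema against your release's API guide.
import xml.etree.ElementTree as ET

def virtualwire_payload(name, tenant="vcd"):
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "tenantId").text = tenant
    return ET.tostring(spec, encoding="unicode")

payloads = [virtualwire_payload(n)
            for n in ("External Perimeter", "Internal Perimeter")]
for p in payloads:
    print(p)
# Each payload would then be POSTed to NSX Manager with the credentials of an
# NSX administrator and a Content-Type of application/xml.
```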
The NSX Logical Switches
The NSX Edges
The Internal Edge
In this Edge, we will create two interfaces. The first is the "Uplink", connected to the traditional management network (D-Portgroup) that you already have in your environment (the one that typically hosts the vCenter Server, ESXi hosts, etc.). The second is the "Internal" interface, connected to the "Internal Perimeter" Logical Switch. See the screenshot below.
Note that the Internal interface has the IP address 10.20.30.1, which will be the default gateway for the third interface on the cells. On the other side, the Uplink interface of the Edge has the IP address 192.168.110.5, and the default gateway for that Edge should be your existing physical router/switch on the network. Confused? Have a look at the diagram, as it reflects all of these configurations with the exact same IP addresses and NSX components.
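A quick way to double-check the addressing plan is to validate it with Python's `ipaddress` module. The gateway and uplink addresses below come straight from the text; the cell address is a hypothetical example.

```python
# Sanity check of the internal-perimeter addressing described above.
# 10.20.30.11/24 is an assumed address for a cell's third NIC.
import ipaddress

internal_gw   = ipaddress.ip_interface("10.20.30.1/24")     # Edge "Internal"
edge_uplink   = ipaddress.ip_interface("192.168.110.5/24")  # Edge "Uplink"
cell_mgmt_nic = ipaddress.ip_interface("10.20.30.11/24")    # hypothetical cell NIC

# The cell's third NIC must share a subnet with its default gateway...
assert cell_mgmt_nic.network == internal_gw.network
# ...while the Uplink sits on the existing management network.
assert edge_uplink.network != internal_gw.network
print("addressing plan is consistent")
```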
The External Edge
Unlike the previous one, this Edge has to be provisioned as an "Edge Services Gateway". We do not have the option of deploying it as a "Logical Router/Bridge" here, since we need services such as load balancing, NAT and firewalling (and, in the next solution, a VPN gateway). Now let's examine this Edge in detail.
As you can see from the screenshot above, we again have two interfaces on this Edge. The first is an "Internal" interface, connected to the "External Perimeter" Logical Switch. The second is an "Uplink" interface, connected to the Internet router. This is typically a port group with dedicated uplink NICs to the DMZ switches in your Management Cluster. Note here that the Uplink has two IP addresses. In the screenshot you can see them as (192.168.225 & 226), while on the diagram you will see them as (22.214.171.124 & 222). As you may have guessed, I do not have public IP addresses in my lab; instead, I am using a different external network to simulate Internet connectivity. Now, why do we have two IP addresses set on the Uplink? The answer is that one IP address will be dedicated to the HTTP service and the second to the VMRC service, both of which will be NAT'ed and load-balanced to the first two IPs/interfaces of the vCD cells. Again, have a look at the diagram, as it will save you a thousand words of written explanation.
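As a minimal sketch, the published-service layout boils down to a small mapping: one Uplink IP per vCD service, each fronting the matching NIC on both cells. All addresses below are placeholders (203.0.113.0/24 is a documentation range), not the lab's actual ones.

```python
# Hypothetical published-service layout: one public VIP per vCD service,
# each NAT'ed/load-balanced to the matching interface on both cells.
PUBLISHED = {
    "HTTP": {"vip": "203.0.113.221", "backends": ["10.10.10.11", "10.10.10.12"]},
    "VMRC": {"vip": "203.0.113.222", "backends": ["10.10.10.21", "10.10.10.22"]},
}

# Two distinct public addresses, each fronting both cells:
assert len({s["vip"] for s in PUBLISHED.values()}) == 2
for svc, cfg in PUBLISHED.items():
    assert len(cfg["backends"]) == 2, f"{svc} must reach both cells"
print("one VIP per service, both cells behind each")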
Since this post is intended to show the architecture of the solution rather than the how-to, I will not go through the details of configuring the load balancing (I will probably do that in a future post/video). The thing to note here, though, is that NSX takes care of creating the NAT rules. This point had me confused at the beginning, as I thought I had to do the NAT'ing first. In our case, as soon as the load-balancing configuration is done, NSX automatically publishes the NAT rules, as shown in the screenshot below.
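For the curious, here is a hedged sketch of what such a load-balancer definition looks like as data. The element names only approximate the NSX-v load-balancer schema (a pool of members plus a virtual server per published service); check the API guide for your release before using anything like this, and note that all addresses are placeholders from the 203.0.113.0/24 documentation range.

```python
# Rough sketch of an Edge load-balancer definition of the kind the External
# Edge carries. Element names approximate the NSX-v schema; verify against
# the API guide for your release. All IPs are placeholder examples.
import xml.etree.ElementTree as ET

def lb_config(vip, pool_name, members, port=443):
    lb = ET.Element("loadBalancer")
    ET.SubElement(lb, "enabled").text = "true"
    pool = ET.SubElement(lb, "pool")
    ET.SubElement(pool, "name").text = pool_name
    for ip in members:
        m = ET.SubElement(pool, "member")
        ET.SubElement(m, "ipAddress").text = ip
        ET.SubElement(m, "port").text = str(port)
    vs = ET.SubElement(lb, "virtualServer")
    ET.SubElement(vs, "name").text = pool_name + "-vs"
    ET.SubElement(vs, "ipAddress").text = vip  # public address on the Uplink
    ET.SubElement(vs, "port").text = str(port)
    return ET.tostring(lb, encoding="unicode")

# One virtual server per published service: HTTP portal and VMRC console.
http_cfg = lb_config("203.0.113.221", "vcd-http", ["10.10.10.11", "10.10.10.12"])
vmrc_cfg = lb_config("203.0.113.222", "vcd-vmrc", ["10.10.10.21", "10.10.10.22"])
print(http_cfg)
```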
Configuring the relevant Firewall rules
I recommend that you keep this as a last step, after configuring and testing your environment fully. After that, you can start enabling the relevant firewall rules to open/close specific ports. On the External Edge IPs, you typically want to open only ports 80 (HTTP) and 443 (HTTPS), since all vCD communication happens over SSL. For the Internal Edge, you will need to open the ports that vCD requires to communicate with your management servers/services, such as vCenter Server, ESXi hosts, DNS, NTP, etc. I produced a (very old!) diagram showing a sample of these ports here. Make sure to get the up-to-date list for the vCD version you are running in your environment.
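To keep the policy reviewable, I like to express it as data first. The external list below follows the text (80/443 only); the internal list is just a partial illustration, so always pull the authoritative port list from the vCD documentation for your release.

```python
# Illustrative firewall policy for the two Edges. The external list follows
# the text (80/443 only); the internal list is a PARTIAL example only.
EXTERNAL_EDGE_ALLOW = [
    ("tcp", 80,  "vCD portal redirect (HTTP)"),
    ("tcp", 443, "vCD portal / API / VMRC (HTTPS)"),
]
INTERNAL_EDGE_ALLOW = [
    ("tcp", 443, "vCenter Server / ESXi API"),  # example
    ("udp", 53,  "DNS"),                        # example
    ("udp", 123, "NTP"),                        # example
]

def is_allowed(rules, proto, port):
    """Return True if the (proto, port) pair is explicitly opened."""
    return any(p == proto and pt == port for p, pt, _ in rules)

assert is_allowed(EXTERNAL_EDGE_ALLOW, "tcp", 443)
assert not is_allowed(EXTERNAL_EDGE_ALLOW, "tcp", 22)  # everything else stays shut
print("external edge exposes only 80/443")
```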
As you have seen, provisioning network and security services has never been easier. With NSX, we did all the L2-L7 provisioning and configuration from one console, with minimal dependencies on the physical network. Looking closer at this architecture, you can see how we are securing the vCD cells from both the upstream and downstream traffic. If a hacker were to break into the cells through the first/external firewall, he or she would still need to get through another firewall to touch your network. Things get even more interesting when we look at NSX extensibility. For example, we can hook up a virtual IPS from any VMware security partner (e.g. Symantec) to the External Edge and have our traffic deeply inspected against exploits and vulnerabilities targeting the Linux OS or the vCD software. The possibilities are really endless here.
In my next post, I will show you a different approach to this by enabling the SSL VPN-Plus feature on the external Edge and how that will change the external access to the vCD cells.
P.S. If you are a VMware employee, I have this lab running on VMware OneCloud if you want to examine the configurations, functionality or architecture. Reach out to me over email and I will give you access.