I’ll start this post by saying something that might sound a bit shocking to many: I have never owned a home lab in my life, or even had access to a physically dedicated lab in my career. I have always been an avid fan of nested virtual labs and have always used them as my only way to develop, test and validate solutions. This became especially obvious when I joined VMware in 2010, where I relied quite heavily on our internal “OneCloud” (called vSEL at that time) to run all my labs.
Ever since, I have architected and developed more labs than I can count. The one I am sharing in this blog post is not the biggest, but I can fairly say it is the one I am most proud of. Apart from the incredible flexibility it gives me to test almost anything I want in the NSX world, it has also allowed me to learn new topics, validate my solutions and, last but not least, demonstrate them to colleagues and customers. Take NSX 6.2 Local Egress as an example. It is one of the most powerful features in NSX, yet it was one of the most difficult subjects for me to understand when I first read about it. With this lab, it was quite easy to design and implement it in no time and to learn all about its powerful capabilities along the way (future blog posts coming soon on this).
Granted, there are some things that cannot be done (yet!) in this vLab, like the use of VLANs, but that is not a show-stopper at all for what you want to architect/test/validate in your NSX labs. As you will see, for example, I have substituted VLANs by dedicating an interface on the core router at each datacenter to each network, with its own subnet and default gateway. That pretty much allows you to simulate a typical enterprise environment with different networks for management, production, campus access, etc. VLAN support may come later by leveraging, for example, the Arista vEOS or the Cisco L2vIOS (depending on the licensing terms & conditions for eval/lab use).
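To make this concrete, here is a rough sketch of how those dedicated core-router interfaces might look on a CSR1000V. The interface names and subnets below are purely illustrative assumptions, not taken from the actual lab blueprint:

```
! Illustrative CSR1000V config: one routed interface per network,
! each acting as that network's default gateway (no VLANs needed)
interface GigabitEthernet2
 description DC1 Management network
 ip address 10.1.10.1 255.255.255.0
 no shutdown
!
interface GigabitEthernet3
 description DC1 Production network
 ip address 10.1.20.1 255.255.255.0
 no shutdown
!
interface GigabitEthernet4
 description DC1 Campus access network
 ip address 10.1.30.1 255.255.255.0
 no shutdown
```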
Although this lab can be used for so many purposes, here are my top favorite topics that can be fully demonstrated:
1) NSX 6.2 Local Egress: as mentioned above, I leveraged this lab mainly to test and validate this powerful feature when NSX 6.2 came out. You can combine that with some workload mobility using Long Distance vMotion to showcase how you can live-migrate your applications across datacenters with zero downtime. I will talk about this in detail in my very next blog post.
2) Disaster Recovery: this is a very hot topic now in the NSX world. Forget what you used to do in the old days with SRM and the painful process of re-addressing your applications after a failover. With SRM + NSX, you can demonstrate how the two form an unmatched solution for fast and efficient recovery of your apps. Not just that: you can also test and validate the actual NSX system recovery across sites when you lose, for example, a complete datacenter, covering how you can recover your Net/Sec environment in the recovery site and how easily you can do it.
3) Routing: this is one of my favorite topics. You can leverage this lab to configure and test OSPF/BGP routing adjacencies between your NSX environment and your physical network (simulated here with the Cisco CSR1000V). That includes all the ECMP goodness as well.
4) Integrations with CMPs: whether you have vRA, vCD-SP or even VIO, this lab will be your best bet to integrate and test NSX with those CMPs. Be it a single site with a single cluster or a multi-site setup with multiple clusters, you name it. All you need is to configure your favorite CMP with NSX and start all the fun of automating network and app provisioning.
5) Micro-segmentation: not only can you test app-to-app micro-segmentation, you can also combine it with security enforcement for apps with external access (campus, remote or internet users). This is a great way to explain the powerful DFW capabilities in NSX and how you can leverage them in the real world to secure and harden your apps.
6) Pen-Testing: to build on the previous point, you can take this further and perform penetration testing from within your datacenter network, from your campus network or from your remote locations to try to exploit vulnerabilities in your applications. Combine that with some of the NSX partner NGFW solutions like Palo Alto Networks, McAfee, Trend Micro, etc., and you’ve got a very powerful platform for your testing (with your favorite tools, like Metasploit for example).
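As a rough illustration of how the DFW rules behind the micro-segmentation point could be driven programmatically, the sketch below only builds the XML body for an NSX-v Distributed Firewall rule; the security-group IDs and the rule name are placeholder values, and in practice you would push a body like this to the NSX Manager’s firewall configuration API as described in the NSX 6.2 API guide:

```python
# Build the XML body for a hypothetical NSX-v DFW rule that denies
# traffic from a "campus" security group to a "web-tier" group.
# The security-group IDs below are placeholders, not from this lab.
import xml.etree.ElementTree as ET

def build_dfw_rule(name, src_grp, dst_grp, action="deny"):
    rule = ET.Element("rule", disabled="false", logged="true")
    ET.SubElement(rule, "name").text = name
    ET.SubElement(rule, "action").text = action
    sources = ET.SubElement(rule, "sources", excluded="false")
    src = ET.SubElement(sources, "source")
    ET.SubElement(src, "value").text = src_grp
    ET.SubElement(src, "type").text = "SecurityGroup"
    dests = ET.SubElement(rule, "destinations", excluded="false")
    dst = ET.SubElement(dests, "destination")
    ET.SubElement(dst, "value").text = dst_grp
    ET.SubElement(dst, "type").text = "SecurityGroup"
    return ET.tostring(rule, encoding="unicode")

xml_body = build_dfw_rule("Block campus to web", "securitygroup-10", "securitygroup-11")
print(xml_body)
```

This only assembles the payload; authenticating to the manager and submitting the request (and picking the right firewall section) is left out of the sketch.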
This list could keep going on and on; these are just some of my favorite topics. In future blog posts I will keep developing this lab to introduce new NSX features and solutions. Just as an example, I am already working now on setting up an L2VPN between sites, as well as a standalone NSX Edge in the remote site, in a hub-and-spoke topology. I will keep updating this lab and blogging about its development here. If you are a VMware employee, you can have instant access to this lab on VMware’s OneCloud. If you are a VMware customer and would like a demo of any of the topics mentioned above, you can reach out to your local SE/AM to see if they can arrange a remote demo for you. That’s the whole beauty of these cloud labs!
And in case you are still wondering why I prefer this nested lab over a physical home lab, here are my top reasons:
– Resources: no matter how rich I am, I will never be able to match the resources required to build such a large environment of two DCs + a remote site in physical form. This lab can also scale as I fancy adding more datacenters or remote locations. Also, why would I buy a Cisco router to run MPLS in my core network when I can just use their CSR? Take that example and apply it to VMware for physical ESXi hosts, or better yet, to storage vendors with the dying Fibre Channel arrays or even NFS filers.
– Flexibility: obviously, it is easier to deal with digital files than with metal hardware. I can do pretty much whatever I want, like snapshotting an ESXi host before upgrading it, or a router before applying a new configuration, etc.
– Sharing: obviously I am all about knowledge sharing, and it would be a bit tricky to share a physical home lab with you (my colleague or customer). Here, I can either share access to the lab with you (via RDP or direct vCD access), or even publish it to our internal catalog for anyone to deploy and take full control over.
– Tear & Deploy: sometimes I like to perform quite disruptive tasks, like simulating an actual disaster striking a complete datacenter (routers, links, DC, etc.) and testing how recovery can be achieved. This is normally disruptive in the physical world and requires some effort to return everything to its original state. With this type of nested lab, it has never been easier: simply delete your lab when you are done, then deploy a clean one for a fresh start.
Lastly, I will leave you with some bullet points listing what the lab consists of, but the attached blueprints/diagram still speak a thousand words.
– NSX 6.2.1 (upgraded from 6.2.0)
– vCenter Server 6.0 U1 (upgraded from 6.0 GA)
– ESXi 6.0 U1 (upgraded from 6.0 GA using Embedded Host Client)
– vRA 7.0 GA
– vCD-SP 8.0 GA
– Two site-independent ESGs, peered upstream with the core router (CSR1000V) in an ECMP configuration and downstream with the UDLR control VM (which is again unique at each site).
– Local Egress routing is enabled on the UDLR.
– One core OSPF area 0 connecting the edge routers at each site (primary, secondary and branch) and the SP router.
– Two unique OSPF NSSA areas (10 and 20) at the primary and secondary sites, configured on each site router as totally NSSA to avoid exchanging the core routes with the ESGs.
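For reference, the site-router side of such a setup could look something like the sketch below. The process ID, router ID and subnets are illustrative assumptions rather than the lab’s actual values; the no-summary keyword is what makes the NSSA “totally” NSSA, and IOS load-balances across equal-cost OSPF paths toward the two ESGs by default:

```
! Illustrative OSPF config on the DC1 site edge router
router ospf 1
 router-id 1.1.1.1
 ! core area 0 toward the SP router and the other sites
 network 172.16.0.0 0.0.0.255 area 0
 ! transit subnet toward the two ESGs, placed in the site NSSA
 network 10.1.40.0 0.0.0.255 area 10
 ! totally NSSA: suppress inter-area summaries so core routes are
 ! not exchanged with the ESGs (a default route is injected instead)
 area 10 nssa no-summary
```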
CMP – Service Provider Model:
– vCD-SP 8.0 is configured with two vCenter Servers and NSX Managers for both sites.
– Two Provider-vDCs each pointing to the resource cluster of each site.
– Two Organization-vDCs carved up from the previous PvDCs in PAYG model.
– Edge Gateway configured on the first Org-vDC and setup with External-Direct, Private and NAT-Routed OrgNets.
CMP – Enterprise Model:
– vRA 7.0 installed and configured in simplified mode.
– Two Endpoints pointing to vCenter Servers at the Primary and Secondary sites.
– NSX 6.2 configured with the previous vCenter Servers.
– Network Profiles created for External, Routed and NAT’ed networking.
– Blueprints created to reflect VMs with the previous network topologies.
Disaster Recovery:
– vCenter SRM 6.1 installed and configured across the primary and secondary sites with two-way protection.
– vSphere Replication 6.1 is configured and replicating various applications (like vRA and vCD) across sites.
– NSX 6.2 is fully integrated with SRM, and the applications are abstracted over universal VXLANs.
– Recovery Plans already tested many times for fail-over and fail-back across sites.
Access:
– Management: RDP into AD01 (for DC1) or AD02 (for DC2) for local access to the environment.
– Campus: vCD console access to either DC1 or DC2.
– Remote: vCD console access to the remote office client.