In this article I will explain how to automate a nested ESXi lab build. Although everything could be done with a single PowerShell script, I'll demonstrate an alternative approach that combines multiple tools. This assumes that you already have a vCenter Server installed and ready to use. I've broken the whole process down into multiple workflows: Workflow 1: Create an ESXi template. Tool used: Packer
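To give an idea of what the Packer piece looks like, here is a minimal sketch of a template using the `vsphere-iso` builder. All values here (vCenter address, credentials, datacenter, datastore, ISO path, guest OS type, VM sizing) are placeholders for illustration only and will differ in your environment:

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "vcenter.lab.local",
      "username": "administrator@vsphere.local",
      "password": "{{user `vcenter_password`}}",
      "insecure_connection": true,
      "datacenter": "Lab-DC",
      "cluster": "Lab-Cluster",
      "datastore": "datastore1",
      "vm_name": "esxi-template",
      "CPUs": 2,
      "RAM": 8192,
      "iso_paths": ["[datastore1] ISO/VMware-ESXi.iso"],
      "convert_to_template": true
    }
  ]
}
```

With a template like this, `packer build` drives vCenter to create the VM, boot the ESXi installer from the ISO, and mark the result as a template, which the later workflows can then clone from.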
In this article I will explain how I'm actually running this blog. This era clearly is, and will continue to be, in favor of automation, and running WordPress (WP) probably doesn't make sense anymore. Putting WP or a similar solution in the cloud on a VM or EC2 instance doesn't justify the costs either. I was exploring the idea of a static website hosted on AWS S3. Sounds good initially.
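For context, hosting a static site on S3 boils down to a few AWS CLI calls. This is just a sketch: the bucket name and the `./public` output folder are placeholders, and a real setup also needs a public-read bucket policy:

```
# Create the bucket and enable static website hosting (bucket name is a placeholder)
aws s3 mb s3://my-blog-bucket
aws s3 website s3://my-blog-bucket --index-document index.html --error-document 404.html

# Sync the locally generated static site (e.g. the ./public output of a site generator)
aws s3 sync ./public s3://my-blog-bucket --delete
```

The appeal is that there is no server to patch or scale; the trade-offs show up later with HTTPS and custom domains, which typically pull in CloudFront as well.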
In the previous post we successfully onboarded our public cloud workloads, and now it is time to work on the network and security part. We will set up a policy to allow our jumphost to manage those workloads, and play with some firewall rules to allow/deny traffic between VMs within the cloud. Finally, we will leverage the new VPN feature between PCGs and see how traffic between clouds can be managed using the DFW. 1) First, let's make groups for our JumpBox VMs.
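Besides the UI, groups like this can also be created through the NSX-T Policy API. The sketch below shows the general shape of such a call; the group ID, display name, and tag value are hypothetical and would match whatever tag you assign to your JumpBox VMs:

```
PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/groups/jumpbox-group
{
  "display_name": "JumpBox-Group",
  "expression": [
    {
      "resource_type": "Condition",
      "member_type": "VirtualMachine",
      "key": "Tag",
      "operator": "EQUALS",
      "value": "jumpbox"
    }
  ]
}
```

Tag-based membership means any newly deployed jumphost with the matching tag lands in the group automatically, so the DFW rules we build on top of it need no further maintenance.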
In previous posts we successfully deployed PCGs into an AWS VPC and an Azure VNET. Now it is time to start the onboarding process for the workloads. The process is similar for both public clouds. When a VM gets created, we have to assign the tag nsx.network=default. After this is done, another component, now called NSX Tools (previously named NSX Agent), has to be installed on the VM. Once both steps are completed, the VM is ready to be managed by NSX-T Manager and is called a 'managed' VM.
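On the AWS side, the tagging step can be done from the CLI as well as the console. A sketch, with a placeholder instance ID:

```
# Tag an existing EC2 instance so NSX picks it up for onboarding
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=nsx.network,Value=default
```

On Azure the same `nsx.network = default` tag is applied to the VM resource; in either cloud the tag is what signals CSM that the instance should be onboarded.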
In previous posts we completed all the necessary preparations on the AWS and Azure sides, as well as our routing setup. Now we will deploy Public Cloud Gateways (PCGs) into AWS and Azure. We will start, as usual, with AWS. 1) Log in to CSM and go to Clouds –> AWS –> VPCs. Choose your VPC; it should look like this: both VPCs that we defined are marked "No Gateways". There will be gateways soon :)
So far we have installed CSM, added our AWS and Azure accounts, and gained visibility into regions, VPCs (VNETs), and running instances. The final thing that needs to be done is to install the PCG. Before deployment, we need to prepare both the AWS and Azure network infrastructures. Let's start with AWS. 1) Log in to your AWS account and navigate to VPC. At least one VPC needs to be in place; we will show a scenario with two VPCs.
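If you prefer the CLI to the console, the two lab VPCs can be created like this (the CIDR blocks are examples; pick ranges that don't overlap with your on-prem networks):

```
# Create two VPCs for the lab scenario
aws ec2 create-vpc --cidr-block 10.1.0.0/16
aws ec2 create-vpc --cidr-block 10.2.0.0/16
```

Each VPC will still need subnets, route tables, and internet connectivity before a PCG can be deployed into it, which is what the preparation steps below walk through.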
After a successful installation, it is now time for the CSM configuration. There are two integrations that need to take place: one with the on-prem NSX-T Manager and another with the public cloud accounts. Let's get started. 1) Log in to CSM, navigate to Settings, and click Configure. Enter the NSX Manager hostname (FQDN is preferred) or IP address, credentials, and thumbprint, then click Connect. Once connectivity is successful, this part is over and we can move on to the next piece: integrating the public cloud accounts.
In this article I will review the VMware NSX Cloud offering (not to be confused with VMware Cloud on AWS). NSX Cloud provides better visibility into public cloud workloads (AWS and Azure at the moment) by offering a single pane of glass for your network and security policies. Nowadays applications are usually spread across multiple platforms: parts may reside on-prem, while other parts run in the cloud.