NSX Cloud. Part 6 : Network and Security

In the previous post we successfully onboarded our public cloud workloads, and now it is time to work on the network and security part. We will set up a policy to allow our jumphost to manage those workloads, and play with some firewall rules to allow/deny traffic between VMs within the cloud. Finally, we will leverage the new VPN feature between PCGs and see how traffic between clouds can be managed using DFW.

1) First, let's create groups for our JumpBox VMs. I will use tagging to group the jumpbox in Azure and leverage nested grouping for the AWS jumpbox. Log in to NSX-T Manager and navigate to Inventory–>Groups. Click Add Group

Provide the name "JumpBox-AWS", select the domain for the VPC and click Set Members

Move to the Members tab and select the group that matches the subnet of our Transit VPC where we deployed the JumpBox VM. Click Apply

Confirm that the member IP address matches the IP address of our JumpBox host

2) For the Azure JumpBox we will use tagging. Go to Inventory–>Virtual Machines, select your Azure JumpBox VM and click Edit. Add Tag : JumpBox-Azure, Scope : Role

Note that by default all VMs that appear in the inventory already carry pre-defined tags discovered from AWS and Azure

3) Go back to Inventory–>Groups. Click Add Group

Provide the name "JumpBox-Azure", select the domain for the VNET and click Set Members

Add a membership criterion: Virtual Machine with Tag equal to "JumpBox-Azure" and Scope equal to "Role"
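If you prefer automation over clicking through the UI, the same tag-based group can be created with the NSX-T Policy API. Below is a minimal Python sketch; the Manager address, credentials and the "azure-vnet-domain" ID are hypothetical placeholders for your environment:

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials

# Tag conditions in the Policy API use the "scope|tag" value format
group = {
    "display_name": "JumpBox-Azure",
    "expression": [{
        "resource_type": "Condition",
        "member_type": "VirtualMachine",
        "key": "Tag",
        "operator": "EQUALS",
        "value": "Role|JumpBox-Azure",
    }],
}

r = requests.put(
    f"{NSX}/policy/api/v1/infra/domains/azure-vnet-domain/groups/JumpBox-Azure",
    json=group, auth=AUTH, verify=False,
)
r.raise_for_status()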

Let's verify whether we have reachability from our jumphosts towards the compute VMs.

This is from AWS; as expected, we can't reach them:

ubuntu@ip-172-31-20-11:~$ ping 10.44.1.236

PING 10.44.1.236 (10.44.1.236) 56(84) bytes of data.

--- 10.44.1.236 ping statistics ---

6 packets transmitted, 0 received, 100% packet loss, time 4999ms

ubuntu@ip-172-31-20-11:~$ ping 10.44.0.240

PING 10.44.0.240 (10.44.0.240) 56(84) bytes of data.

--- 10.44.0.240 ping statistics ---

6 packets transmitted, 0 received, 100% packet loss, time 5038ms

ubuntu@ip-172-31-20-11:~$


This is from Azure; as expected, we can't reach there either:

ubuntu@Azure-JumpBox:~$ ping 10.55.0.4

PING 10.55.0.4 (10.55.0.4) 56(84) bytes of data.

--- 10.55.0.4 ping statistics ---

6 packets transmitted, 0 received, 100% packet loss, time 2037ms


We briefly discussed Forwarding Policies in the previous parts; now we need to use them. If you recall, this is Policy Based Routing (PBR) that offers multiple actions for how to route traffic. Our jumphosts are not managed by NSX-T since we didn't install the NSX agent there, and we need to allow communication from these unmanaged VMs towards managed VMs. Think of an unmanaged VM as a native cloud construct, like an EC2 instance (AWS) or a VM (Azure). So essentially we need to access services on managed VMs from the underlay network. This is where the Route from Underlay option comes in (an API sketch follows the steps below).

Navigate to Networking–>Forwarding Policies–>Add New Policy

Provide Name : JumpBox-AWS

Domain : Choose VPC domain

Add rule

Name : Allow JumpHost Access

Source : Choose the JumpBox-AWS group created above

Destination : Choose the parent group that contains both VPC groups

Service : SSH and ICMP

Action : Route from Underlay

Click Publish
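For reference, forwarding policies are also exposed through the Policy API. The sketch below is my best reading of that API and should be verified against your NSX-T version's API guide; the domain, group and policy IDs are placeholders:

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials

rule = {
    "display_name": "Allow JumpHost Access",
    "source_groups": ["/infra/domains/aws-vpc-domain/groups/JumpBox-AWS"],
    "destination_groups": ["/infra/domains/aws-vpc-domain/groups/All-VPCs"],
    "services": ["/infra/services/SSH", "/infra/services/ICMP-ALL"],
    "action": "ROUTE_FROM_UNDERLAY",   # verify the exact enum for your version
}

r = requests.put(
    f"{NSX}/policy/api/v1/infra/domains/aws-vpc-domain"
    "/forwarding-policies/JumpBox-AWS/rules/allow-jumphost",
    json=rule, auth=AUTH, verify=False,
)
r.raise_for_status()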

4) Now we need to adjust our DFW rules, as by default only communication within each VPC/VNET is allowed. Add a new rule (an API sketch follows the rule fields below):

Name : JumpBox-AWS Access

Sources: Group for JumpBox-AWS

Destination : Group for local segments for Compute-VPC

Services : SSH, ICMP

Applied To : Group for local segments for Compute-VPC

Action: Allow
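The same rule can be pushed via the Policy API; /infra/services/SSH and /infra/services/ICMP-ALL are pre-defined NSX service paths, while the domain, group and policy IDs below are placeholders (the security policy itself is assumed to already exist):

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials
DOMAIN = "/infra/domains/aws-vpc-domain"   # placeholder VPC domain path

rule = {
    "display_name": "JumpBox-AWS Access",
    "source_groups": [f"{DOMAIN}/groups/JumpBox-AWS"],
    "destination_groups": [f"{DOMAIN}/groups/Compute-VPC-Segments"],
    "services": ["/infra/services/SSH", "/infra/services/ICMP-ALL"],
    "scope": [f"{DOMAIN}/groups/Compute-VPC-Segments"],   # Applied To
    "action": "ALLOW",
}

r = requests.put(
    f"{NSX}/policy/api/v1{DOMAIN}/security-policies/JumpBox-Policy"
    "/rules/jumpbox-aws-access",
    json=rule, auth=AUTH, verify=False,
)
r.raise_for_status()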

5) Now let's verify reachability from the jumpbox towards the compute VMs

It is working now!!!

6) We need to do the same for the Azure JumpBox. Go to Networking–>Forwarding Policies and click Add Policy

Provide Name : JumpBox-Azure

Domain : Choose VNET domain

Add rule

Name : Allow JumpHost Access

Source : Choose the JumpBox-Azure group created above

Destination : Choose the parent group that contains both VNET groups

Service : SSH and ICMP

Action : Route from Underlay

Click Publish

7) Create a DFW rule to allow this communication

Name : JumpBox-Azure Access

Sources: Group for JumpBox-Azure

Destination : Group for local segments for Compute VNET

Services : SSH, ICMP

Applied To : Group for local segments for Compute VNET

Action: Allow

8) Now test reachability from the jumphost towards the Compute VMs

It works!!!


In this section we will show how DFW rules can be enforced to control communication between VMs

1) Log in to our jumphost and then SSH into a Compute VM. Try to ping another Compute VM

We have reachability, since by default traffic within a VPC/VNET is allowed. Let's keep this session open

2) Navigate to Security–>Distributed Firewall and create a new rule to Reject ICMP traffic between the Compute1 and Compute2 groups. Unlike Drop, the Reject action actively notifies the sender that the traffic was refused.

3) Return to the terminal: the traffic is now blocked

The moment of truth has arrived. All the preparation and installation work we have done now pays off: we can manipulate cloud VM policies using our NSX-T DFW.

Now let's configure a VPN between the PCGs in Azure and AWS and see how traffic is initially allowed across clouds. Then we will use DFW rules to enforce the policies we want.

1) Log in to NSX-T Manager and navigate to Networking–>VPN. Under VPN Services click Add Service and choose type IPSec

Enter name : AWS VPN

Choose Tier0 router for AWS

2) Repeat the same steps for Azure VPN

Enter name : Azure VPN

Choose Tier0 router for Azure
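Both VPN services map to a single Policy API object under each Tier-0's locale services. A minimal sketch, assuming Tier-0 IDs "aws-t0" and "azure-t0" and locale-service ID "default" (all placeholders):

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials

# (Tier-0 ID, service ID, display name) - placeholder IDs
for t0, svc_id, name in [("aws-t0", "AWS-VPN", "AWS VPN"),
                         ("azure-t0", "Azure-VPN", "Azure VPN")]:
    r = requests.put(
        f"{NSX}/policy/api/v1/infra/tier-0s/{t0}/locale-services/default"
        f"/ipsec-vpn-services/{svc_id}",
        json={"display_name": name, "enabled": True},
        auth=AUTH, verify=False,
    )
    r.raise_for_status()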

3) Go to the AWS console and check the tags on our PCG instance. You will need the value of the Local Endpoint IP tag, which is a secondary IP address assigned to the PCG uplink

4) Go back to NSX-T Manager and move to the Local Endpoints tab under Networking–>VPN

Provide name : AWS-Local-Endpoint

VPN Service : AWS VPN

IP Address : Put the value derived above : 172.31.30.166

5) Go to the Azure portal, check the tags on the PCG and find the Local Endpoint IP. Then return to NSX-T Manager and add the Local Endpoint for Azure:

Provide name : Azure-Local-Endpoint

VPN Service : Azure VPN

IP Address : Put the value derived above : 172.30.30.6
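Both local endpoints can also be created via the API, using the Local Endpoint IPs we read from the cloud tags (Tier-0 and service IDs remain placeholders):

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials

# (Tier-0 ID, VPN service ID, endpoint name, Local Endpoint IP from the tags)
endpoints = [
    ("aws-t0", "AWS-VPN", "AWS-Local-Endpoint", "172.31.30.166"),
    ("azure-t0", "Azure-VPN", "Azure-Local-Endpoint", "172.30.30.6"),
]
for t0, svc, name, ip in endpoints:
    r = requests.put(
        f"{NSX}/policy/api/v1/infra/tier-0s/{t0}/locale-services/default"
        f"/ipsec-vpn-services/{svc}/local-endpoints/{name}",
        json={"display_name": name, "local_address": ip},
        auth=AUTH, verify=False,
    )
    r.raise_for_status()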

6) VPN traffic will be sourced from the PCG uplink secondary IP address, so confirm that it has a public IP address assigned. By default multiple secondary IP addresses are created on the uplink interface; make sure the "Local Endpoint IP" address has a public address associated with it.

Navigate to the AWS Console and under Elastic IPs find the interface and public IP pair. If a public IP is not assigned, allocate one

Navigate to the Azure portal and confirm the same

7) Go back to NSX-T Manager and complete the VPN setup. Click IPSec Sessions–>Add IPSec Session–>Route Based. We will use a route-based session so that we can later set up BGP to exchange routes.

Choose a PSK (the value must match on both ends)

Tunnel IP : This is needed later for routing. We will use a /30, with 192.168.33.1 on one side and 192.168.33.2 on the other
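In the Policy API a route-based session carries the tunnel addressing in tunnel_interfaces. A hedged sketch of the AWS-side session; the peer public IP, PSK and all IDs are placeholders, and field names should be verified against your version's API guide:

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials

session = {
    "resource_type": "RouteBasedIPSecVpnSession",
    "display_name": "AWS-to-Azure",
    "local_endpoint_path": "/infra/tier-0s/aws-t0/locale-services/default"
                           "/ipsec-vpn-services/AWS-VPN"
                           "/local-endpoints/AWS-Local-Endpoint",
    "peer_address": "x.x.x.x",      # placeholder: Azure endpoint public IP
    "peer_id": "172.30.30.6",       # Azure Local Endpoint IP
    "psk": "SamePSKOnBothEnds",     # placeholder, must match both ends
    "tunnel_interfaces": [{
        "ip_subnets": [{"ip_addresses": ["192.168.33.1"], "prefix_length": 30}],
    }],
}

r = requests.put(
    f"{NSX}/policy/api/v1/infra/tier-0s/aws-t0/locale-services/default"
    "/ipsec-vpn-services/AWS-VPN/sessions/aws-to-azure",
    json=session, auth=AUTH, verify=False,
)
r.raise_for_status()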

Confirm that the tunnels are up

8) Now we can set up BGP routing between the PCGs and exchange VPC/VNET routes to allow cross-cloud communication. In NSX-T Manager navigate to Networking–>Tier-0 Gateways.

For the AWS side configure local BGP ASN : 65111 and add neighbor 192.168.33.2 in remote ASN 65112

For the Azure side configure local BGP ASN : 65112 and add neighbor 192.168.33.1 in remote ASN 65111
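These two settings map to small Policy API objects; a sketch for the AWS side (mirror the values for Azure; the IDs are placeholders, and local_as_num/remote_as_num are strings in this API):

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials
BGP = f"{NSX}/policy/api/v1/infra/tier-0s/aws-t0/locale-services/default/bgp"

# Local ASN on the AWS Tier-0
r = requests.patch(BGP, json={"enabled": True, "local_as_num": "65111"},
                   auth=AUTH, verify=False)
r.raise_for_status()

# Neighbor: the Azure end of the /30 tunnel
r = requests.put(f"{BGP}/neighbors/azure-peer",
                 json={"neighbor_address": "192.168.33.2",
                       "remote_as_num": "65112"},
                 auth=AUTH, verify=False)
r.raise_for_status()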

Confirm that BGP has been established

9) Now inject the local segments into BGP so they can be advertised to the remote peer. Under Tier-0 Gateways click Edit on the gateway, under Route Redistribution click Set, and under Advertise Tier-1 Subnets select Connected subnets.

Repeat this step for the other gateway
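On the API side this corresponds to a redistribution rule on each Tier-0's locale services; a sketch under the same placeholder IDs (the exact field and enum names, such as TIER1_CONNECTED, should be checked against your version's API guide):

import requests

NSX = "https://nsx.lab.local"   # placeholder NSX-T Manager address
AUTH = ("admin", "password")    # placeholder credentials

redistribution = {
    "route_redistribution_config": {
        "bgp_enabled": True,
        # Advertise Tier-1 connected subnets into BGP
        "redistribution_rules": [
            {"route_redistribution_types": ["TIER1_CONNECTED"]}
        ],
    }
}

for t0 in ("aws-t0", "azure-t0"):   # placeholder Tier-0 IDs
    r = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/{t0}/locale-services/default",
        json=redistribution, auth=AUTH, verify=False,
    )
    r.raise_for_status()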

10) Verify that we are receiving BGP routes. We can log in to the PCG management IP address and check the BGP routes. We can see we are receiving routes from the Azure side.
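Assuming the PCG exposes the standard NSX edge CLI, the check looks like this: find the VRF of the Tier-0 service router with get logical-router, enter it, then list the BGP state and the learned routes (the VRF ID 1 below is just an example):

get logical-router
vrf 1
get bgp neighbor summary
get route bgp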

The same is true on the Azure PCG when we check its BGP routes

11) At last we want to test connectivity between the cloud workloads. With the default DFW rules, only communication within a VPC/VNET is allowed, while everything else is blocked. If you recall, we added our JumpHost as an exception to that rule. Now we can create another DFW rule that allows communication between the AWS Compute subnets and the Azure Compute VNETs. Yes, you can read that again: we are going to control cross-cloud communication using our on-prem NSX-T Manager.

Log in to NSX-T Manager and navigate to Security–>Distributed Firewall. Add a new policy with a rule that allows traffic between the AWS Compute groups and the Azure Compute groups

And now let's verify connectivity from an AWS Compute VM towards both Azure VMs


You can run various tests before deploying your actual application in a real production environment, but by now you should have a good idea of what NSX Cloud is, how to get started, and what is involved in the process. This concludes the last part of the series dedicated to NSX Cloud (at least for now). I hope you found it useful. Cheers :)