NSX Cloud. Part 3: Routing Setup

So far we have installed CSM, added AWS and Azure accounts, and gained visibility into Regions, VPCs/VNETs, and running instances. The final step is to install the PCG. Before deployment, we need to prepare both the AWS and Azure network infrastructures. Let’s start with AWS.

1). Log in to your AWS account and navigate to VPC. At least one VPC needs to be in place. We will show a scenario with two VPCs: the first (VPC-1), called the Transit VPC, is where the PCG will be installed along with some workloads; the second (VPC-2), called the Compute VPC, is where all other workloads will reside.

2). The Transit VPC needs at least 3 subnets: one for PCG management; one for the PCG uplink with internet connectivity; and one for the PCG downlink, which leads to the workloads.

3). The Transit VPC needs an Internet Gateway (IGW) attached, with a route table containing a default route pointing towards the IGW.
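If you prefer scripting this step, the IGW attachment and default route can be sketched with the AWS CLI as follows (all resource IDs are placeholders for your environment):

```shell
# Create an Internet Gateway, attach it to the Transit VPC,
# and add a default route pointing at it (IDs are placeholders)
aws ec2 create-internet-gateway

aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-0123456789abcdef0
```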

4). The last important thing to verify is that the DNS hostnames and DNS resolution options are enabled on the VPC. Our Transit VPC has them enabled,

while the Compute VPC has DNS resolution disabled.

To fix this, click Actions–>Edit DNS Resolution, then click Enable and Save.
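The same DNS options can also be toggled from the AWS CLI; note that modify-vpc-attribute accepts only one attribute per call (the VPC ID is a placeholder):

```shell
# Enable DNS resolution and DNS hostnames on the Compute VPC
# (one attribute per API call)
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-support '{"Value": true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-hostnames '{"Value": true}'
```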

Now let’s move to the Azure side. We will also be using two virtual networks (VNETs): one for Transit and one for Compute.

1). Log in to the Azure portal and navigate to Virtual Networks–>Subnets. VNET-1 will be the Transit VNET, where the PCG will be deployed along with test workloads. Its CIDR block is 172.30.0.0/16.

Note there is an additional requirement: a subnet named “GatewaySubnet” must exist. It cannot be used for workloads; it will be utilized later by the Virtual Network Gateway (VNG).

2). Move to the other VNET to check its subnets. The Compute VNET (VNET-2) will have only one subnet defined. Its CIDR block is 10.55.0.0/16.

 

3). Navigate to Resource Groups and create one to hold the storage account we will create next.

4). Navigate to Storage accounts and create one in the above Resource Group. This is needed for the future PCG deployment.

These are the basic steps required to prepare the Azure side.

Before discussing PCG deployment, we need to make sure that routing is in place between the on-prem network, where NSX-T Manager and CSM reside, and the AWS/Azure networks. AWS Direct Connect and Azure ExpressRoute are private connection options, but they are expensive and in many cases not viable to implement. Instead, we will review an IPsec VPN setup between our on-premise router and the AWS/Azure side. You will need a router capable of IPsec VPN and, optionally but recommended, BGP. I will be using a Cisco CSR 1000V router. Let’s review the AWS setup first, then switch to Azure.

Before any configuration, let’s review the options we have with AWS. The first is to use a traditional Virtual Private Gateway (VGW). If you have a single VPC and a single VPN connection, this is the easiest way to set up your VPN.
As of now, you can only attach a single VPC to a VGW. So if you have multiple VPCs in your network, only one can be attached to the gateway, while the others have to peer with that VPC. Edge-to-edge routing is not supported at the moment, i.e. if VPC-2 (Compute) wants to reach the on-prem network via the VPN in VPC-1 (Transit), it is not going to work. The second option is the newer Transit Gateway, where you can attach multiple VPCs and VPNs to a single gateway. Furthermore, you can partition your routing table into multiple separate smaller tables, thus achieving segmentation, similar to VRFs in traditional networks. For more information about Transit Gateway, refer to https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html

We will be using the Transit Gateway option in our setup to establish routing connectivity to our on-premise network.

1). Log in to the AWS Console, navigate to VPC, and choose Customer Gateways under the VPN section. Click Create Customer Gateway. We will be using BGP routing (a so-called route-based VPN). Enter the IP address that matches your public address and a BGP AS number (it can be private). The Amazon-side default BGP ASN is 64512, so pick something else in the private range.
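For reference, the equivalent Customer Gateway creation via the AWS CLI might look like this (the public IP and ASN below are example values, not the ones from this lab):

```shell
# Create a Customer Gateway representing the on-prem CSR 1000V
# (203.0.113.10 and 65011 are example values)
aws ec2 create-customer-gateway \
    --type ipsec.1 \
    --public-ip 203.0.113.10 \
    --bgp-asn 65011
```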

 

 

2). Navigate to Transit Gateways–>Create Transit Gateway. Provide a name and BGP AS number. Leave the checkboxes as in the picture below.
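A rough AWS CLI equivalent of this step is shown below; the ASN is an example value, and the remaining Transit Gateway options are left at their defaults (adjust them to match the checkboxes you chose in the console):

```shell
# Create a Transit Gateway with an example private ASN;
# unspecified options keep their AWS defaults
aws ec2 create-transit-gateway \
    --description "NSX lab TGW" \
    --options AmazonSideAsn=64520,DnsSupport=enable,VpnEcmpSupport=enable
```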

3). Next we need to create attachments to the TGW. An attachment can be either a VPN or a VPC. Let’s start with the VPN attachment. Navigate to Transit Gateway Attachments and click Create Transit Gateway Attachment.

Choose the TGW that was created in the previous step;

Attachment type : VPN

Customer Gateway : the existing one created in step 1

Routing options : Dynamic (BGP)

Optionally, provide values for Tunnel 1 and Tunnel 2 (by default, two tunnels are brought up over a single VPN connection): the inside CIDR block and the pre-shared key. The CIDR block is a /30 in the 169.254.0.0/16 range, and the PSK is 8–64 characters long; both are generated by Amazon automatically, so you can leave the defaults.
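If you decide to supply your own tunnel values instead of the Amazon-generated defaults, a quick local sanity check for the constraints mentioned above (a /30 inside 169.254.0.0/16 and a PSK of 8–64 characters) could look like this; it is a sketch, not an exhaustive validator:

```shell
#!/bin/sh
# Check that an inside tunnel CIDR is a /30 within 169.254.0.0/16
# and that a pre-shared key is 8-64 characters long.
check_tunnel_opts() {
    cidr="$1"; psk="$2"
    case "$cidr" in
        169.254.*/30) : ;;                      # must be 169.254.x.y/30
        *) echo "bad CIDR: $cidr"; return 1 ;;
    esac
    len=${#psk}
    if [ "$len" -lt 8 ] || [ "$len" -gt 64 ]; then
        echo "bad PSK length: $len"; return 1
    fi
    echo "ok"
}

check_tunnel_opts 169.254.8.244/30 'MySecretKey123'   # → ok
```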

 

4). Now navigate to Site-to-Site VPN Connections. Highlight the connection that was created by the previous VPN attachment to the TGW and click Download Configuration.
Choose the vendor, platform, and software version, then click Download. This will generate a text file with all the necessary VPN and BGP configuration for your router.

 

5). Apply the generated configuration to your router and verify that your BGP peerings are up:

csr1000v#sh ip bgp summary 

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd

169.254.8.245   4        64512    9579   10061       34        0 1d02h          

169.254.9.33    4        64512      29      36       34        0 00:04:20       

The same can be checked from AWS console by navigating to Site-to-Site VPN Connections.

Click on VPN connection and look at Tunnel Details

As you can see, AWS is receiving 2 BGP routes from my on-prem network over both tunnels. We are not yet advertising anything into BGP from the AWS side; we will do that in a moment.

6). Once the VPN is set up, the next step is to attach the VPCs to the Transit Gateway. You do this by associating one subnet in each AZ with the TGW; all other subnets in the same AZ of that VPC are added automatically. Navigate to Transit Gateway Attachments and click Create Transit Gateway Attachment.

Choose your TG that was created in previous step ;

Attachment type : VPC

Attachment name tag : to-VPC-1-Transit

Enable DNS Support

VPC ID : Choose VPC-1-Transit

Subnet ID : Choose any subnet in your AZ

Create attachment.

Repeat the same step for Compute-VPC
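The VPC attachment can also be scripted; a rough AWS CLI equivalent (all resource IDs are placeholders) would be:

```shell
# Attach VPC-1-Transit to the TGW via one subnet per AZ,
# with DNS support enabled (IDs are placeholders)
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0 \
    --options DnsSupport=enable \
    --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=to-VPC-1-Transit}]'
```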

7). We will use the default route table within the TGW. You can create multiple tables for segregation purposes, but that is not the intention of this setup. Navigate to Transit Gateway Route Tables, select the default table that was created during TGW setup, and check the Routes tab first.

 

As you noticed, it is empty. Here is what we have so far: we created a Transit Gateway and attached a VPN and two VPCs, but the gateway doesn’t have any routes in its table. To add routes to the TGW routing table, we will use the “Propagations” tab.

Click Create Propagation

Select VPC-1-Transit and click Create Propagation. This will inject the CIDR block 172.31.0.0/16 of VPC-1-Transit into the Transit Gateway routing table. Let’s check Routes again.

Now we have the expected route in the table. Repeat the same steps for the other VPC (VPC-2-Compute) as well as for the VPN; this will inject the Compute VPC’s local CIDR block 10.44.0.0/16 and the 2 BGP routes coming from on-premise. After all propagations, the table should look like this:
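Propagations can likewise be enabled from the CLI, one call per attachment (the route table and attachment IDs below are placeholders):

```shell
# Propagate routes from each attachment (two VPCs plus the VPN)
# into the TGW route table; all IDs are placeholders
for att in tgw-attach-0aaaaaaaaaaaaaaaa \
           tgw-attach-0bbbbbbbbbbbbbbbb \
           tgw-attach-0cccccccccccccccc; do
    aws ec2 enable-transit-gateway-route-table-propagation \
        --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
        --transit-gateway-attachment-id "$att"
done
```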

 

8). Our Transit Gateway now has routes learned from BGP as well as the local CIDR routes from the VPCs. To start “sharing”, or advertising, those routes, we use the “Associations” tab. Switch to it and click “Create Association”.

Select the VPN attachment and click “Create association”.
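The CLI equivalent of this association (IDs are placeholders) is:

```shell
# Associate the VPN attachment with the TGW route table
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0
```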

9). Check status to confirm that attachment association was successful

Now let’s check our physical router to see whether we are getting any routes

csr1000v#sh ip bgp summary

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd

169.254.8.245   4        64512     171     176       18        0 00:27:36        2

169.254.9.33    4        64512     171     177       18        0 00:27:38        2

csr1000v#sh ip bgp neighbors 169.254.9.33 received-routes 

     Network          Next Hop            Metric LocPrf Weight Path

*>  10.44.0.0/16     169.254.9.33           100             0 64512 e

*>  172.31.0.0       169.254.9.33           100             0 64512 e

csr1000v#sh ip bgp neighbors 169.254.8.245 received-routes

     Network          Next Hop            Metric LocPrf Weight Path

*   10.44.0.0/16     169.254.8.245          100             0 64512 e

*   172.31.0.0       169.254.8.245          100             0 64512 e

So what happened is this: our VPN attachment brought up a Site-to-Site VPN session with our Customer Gateway; the routes then moved into the BGP table and were advertised back to our on-prem router.

As you can see, we are receiving both CIDR prefixes, with one neighbor preferred over the other.

10). Repeat the same association steps for both VPCs (Compute and Transit) so they can reach the on-prem network as well as each other. Repeat step 8 and choose VPC this time. Confirm that both VPCs are associated.

11). Unlike the VPN attachment, which runs BGP, VPC attachments do not propagate routes back into the VPC route tables dynamically. We need to adjust the route table of each VPC and statically point routes towards our Transit Gateway.
Under VPC–>Route Tables, pick the Transit VPC, choose “Routes” and click “Edit routes”.

Add relevant routes and point them towards Transit Gateway. Click “Save Changes”

Repeat the same steps for Compute VPC.
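If you script the VPC route table changes, the static routes towards the TGW look like this (the route table ID is a placeholder, and 192.168.0.0/16 is a hypothetical on-prem prefix; replace it with your actual on-prem networks):

```shell
# Example for the Transit VPC route table: the Compute VPC CIDR plus a
# hypothetical on-prem prefix, both pointed at the Transit Gateway
for prefix in 10.44.0.0/16 192.168.0.0/16; do
    aws ec2 create-route \
        --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block "$prefix" \
        --transit-gateway-id tgw-0123456789abcdef0
done
```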

***********************************

Now let’s review the routing setup for Azure.

1). Log in to the Azure portal and go to Local Network Gateway. This object represents our CSR 1000V router and is similar to the Customer Gateway in AWS. Enter your public IP address and BGP settings; the BGP Peering Address is the source address for BGP connections on your router.
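An Azure CLI sketch of the Local Network Gateway might look like this (the resource group name, public IP, and ASN are example values; 10.33.10.254 is the on-prem BGP peering address used later in this post):

```shell
# Local Network Gateway describing the on-prem CSR 1000V
az network local-gateway create \
    --resource-group NSX-RG \
    --name onprem-csr1000v \
    --gateway-ip-address 203.0.113.10 \
    --asn 65011 \
    --bgp-peering-address 10.33.10.254
```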

 

2). Now create a Virtual Network Gateway, which is the logical configuration for the Azure side of the connection.

Gateway Type : VPN

VPN Type : Route-based (so we can run BGP)

Virtual Network : VNET-1

Gateway Subnet : GatewaySubnet. This will eventually be used for the BGP peer address.

Public IP : Either create a new one or select an existing available address

Configure BGP ASN : Enabled

ASN : Choose a number, in this case 65515

Click Next to assign tags if needed, then create.
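Scripted with the Azure CLI, the Virtual Network Gateway creation might look as follows (the resource group, public IP name, and SKU are assumptions; note that VNG deployment can take 30+ minutes):

```shell
# Public IP for the gateway, then the route-based VNG itself
az network public-ip create \
    --resource-group NSX-RG \
    --name vng-pip \
    --allocation-method Dynamic

az network vnet-gateway create \
    --resource-group NSX-RG \
    --name VNET-1-VNG \
    --vnet VNET-1 \
    --gateway-type Vpn \
    --vpn-type RouteBased \
    --sku VpnGw1 \
    --asn 65515 \
    --public-ip-address vng-pip
```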

 

 

3). The next step is to create the VPN connection between the gateways. Navigate to VNG–>Connections–>Add.

4). Provide details for your connection

Type : Site-to-Site (VPN)

Choose Local Network Gateway created earlier

Define shared-key (PSK).

Click OK
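The connection itself, scripted with the Azure CLI (the resource group, connection name, and PSK are placeholders), might be:

```shell
# Site-to-Site connection between the VNG and the on-prem router,
# with BGP enabled over the tunnel
az network vpn-connection create \
    --resource-group NSX-RG \
    --name to-onprem \
    --vnet-gateway1 VNET-1-VNG \
    --local-gateway2 onprem-csr1000v \
    --shared-key 'MySecretKey123' \
    --enable-bgp
```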

5). Verify that the Connection status shows Connected.

6). Now it is time to set up BGP towards this VNG. Click on the established connection and, under Overview, click “Download Configuration”. It will ask for the vendor, device family, and firmware version. Pick the relevant options and click “Download Configuration” again.

7). Unlike AWS, which provides /30 addressing on the virtual tunnel interface, this is not an option with Azure. You will find a host address (/32) on the tunnel interface, and you will have to add a static route towards the gateway’s BGP peer address:

interface Tunnel11
 description ** To Azure **
 ip address 169.254.0.1 255.255.255.255
 ip tcp adjust-mss 1350
 tunnel source GigabitEthernet3.1
 tunnel mode ipsec ipv4
 tunnel destination X.X.X.X
 tunnel protection ipsec profile VPN1-IPsecProfile

ip route 172.30.0.254 255.255.255.255 Tunnel11

Then we are ready to configure our BGP neighbor statements, sourcing from the IP address indicated in the configuration above: 10.33.10.254.

csr1000v#sh ip bgp sum

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd

169.254.8.245   4        64512      87      92       42        0 00:14:00        2

169.254.9.33    4        64512      88      91       42        0 00:14:01        2

172.30.0.254    4        65515       9       8       42        0 00:03:48        2

 

csr1000v#sh ip bgp neighbors 172.30.0.254 received-routes

*>   10.33.10.254/32  172.30.0.254                           0 65515 i

*>   172.30.0.0       172.30.0.254                           0 65515 i

Our session is up and we are receiving routes from Azure. Note that for now we only see the Transit VNET.

8). The last steps are to make sure our VNETs can route back to each other as well as to the on-prem networks. We will be using VNET peering with the Gateway Transit option.

Navigate to Virtual Networks. Choose Transit VNET (VNET-1)–>Peerings–>Add

Note that a peered VNET can either offer its gateway (Allow gateway transit) or consume the other side’s (Use remote gateways).
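With the Azure CLI, the two sides of the peering are created separately; the Transit VNET offers its gateway and the Compute VNET consumes it (the VNET names are from this lab, the resource group is an assumption):

```shell
# Transit side: allow the Compute VNET to use VNET-1's gateway
az network vnet peering create \
    --resource-group NSX-RG \
    --name VNET1-to-VNET2 \
    --vnet-name VNET-1 \
    --remote-vnet VNET-2 \
    --allow-vnet-access \
    --allow-gateway-transit

# Compute side: send off-VNET traffic via the remote (Transit) gateway
az network vnet peering create \
    --resource-group NSX-RG \
    --name VNET2-to-VNET1 \
    --vnet-name VNET-2 \
    --remote-vnet VNET-1 \
    --allow-vnet-access \
    --use-remote-gateways
```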

9). Repeat the same steps for Compute VNET (VNET-2)

10). Now verify that our physical router receives routes for all VNETs:

csr1000v#sh ip bgp neighbors 172.30.0.254 received-routes

     Network          Next Hop            Metric LocPrf Weight Path

*>   10.33.10.254/32  172.30.0.254                           0 65515 i

*>   10.55.0.0/16     172.30.0.254                           0 65515 i

*>   172.30.0.0       172.30.0.254                           0 65515 i

At this point we have IP reachability between our on-prem network and both the VNETs and VPCs, so we can move on to the next part, where we will deploy the PCG. Stay tuned….