GCP Network Connectivity Center (NCC)

In this lab, we will configure a Network Connectivity Center (NCC) Hub that serves as a global control plane for managing network connectivity across multiple VPCs. By integrating a FortiGate VM as a Router Appliance spoke, we can use the FortiGate as a centralized security gateway for all internet traffic.

VPC

First, we create all the VPCs that we need for this lab.

Make sure the spoke VPCs (vpc-a & vpc-b) do not have a static default route; we will rely on BGP for this later on.
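
For reference, a rough gcloud sketch of this step (the VPC names and exact commands are assumptions based on how the networks appear later in the lab, not taken from the screenshots):

# Create the VPCs in custom subnet mode
gcloud compute networks create vpc-a --subnet-mode=custom
gcloud compute networks create vpc-b --subnet-mode=custom
gcloud compute networks create vpc-transit --subnet-mode=custom
gcloud compute networks create vpc-ext --subnet-mode=custom

# Find and delete the auto-created default route in each spoke VPC,
# so that 0.0.0.0/0 can be learned via BGP instead
gcloud compute routes list --filter="network:vpc-a AND destRange=0.0.0.0/0"
gcloud compute routes delete <route-name-from-the-list-above>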

Then we deploy the FortiGate firewall with 2 interfaces: an uplink to the ext-vpc with internet access, and a downlink to the transit-vpc.
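
As a sketch, the FortiGate VM could be created with two NICs via gcloud like this (the image family/project, zone, machine type, and subnet names are assumptions; in practice you would typically deploy it from the GCP Marketplace listing):

# port1 = uplink (ext VPC, external IP), port2 = downlink (transit VPC)
# --can-ip-forward is required so the VM can route traffic it does not own
gcloud compute instances create fortigate \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --can-ip-forward \
    --image-family=fortigate-74-byol \
    --image-project=fortigcp-project-001 \
    --network-interface=subnet=subnet-ext \
    --network-interface=subnet=subnet-transit,no-address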

Cloud Router

After that we create the Cloud Router on vpc-transit with BGP ASN 65001
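
The gcloud equivalent (the region here is an assumption for this lab):

gcloud compute routers create transit-router \
    --network=vpc-transit \
    --region=us-central1 \
    --asn=65001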

Under Advertised Routes, we configure this router to advertise both the vpc-a & vpc-b subnets.
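
A gcloud sketch of the same change, switching the router to custom advertisement mode (the CIDR ranges are placeholders for the vpc-a & vpc-b subnets):

gcloud compute routers update transit-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-ranges=10.1.0.0/24,10.2.0.0/24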

Now we have our Cloud Router configured

NCC Hub Mesh

Next we create an NCC Hub. For simplicity we will use the Mesh topology, since Mesh allows all spokes to talk to each other directly:

  • Mesh Topology: All spokes belong to a single group where every VPC or hybrid spoke can communicate directly with every other spoke in a many-to-many relationship, optimizing for low latency.
  • Star Topology: Spokes are organized into “Center” and “Edge” groups to enforce network segmentation; Center spokes can communicate with all other spokes, while Edge spokes can only communicate with those in the Center group.
  • Hybrid Inspection Topology: A specialized configuration supported only with NCC Gateway that mandates all cloud-to-cloud and hybrid traffic be steered through a centralized set of Next-Generation Firewalls (NGFWs) for deep, stateful packet inspection.


Next, give it a name.
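
A minimal gcloud sketch of creating the hub (the hub name is an assumption; mesh is the default preset topology):

gcloud network-connectivity hubs create ncc-hub \
    --preset-topology=mesh \
    --description="Transit hub for vpc-a, vpc-b, and the FortiGate"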

And lastly we add the 3 spokes; the first two are vpc-a & vpc-b.
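
The VPC spokes can also be added with gcloud (the spoke names are assumptions; VPC spokes are always global):

gcloud network-connectivity spokes linked-vpc-network create spoke-vpc-a \
    --hub=ncc-hub --vpc-network=vpc-a --global
gcloud network-connectivity spokes linked-vpc-network create spoke-vpc-b \
    --hub=ncc-hub --vpc-network=vpc-b --global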

And the last one is a Router Appliance spoke that points to the FortiGate firewall.
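
A sketch of the equivalent gcloud command. The project placeholder is to be filled in, and the appliance IP 192.168.1.2 is an assumption that matches the FortiGate's transit-side address used in the BGP config later:

gcloud network-connectivity spokes linked-router-appliances create spoke-fortigate \
    --hub=ncc-hub \
    --region=us-central1 \
    --router-appliance=instance=projects/<project>/zones/us-central1-a/instances/fortigate,ip=192.168.1.2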

Here we have all 3 spokes configured; hit Create.

Next, under Spokes, open the FortiGate spoke.

Here we will configure the BGP session with our Cloud Router

Select the transit-router

Then configure the BGP session

We are required to configure 2 BGP peers, so we set up both here, even though for simplicity we will only use 1.
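
Under the hood this maps to Cloud Router interfaces plus BGP peers. A gcloud sketch for the first session follows; the second session mirrors it with a second interface, peer name, and IP (the subnet name is an assumption):

# A Cloud Router interface in the transit subnet; 192.168.1.3 is the
# neighbor IP the FortiGate peers with in the next section
gcloud compute routers add-interface transit-router \
    --region=us-central1 \
    --interface-name=ra-if-0 \
    --subnetwork=subnet-transit \
    --ip-address=192.168.1.3

# BGP peer pointing at the FortiGate router appliance (ASN 65002)
gcloud compute routers add-bgp-peer transit-router \
    --region=us-central1 \
    --peer-name=fgt-peer-0 \
    --interface=ra-if-0 \
    --peer-ip-address=192.168.1.2 \
    --peer-asn=65002 \
    --instance=fortigate \
    --instance-zone=us-central1-a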

Here is the view after all the BGP sessions are configured.

FortiGate

On the FortiGate, we also configure the BGP session:

config router bgp
    # Local ASN for the FortiGate; the Cloud Router peers from ASN 65001
    set as 65002
    set router-id 192.168.1.2
    config neighbor
        # BGP IP of the Cloud Router interface on the transit VPC
        edit "192.168.1.3"
            set remote-as 65001
            set ebgp-enforce-multihop enable
            # Keep received routes so they can be inspected without a session reset
            set soft-reconfiguration enable
        next
    end
end


And now the BGP session has been established; we can see the FortiGate advertising and receiving the correct subnets from the Cloud Router.
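
This can be verified from the FortiGate CLI; since soft-reconfiguration is enabled above, the received routes can be inspected directly:

get router info bgp summary
get router info bgp neighbors 192.168.1.3 advertised-routes
get router info bgp neighbors 192.168.1.3 received-routes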

Back on the Cloud Router, we can also see that the BGP peer is now up.

And it is also advertising and receiving the correct subnets from the FortiGate.
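
The same can be checked from gcloud, which reports the BGP peer status and the routes learned by the Cloud Router:

gcloud compute routers get-status transit-router --region=us-central1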

And the Hub side should also show the correct routing.

Testing Mesh Hub

Now we spin up a small VM in each of vpc-a & vpc-b.
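
For example (machine type, zone, and subnet names are assumptions; no external IPs, since internet egress should flow through the FortiGate):

gcloud compute instances create vm-a \
    --zone=us-central1-a --machine-type=e2-micro \
    --network-interface=subnet=subnet-a,no-address
gcloud compute instances create vm-b \
    --zone=us-central1-a --machine-type=e2-micro \
    --network-interface=subnet=subnet-b,no-address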

From vm-a, we can ping vm-b successfully, and we can also access the internet.
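
A simple test from vm-a (vm-b's address here is a placeholder for this lab):

ping -c 3 10.2.0.2              # vm-b across the hub
curl -s https://ifconfig.me     # internet egress; should return the FortiGate's public IP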

The same goes for vm-b; connectivity to vm-a and to the internet is also working.

On the FortiGate, we can see the traffic coming in from vm-a & vm-b and going out to the internet.

NCC Hub Star

Next we will configure a Hub with the Star topology. Star enforces network segmentation between ‘center’ and ‘edge’ spokes, allowing us to restrict the routes received by edge spokes; this lets us force traffic between spokes to go through our transit firewall. To do that, we first delete the current Mesh Hub.

Then we create a new Star Hub
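
With gcloud, the only change from the mesh hub is the preset topology (the hub name is an assumption):

gcloud network-connectivity hubs create ncc-hub-star \
    --preset-topology=star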

We add the spokes like before, but now with dedicated groups: vpc-a & vpc-b fall into the edge group, while the FortiGate sits in the center group.
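
As a sketch, the spokes are assigned to groups at creation time (assuming the --group flag of the spokes commands; names as before):

gcloud network-connectivity spokes linked-vpc-network create spoke-vpc-a \
    --hub=ncc-hub-star --vpc-network=vpc-a --group=edge --global
gcloud network-connectivity spokes linked-vpc-network create spoke-vpc-b \
    --hub=ncc-hub-star --vpc-network=vpc-b --group=edge --global
# The FortiGate router appliance spoke goes into the center group instead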

The other BGP and routing configuration remains the same; the difference is that we now have 2 route groups. The first one is the edge group, which only sees the routes from the FortiGate.

The second one is the center group, which sees all routes as usual. This configuration forces all traffic from the edge spokes to go through our transit firewall.

From our firewall’s perspective, we can still see the routes back to the edge spokes.

Testing Star Hub

Testing from vm-a, we can still ping and SSH to vm-b.

And from vm-b, we can also connect to vm-a and to the internet just fine.

But all traffic now has to go through our FortiGate firewall, including traffic between edge spokes.
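
One way to confirm this on the FortiGate is to sniff the inter-spoke ICMP traffic on the transit-facing interface (port2 here is an assumption for the downlink NIC):

diagnose sniffer packet port2 'icmp' 4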

This post is licensed under CC BY 4.0 by the author.