AKS workload accessing Azure SQL Database in another region

In this article, we test a few ways in which a workload running in Azure Kubernetes Service (AKS) in one region can access an Azure SQL Database deployed in another region.

We look at the following three approaches; other options, such as VPN or VNet-to-VNet connections, are also possible:

1. Azure SQL Database with IP firewall rules
2. Azure SQL Database Private Link with a cross-region private endpoint
3. Global VNet peering with Azure SQL Database Private Link

Azure SQL Database with IP Firewall Rules

The documentation describing the Azure SQL Database IP firewall is worded in a way that seems to imply that we must enable access for "all" connections within Azure in order to connect to the database from Azure resources. That is applicable when we do not control the outbound IP address. In our scenario, however, the AKS cluster has a specific static outbound IP, so we can allow access only to that IP without turning on access for "all" Azure services.

In our test, the AKS cluster is in East US 2 region with a specific static public IP selected as the frontend address in the outbound rule.
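A setup along these lines can be sketched with the Azure CLI; the resource group, cluster, and IP names below are hypothetical:

```shell
# Create a static Standard-SKU public IP to use as the cluster's egress address.
az network public-ip create \
  --resource-group aks-rg \
  --name aks-egress-ip \
  --sku Standard \
  --allocation-method Static \
  --location eastus2

# Create the AKS cluster and pin its load balancer outbound rule to that IP.
az aks create \
  --resource-group aks-rg \
  --name aks-eastus2 \
  --location eastus2 \
  --load-balancer-outbound-ips $(az network public-ip show \
      --resource-group aks-rg --name aks-egress-ip --query id -o tsv)
```

With `--load-balancer-outbound-ips`, all outbound traffic from the cluster's nodes egresses through the supplied static IP, which is what lets us whitelist it later.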

Azure SQL Database is in West US 2 with public access enabled and no specific rules created initially.

We try accessing the database from inside a container running in AKS. The container uses the expected static public IP for outbound communication, and, as expected, the SQL connection is rejected by the Azure SQL IP firewall rules.
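To confirm which public IP the cluster's traffic actually egresses from, we can run a throwaway pod and ask an external "what is my IP" service (ifconfig.me is just one example):

```shell
# Launch a temporary curl pod, print the egress IP, and clean up afterwards.
kubectl run ip-check --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s ifconfig.me
```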

After adding our specific outbound public IP to the Azure SQL Database firewall settings, the container can successfully access the database.
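Adding the firewall rule can be done with a single CLI call; the server name, resource group, and IP value here are hypothetical:

```shell
# Allow only the AKS cluster's static outbound IP (single-address range).
az sql server firewall-rule create \
  --resource-group sql-rg \
  --server avsql1 \
  --name allow-aks-egress \
  --start-ip-address 20.1.2.3 \
  --end-ip-address 20.1.2.3
```

Note that this does not require enabling the broader "Allow Azure services and resources to access this server" setting.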

Azure SQL Database Private Link with Cross-Region Private Endpoints

First, we add a cross-region private endpoint for the Azure SQL server in West US 2, with the endpoint itself created in East US 2 in the same VNet as our AKS cluster.
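A cross-region private endpoint like this can be sketched as follows; the VNet, subnet, and server names are hypothetical:

```shell
# Create the private endpoint in the East US 2 AKS VNet, pointing at the
# SQL server resource that lives in West US 2.
az network private-endpoint create \
  --resource-group aks-rg \
  --name avsql1-pe \
  --location eastus2 \
  --vnet-name aks-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id $(az sql server show \
      --resource-group sql-rg --name avsql1 --query id -o tsv) \
  --group-id sqlServer \
  --connection-name avsql1-pe-conn
```

The endpoint's region follows the VNet it is placed in, not the region of the SQL server, which is what makes the cross-region placement work.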

Next, we disable public network access on the Azure SQL Server resource.
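Disabling public access is a one-line update (hypothetical names again):

```shell
# After this, only private-endpoint traffic can reach the server.
az sql server update \
  --resource-group sql-rg \
  --name avsql1 \
  --enable-public-network false
```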

Now, from a container running in AKS in East US 2, we can successfully connect to the Azure SQL Database in West US 2 using the private endpoint with the private DNS zone name.
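A connectivity check from inside the pod might look like this; sqlcmd, the database name, and the credentials are assumptions for illustration:

```shell
# Inside the VNet, the public server name resolves through the
# privatelink.database.windows.net zone to the private endpoint IP.
sqlcmd -S avsql1.database.windows.net -d mydb \
  -U sqladmin -P '<password>' \
  -Q "SELECT @@SERVERNAME"
```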

Global VNet Peering with Azure SQL Database Private Link

In the final test, we try accessing an in-region Azure SQL Database private endpoint via global VNet peering.

In this test, Azure SQL Database is in West US 2 with its private endpoint in a VNet that is also in West US 2. The same VNet contains a test VM.

We globally VNet-peer this West US 2 virtual network to the AKS VNet in East US 2 region.
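Global VNet peering must be created in both directions; a sketch with hypothetical resource names:

```shell
# West US 2 -> East US 2 direction.
az network vnet peering create \
  --resource-group sql-rg \
  --name westus2-to-eastus2 \
  --vnet-name sql-vnet-westus2 \
  --remote-vnet $(az network vnet show \
      --resource-group aks-rg --name aks-vnet --query id -o tsv) \
  --allow-vnet-access

# East US 2 -> West US 2 direction.
az network vnet peering create \
  --resource-group aks-rg \
  --name eastus2-to-westus2 \
  --vnet-name aks-vnet \
  --remote-vnet $(az network vnet show \
      --resource-group sql-rg --name sql-vnet-westus2 --query id -o tsv) \
  --allow-vnet-access
```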

As expected with global VNet peering, a container running in AKS in East US 2 can successfully ping the test VM in West US 2. We see the expected ~66ms round-trip latency for East US 2 <-> West US 2 ICMP packets.

Now, from the same AKS container in East US 2, we try to connect to the Azure SQL Database in West US 2 using its private endpoint IP (172.18.0.4 in our deployment). From a networking perspective this IP is reachable, but Azure SQL Server rejects the connection because we are using an IP address instead of the server's host/DNS name.

Let's try to trick Azure SQL Server by creating a local host name for it in the /etc/hosts file of the container. It appears that Azure SQL Server uses the first part of the host name and expects it to match the actual server name: we see that two made-up host names work properly because they both start with "avsql1".
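The trick can be sketched like this inside the container; the made-up domain suffixes and the credentials are purely illustrative:

```shell
# Map made-up names to the private endpoint IP. Both names begin with
# "avsql1", matching the actual SQL server name.
echo "172.18.0.4 avsql1.database.windows.net" >> /etc/hosts
echo "172.18.0.4 avsql1.made-up.example" >> /etc/hosts

# Either name should now pass the server-name check.
sqlcmd -S avsql1.made-up.example -U sqladmin -P '<password>' -Q "SELECT 1"
```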

For more seamless DNS resolution, we add a "Virtual network link" on the Azure Private DNS zone, linking it to the AKS VNet in East US 2.
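The virtual network link can be created with the CLI; the resource group and VNet names are hypothetical:

```shell
# Link the privatelink zone to the AKS VNet so pods in East US 2 resolve
# the SQL server's public name to its private endpoint IP.
az network private-dns link vnet create \
  --resource-group sql-rg \
  --zone-name privatelink.database.windows.net \
  --name aks-vnet-link \
  --virtual-network $(az network vnet show \
      --resource-group aks-rg --name aks-vnet --query id -o tsv) \
  --registration-enabled false
```

Auto-registration is disabled because the zone only needs to serve the private endpoint's record, not register the AKS nodes.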

Finally, we confirm that the AKS container in East US 2 can now properly resolve the DNS name of the Azure SQL Database over Private Link and can still connect successfully.
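A quick way to verify the resolution from inside a pod (server name hypothetical):

```shell
# The public name should now resolve via the privatelink zone to the
# private endpoint IP rather than a public address.
nslookup avsql1.database.windows.net
```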

Thank you!

Please leave feedback and questions below or on Twitter https://twitter.com/ArsenVlad

Principal Engineer / Architect, FastTrack for Azure at Microsoft
