AKS workload accessing Azure SQL Database in another region

In this article, we test a few ways in which a workload running in Azure Kubernetes Service (AKS) in one region can access Azure SQL Database that is deployed in another region.

We look at the following three approaches; other approaches, such as VPN and VNet-to-VNet connections, are also possible:

1. Azure SQL Database with IP Firewall Rules
2. Azure SQL Database Private Link with Cross-Region Private Endpoints
3. Global VNet Peering with Azure SQL Database Private Link

Azure SQL Database with IP Firewall Rules

In our test, the AKS cluster is in the East US 2 region, with a specific static public IP selected as the frontend address in the load balancer's outbound rule.

Azure SQL Database is in West US 2 with public access enabled and no specific rules created initially.

We try accessing the database from inside a container running in AKS. The container uses the correct public IP for outbound communication, and, as expected, the SQL connection is rejected by the Azure SQL IP firewall rules.

kubectl run -it mssql --image mcr.microsoft.com/mssql-tools --overrides="{ \"spec\": { \"nodeSelector\": { \"agentpool\": \"agentpool\" } } }"

echo $(curl -s http://whatismyip.akamai.com/)

sqlcmd -S avsql1.database.windows.net -U user1 -P "Password@123"

After adding our specific outbound public IP to the Azure SQL Database firewall settings, the container can successfully access the database.
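The firewall rule can also be created from the Azure CLI. This is a sketch only; the resource group name and the outbound IP value below are assumptions, not values from our actual deployment:

```shell
# Hypothetical outbound IP; use the static public IP from the AKS outbound rule
AKS_OUTBOUND_IP=20.62.0.10

# Allow that single IP through the Azure SQL IP firewall
# (resource group name rg-avsql is assumed)
az sql server firewall-rule create \
  --resource-group rg-avsql \
  --server avsql1 \
  --name AllowAksOutboundIp \
  --start-ip-address $AKS_OUTBOUND_IP \
  --end-ip-address $AKS_OUTBOUND_IP
```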

Azure SQL Database Private Link with Cross-Region Private Endpoints

Next, we disable public network access on the Azure SQL Server resource.
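Disabling public network access can likewise be done with the CLI (resource group name assumed); after this, the server accepts connections only through private endpoints:

```shell
# Turn off public network access on the logical server
# (resource group name rg-avsql is assumed)
az sql server update \
  --resource-group rg-avsql \
  --name avsql1 \
  --enable-public-network false
```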

Now, from a container running in AKS in East US 2, we can successfully connect to the Azure SQL Database in West US 2 using the private endpoint with the private DNS zone name.

kubectl run -it mssql --image mcr.microsoft.com/mssql-tools --overrides="{ \"spec\": { \"nodeSelector\": { \"agentpool\": \"agentpool\" } } }"

apt-get update
apt-get install dnsutils iputils-ping net-tools

nslookup avsql1.privatelink.database.windows.net

sqlcmd -S avsql1.privatelink.database.windows.net -U user1 -P "Password@123" -d avsql1

Global VNet Peering with Azure SQL Database Private Link

In this test, Azure SQL Database is in West US 2 with its private endpoint in a VNet that is also in West US 2. The same VNet contains a test VM.

We then create a global VNet peering between this West US 2 virtual network and the AKS VNet in the East US 2 region.
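Global VNet peering must be established in both directions. A CLI sketch of the two peerings (all VNet, resource group, and subscription identifiers below are assumed placeholders):

```shell
# West US 2 (SQL private endpoint VNet) -> East US 2 (AKS VNet)
az network vnet peering create \
  --resource-group rg-westus2 \
  --vnet-name vnet-westus2-sql \
  --name westus2-to-eastus2 \
  --remote-vnet /subscriptions/<sub-id>/resourceGroups/rg-eastus2/providers/Microsoft.Network/virtualNetworks/vnet-eastus2-aks \
  --allow-vnet-access

# East US 2 (AKS VNet) -> West US 2 (SQL private endpoint VNet)
az network vnet peering create \
  --resource-group rg-eastus2 \
  --vnet-name vnet-eastus2-aks \
  --name eastus2-to-westus2 \
  --remote-vnet /subscriptions/<sub-id>/resourceGroups/rg-westus2/providers/Microsoft.Network/virtualNetworks/vnet-westus2-sql \
  --allow-vnet-access
```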

As expected with global VNet peering, we see that a container running in AKS in East US 2 can successfully ping the test VM in West US 2, with the expected ~66 ms round-trip latency for East US 2 <-> West US 2 ICMP packets.

Now, from the same AKS container in East US 2, we try to connect to the Azure SQL Database in West US 2 using its private endpoint IP (172.18.0.4, based on one of the screenshots above). From a networking perspective, this IP is reachable, but Azure SQL Server rejects the connection because we are using an IP address instead of a host/DNS name.

Let’s try to trick Azure SQL Server by creating a local host name for the Azure SQL Server in the /etc/hosts file of the container. It seems that Azure SQL Server uses the first label of the host name and expects it to match the actual server name. We see below that two made-up host names work properly because they both start with “avsql1”.
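Inside the container, the experiment looks roughly like this (the made-up domain suffixes are arbitrary; only the first label matters):

```shell
# Map made-up names to the private endpoint IP in the container's hosts file.
# Names whose first label matches the server name "avsql1" are accepted.
echo "172.18.0.4 avsql1.anything.example" >> /etc/hosts
echo "172.18.0.4 avsql1.foo.bar" >> /etc/hosts

# Both of these connect successfully despite the fake domain suffixes
sqlcmd -S avsql1.anything.example -U user1 -P "Password@123" -d avsql1
sqlcmd -S avsql1.foo.bar -U user1 -P "Password@123" -d avsql1
```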

For more seamless DNS resolution, we add a “Virtual network link” in the Azure Private DNS zone to the AKS VNet in East US 2.
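The virtual network link can also be created from the CLI; once linked, Azure DNS in the AKS VNet resolves the privatelink zone name to the private endpoint IP (resource group, link name, and subscription identifiers below are assumed placeholders):

```shell
# Link the privatelink DNS zone to the AKS VNet in East US 2
# (resource names are assumed; auto-registration is not needed here)
az network private-dns link vnet create \
  --resource-group rg-westus2 \
  --zone-name privatelink.database.windows.net \
  --name link-to-aks-vnet \
  --virtual-network /subscriptions/<sub-id>/resourceGroups/rg-eastus2/providers/Microsoft.Network/virtualNetworks/vnet-eastus2-aks \
  --registration-enabled false
```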

Finally, we confirm that the AKS container in East US 2 can now properly resolve the DNS name of the Azure SQL Database using Private Link and can still connect successfully.

Thank you!

Please leave feedback and questions below or on Twitter https://twitter.com/ArsenVlad

Principal Engineer / Architect, FastTrack for Azure at Microsoft