I have Pritunl Link set up in AWS and Azure, and down the road it will be GCP as well. I believe I'm doing this right in the setup shown in the following image:
I have updated the security groups of the Pritunl Link hosts to allow ports 500 and 4500, and in each security group I added the peer VPC CIDR: the AWS CIDR on the Azure side and the Azure CIDR on the AWS side.
AWS is the hub where we host our services. We want to send data securely from AWS to an Azure Postgres database, but I'm uncertain about the next steps. Would the AWS VPC CIDR need to be added to the security group of Azure's Postgres database? Also, I have a server acting as the hub for our IoT devices. Would I need to create another Pritunl server for peering?
Would appreciate any help. The device connection is working; now it's a matter of making sure traffic can flow between the cloud providers.
For IPsec, ports 500 and 4500 UDP should be open on each link host, allowing traffic from the other link hosts. If the link is using WireGuard, no ports need to be open; the UDP traffic should be able to get through the stateful firewalls used on Azure and AWS. Port 9790 TCP should also be open to allow each link host to health-check the other link hosts.
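As a rough sketch of the AWS side, the ingress rules described above could be added with the AWS CLI. The security group ID, peer host IP, and CIDR below are placeholders, not values from this thread:

```shell
# Hypothetical IDs/addresses -- substitute your own values.
SG_ID="sg-0123456789abcdef0"   # security group of the pritunl-link host
PEER_HOST="203.0.113.10/32"    # public IP of the other link host

# IPsec IKE and NAT-T (UDP 500 and 4500) from the other link host
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol udp --port 500 --cidr "$PEER_HOST"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol udp --port 4500 --cidr "$PEER_HOST"

# Health checks between link hosts (TCP 9790)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 9790 --cidr "$PEER_HOST"
```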
Source/dest checking needs to be disabled on all link hosts; this is an option in the instance settings. The VPC route table on each side needs a route for the VPC subnet of the other side with a next hop of the pritunl-link instance.
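A minimal sketch of those two steps on the AWS side with the AWS CLI; all IDs and CIDRs below are hypothetical:

```shell
# Hypothetical IDs -- substitute your own values.
INSTANCE_ID="i-0123456789abcdef0"   # pritunl-link instance
RTB_ID="rtb-0123456789abcdef0"      # VPC route table
REMOTE_CIDR="10.1.0.0/16"           # subnet on the other side (e.g. Azure VNet)

# Disable source/dest checking on the link instance
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
  --no-source-dest-check

# Route the remote subnet through the link instance
aws ec2 create-route --route-table-id "$RTB_ID" \
  --destination-cidr-block "$REMOTE_CIDR" --instance-id "$INSTANCE_ID"
```

On the Azure side the equivalent is a route table entry with a "next hop IP address" pointing at the link VM and IP forwarding enabled on its network interface.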
The security groups for the pritunl-link hosts should allow all traffic from all of the VPC subnets in the link. The traffic will not be able to move through the links if this is not done.
The security group for the database server should allow traffic from either VPC subnet on the other side or a specific host on the other side.
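For example, on the Azure side this might look like the following rule, assuming the Postgres server sits behind a network security group; the resource group, NSG name, rule name, and CIDR are placeholders:

```shell
# Hypothetical names -- substitute your resource group, NSG, and the AWS VPC CIDR.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name postgres-nsg \
  --name allow-aws-vpc-postgres \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes "10.0.0.0/16" \
  --destination-port-ranges 5432
```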
If the traffic isn't working, try pinging the link host VPC IPs from each link, then from hosts on each side. Switching the link to WireGuard in the settings will likely result in a more reliable connection.
Thanks @zach. Went back through the tutorial and I'm able to ping the private IPs of the Pritunl Link servers in both directions: Azure –> AWS and AWS –> Azure.
2 questions:
Is it recommended to create two Pritunl servers, one for our devices and one for site-to-site IPsec peering, or just to use the same server for both kinds of traffic? I currently have the following:
I'm thinking of creating one more Pritunl server, which will use WireGuard solely for site-to-site VPN access.
This is a dumb question on my side, but how does the traffic from AWS –> Azure work with IPsec when it comes to Pritunl? With the Pritunl Link instances receiving traffic and connected to the Pritunl servers, do I need to have my EKS cluster on Pritunl to communicate with my Azure Postgres database? (I guess this falls under creating a separate Pritunl server solely for site-to-site VPN access so I don't mix traffic between the two, as well as my lack of knowledge in site-to-site.)
The site-to-site traffic doesn't go through or use the servers in Pritunl. The Pritunl hosts manage the state of the links, and the links only connect to other pritunl-link clients. The Servers tab is only for VPN clients.
@zach Thanks! Was able to check the settings and got pritunl-link working between AWS and Azure. Azure's route table "next hop IP address" along with the security group configurations did the trick.
A question on pritunl-client and the OpenVPN profile that is created by Pritunl. We are deciding on importing the OpenVPN profile that Pritunl creates and using it with openvpn.service in systemd. From there, we will slowly migrate the Raspberry Pis to pritunl-client.
Is there a downside to pulling the openvpn or wireguard profile that Pritunl creates and setting it up via the systemd service?
Some server options, including connection single sign-on, device authentication, dynamic firewall, and all WireGuard connections, use a pre-connection authorization through the Pritunl web server. These modes will not work with other OpenVPN clients. For OpenVPN connections not using these options, the systemd OpenVPN service can be used.
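For a plain OpenVPN profile without those options, a minimal sketch of the systemd setup might look like this, assuming the exported profile was saved as pritunl.ovpn; the exact unit name depends on your distribution's OpenVPN packaging:

```shell
# openvpn-client@<name>.service reads /etc/openvpn/client/<name>.conf
sudo cp pritunl.ovpn /etc/openvpn/client/pritunl.conf
sudo systemctl enable --now openvpn-client@pritunl

# Verify the tunnel came up
systemctl status openvpn-client@pritunl
```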