There seems to be a problem when a load balancer is used.
I enabled sticky sessions on the load balancer, so the same Pritunl server now receives all requests from the client when it connects to the global sync address.
Suppose the server now receiving all the requests through the LB address is A, and the other one is B.
Now if the client tries to establish a connection to A it works; however, if it randomly chooses B it will not.
I analyzed the traffic with tcpdump -A -i lo 'tcp port 9756' to check what arrives on which server. In the situation where the client chooses B (as shown in the client), B actually receives no traffic; all of it is sent to A.
So I believe that at some point the traffic from the client is sent to the LB address instead of the chosen server's address.
Device authentication requires the public address in the hosts tab to be either the IP address or the DNS entry for that specific host; it can't be a load balancer. Device authentication is scoped to a single host. Below are all the addresses and how to configure them.
Hosts Tab
Host Public Address: The public IPv4 address or domain of the Pritunl host. This should always be the public IP of the host for all configurations even when using a load balancer.
Host Public IPv6 Address: The public IPv6 address or domain of the Pritunl host. This should always be the public IP of the host for all configurations even when using a load balancer.
Host Sync Address: Found in the advanced host settings. The public address or domain from which the web server of the Pritunl servers can be accessed. If a load balancer is configured, that address should be set here.
Top Right Settings
Connection Single Sign-On Domain: Only shown when using single sign-on connection authentication. The public address or domain that is used to validate single sign-on requests through the Pritunl web server for a new VPN connection. If a load balancer is configured, that address should be set here. Requires a valid SSL certificate.
However, this scoping causes an issue in the traffic flow.
I have Load Balancer LB, Server A, Server B
I set the LB to send all traffic to server A.
What happens is that all HTTP traffic from the client, including the device token, is sent to the load balancer and then to server A (POST /key/ovpn/xxxxxx with a JSON body containing a device_signature field).
As a consequence, if the client randomly chooses the IP of server A it works, but if it chooses server B it fails, because server B never received the request or the device ID.
What I would expect is that the POST data is sent to server A's or server B's address, not the LB address.
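The failure mode described above can be illustrated with a small simulation. All names here are hypothetical (this is not the real Pritunl client logic), assuming a sticky LB that always forwards the device-authentication request to serverA while the client still picks its VPN target at random from the pool:

```python
import random

# Hypothetical sketch of the observed behaviour; names are illustrative.
POOL = ["serverA", "serverB"]

def authenticate_device(lb_backend: str) -> dict:
    # The LB forwards the POST /key/ovpn request to one backend;
    # the returned device token is scoped to that host only.
    return {"token_scope": lb_backend}

def connect(token: dict) -> bool:
    # Observed behaviour: the client still picks a VPN server at random
    # from the pool instead of the host that issued the token, so the
    # connection only succeeds when the random choice happens to match.
    chosen = random.choice(POOL)
    return chosen == token["token_scope"]

if __name__ == "__main__":
    random.seed(0)
    token = authenticate_device("serverA")  # sticky LB -> always serverA
    results = [connect(token) for _ in range(1000)]
    print(f"success rate: {sum(results) / len(results):.0%}")
```

With two servers in the pool, the success rate hovers around 50%, matching the roughly half-failing connections reported below.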
I did some network sniffing on the client: server A and server B receive no TCP traffic, only UDP. That means they don't receive the device token information directly; it's actually sent to the LB, which causes the problem.
The POST /key/ovpn request completes the device authentication and returns a token that is scoped only to the server that handled the request. It will also return the public address of that server, and the client will then connect to that VPN server using the token. For this reason, the public address field in the host settings must point to the actual host, not a load balancer.
The /key/sync request is a configuration update to sync any changes; it isn't part of the VPN authentication or server selection. The /key/ovpn request will return the token, remote, and remote6 values using the public address and public IPv6 address fields from the host settings. This is the address that will be used to connect to the VPN.
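The intended flow can be sketched as follows. The token, remote, and remote6 field names come from the reply above; the exact JSON shape of the /key/ovpn response is an assumption, and the addresses are documentation examples:

```python
# Illustrative sketch only: the exact /key/ovpn response schema is an
# assumption; field names come from the description above, and the
# addresses are RFC documentation examples.
response = {
    "token": "device-token-scoped-to-issuing-host",
    "remote": "203.0.113.10",    # from the Host Public Address field
    "remote6": "2001:db8::10",   # from the Host Public IPv6 Address field
}

# Intended client behaviour: connect to the address named in the
# response, not to a randomly chosen member of the server pool.
vpn_target = response["remote"]
print(vpn_target)  # -> 203.0.113.10
```

This is why the host public address fields must point at the specific host: whatever they contain is handed back to the client as the VPN connection target.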
This is not what happens. If I force only server A to answer from the backend pool, we should see connections only to server A.
What actually happens is that after the /key/sync query the client still chooses randomly from the pool it knows. So if it chooses server A the connection works, but if we are out of luck and it chooses server B, the connection fails.
That is the pre-connection authorization, not the VPN connection, so it should be sent to the load balancer. After it has completed, it will return the public address of the server that will be used for the connection.
Again, this is not what happens, since my load balancer is configured to forward traffic to only one server in the backend. The expected behaviour would be the client contacting only this server.
What actually happens is that after the pre-auth, the client still randomly chooses a server from the pool of servers it gets from the config, and if the server it selects is not the one that replied via the load balancer, the connection fails (because of the device auth).
As a consequence, in our configuration (a basic setup of two servers behind an LB), device authentication doesn't work.
Find the connection: Authorization successful log message and check the remote and remote6 values on this log message. Verify these values are correct and that the domains correctly resolve to a specific host, not a load balancer.
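One way to check this is to pull the remote and remote6 values out of that log message programmatically. The sample line below is hypothetical (the real Pritunl log format may differ); it only illustrates the extraction:

```python
import re

# Hypothetical sample log line -- the real Pritunl log format may differ.
# The addresses are documentation examples.
sample = ("[server] connection: Authorization successful "
          "remote=203.0.113.10 remote6=2001:db8::10")

if "Authorization successful" in sample:
    # Capture both the remote and remote6 key/value pairs.
    values = dict(re.findall(r"(remote6?)=(\S+)", sample))
    print(values)  # -> {'remote': '203.0.113.10', 'remote6': '2001:db8::10'}
```

Each extracted address should resolve to one specific host; if either points at the load balancer, the client will be handed the wrong connection target.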
Yes, the remote field after authentication is always serverA, as expected.
However, in the OpenVPN logs we see 50% of connections going to serverA and 50% to serverB.
Obviously, those to serverB fail:
2025-11-06 09:42:56 UDPv4 link remote: [AF_INET]xx.yy.zz.tt:15106
2025-11-06 09:42:56 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
2025-11-06 09:42:56 VERIFY OK: depth=1, O=68de9ed3216dfec506a72924, CN=68de9ed3216dfec506a72925
2025-11-06 09:42:56 NOTE: --mute triggered...
2025-11-06 09:42:56 6 variation(s) on previous 3 message(s) suppressed by --mute
2025-11-06 09:42:56 [68de9ed4216dfec506a7293b] Peer Connection Initiated with [AF_INET]xx.yy.zz.tt:15106
2025-11-06 09:42:58 AUTH: Received control message: AUTH_FAILED
2025-11-06 09:42:58 SIGTERM[soft,auth-failure] received, process exiting