Client connects, but cannot be reached by the server

Hi, the issue we are having is that clients (profiles) are connected to the server but cannot be reached over the VPN.
The affected clients can talk to each other (ping/ssh works).
The clients connect fine to other pritunl servers on the same host and can be reached over those VPNs.

We first noticed the issue about a week ago on one of our test systems, and it is not recoverable by purging and reinstalling pritunl-client and its configuration.
A second client failed when we updated its client version.
A third failed when we restarted the client.
We have 100+ other clients connected to the affected server, and it is worrying that the same might happen to them as well.

The clients are running pritunl-client versions 1.3.3343.50-0ubuntu1~jammy and 1.3.3373.6-0ubuntu1~jammy. We also tested with pritunl-client-electron v1.3.3373.6.

The host is running pritunl version v1.30.3343.47.

Client log:

2023-01-17 11:36:32 [{client_id}] Peer Connection Initiated with [AF_INET]{server_ip}:2001
2023-01-17 11:36:43 Data Channel: using negotiated cipher 'AES-128-GCM'
2023-01-17 11:36:43 Outgoing Data Channel: Cipher 'AES-128-GCM' initialized with 128 bit key
2023-01-17 11:36:43 Incoming Data Channel: Cipher 'AES-128-GCM' initialized with 128 bit key
2023-01-17 11:36:43 TUN/TAP device tun3 opened
2023-01-17 11:36:43 net_iface_mtu_set: mtu 1500 for tun3
2023-01-17 11:36:43 net_iface_up: set tun3 up
2023-01-17 11:36:43 net_addr_v4_add: 10.100.250.1/24 dev tun3
2023-01-17 11:36:43 /tmp/pritunl/bfad83e3986bcc9c-up.sh tun3 1500 1553 10.100.250.1 255.255.255.0 init
<14>Jan 17 11:36:43 bfad83e3986bcc9c-up.sh: Link 'tun3' coming up
<14>Jan 17 11:36:43 bfad83e3986bcc9c-up.sh: Adding IPv4 DNS Server 8.8.8.8
<14>Jan 17 11:36:43 bfad83e3986bcc9c-up.sh: SetLinkDNS(45 1 2 4 8 8 8 8)
2023-01-17 11:36:43 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
2023-01-17 11:36:43 Initialization Sequence Completed

Server log:

[] Tue Jan 17 10:36:32 2023 {client_ip}:43906 [{client_id}] Peer Connection Initiated with [AF_INET6]::ffff:{client_ip}:43906
[] 2023-01-17 10:36:34 COM> SUCCESS: client-kill command succeeded
[] 2023-01-17 10:36:34 User disconnected user_id={client_id}
[] 2023-01-17 10:36:34 COM> SUCCESS: client-auth command succeeded
[] Tue Jan 17 10:36:38 2023 MULTI_sva: pool returned IPv4={ip}, IPv6=(Not enabled)
[] 2023-01-17 10:36:38 User connected user_id={client_id}
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_VER=2.5.5
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_PLAT=linux
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_PROTO=6
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_NCP=2
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_CIPHERS=AES-256-GCM:AES-128-GCM:AES-128-CBC
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_LZ4=1
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_LZ4v2=1
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_LZO=1
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_COMP_STUB=1
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_COMP_STUBv2=1
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_TCPNL=1
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_HWADDR={mac}
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: IV_SSL=OpenSSL_3.0.2_15_Mar_2022
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: UV_ID={id}
[] Tue Jan 17 10:38:18 2023 {pub_ip}:23229 peer info: UV_NAME=autumn-waters-9754
[] Tue Jan 17 10:38:19 2023 {pub_ip}:23229 [{client_id}] Peer Connection Initiated with [AF_INET6]::ffff:{pub_ip}:23229
[] 2023-01-17 10:38:20 COM> SUCCESS: client-kill command succeeded
[] 2023-01-17 10:38:20 User disconnected user_id={client_id}
[] 2023-01-17 10:38:20 COM> SUCCESS: client-auth command succeeded
[] Tue Jan 17 10:38:21 2023 MULTI_sva: pool returned IPv4={ip}, IPv6=(Not enabled)
[] 2023-01-17 10:38:21 User connected user_id={client_id}

Server configuration


  • The only recent change on the server was switching the allowed devices setting from "any" to "desktop".
    As this requires a server restart to take effect, we don't think it is causing the issue.

Can you please advise what we can do to troubleshoot further?

It's likely an MTU issue; the settings show a modified MTU. There is information in the client debugging documentation on testing for MTU issues. If the other servers have a different MTU, or a new server with a different MTU works, that would indicate the server's MTU size is causing the issue.
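For what it's worth, a quick way to probe for MTU problems from an affected client is to ping a host across the tunnel with fragmentation disabled and step the payload size down until replies come back (a sketch; the target host and the 1400-byte starting size are placeholders, not values from this thread):

# Send pings with the don't-fragment flag set (-M do) and a large payload;
# lower -s until replies arrive. The largest working payload plus 28 bytes of
# IP/ICMP headers gives the effective path MTU over the tunnel.
ping -M do -s 1400 -c 4 <host_across_the_vpn>

If the resulting value is smaller than the tunnel MTU shown in the client log, the server MTU setting is a likely culprit.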

Hi Zach,

Thanks for the quick response!

The issue was caused by our manipulation of the VPN IPs assigned to the clients.
For the affected clients a /24 subnet was defined instead of a /16.

We are experiencing the same problem. We have a network with the block 10.105.0.0/16, the server at IP 10.105.60.175, a Pritunl link at 10.105.62.154, and a client at IP 10.105.56.21. The client can ping the server, but not the link. The server can ping the client, but not the link. The link cannot ping either the server or the client. No other machines in the network (except VPN clients) can access the VPN clients. Checking the network interfaces of the link, it does not have a tunnel interface. Could this be a problem?
The subnet for all devices (pritunl server, link and clients) is 10.105.56.0/21.

Hi Jason, this sounds like a routing issue. You should add a static route on your vnet for the network you want to access (10.105.56.0/21), with the pritunl link (10.105.62.154) as the next hop.

Or, could it be that you already have a route in place that forwards the traffic to a different host?
Are the pritunl server and link on the same vnet and nsg? They should be able to talk to each other directly.
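If this is on AWS, the equivalent static route could be added along these lines (a sketch; the route table ID and the link instance's network interface ID are placeholders, and the destination CIDR is the one suggested above):

# Point the 10.105.56.0/21 network at the pritunl link's network interface.
# rtb-xxxxxxxx and eni-xxxxxxxx are placeholders for your own IDs.
aws ec2 create-route \
    --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 10.105.56.0/21 \
    --network-interface-id eni-xxxxxxxx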

These are the routes set on my machine (VPN client) and the network interface:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
8.8.8.8         10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
10.10.0.0       10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.22.235.0     10.105.56.1     255.255.255.0   UG    0      0        0 tun0
10.22.235.17    10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
10.50.0.0       10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.100.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.102.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.105.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.105.0.2      10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
10.105.56.0     0.0.0.0         255.255.248.0   U     0      0        0 tun0
10.108.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
192.168.0.0     10.105.56.1     255.255.252.0   UG    0      0        0 tun0
192.168.0.20    10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.105.56.21  netmask 255.255.248.0  destination 10.105.56.21
        inet6 fe80::5bb8:327a:a355:c45a  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 1650  bytes 311394 (311.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1648  bytes 157076 (157.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Network interface and routes on the VPN server:

tun1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1372
        inet 10.105.56.1  netmask 255.255.248.0  destination 10.105.56.1
        inet6 fe80::fc86:c303:406d:2159  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 2805892  bytes 193522223 (184.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5680906  bytes 5670681666 (5.2 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.105.56.1     0.0.0.0         UG    100    0        0 ens5
10.105.56.0     0.0.0.0         255.255.248.0   U     0      0        0 tun1
10.105.56.0     0.0.0.0         255.255.248.0   U     100    0        0 ens5

Network interface and routes on the VPN link:

ens5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 10.105.62.154  netmask 255.255.248.0  broadcast 10.105.63.255
        inet6 fe80::108a:dff:feef:cbd7  prefixlen 64  scopeid 0x20<link>
        ether 12:8a:0d:ef:cb:d7  txqueuelen 1000  (Ethernet)
        RX packets 118379489  bytes 83306481028 (77.5 GiB)
        RX errors 0  dropped 1163  overruns 0  frame 0
        TX packets 116495225  bytes 77134515354 (71.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[ec2-user@ip-10-105-62-154 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.105.56.1     0.0.0.0         UG    100    0        0 ens5
10.105.56.0     0.0.0.0         255.255.248.0   U     0      0        0 ens5
10.105.56.0     0.0.0.0         255.255.248.0   U     100    0        0 ens5

The strange thing is the following route on the client, which points the subnet block in question (10.105.56.0/21) at gateway 0.0.0.0; yet even configured this way, VPN clients can reach other clients on the VPN.

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.105.56.0     0.0.0.0         255.255.248.0   U     0      0        0 tun0

The security group of the VPN server and link allows inbound access from the entire 10.0.0.0/8 block (which covers all of our networks, including peered networks).

AWS routes

Pritunl routes

The local setup shouldn’t need any change.
The routes on pritunl look ok.

On our setup (on Azure) we don’t need a pritunl link to access the cloud network from our vpn clients. This is done with just the route on the pritunl server.

We also had the same issue, where the pritunl server and pritunl link could not talk to each other, but this was due to route misconfiguration.

You should focus on the traffic between the pritunl server and pritunl link.
If you remove route 10.105.0.0/16, can the two hosts talk to each other?
The pritunl server should have IPv4 forwarding enabled.

I cannot delete the 10.105.0.0/16 route because it's the local (root) route managed by AWS.

IPv4 forwarding is enabled on the pritunl server:

[ec2-user@ip-10-105-60-175 ~]$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

You should try to troubleshoot the issue outside the scope of pritunl.
The pritunl server and pritunl link hosts should be able to talk to each other.
It looks like an issue related to AWS.

On the pritunl server, when pinging the pritunl link, the gateway used is 10.105.56.1 (the subnet gateway managed by AWS). In tcpdump, the requests go out but no responses come back:

[ec2-user@ip-10-105-60-175 ~]$ sudo tcpdump -i tun0 host 10.105.62.154 -n -c 4
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
17:17:42.653184 IP 10.105.56.1 > 10.105.62.154: ICMP echo request, id 12, seq 1, length 64
17:17:43.713713 IP 10.105.56.1 > 10.105.62.154: ICMP echo request, id 12, seq 2, length 64
17:17:44.737691 IP 10.105.56.1 > 10.105.62.154: ICMP echo request, id 12, seq 3, length 64
17:17:45.761683 IP 10.105.56.1 > 10.105.62.154: ICMP echo request, id 12, seq 4, length 64
4 packets captured
4 packets received by filter
0 packets dropped by kernel
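A complementary capture on the link side would help narrow this down (a sketch; the interface name is taken from the link's ifconfig output above): if the echo requests never show up there, the problem is on the outbound path, and if they arrive but no replies are captured, the problem is on the link host or the return path.

# On the pritunl link host: watch for any ICMP arriving on ens5 while the
# server is pinging. -n disables name resolution.
sudo tcpdump -i ens5 -n icmp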

On the pritunl link machine, pinging 10.105.56.1 also succeeds:

[ec2-user@ip-10-105-62-154 ~]$ ping 10.105.56.1 -c 4
PING 10.105.56.1 (10.105.56.1) 56(84) bytes of data.
64 bytes from 10.105.56.1: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 10.105.56.1: icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from 10.105.56.1: icmp_seq=3 ttl=64 time=0.077 ms
64 bytes from 10.105.56.1: icmp_seq=4 ttl=64 time=0.067 ms

--- 10.105.56.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3091ms
rtt min/avg/max/mdev = 0.053/0.068/0.077/0.009 ms

Below are the firewall rules on the pritunl link machine. All traffic is allowed:

[ec2-user@ip-10-105-62-154 ~]$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 309K packets, 147M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 276K packets, 36M bytes)
 pkts bytes target     prot opt in     out     source               destination  

I ran pritunl-link firewall-off on the link machine.

In the subnet flow logs, the traffic from server to link and from link to server is accepted. Could it be a firewall configuration issue on the machines?

version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status
2 901189122820 eni-011xxxxxxx0fa8 10.105.60.175 10.105.62.154 41887 53 17 1 61 1727112597 1727112628 ACCEPT OK
2 901189122820 eni-011xxxxxxx0fa8 10.105.60.175 10.105.62.154 40965 53 17 1 64 1727112597 1727112628 ACCEPT OK
2 901189122820 eni-011xxxxxxx0fa8 10.105.60.175 10.105.62.154 56405 53 17 1 73 1727112597 1727112628 ACCEPT OK
2 901189122820 eni-011xxxxxxx0fa8 10.105.60.175 10.105.62.154 46057 53 17 1 65 1727112597 1727112628 ACCEPT OK
2 901189122820 eni-011xxxxcd20fa8 10.105.62.154 10.105.60.175 53 49523 17 1 102 1727112627 1727112658 ACCEPT OK
2 901189122820 eni-011xxxxcd20fa8 10.105.62.154 10.105.60.175 53 37531 17 1 79 1727112627 1727112658 ACCEPT OK
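One check not shown above (a sketch, offered as a possibility rather than something from this thread): the iptables listing earlier only covers the filter table, so the nat table on the VPN server is worth inspecting as well, since masquerading there would rewrite client addresses and break return traffic from the linked network.

# On the pritunl server: list NAT rules. A MASQUERADE or SNAT rule covering
# the VPN subnet would indicate NAT is being applied to client traffic.
sudo iptables -t nat -L -n -v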

I did some tests:

  • I created some machines using different distros in the pritunl (client, server and link) subnet, and I cannot reach them using their private IPs, only their public IPs.
  • I created a subnet from scratch (10.105.90.0/28) using the same route table as the previous subnet (10.105.56.0/21). I created a new machine on that subnet and I can reach it using its private IP!

The new machine can ping the pritunl link and the pritunl server using private IPs, but it cannot ping pritunl clients. Then I installed the pritunl client on it, and now I can ping pritunl clients using their private IPs.

I think this was possible because when I connected this new machine to the pritunl server, the tun0 network interface was created with these routes:

0.0.0.0         10.105.90.1     0.0.0.0         UG    100    0        0 ens5
8.8.8.8         10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
10.10.0.0       10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.22.235.0     10.105.56.1     255.255.255.0   UG    0      0        0 tun0
10.22.235.17    10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
10.50.0.0       10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.100.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.102.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.105.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
10.105.0.2      10.105.56.1     255.255.255.255 UGH   0      0        0 tun0
10.105.56.0     0.0.0.0         255.255.248.0   U     0      0        0 tun0
10.105.90.0     0.0.0.0         255.255.255.240 U     100    0        0 ens5
10.108.0.0      10.105.56.1     255.255.0.0     UG    0      0        0 tun0
192.168.0.0     10.105.56.1     255.255.252.0   UG    0      0        0 tun0
192.168.0.20    10.105.56.1     255.255.255.255 UGH   0      0        0 tun0

If you are trying to have the AWS servers reach the VPN clients, NAT routes can't be used and the instances' source/dest checking needs to be disabled. Then AWS route advertisement needs to be configured.

When mixing pritunl-link with a VPN server, using NAT on the VPN server will prevent the linked networks from reaching the VPN clients.
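For reference, a sketch of disabling source/dest checking with the AWS CLI (the instance ID is a placeholder; apply it to the relevant pritunl instances):

# Allow the instance to send and receive traffic for addresses other than its
# own by disabling source/destination checking. i-0123456789abcdef0 is a placeholder.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check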
