It’s an error from the Pritunl web server. Check the logs in the top right of the admin web console. If it’s a database error, the logs will also go to /var/log/pritunl.log, which is where they are written when connectivity with the database is lost.
[xxxxx-xxxx-Node2][2024-09-18 12:55:27,636][ERROR] Exception on /key/ovpn/65b8f866482b71a658c0c0e1/65b8f868482b71a658c0c0f0/65c49451c5e1bd5e54f2308a [POST]
Traceback (most recent call last):
File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/auth/app.py", line 26, in _wrapped
return call(*args, **kwargs)
File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/handlers/key.py", line 1773, in key_ovpn_post
remote_addr = str(ipaddress.IPv4Address(remote_addr))
File "/usr/lib/pritunl/usr/lib/python3.9/ipaddress.py", line 1304, in __init__
self._ip = self._ip_int_from_string(addr_str)
File "/usr/lib/pritunl/usr/lib/python3.9/ipaddress.py", line 1191, in _ip_int_from_string
raise AddressValueError("Expected 4 octets in %r" % ip_str)
ipaddress.AddressValueError: Expected 4 octets in '2a03'
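For reference, the exception in the traceback is the standard library rejecting a string that isn’t a dotted-quad IPv4 address; '2a03' is the first group of the reported IPv6 address, which suggests the value was truncated somewhere before parsing. A minimal sketch that reproduces the same error:

import ipaddress

# IPv4Address only accepts dotted-quad strings, so an IPv6 value (or a
# fragment of one) raises AddressValueError with this exact message.
try:
    ipaddress.IPv4Address('2a03')
except ipaddress.AddressValueError as exc:
    print(exc)  # Expected 4 octets in '2a03'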
@zach
It seems to work when we disable IPv6 on the network card (Network settings); then we are able to connect without any trouble.
We have two addresses on the Mac, an IPv4 and an IPv6:
10.100.4.54
2a03:dc80:0:f383::1a0d
Disabling the IPv6 address seems to get Pritunl to work, and we no longer receive the error posted above.
So maybe either the client or the server needs to be able to handle IPv6?
We are mostly running on IPv6 in our environment, so disabling it is only a temporary workaround.
It also looks like it fails to look up the client’s public IPv4. Could that be related to “Force DNS configuration”? The DNS servers are replaced before the VPN connection is up and running on the client, so the lookup could fail while the connection is being started.
What is the lookup used for? Is it only used when “Dynamic Client Firewall” is enabled?
You do have app6.pritunl.com; why is that not used as well for dual-stack and IPv6-only clients?
[2024-09-30 12:28:53][INFO] ▶ profile: Failed to get public IPv4 address
utils: Request put error
Get "https://app4.pritunl.com/ip": dial tcp: lookup app4.pritunl.com: no such host
ORIGINAL STACK TRACE:
github.com/pritunl/pritunl-client-electron/service/utils.GetPublicAddress4
/Users/apple/go/src/github.com/pritunl/pritunl-client-electron/service/utils/request.go:206 +0x10060b1e4
github.com/pritunl/pritunl-client-electron/service/profile.(*Profile).reqOvpn
/Users/apple/go/src/github.com/pritunl/pritunl-client-electron/service/profile/profile.go:2115 +0x1006d9e1b
github.com/pritunl/pritunl-client-electron/service/profile.(*Profile).openOvpn
/Users/apple/go/src/github.com/pritunl/pritunl-client-electron/service/profile/profile.go:1975 +0x1006d91f7
github.com/pritunl/pritunl-client-electron/service/profile.(*Profile).startOvpn
/Users/apple/go/src/github.com/pritunl/pritunl-client-electron/service/profile/profile.go:1381 +0x1006d600b
github.com/pritunl/pritunl-client-electron/service/profile.(*Profile).Start
/Users/apple/go/src/github.com/pritunl/pritunl-client-electron/service/profile/profile.go:1364 +0x1006d5e37
github.com/pritunl/pritunl-client-electron/service/profile.SyncSystemProfiles.func1
/Users/apple/go/src/github.com/pritunl/pritunl-client-electron/service/profile/utils.go:413 +0x1006e7493
runtime.goexit
/opt/homebrew/Cellar/go/1.22.2/libexec/src/runtime/asm_arm64.s:1222 +0x10019a303
The client only queries the public IP if the dynamic firewall is enabled. It also queries the IPv6 address; it just doesn’t log an error for that one. The error for the IPv4 lookup is only logged, it won’t prevent connecting.
For the dynamic firewall the server will take the IP address given by the client in the query and also use the IP address the web request was sent from. Unless a load balancer is incorrectly configured, the connection should still work even if the client-side lookup fails.
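In rough terms, the behavior described above amounts to something like the following. This is only an illustrative sketch with made-up names, not the actual Pritunl implementation:

def dynamic_firewall_addresses(client_reported_addr, request_remote_addr):
    # Allow both the address the client reported from its lookup and
    # the source address of the web request itself. If the client-side
    # lookup failed, the request source address alone is still enough.
    allowed = {request_remote_addr}
    if client_reported_addr:
        allowed.add(client_reported_addr)
    return allowed

# e.g. a failed IPv4 lookup leaves only the request source address:
print(dynamic_firewall_addresses(None, '2a03:dc80:0:f383::1a0d'))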
The likely cause of this issue is that your load balancer is providing an IPv6 address in a format that causes the check in pritunl/handlers/key.py to fail. The if ':' in remote_addr: check should result in the address being parsed with remote_addr = str(ipaddress.IPv6Address(remote_addr)).
Edit /usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/handlers/key.py and add the log message below before the IP address parsing. Both the WireGuard and OpenVPN handlers have this code, so when using search either replace both or verify the correct one is replaced. Then run sudo systemctl restart pritunl and check the logs to see what is being received.
logger.info('Remote address check', 'handlers',
    remote_addr=remote_addr,
)
if ':' in remote_addr:
    remote_addr = str(ipaddress.IPv6Address(remote_addr))
else:
    remote_addr = str(ipaddress.IPv4Address(remote_addr))
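If the logging shows the load balancer is forwarding something other than a bare address, a defensive variant would normalize the value before parsing. This is only a sketch of the idea, assuming the mangled values look like a host:port pair or a bracketed IPv6 literal; it is not the Pritunl code:

import ipaddress

def normalize_remote_addr(remote_addr):
    # Hypothetical helper: strip brackets and port suffixes that a
    # proxy or load balancer may append, then let ip_address() choose
    # the address family instead of guessing from ':' alone.
    addr = remote_addr.strip()
    if addr.startswith('['):              # e.g. '[2a03:dc80::1a0d]:443'
        addr = addr[1:addr.index(']')]
    elif addr.count(':') == 1:            # e.g. '10.100.4.54:443'
        addr = addr.split(':', 1)[0]
    return str(ipaddress.ip_address(addr))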
It is also enabled for device authentication; I think that was done for compatibility reasons, because device authentication was built on the dynamic firewall system and uses the same web handlers. I will test removing the lookup, but the way it is now it won’t stop a connection if it fails, so that error can be ignored.
The issue in your logs is different from the errors in the original post. The profile: Single sign-on timeout error occurs after 120 seconds if the single sign-on request isn’t completed in the browser. The client sends an authentication request to a host at /key/ovpn/<org_id>/<user_id>/<server_id>, which responds with a challenge that the client completes in the browser. The client then sends a request to the host at /key/ovpn_wait/<org_id>/<user_id>/<server_id> and waits for the challenge to complete.

It’s possible there is an issue with the host-to-host messaging system. This is a known issue with some versions of MongoDB 7; it should eventually show errors, but it can occur with nothing logged. This would prevent the host from receiving notification of the completed challenge, and the request will time out. Try renaming something in the web console to trigger an event and verify it updates without needing to refresh the page.

Also set the Connection Single Sign-On Domain in the top right settings to the load balancer domain. The public address of each host should be the IP address of that host or a DNS record for that IP address. Load balancers should only be set in the sync address in the advanced host settings. Network load balancers on the VPN ports cannot be used.
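To illustrate the flow described above: the host and IDs here are placeholders, and the request method on the wait endpoint is an assumption; only the paths and the POST to /key/ovpn are taken from the logs and the description.

import requests

BASE = 'https://vpn.example.com'  # hypothetical host address
ORG, USER, SRV = 'org_id', 'user_id', 'server_id'  # placeholder IDs

# Step 1: the client posts an authentication request and receives a
# challenge that must be completed in the browser.
resp = requests.post(f'{BASE}/key/ovpn/{ORG}/{USER}/{SRV}')

# Step 2: the client waits on the wait endpoint for the challenge to
# complete. If host-to-host messaging is broken, the completion event
# never arrives here and the 120-second timeout fires.
wait = requests.get(
    f'{BASE}/key/ovpn_wait/{ORG}/{USER}/{SRV}',
    timeout=120,
)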
Reducing the replica count to lower than the host count with a load balancer will result in attempting to connect to an offline host. The client will try the next host if that occurs, but if the host address is a load balancer address it may fail to connect even on additional attempts. This can be avoided by setting the host public address to the public IP of the host or a DNS record for that IP.
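As a sketch of why unique per-host addresses matter here (illustrative only, with the connection attempt passed in as a stub):

def connect_with_fallback(host_addresses, try_connect):
    # Try each host in turn, moving on when one is offline. If every
    # entry resolves to the same load balancer, the retries can keep
    # landing on the same offline backend and never succeed; unique
    # per-host addresses make each retry meaningful.
    for address in host_addresses:
        try:
            return try_connect(address)
        except ConnectionError:
            continue
    raise ConnectionError('all hosts failed')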
I did fix the issue with the ipaddress.AddressValueError exception in the codebase; it will be in the next release. It was caused by the IP address parsing code and only affects the builds in the unstable repository. Downgrade the host to v1.32.3805.95 to fix that issue.