Azure SSO not forwarding to Enterprise Application

Hi there.

New Pritunl install (v1.32.4181.41 b21aff) on Ubuntu 24.04 in AWS. All working fine with local auth and users.

Added an enterprise key, and I now see the Azure SSO fields - great!

Added the Secret (the Value, not the ID) and the Directory and Application IDs - all fine.

Line up a user for SSO login, add the groups in Azure AD, and... stuck. The login prompt redirects to a webpage whose URL is:

https://<vpn.mydomain.com>/key/request?state=

I don’t see any passthrough to the auth.pritunl.com URL, nor the MS SSO login page - it just sits there until it times out and then 404s.

I wonder:

  1. Is it my server not being able to communicate with the auth.pritunl.com host for it to pass the request on to MS? Can I test this somehow?
  2. Is that forwarder (auth.pritunl.com) a dependency I’m introducing? If it were DDoSed, or if Pritunl went offline or out of business, would I be unable to authenticate my users?
  3. Is it just that someone at Pritunl needs to tick a box on a new subscription to allow SSO passthrough on the account, since it was only created today?

Help?

Logs don’t seem to show any activity. I can see the server restarting in the logs, but since the authentication request never hits AD, I don’t see failed logins on the AD side, nor the authentication failure in the Pritunl logs.

Check the logs in the top right for errors.

As mentioned, there is nothing written in the logs - I don’t even see an “Auth request initialisation”.

Literally no new lines are written. Other methods (local auth, etc.) work fine. It’s just Azure SSO that is silent.

A looong way back in the setup, I did manage to find a 500 error:

[evening-waves-5326][2025-03-31 00:49:59,552][ERROR] Azure auth check request error
  user_id     = "<ID STRING>"
  user_name   = "CBailey"
  status_code = 500
  content     = "b''"
Traceback (most recent call last):
  File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 937, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/cheroot/workers/threadpool.py", line 120, in run
    keep_conn_open = conn.communicate()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/cheroot/server.py", line 1287, in communicate
    req.respond()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/cheroot/server.py", line 1077, in respond
    self.server.gateway(self).respond()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/cheroot/wsgi.py", line 136, in respond
    response = self.req.server.wsgi_app(self.env, self.start_response)
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 2213, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/auth/app.py", line 26, in _wrapped
    return call(*args, **kwargs)
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/handlers/key.py", line 1235, in key_wg_post
    clients.connect_wg(
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/clients/clients.py", line 1560, in connect_wg
    auth.authenticate()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/authorizer/authorizer.py", line 151, in authenticate
    self._check_call(self._check_sso)
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/authorizer/authorizer.py", line 189, in _check_call
    func()
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/authorizer/authorizer.py", line 1178, in _check_sso
    if not self.user.sso_auth_check(
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/user/user.py", line 443, in sso_auth_check
    logger.error('Azure auth check request error', 'user',
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/logger/__init__.py", line 55, in error
    kwargs['traceback'] = traceback.format_stack()

but even that no longer appears when a login attempt happens now.

Similarly, the “Sign in with Azure” button also doesn’t do anything.

It just spins until it times out as well. Chrome shows:

This site can’t be reached

The webpage at https://vpn.mydomain.com/sso/request might be temporarily down or it may have moved permanently to a new web address.

Hmm, I managed to see this error on the last reboot of the SSO server:

[evening-waves-5326][2025-04-02 16:20:34,391][INFO] Starting vpn server
  server_id        = "ServerID String"
  instance_id      = "instanceID String"
  instances        = []
  instances_count  = 0
  route_count      = 1
  network          = "192.168.222.0/24"
  network6         = "fd00:c0a8:de00::/64"
  dynamic_firewall = false
  geo_sort         = false
  force_connect    = false
  sso_auth         = true
  route_dns        = false
  device_auth      = false
  host_id          = "HostID String"
  host_address     = "internal IPv4"
  host_address6    = "public IPv6 Addr"
  host_networks    = ["Host ipv4 Subnet/20"]
  cur_timestamp    = "2025-04-02 05:20:34.386549"
  libipt           = false

[evening-waves-5326][2025-04-02 16:20:38,438][ERROR] DNS mapping service stopped unexpectedly
Traceback (most recent call last):
  File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 937, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/helpers.py", line 42, in _wrapped
    for _ in call(*args, **kwargs):
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/setup/dns.py", line 43, in _dns_thread
    logger.error(
  File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/logger/__init__.py", line 55, in error
    kwargs['traceback'] = traceback.format_stack()

You can’t use client DNS mapping if systemd-resolved is running with the stub listener. Run the commands below to turn it off and restart. This may have interfered with DNS resolution, causing the long timeout.

sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/no-stub-listener.conf > /dev/null <<EOF
[Resolve]
DNSStubListener=no
EOF
sudo systemctl restart systemd-resolved
sudo systemctl restart pritunl

Thanks for the suggestion!

Applied, but no success.

I am wondering whether something in that handshake is starting the tunnel on the user’s PC, but since it isn’t authenticated yet, it can’t reach out, so the handshake gets stuck before it ever hits the server/Azure endpoint. That would also explain why nothing is in the logs (for any auth request).

It doesn’t explain the WebUI button getting stuck, though.

ping auth.pritunl.com resolves to an IPv6 address:

PING auth.pritunl.com (64:ff9b::81d5:c3b0)

but it seems there is no response there either. It could just be ICMP blocked on that endpoint, though.

This also feels related to the thread Auth.pritunl.com/callback/azure 500 error.

Is there something in the latest build that is causing an issue?

Both app.pritunl.com and auth.pritunl.com are IPv4 only on the IP 129.213.195.176. There is also app4.pritunl.com on 129.158.247.162 and app6.pritunl.com on 2603:c020:4009:2d02:b4ca:c6c4:b3df:7b37. These two are used only for getting the public IP of the server, or also by the client when geo sort is used.

The address 64:ff9b::81d5:c3b0 is a NAT64-mapped representation of 129.213.195.176. You are on a server with IPv6-only networking using IPv6-to-IPv4 translation. That may be causing the problem. You should be able to run the commands below and get a response from the first three; the last one tests IPv6, but it is not needed. You should also verify the correct public address is configured in the hosts tab, as the networking configuration may be breaking the automatic IP detection.

curl https://app.pritunl.com/ip
echo
curl https://auth.pritunl.com/ip
echo
curl https://app4.pritunl.com/ip
echo
curl https://app6.pritunl.com/ip
echo
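For reference, 64:ff9b::/96 is the well-known NAT64 prefix (RFC 6052): the translated address simply embeds the IPv4 address in the low 32 bits. A quick shell sketch to confirm the mapping seen above (the address values here are the ones from this thread):

```shell
# NAT64 embeds the IPv4 address in the low 32 bits of the well-known
# 64:ff9b::/96 prefix. Converting each octet to hex reproduces the
# synthesized address the resolver returned.
ip4=129.213.195.176
printf '64:ff9b::%02x%02x:%02x%02x\n' $(echo "$ip4" | tr '.' ' ')
# prints 64:ff9b::81d5:c3b0
```

If the AAAA answer for an IPv4-only host matches this pattern, the IPv6 result is DNS64 synthesis rather than a native IPv6 endpoint.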

Your issue is not related to the other thread; I have emailed them, and that is an issue with token permissions. Azure has always had significant issues, which has led to the need for all of the different API versions shown in the settings. But I’m not currently aware of any issues with Azure; there haven’t been any other emails reporting problems.

OK, I get a response on all four - my public IPv4 address on the first three, and the IPv6 on the last.

If I’m understanding correctly: the host is seeing the IPv6 version of auth.pritunl.com because it has IPv6, but for some reason the IPv6-to-IPv4 translation isn’t working, so the IPv6 path is the only one going out to the auth.pritunl.com endpoint, and it fails because that isn’t a native IPv6 service?

...and I think I’m up and running.

So, the two things I did were to turn OFF 6-to-4 (DNS64) in the AWS subnet I was using, and to apply the stub listener fix above.

Turning 6-to-4 back ON seems to break it again. Something a little screwy there; I will investigate.
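For anyone else hitting this: the setting in question is the subnet's DNS64 option in AWS. A hedged sketch of checking and disabling it with the AWS CLI; subnet-0abc123 is a placeholder, substitute your own subnet ID, and verify the flags against your CLI version first:

```shell
# Hypothetical sketch: inspect and disable DNS64 on the subnet so the
# resolver stops synthesizing 64:ff9b:: addresses for IPv4-only hosts
# such as auth.pritunl.com. subnet-0abc123 is a placeholder ID.
aws ec2 describe-subnets --subnet-ids subnet-0abc123 \
  --query 'Subnets[0].EnableDns64'
aws ec2 modify-subnet-attribute --subnet-id subnet-0abc123 --no-enable-dns64
```

This only changes the subnet attribute; existing resolver caches on the host may need to expire (or the host restarted) before lookups stop returning NAT64 addresses.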

Anyway, Azure login on the web URL allows me to log in, and the client allows me to sign in.

It does hang after I see that, so perhaps something is still stuck, but now I’m likely to have logs!

OK, it hits the VPN (Pritunl), Azure lets people log in to the web UI, and then the account is created correctly.

The profile imported into the Pritunl client launches and shows the successful authentication... but that never gets back to the VPN client. It sits waiting for the verified response to progress the connection.

Azure logs show the sign-in was successful, so the problem is somewhere between the Pritunl server getting the green light (it shows the successful response) and the Pritunl client then progressing with the token.

The NAT64 address is an indication of an IPv6 only network. It isn’t something that should be used in most server environments. The server should have IPv4 networking.

When the client starts a single sign-on connection, it opens the authorization link in the browser, then polls the /ovpn_wait/ handler waiting for the authorization to complete. When this issue occurs, the server wasn’t notified that the authorization completed. This can be caused by the database tailing cursor issue, which can be fixed with the command sudo pritunl clear-message-cache. If dynamic firewall is enabled, those tokens are scoped to a single host, and some load balancer configurations will break this. These issues can also occur if the addresses are not configured correctly. Below are all the addresses and how to configure them.

Hosts Tab

  • Host Public Address: The public IPv4 address or domain of the Pritunl host. This should always be the public IP of the host for all configurations even when using a load balancer.
  • Host Public IPv6 Address: The public IPv6 address or domain of the Pritunl host. This should always be the public IP of the host for all configurations even when using a load balancer.
  • Host Sync Address: In the advanced host settings. The public address or domain from which the web server of the Pritunl host can be accessed. If a load balancer is configured, that address should be set here.

Top Right Settings

  • Connection Single Sign-On Domain: Only shown when using single sign-on connection authentication. The public address or domain that is used to validate single sign-on requests through the Pritunl web server for a new VPN connection. If a load balancer is configured, that address should be set here.
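The /ovpn_wait/ polling behaviour described above can be sketched roughly as follows. This is illustrative only, not Pritunl's actual client code: check_authorized, AUTH_FLAG, and the retry count are placeholders standing in for the real HTTP poll against the Pritunl web server.

```shell
# Illustrative sketch of the client-side wait loop: poll until the server
# reports the single sign-on authorization complete, or give up after a
# few tries. check_authorized stands in for the real /ovpn_wait/ request.
check_authorized() {
  # Placeholder check: in reality this would be an HTTP poll. The flag
  # file path AUTH_FLAG is purely for demonstration.
  [ -f "${AUTH_FLAG:-/tmp/pritunl_auth_done}" ]
}

wait_for_auth() {
  tries=0
  while [ "$tries" -lt 5 ]; do
    if check_authorized; then
      echo authorized
      return 0
    fi
    tries=$((tries + 1))
    sleep 1
  done
  echo timeout
  return 1
}
```

If the server never learns the authorization completed (stale tailing cursor, wrong sync address, or a load balancer routing the poll to a different host), a loop like this sits at the sleep until it times out, which matches the hang described above.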

OK, it seems that a full reboot after all those changes pushed all the routes through, and we’re online. Thanks!