How do I avoid routing internet traffic through Pritunl?

The t3.medium EC2 instances I am using to test are already being throttled at 100% CPU. I want to stop internet traffic from being tunneled through the Pritunl server, but I'm not sure I am doing this right. I set up the internal VPN following this documentation here, using Enable VPN Client DNS Mapping:

Once that was set up, I added a MASQUERADE rule for my /20 CIDR on my Pritunl hosts and removed the 0.0.0.0/0 route from the Pritunl settings so internet traffic isn't routed through OpenVPN. Nevertheless, with just one device connected, the t3.medium hits 100% CPU utilization! Am I doing something wrong? I have a vanilla OpenVPN server that barely hits 5% CPU with a client connected.
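For reference, here is roughly how I verified the NAT rule landed on the host (a sketch assuming iptables; 10.9.0.0/20 is a stand-in for my actual CIDR):

sudo iptables -t nat -L POSTROUTING -n -v | grep MASQUERADE
# expect a MASQUERADE rule covering the /20 (exact source/destination depends on your Pritunl route config)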

The AWS T-series are shared-CPU instances; they can be throttled by other instances using the same CPU. If you have an OpenVPN configuration to compare against, enable debugging output in the server settings, then stop and start the server: the configuration will be shown in the server output on startup. Compare that to the other server.
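One quick way to check for that kind of throttling is CPU steal time:

vmstat 1 5
# a consistently non-zero "st" column (last column) suggests hypervisor throttling or exhausted CPU credits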

Also check htop or btop to verify which process is using the CPU.
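For a quick non-interactive snapshot, something like this also works:

ps -eo pid,comm,%cpu --sort=-%cpu | head -n 10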

Thanks for the response @zach. Using btop or htop, I see the culprit is the pritunl-dns service:

This tells me that something goes out of whack when using the internal DNS.

I went ahead and restarted pritunl itself and it looks fine (for now). When I initially set Pritunl up, it was throwing errors like the following:

Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 917, in run
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     self._target(*self._args, **self._kwargs)
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/helpers.py", line 42, in _wra>
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     for _ in call(*args, **kwargs):
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/setup/dns.py", line 43, in _d>
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     logger.error(
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/logger/__init__.py", line 55,>
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     kwargs['traceback'] = traceback.format_stack()
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]: [server-1][2025-09-22 23:46:35,154][ERROR] DNS mapping service stopped unexpectedly
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]: Traceback (most recent call last):
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 937, in _bootstrap
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     self._bootstrap_inner()
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 980, in _bootstrap_inner
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     self.run()
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/threading.py", line 917, in run
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     self._target(*self._args, **self._kwargs)
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/helpers.py", line 42, in _wra>
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     for _ in call(*args, **kwargs):
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/setup/dns.py", line 43, in _d>
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:     logger.error(
Sep 22 23:46:48 ip-10-9-10-95 pritunl[4660]:   File "/usr/lib/pritunl/usr/lib/python3.9/site-packages/pritunl/logger/__init__.py", line 55,>

However, it has stopped and is working now. I’ll post updates in this thread.

So it’s come back, sitting at 99% CPU utilization. What are the troubleshooting steps to fix pritunl-dns?

Unsure if this is the issue, but when I connect with just my own client I see this error:

[server-1] 2025-09-23 18:45:30 ERROR User auth failed "User is not valid"
[server-1] 2025-09-23 18:45:30 us=428460 54.92.149.188:48899 VERIFY OK: depth=0, O=234234h23l4kj3k4j34k3jkjkj3k, CN=5jkjk2jkjk33jk3j43k4j3j43k4j
[server-1] 2025-09-23 18:45:30 us=428738 54.92.149.188:48899 peer info: IV_VER=2.4.7
[server-1] 2025-09-23 18:45:30 us=428748 54.92.149.188:48899 peer info: IV_PLAT=linux
[server-1] 2025-09-23 18:45:30 us=428752 54.92.149.188:48899 peer info: IV_PROTO=2
[server-1] 2025-09-23 18:45:30 us=428757 54.92.149.188:48899 peer info: IV_NCP=2
[server-1] 2025-09-23 18:45:30 us=428761 54.92.149.188:48899 peer info: IV_LZ4=1
[server-1] 2025-09-23 18:45:30 us=428765 54.92.149.188:48899 peer info: IV_LZ4v2=1
[server-1] 2025-09-23 18:45:30 us=428769 54.92.149.188:48899 peer info: IV_LZO=1
[server-1] 2025-09-23 18:45:30 us=428773 54.92.149.188:48899 peer info: IV_COMP_STUB=1
[server-1] 2025-09-23 18:45:30 us=428777 54.92.149.188:48899 peer info: IV_COMP_STUBv2=1
[server-1] 2025-09-23 18:45:30 us=428781 54.92.149.188:48899 peer info: IV_TCPNL=1
[server-1] 2025-09-23 18:45:30 us=428786 54.92.149.188:48899 peer info: IV_HWADDR=e4:5f:01:5f:d1:16
[server-1] 2025-09-23 18:45:30 us=428791 54.92.149.188:48899 peer info: IV_SSL=OpenSSL_1.1.1n__15_Mar_2022
[server-1] 2025-09-23 18:45:30 us=428795 54.92.149.188:48899 peer info: UV_ID=991610ab66534216a79f31ee9df995e7
[server-1] 2025-09-23 18:45:30 us=428800 54.92.149.188:48899 peer info: UV_NAME=snowy-plains-7735
[server-1] 2025-09-23 18:45:30 us=428851 54.92.149.188:48899 TLS: Username/Password authentication deferred for username '' 
[server-1] 2025-09-23 18:45:30 us=428862 54.92.149.188:48899 TLS: move_session: dest=TM_ACTIVE src=TM_INITIAL reinit_src=1
[server-1] 2025-09-23 18:45:30 us=428896 54.92.149.188:48899 NOTE: --mute triggered...
[server-1] 2025-09-23 18:45:30 us=430485 1 variation(s) on previous 8 message(s) suppressed by --mute
[server-1] 2025-09-23 18:45:30 us=430506 MANAGEMENT: CMD 'client-deny 18 1 "User is not valid"'
[server-1] 2025-09-23 18:45:30 us=430518 MULTI: connection rejected: User is not valid, CLI:[NULL]
[server-1] 2025-09-23 18:45:30 us=465745 54.92.149.188:48899 Delayed exit in 5 seconds
[server-1] 2025-09-23 18:45:30 us=465776 54.92.149.188:48899 SENT CONTROL [UNDEF]: 'AUTH_FAILED' (status=1)
[server-1] 2025-09-23 18:45:30 us=465783 54.92.149.188:48899 SENT CONTROL [68c831de1a4ef0a79b96890c]: 'AUTH_FAILED' (status=1)
[server-1] 2025-09-23 18:45:30 us=465812 54.92.149.188:48899 Control Channel: TLSv1.3, cipher TLSv1.3 TLS_AES_256_GCM_SHA384, peer certificate: 4096 bits RSA, signature: RSA-SHA256, peer temporary key: 253 bits X25519
[server-1] 2025-09-23 18:45:30 us=465831 54.92.149.188:48899 [68c831de1a4ef0a79b96890c] Peer Connection Initiated with [AF_INET6]::ffff:54.92.149.188:48899
[server-1] 2025-09-23 18:45:32 us=663751 read UDPv6 [ECONNREFUSED]: Connection refused (fd=5,code=111)
[server-1] 2025-09-23 18:45:35 us=861195 54.92.149.188:48899 SIGTERM[soft,delayed-exit] received, client-instance exiting
[server-1] 2025-09-23 18:45:47 us=728918 Connection Attempt MULTI: multi_create_instance called
[server-1] 2025-09-23 18:45:47 us=728968 54.92.149.188:50728 Re-using SSL/TLS context
[server-1] 2025-09-23 18:45:47 us=729017 54.92.149.188:50728 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
[server-1] 2025-09-23 18:45:47 us=729026 54.92.149.188:50728 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
[server-1] 2025-09-23 18:45:47 us=729124 54.92.149.188:50728 Control Channel MTU parms [ mss_fix:0 max_frag:0 tun_mtu:1250 tun_max_mtu:0 headroom:126 payload:1600 tailroom:126 ET:0 ]
[server-1] 2025-09-23 18:45:47 us=729131 54.92.149.188:50728 Data Channel MTU parms [ mss_fix:0 max_frag:0 tun_mtu:1500 tun_max_mtu:1600 headroom:136 payload:1768 tailroom:562 ET:0 ]
[server-1] 2025-09-23 18:45:47 us=921622 54.92.149.188:50728 VERIFY OK: depth=1, O=68bca904cb4685e72be3d956, CN=68bca9047d8ea2df9979faca
[server-1] 2025-09-23 18:45:47 ERROR User auth failed "User is not valid"
[server-1] 2025-09-23 18:45:47 COM> SUCCESS: client-deny command succeeded
[server-1] 2025-09-23 18:45:47 us=921869 54.92.149.188:50728 VERIFY OK: depth=0, O=68bca904cb4685e72be3d956, CN=68c831de1a4ef0a79b96890c
[server-1] 2025-09-23 18:45:47 us=922462 54.92.149.188:50728 peer info: IV_VER=2.4.7
[server-1] 2025-09-23 18:45:47 us=922486 54.92.149.188:50728 peer info: IV_PLAT=linux
[server-1] 2025-09-23 18:45:47 us=922493 54.92.149.188:50728 peer info: IV_PROTO=2
[server-1] 2025-09-23 18:45:47 us=922499 54.92.149.188:50728 peer info: IV_NCP=2
[server-1] 2025-09-23 18:45:47 us=922504 54.92.149.188:50728 peer info: IV_LZ4=1
[server-1] 2025-09-23 18:45:47 us=922510 54.92.149.188:50728 peer info: IV_LZ4v2=1
[server-1] 2025-09-23 18:45:47 us=922515 54.92.149.188:50728 peer info: IV_LZO=1
[server-1] 2025-09-23 18:45:47 us=922521 54.92.149.188:50728 peer info: IV_COMP_STUB=1
[server-1] 2025-09-23 18:45:47 us=922539 54.92.149.188:50728 peer info: IV_COMP_STUBv2=1
[server-1] 2025-09-23 18:45:47 us=922545 54.92.149.188:50728 peer info: IV_TCPNL=1
[server-1] 2025-09-23 18:45:47 us=922552 54.92.149.188:50728 peer info: IV_HWADDR=e4:5f:01:5f:d1:16
[server-1] 2025-09-23 18:45:47 us=922558 54.92.149.188:50728 peer info: IV_SSL=OpenSSL_1.1.1n__15_Mar_2022
[server-1] 2025-09-23 18:45:47 us=922565 54.92.149.188:50728 peer info: UV_ID=234kj2h34kjh234jk23h4kj23h42k3j3jj3h3h3
[server-1] 2025-09-23 18:45:47 us=922571 54.92.149.188:50728 peer info: UV_NAME=snowy-plains-7735
[server-1] 2025-09-23 18:45:47 us=922930 54.92.149.188:50728 TLS: Username/Password authentication deferred for username '' 
[server-1] 2025-09-23 18:45:47 us=922960 54.92.149.188:50728 TLS: move_session: dest=TM_ACTIVE src=TM_INITIAL reinit_src=1
[server-1] 2025-09-23 18:45:47 us=923009 54.92.149.188:50728 NOTE: --mute triggered...
[server-1] 2025-09-23 18:45:47 us=925479 1 variation(s) on previous 8 message(s) suppressed by --mute
[server-1] 2025-09-23 18:45:47 us=925500 MANAGEMENT: CMD 'client-deny 19 1 "User is not valid"'
[server-1] 2025-09-23 18:45:47 us=925513 MULTI: connection rejected: User is not valid, CLI:[NULL]
[server-1] 2025-09-23 18:45:47 us=965518 54.92.149.188:50728 Delayed exit in 5 seconds
[server-1] 2025-09-23 18:45:47 us=965546 54.92.149.188:50728 SENT CONTROL [UNDEF]: 'AUTH_FAILED' (status=1)
[server-1] 2025-09-23 18:45:47 us=965553 54.92.149.188:50728 SENT CONTROL [68c831de1a4ef0a79b96890c]: 'AUTH_FAILED' (status=1)
[server-1] 2025-09-23 18:45:47 us=965583 54.92.149.188:50728 Control Channel: TLSv1.3, cipher TLSv1.3 TLS_AES_256_GCM_SHA384, peer certificate: 4096 bits RSA, signature: RSA-SHA256, peer temporary key: 253 bits X25519
[server-1] 2025-09-23 18:45:47 us=965601 54.92.149.188:50728 [234234234234234234234234234] Peer Connection Initiated with [AF_INET6]::ffff:54.92.149.188:50728
[server-1] 2025-09-23 18:45:49 us=205570 read UDPv6 [ECONNREFUSED]: Connection refused (fd=5,code=111)

I’ve seen this issue occur when a DNF auto-update starts in the background: the system runs out of memory, which causes the pritunl-dns process to get stuck at 100% CPU usage. Run sudo dmesg -Tk and look for OOM messages or other errors.
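For example, something like this narrows it down (the exact OOM message wording varies by kernel version):

sudo dmesg -Tk | grep -iE 'out of memory|oom-killer|killed process'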

The User is not valid error comes from trying to connect with a user that was deleted. That would be unrelated to the CPU usage issue; delete the profile and import a new one into the client.

Fixed the User is not valid error: I tore down my EC2 instances and brought up new ones.

As for the DNF auto updates, this is what I see in my logs:

ssm-user@ip-10-9-10-151:/var/snap/amazon-ssm-agent/11797$ sudo dmesg -Tk | grep -i memory
[Tue Sep 23 23:05:58 2025] DMI: Memory slots populated: 1/1
[Tue Sep 23 23:05:58 2025] ACPI: Reserving FACP table memory at [mem 0xbbf55000-0xbbf55113]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving DSDT table memory at [mem 0xbbf56000-0xbbf57159]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving FACS table memory at [mem 0xbbfd0000-0xbbfd003f]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving WAET table memory at [mem 0xbbf5b000-0xbbf5b027]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving SLIT table memory at [mem 0xbbf5a000-0xbbf5a06b]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving APIC table memory at [mem 0xbbf59000-0xbbf59075]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving SRAT table memory at [mem 0xbbf58000-0xbbf5809f]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving HPET table memory at [mem 0xbbf54000-0xbbf54037]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving SSDT table memory at [mem 0xbbf53000-0xbbf53758]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving SSDT table memory at [mem 0xbbf52000-0xbbf5207e]
[Tue Sep 23 23:05:58 2025] ACPI: Reserving BGRT table memory at [mem 0xbbf51000-0xbbf51037]
[Tue Sep 23 23:05:58 2025] Early memory node ranges
[Tue Sep 23 23:05:58 2025] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[Tue Sep 23 23:05:58 2025] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[Tue Sep 23 23:05:58 2025] PM: hibernation: Registered nosave memory: [mem 0xb9f31000-0xb9f4cfff]
[Tue Sep 23 23:05:58 2025] PM: hibernation: Registered nosave memory: [mem 0xbbcce000-0xbbfddfff]
[Tue Sep 23 23:05:58 2025] PM: hibernation: Registered nosave memory: [mem 0xbff7c000-0xffffffff]
[Tue Sep 23 23:05:58 2025] Freeing SMP alternatives memory: 48K
[Tue Sep 23 23:05:58 2025] Memory: 3897492K/4112428K available (21443K kernel code, 4583K rwdata, 15116K rodata, 5248K init, 4312K bss, 205960K reserved, 0K cma-reserved)
[Tue Sep 23 23:05:58 2025] x86/mm: Memory block size: 128MB
[Tue Sep 23 23:05:58 2025] Freeing initrd memory: 12964K
[Tue Sep 23 23:05:59 2025] Freeing unused decrypted memory: 2028K
[Tue Sep 23 23:05:59 2025] Freeing unused kernel image (initmem) memory: 5248K
[Tue Sep 23 23:05:59 2025] Freeing unused kernel image (text/rodata gap) memory: 1084K
[Tue Sep 23 23:05:59 2025] Freeing unused kernel image (rodata/data gap) memory: 1268K

I’ll test using a bigger EC2 instance. It’s just weird how pritunl-dns eats up so much network bandwidth that it climbs toward 100% CPU utilization. I have free memory, so that shouldn’t be the issue.

I guess it’s to be expected that using pritunl-dns will increase CPU usage?

pritunl-dns won’t normally consume significant CPU. An out-of-memory error can cause system components to stop working, which then sets off an uncontrolled loop in the pritunl-dns process.
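If it gets into that state, restarting the main service should clear it; on a systemd install pritunl manages the pritunl-dns child process, so this restarts both:

sudo systemctl restart pritunl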

I resolved it. Doofus mistake on my side: I had entered the wrong IP. See: https://www.reddit.com/r/aws/comments/c2ysrm/using_rt53_for_internal_dns/

The documentation I linked previously can also be used.

I was using 10.100.0.1/32, which was tied to the server itself, causing it to jump to 100% CPU utilization. Now it’s at 1% CPU utilization.
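For anyone hitting the same thing: the DNS server set for VPN Client DNS Mapping needs to be a real upstream resolver reachable from the host, not the VPN server’s own virtual address, which presumably makes pritunl-dns forward queries back to itself. A quick sanity check against the upstream (10.9.0.2 is a hypothetical VPC resolver address, the VPC CIDR base plus two, and the hostname is made up; substitute your own):

dig @10.9.0.2 myhost.internal.example +short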