Hi everyone,
We’re using Pritunl Zero (v1.0.3565.94) as a reverse proxy in front of an n8n instance and many other apps.
We have noticed that **Pritunl Zero crashes when we perform large operations**, such as updating or reading very large lookup tables while using the n8n UI.
When this happens, the browser returns a 502 Bad Gateway and the pritunl-zero process is killed by the kernel OOM killer.
After investigating the logs, it seems the crash is triggered by the internal search/indexing subsystem (BulkDocuments) when it tries to index very large events generated by these operations.
What we found in the logs
Before the crash, the search component reports a 413 Request Entity Too Large when attempting a bulk insert:
```
[2025-12-01 20:39:58][ERRO] ▶ search: Bulk insert failed, moving to buffer ◆ response="<html><h1>413 Request Entity Too Large</h1>...</html>" ◆ status_code=413
search: Bulk insert failed
github.com/pritunl/pritunl-zero/search.(*Client).BulkDocuments
  /go/src/github.com/pritunl/pritunl-zero/search/document.go:155
github.com/pritunl/pritunl-zero/search.workerGroup.func1
  /go/src/github.com/pritunl/pritunl-zero/search/search.go:155
```
Immediately after that, the kernel OOM killer terminates the Pritunl Zero process:
```
pritunl-zero invoked oom-killer
Out of memory: Killed process 1494501 (pritunl-zero) total-vm:6141368kB, anon-rss:3568168kB
pritunl-zero.service: Main process exited, code=killed, status=9/KILL
```
Restarting the service brings it back normally:
```
systemctl restart pritunl-zero
systemctl status pritunl-zero
● pritunl-zero.service - Pritunl Zero Daemon
   Active: active (running)
```
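Until the root cause is addressed, a systemd drop-in can at least bring the daemon back automatically after an OOM kill. This is a stopgap sketch, not a fix, and assumes the shipped unit does not already set a restart policy (the `RestartSec` value is arbitrary):

```shell
# Stopgap: have systemd restart pritunl-zero automatically after the OOM
# killer terminates it, instead of leaving it down until a manual restart.
sudo systemctl edit pritunl-zero
# In the editor that opens, add the following drop-in and save:
#   [Service]
#   Restart=always
#   RestartSec=5
```

`systemctl edit` writes the override file and reloads the systemd configuration on save, so no separate `daemon-reload` is needed.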
Our understanding of what might be happening
From what we can see, the issue occurs only when n8n performs large operations, especially reading from or writing to sources such as databases.
It looks like Pritunl Zero generates internal audit/search events for each request passing through the Applications.
When the proxied HTTP traffic is large, these audit/log events become very large as well, and:
- Pritunl Zero builds a very large BulkDocuments batch
- The search backend responds 413 Request Entity Too Large
- Pritunl Zero tries to handle the failed batch (“moving to buffer”)
- Memory usage spikes
- The kernel kills the pritunl-zero process (OOM)
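To see how close the process gets before each kill, the `anon-rss` figure can be extracted from the kernel's OOM line in the journal. A minimal parsing sketch, using the sample line quoted above (on a live host, `journalctl -k | grep anon-rss` feeds the same parsing):

```shell
#!/bin/sh
# Extract the anon-rss value (in kB) from a kernel OOM-killer log line.
# Sample line copied verbatim from the journal output above.
line='Out of memory: Killed process 1494501 (pritunl-zero) total-vm:6141368kB, anon-rss:3568168kB'

# Pull the number between "anon-rss:" and "kB".
rss_kb=$(printf '%s\n' "$line" | sed -n 's/.*anon-rss:\([0-9]*\)kB.*/\1/p')

echo "anon-rss: ${rss_kb} kB (~$((rss_kb / 1024)) MB)"
```

Here the process had grown to roughly 3.4 GB of resident memory on a 4 GB host at the moment it was killed.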
So the crash seems caused by the search indexing of extremely large events.
Questions
- Is this a known limitation or expected behavior when Pritunl Zero proxies very large HTTP requests/responses?
- Is there any way to prevent this?
For example:
- reduce or limit search bulk batch size,
- configure what is indexed in the search subsystem,
- disable indexing for specific Applications,
- or disable the search/audit indexing entirely.
- Are there recommended best practices for handling applications with heavy data traffic behind Pritunl Zero?
  Should we avoid placing large data-processing apps (like n8n or BI/analytics dashboards) behind Pritunl Zero?
- Any guidance on memory/CPU sizing when using Applications together with search indexing?
Our instance is a t3.medium (2 vCPU, 4 GB RAM, no swap).
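With 4 GB of RAM and no swap, the ~3.5 GB resident spike in the OOM line leaves no headroom, so the kernel has no option but to kill the process. As a mitigation while the root cause is investigated, adding a modest swap file can turn a hard kill into a temporary slowdown. Standard Linux steps (run as root; the 2 GB size is an arbitrary starting point):

```shell
# Mitigation, not a fix: add a 2 GB swap file so transient memory spikes
# degrade performance instead of triggering the OOM killer.
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```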
We can share our full configuration, version details, or more logs if needed.
Thanks in advance for any help or recommendations!