The VPC containing the Pritunl host is peered with the VPC that contains the target instance
Both VPCs have route tables with the relevant entries
The target instance has the appropriate security group rules
Restrict Routing is unchecked
A NAT'd route works fine
I tried lowering the MTU to 1200, but my client system reports that the tunnel interface is still set to 1500, and the MTU setting is not even included in the generated .ovpn profile. I suspect the underlying problem is MTU-related, but setting the MTU appears to be broken. Why isn't the MTU being respected, and what else could be the cause here?
Here is my setting for the tunnel:
But here is the tunnel setting on the Pritunl host:
root@pritunl2:/home/lai# ip a show tun10
13: tun10: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/none
inet 192.168.100.1/24 brd 192.168.100.255 scope global tun10
valid_lft forever preferred_lft forever
And on the client:
9411: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
link/none
inet 192.168.100.3/24 brd 192.168.100.255 scope global tun0
valid_lft forever preferred_lft forever
inet6 fe80::a80c:550:3b41:12f8/64 scope link stable-privacy
valid_lft forever preferred_lft forever
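For context on why a lower tunnel MTU matters here: every inner packet is wrapped in an outer IP/UDP packet plus OpenVPN framing, so a tunnel MTU of 1500 over a 1500-byte path guarantees fragmentation. The figures below are rough illustrative assumptions (the exact OpenVPN overhead varies with cipher and options), not Pritunl's internal numbers:

```python
# Rough sketch of why the tunnel MTU must be below the physical MTU.
PHYS_MTU = 1500      # typical Ethernet MTU on the underlying path
IP_HDR = 20          # IPv4 header on the outer (encapsulating) packet
UDP_HDR = 8          # UDP header (OpenVPN's default transport)
OVPN_OVERHEAD = 41   # approximate OpenVPN crypto/framing overhead (assumption)

# Largest inner packet that still fits in one outer packet without
# fragmentation on the physical path.
max_inner_payload = PHYS_MTU - IP_HDR - UDP_HDR - OVPN_OVERHEAD
print(max_inner_payload)
```

With these assumed numbers the inner packet budget is well under 1500, which is why a tunnel MTU of 1200 (leaving generous headroom) is a common workaround when large packets stall.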
Restrict Routing does not need to be disabled; it will not affect support for this configuration. Source/destination checking must be disabled in AWS on the Pritunl instances.
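Disabling source/destination checking can be done from the EC2 console or with the AWS CLI. A minimal sketch, assuming a hypothetical instance ID (substitute the ID of your Pritunl host):

```shell
# Disable source/dest checking so the Pritunl host can forward
# packets whose source or destination is not its own address.
# i-0123456789abcdef0 is a placeholder instance ID.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check
```

This must be applied to each Pritunl instance that forwards traffic; it is an infrastructure setting, so no change to the Pritunl configuration itself is needed.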
AWS VPC peering only routes between the directly peered VPC networks; it does not support routing to any external networks (such as a VPN subnet arriving through a peered VPC). For this configuration to work, the VPC peering must be replaced with either AWS Transit Gateway or pritunl-link.
The MTU is only set on the server; the server will provide the MTU to the client.
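If the client interface still comes up at 1500, one client-side workaround (using stock OpenVPN directives, not Pritunl-specific settings; the values below are illustrative assumptions matching the 1200 MTU attempted above) is to append the MTU directives to the imported .ovpn profile by hand:

```
tun-mtu 1200   # set the tunnel interface MTU on the client
mssfix 1160    # clamp TCP MSS so TCP flows fit inside the tunnel MTU
```

Note that `tun-mtu` generally needs to match on both ends of the link, so this should mirror whatever MTU the server is actually running with.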