Hi,
My problem: since today, various internally hosted services no longer work properly. I'm accessing them via our company's VPN. Examples:
https://gitlab.<company>.com => works in Firefox and Chrome
https://gitlab.<company>.com/<orga>/<repo> => works in Firefox, doesn't work in Chrome. GraphQL errors with
POST https://gitlab.<company>.com/api/graphql net::ERR_HTTP2_PROTOCOL_ERROR
in the dev console.
https://elk.<company>.com/app/discover => searching doesn't work in either Chrome or FF, but the errors differ. Firefox says
Uncaught Error: Batch request failed with status 0
and Chrome says
bfetch.plugin.js:1 POST https://elk.<company>.com/internal/bsearch?compress=true 408 (Request Timeout)
and
kbn-ui-shared-deps-npm.dll.js:374 Uncaught Error: Invalid string. Length must be a multiple of 4
It's always the async fetches/POSTs that seem to fail. I can ping the sites just fine. The same phenomenon occurs with our publicly available webapp: if I use a certain async feature while the VPN tunnel is enabled => timeout. I disable the VPN => it works.
I also can't SSH to servers in our VPN anymore; no error, just a timeout.
When I try to `git clone` via SSH (both with domain and IP) I get:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
In case I have to point it out: that's not the cause.
On Windows 11 everything works fine.
# find /etc/systemd -type l -exec test -f {} \; -print | awk -F'/' '{ printf ("%-40s | %s\n", $(NF-0), $(NF-1)) }' | sort -f # lists enabled services
bluetooth.service | bluetooth.target.wants
dbus-org.bluez.service | system
dbus-org.freedesktop.network1.service | system
dbus-org.freedesktop.resolve1.service | system
dbus-org.freedesktop.timesync1.service | system
docker.service | multi-user.target.wants
fstrim.timer | timers.target.wants
gcr-ssh-agent.socket | sockets.target.wants
getty@tty1.service | getty.target.wants
gnome-keyring-daemon.socket | sockets.target.wants
p11-kit-server.socket | sockets.target.wants
pipewire-pulse.socket | sockets.target.wants
pipewire-session-manager.service | user
pipewire.socket | sockets.target.wants
reflector.service | multi-user.target.wants
remote-fs.target | multi-user.target.wants
systemd-networkd.service | multi-user.target.wants
systemd-networkd.socket | sockets.target.wants
systemd-networkd-wait-online.service | network-online.target.wants
systemd-network-generator.service | sysinit.target.wants
systemd-resolved.service | sysinit.target.wants
systemd-timesyncd.service | sysinit.target.wants
teamviewerd.service | multi-user.target.wants
wireplumber.service | pipewire.service.wants
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp16s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether a8:a1:59:e9:41:ec brd ff:ff:ff:ff:ff:ff
inet 192.168.0.65/24 metric 100 brd 192.168.0.255 scope global dynamic enp16s0
valid_lft 602815sec preferred_lft 602815sec
inet6 2a02:8108:2c40:5664::745b/128 scope global dynamic noprefixroute
valid_lft 41218sec preferred_lft 41218sec
inet6 2a02:8108:2c40:5664:249e:33c:b231:1229/64 scope global temporary dynamic
valid_lft 86400sec preferred_lft 43200sec
inet6 2a02:8108:2c40:5664:aaa1:59ff:fee9:41ec/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86400sec preferred_lft 43200sec
inet6 fe80::aaa1:59ff:fee9:41ec/64 scope link
valid_lft forever preferred_lft forever
3: wlp13s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether dc:71:96:33:ba:62 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:04:a7:64:9f brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.50.0.38 peer 10.50.0.37/32 scope global tun0
valid_lft forever preferred_lft forever
inet6 fe80::734e:d036:1ca8:ecb5/64 scope link stable-privacy
valid_lft forever preferred_lft forever
# ip r
default via 192.168.0.1 dev enp16s0 proto dhcp src 192.168.0.65 metric 100
10.9.1.0/24 via 10.50.0.37 dev tun0
10.9.11.0/24 via 10.50.0.37 dev tun0
10.50.0.1 via 10.50.0.37 dev tun0
10.50.0.37 dev tun0 proto kernel scope link src 10.50.0.38
88.205.28.128/26 via 10.50.0.37 dev tun0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev enp16s0 proto kernel scope link src 192.168.0.65 metric 100
192.168.0.1 dev enp16s0 proto dhcp scope link src 192.168.0.65 metric 100
192.168.1.0/24 via 10.50.0.37 dev tun0
# resolvectl
Global
Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Fallback DNS Servers: 1.1.1.1#cloudflare-dns.com 9.9.9.9#dns.quad9.net 8.8.8.8#dns.google 2606:4700:4700::1111#cloudflare-dns.com
2620:fe::9#dns.quad9.net 2001:4860:4860::8888#dns.google
Link 2 (enp16s0)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 192.168.0.1
DNS Servers: 192.168.0.1 2a02:8108:2c40:5664:7254:25ff:fe6a:58d3
Link 3 (wlp13s0)
Current Scopes: none
Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 4 (docker0)
Current Scopes: none
Protocols: -DefaultRoute +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 5 (tun0)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6 mDNS/IPv4 mDNS/IPv6
# cat /etc/resolv.conf
nameserver 127.0.0.53
options edns0 trust-ad
search .
# stat /etc/resolv.conf
File: /etc/resolv.conf -> /run/systemd/resolve/stub-resolv.conf
Size: 37 Blocks: 8 IO Block: 4096 symbolic link
Device: 0,25 Inode: 270482 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2023-06-24 10:26:58.420301953 +0200
Modify: 2023-02-06 18:43:30.085480940 +0100
Change: 2023-02-06 18:43:30.085480940 +0100
Birth: 2023-02-06 18:43:30.085480940 +0100
Regards
Update:
As of today (13.07.), I don't need the suggested fix anymore.
Last edited by benz (2023-07-13 03:38:27)
and 1 windows 11 installation
Possibly unrelated, but see the 3rd link below. Mandatory.
Disable it (it's NOT the BIOS setting!) and reboot Windows and Linux twice, for voodoo reasons.
Next, please don't paraphrase, https://bbs.archlinux.org/viewtopic.php?id=57855
find /etc/systemd -type l -exec test -f {} \; -print | awk -F'/' '{ printf ("%-40s | %s\n", $(NF-0), $(NF-1)) }' | sort -f # lists enabled services
ip a
ip r
resolvectl
cat /etc/resolv.conf
stat /etc/resolv.conf
The host for the "private gitlab" is on some (V)LAN segment?
What is its IP?
Do you try to reach it by IP or domain?
I've updated the OP with the desired info.
From the symptoms:
ip link set mtu 1280 dev tun0
=== OTHERWISE ===
I'll be a bit vague because idk how much of that was posted by accident.
Also I wrote this overcomplicated nonsense before it occurred to me that the symptoms would actually perfectly fit the MTU overhead of some VPNs…
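For reference, the arithmetic behind that kind of number: the tunnel has to fit its own encapsulation inside the physical link's 1500 bytes, so the usable MTU on tun0 is smaller. The overhead figures below are illustrative assumptions for a generic OpenVPN-over-UDP/IPv4 setup, not measurements from this tunnel (1280 is simply the IPv6 minimum MTU, i.e. a conservative value that is almost always safe):

```shell
#!/bin/sh
# Back-of-the-envelope MTU budget for a tunneled packet. The overhead
# numbers are assumptions for a generic OpenVPN-over-UDP setup -- the
# real figures depend on the VPN software, cipher, and transport.
LINK_MTU=1500       # physical interface MTU (enp16s0)
OUTER_IP=20         # outer IPv4 header
OUTER_UDP=8         # outer UDP header
VPN_FRAMING=41      # assumed VPN framing: opcode, HMAC, IV, padding
TUN_MTU=$((LINK_MTU - OUTER_IP - OUTER_UDP - VPN_FRAMING))
echo "usable tun MTU: $TUN_MTU"  # larger packets get dropped or fragmented
```

With tun0 still set to 1500, any near-full-size inner packet exceeds this budget after encapsulation, which matches the pattern of small requests working while large async POSTs stall.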
The routing table shows that an IP that aligns w/ the segment that hosts your company domain explicitly gets routed over the VPN, is that deliberate?
resolvectl only shows global DNS and for 192.168.0.1, what resolves "elk.<company>.com" (assuming that's a LAN host in the VPN)?
ping -c1 elk.<company>.com
ping -c1 gitlab.<company>.com
drill elk.<company>.com # drill doesn't use nsswitch
drill gitlab.<company>.com
The IP will either be public (in which case you don't have to reveal the actual IP, just that it's in a public segment) or private, in which case only the segment is relevant (but since it's a private range, the information is borderline worthless outside the LAN/VPN anyway)
Ideally, test the behavior w/o docker and resolved.
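To confirm a path-MTU problem empirically, one can ping through the tunnel with the Don't-Fragment bit set: full-size packets should fail while small ones get through. A sketch, where the hostname is a placeholder for a host reached via tun0 (`-M do` is GNU iputils syntax):

```shell
#!/bin/sh
# Probe whether a full-size packet fits through the tunnel without
# fragmentation. HOST is a placeholder -- substitute a host behind the VPN.
HOST="${1:-gitlab.example.com}"
MTU=1500
PAYLOAD=$((MTU - 28))   # ICMP payload = MTU - 20 (IPv4 hdr) - 8 (ICMP hdr)
echo "probing $HOST with a $MTU-byte packet (payload $PAYLOAD)"
# -M do sets the DF bit: an oversized packet is rejected, not fragmented
ping -M do -s "$PAYLOAD" -c 1 -W 2 "$HOST" >/dev/null 2>&1 \
    && echo "MTU $MTU fits" \
    || echo "MTU $MTU does not fit (or host unreachable)"
```

Repeating this with decreasing sizes (e.g. 1400, 1300, 1280) brackets the largest MTU the path actually carries.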
Thanks a ton. Setting the MTU fixed the issue. I'll check whether a kernel update actually touched something in that regard. The issue caught me by surprise for sure.
Last edited by benz (2023-06-24 14:43:49)
If you google your VPN (some weak duckling?) you'll probably find the actual MTU limit; a kernel update is rather unlikely to have changed anything in that regard.
To make the change permanent, https://wiki.archlinux.org/title/Networ … eue_length (adjust the device pattern)
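For the permanent route, the wiki's udev approach would look roughly like this for the tunnel device. This is a sketch: the file name is arbitrary, the `KERNEL==` pattern assumes the interface really is named tun0 (adjust as needed), and some VPN clients re-set the MTU themselves when bringing the device up, which would override the rule:

```
# /etc/udev/rules.d/10-network.rules (hypothetical file name)
ACTION=="add", SUBSYSTEM=="net", KERNEL=="tun*", ATTR{mtu}="1280"
```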
Please always remember to mark resolved threads by editing your initial post's subject - so others will know that there's no task left, but maybe a solution to find.
Thanks.