
Audience
This is for people running containerised services behind a reverse proxy and enforcing basic controls such as CIDR-restricted admin endpoints. I assume you are comfortable with split DNS, DHCP-supplied resolvers, NAT reflection, and OS-level DNS over HTTPS or TLS.
My Change
I made a small change to a self hosted service behind my reverse proxy. I restricted admin feature access to internal CIDRs only. Nothing more than a slight step up in site security. When this change works, the admin account can still log in, but gets no admin features unless the client IP falls within IP ranges that I define.
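The gating logic is nothing exotic. As an illustration only (the service's actual implementation isn't shown here, and the ranges are example values), the check amounts to this:

```python
import ipaddress

# Internal ranges permitted to use admin features (example values).
ADMIN_NETS = [ipaddress.ip_network(n) for n in ("192.168.0.0/16", "10.0.0.0/8")]

def admin_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside a permitted internal CIDR."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ADMIN_NETS)

print(admin_allowed("192.168.1.20"))  # an internal LAN client
print(admin_allowed("203.0.113.7"))   # a hairpinned request seen as the WAN IP
```

The rest of this post follows from that check: if NAT hairpinning makes every request appear to come from the public WAN IP, the second case is what the application sees even for LAN clients.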
With my split DNS, internal clients should resolve the service hostname to an internal IP address, and external clients to the public WAN IP. In that arrangement, the application sees the real client IP, and CIDR-based admin access works.
This change should have been routine.
It locked me out of the admin features immediately. OK then…
Symptom
After doing a little network switching, I found the application logs showed every request from my client arriving from my public WAN IP, including sessions initiated from inside the LAN. So split DNS was being bypassed: internal clients were resolving the public record, then requests hairpinned through NAT and appeared as external requests. At that point the CIDR restriction did exactly what it should and blocked those admin features.
So the job became: make internal resolution reliably internal.
Finding 1: Resolver Defaults Were Too Restrictive for This Use Case
The split DNS override existed, looked right, and was found to be correct when queried on the resolver host. Queries from other LAN clients were inconsistent, which suggested that the resolver was not behaving authoritatively for all trusted clients and some lookups were falling through to upstream resolution.
The root cause appeared to be a conservative default listening configuration. It is a sensible default for a resolver that might be exposed beyond a trusted segment. In my case the resolver is within the LAN only and not internet facing, so that default was unnecessary. I knew I had left it a little too locked down for my environment, but that hadn't seemed to be a problem until now.
The obvious fix was to broaden the resolver’s listening and access controls to cover trusted LAN clients as well, so it always served the local override. After that, any LAN client querying the resolver directly would receive the internal IP consistently.
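I won't name the resolver here, so as an illustration only, this is roughly what that broadening looks like in Unbound; the interface, ACL range, hostname, and addresses are all stand-in assumptions, not my actual config:

```
# unbound.conf fragment (illustrative values only)
server:
    interface: 0.0.0.0                     # listen on LAN-facing interfaces, not just localhost
    access-control: 192.168.0.0/16 allow   # answer queries from trusted LAN clients
    local-zone: "service.example.com." redirect
    local-data: "service.example.com. A 192.168.1.10"
```

The `access-control` line is the key change: without it, queries from other LAN clients are refused and fall through to whatever upstream resolver the client tries next.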
After further testing, most devices were now seen as local.
Most. Not all.
Finding 2: ChromeOS Was Not Using the LAN Resolver
I observed that the remaining failures were specific to newer ChromeOS devices. They all received the correct DNS server via DHCP along with their fixed addresses, and the WiFi settings looked fine, yet they still resolved the public address.
After a little digging, I found the reason was ChromeOS Secure DNS.

On current ChromeOS releases this feature is enabled by default. The intent is to encrypt DNS lookups, but in practice it sends queries to an external DoH resolver, bypassing the DHCP-supplied DNS configuration. While that is a reasonable privacy default on untrusted networks, in a trusted split DNS environment it can defeat local policy.
Once Secure DNS was disabled on those devices, they began using the LAN resolver and the hostname resolved internally as intended. The application logs immediately showed access with internal client IPs, so the admin feature access returned for admin logins on those ChromeOS devices.
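I toggled this by hand on each device. For managed fleets, my understanding is that the same behaviour can be enforced centrally via Chrome's `DnsOverHttpsMode` policy; shown here as an assumption rather than something I tested:

```
{
  "DnsOverHttpsMode": "off"
}
```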
Outcome
This was a compound issue with initially unclear symptoms. Carefully considered and tested changes to the local DNS settings addressed only part of the issue. Further testing revealed a second part of the problem: a ChromeOS security default that did not belong on this local network. After those two changes, the system behaved as intended:
- internal clients always resolve to the internal address, and no longer hairpin to the external IP
- the application sees real client IPs
- admin CIDR restrictions work without exceptions, and without adding any public IP address to the exception list
- remote admin features require VPN access to the local network
Takeaway
Nothing here was “broken”. The resolver was operating under secure defaults that did not match a LAN only deployment, and ChromeOS was operating under secure defaults that bypass local DNS. Combined, those defaults produced a predictable failure mode: split DNS bypass, NAT reflection, and an admin lockout.
If you depend on split DNS, validate resolution at the client, don’t just check resolver configuration. On modern clients, such as ChromeOS, the visible configured DNS server may not be the resolver actually queried!
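Validating at the client can be as simple as asking the OS resolver for the name and checking which side of the split you got. A minimal sketch; the hostname and ranges are placeholders for your own:

```python
import ipaddress
import socket

# Ranges that count as "internal" for this network (example values).
INTERNAL_NETS = [ipaddress.ip_network(n) for n in ("192.168.0.0/16", "10.0.0.0/8")]

def classify(resolved_ip: str) -> str:
    """Label a resolved address as internal or external."""
    addr = ipaddress.ip_address(resolved_ip)
    return "internal" if any(addr in net for net in INTERNAL_NETS) else "external"

if __name__ == "__main__":
    # Uses whatever resolver the OS actually consults -- which, as the
    # ChromeOS case shows, may not be the one shown in network settings.
    ip = socket.gethostbyname("service.example.com")
    print(f"{ip} -> {classify(ip)}")
```

Run this on each client class you care about; a result of "external" from inside the LAN is exactly the split DNS bypass described above.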