Date: Fri, 13 Sep 2019 11:11:09 -0400
From: Rich Felker <>
To: Andrey Arapov <>
Subject: Re: DNS FQDN query issue in musl starting 1.1.13

On Fri, Sep 13, 2019 at 02:19:48PM +0000, Andrey Arapov wrote:
> Hello Rich,
> thank you for your prompt reply.
> I agree that SERVFAIL must be reported as an error to the caller, and I have just realized
> that "ucp-controller.kube-system.svc.cluster.local" has only 4 dots, hence it isn't
> tried as-is unless a trailing (5th) dot is specified,
> e.g. "ucp-controller.kube-system.svc.cluster.local.".
> Probably one of the differences is that, I presume, glibc treats a domain name terminated
> by a length byte of zero (RFC 1035 §3.1, Name space definitions) as fully qualified, hence
> resolving the FQDN with only 4 dots while ndots is set to 5.
> Please correct me if I am wrong.

You're mistaken here. The input to the resolver interfaces is a plain
string; there is no "length byte". The length byte is part of the DNS
protocol on the wire and only becomes involved at a lower layer, past
the search processing, which is purely transformations on strings.

On both glibc and musl, the search domains are tried initially if the
query string does not end in a dot and does not contain at least ndots
dots. So in your above example with four dots and ndots==5, both will
do the search. Adding the final dot suppresses the search regardless
of whether it takes the dots count to 5 or more.
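
The shared rule can be sketched as a tiny predicate (a sketch in
Python, not actual resolver code; the function name is mine):

```python
def search_is_tried_first(name: str, ndots: int) -> bool:
    """True if the resolver consults the search list before (or instead of)
    trying the name as-is -- the rule glibc and musl share."""
    if name.endswith("."):        # a trailing dot always suppresses search
        return False
    return name.count(".") < ndots  # fewer than ndots dots -> search first

# The example from this thread: 4 dots with ndots == 5
print(search_is_tried_first("ucp-controller.kube-system.svc.cluster.local", 5))   # True
print(search_is_tried_first("ucp-controller.kube-system.svc.cluster.local.", 5))  # False
```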

The only difference between glibc and musl here is that, as I
understand it at least, glibc will continue searching after hitting an
error like ServFail, thus producing results that depend on transient
failures. musl stops and reports the error.

In the opposite case, with a name containing 5 or more dots, the
behaviors differ in the way you're wondering about, but I don't think
it's relevant to your usage/problem. glibc
will first try to resolve it as a FQDN, and if that fails, it will run
through the whole search. musl does not perform search at all for
queries with at least ndots dots in them. The motivation here is the
same: if you depend on that behavior, your application/configuration
is subject to breakage by third parties registering new things in the
global DNS namespace. This is actually what happened with all the new
TLDs -- lots of networks using those names as fake local TLDs via
search with ndots==1 broke when they appeared as real TLDs.
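
The divergence can be sketched as the list of queries each resolver
would attempt, in order (a hedged sketch, not resolver source; the
function name and the exact position of the literal name in the
fallback order are my assumptions):

```python
def candidate_queries(name: str, search_domains: list[str],
                      ndots: int, libc: str) -> list[str]:
    """Ordered list of query strings attempted, per the behavior
    described in this thread for 'glibc' vs 'musl'."""
    if name.endswith("."):
        return [name]                    # explicit FQDN: search suppressed
    searched = [name + "." + d for d in search_domains]
    if name.count(".") >= ndots:
        if libc == "glibc":
            return [name] + searched     # FQDN first, then the whole search
        return [name]                    # musl: no search at all
    return searched + [name]             # below ndots: search list first

print(candidate_queries("a.b.c.d.e.f", ["svc.cluster.local"], 5, "musl"))
print(candidate_queries("a.b.c.d.e.f", ["svc.cluster.local"], 5, "glibc"))
```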

> Regarding usage constraints, it looks like the whole point of having ndots > 1 is
> basically to make the internal cluster lookups faster (the more dots, the faster) while
> caching the external DNS lookups so they are slow the first time but fast subsequently.

If all your internal cluster lookups use the .local fake TLD
explicitly, e.g. *.svc.cluster.local, etc., then the search domain is
not needed whatsoever and is just making things slow and fragile.
ndots==1 would avoid it getting used, though.

If your internal cluster lookups are looking up names like "foo.bar"
and expecting them to resolve via one of the configured search
domains, depending on which first defines the name, then your setup
actually does depend on search and on having ndots be greater than the
number of dots in the longest such "foo.bar"-style part you use. The
number of dots in the search part itself is not relevant.

One proposal I recall hearing from Kubernetes folks way back was to
use names like "foo-bar" instead of "foo.bar" in the above, so that
ndots==1 suffices and the search behavior does not clash with lookups
of real domains.
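
A quick illustration of why dash-separated names sidestep the problem
(the names "foo-bar"/"foo.bar" are the hypothetical examples from the
proposal above):

```python
NDOTS = 1

for name in ("foo-bar", "foo.bar"):
    # With ndots == 1, any name containing a dot skips (musl) or defers
    # (glibc) the search list; a dashed name has zero dots, so the search
    # suffixes are always applied first.
    print(f"{name}: search applied first = {name.count('.') < NDOTS}")
```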

> But having ndots = 1 to work around musl's unexpected behavior (when ndots > 1) is
> making all intra-cluster lookups slower, while making upstream FQDN lookups faster.

I don't understand why that would be. They should either fail entirely
or be just as fast. Decreasing ndots should not be able to slow down
any lookup that works both before and after the decrease.

> After reading through the discussions, it turns out that in the beginning people resorted
> to using Kubernetes's dnsConfig for setting ndots to 1 (the default) as a workaround,
> but later they did not need that anymore as Kubernetes/CoreDNS dropped Alpine
> (not sure for what reasons though).
> I guess the whole point is that projects using the musl C library (>= 1.1.13)
> should clearly make people aware of this difference in hostname lookups,
> which causes unexpected behavior compared to glibc.

We've tried to do this on the wiki page about functional differences
from glibc. Open to improvements.

> Below is a brief story-line I gathered, which might be handy to anyone reading this.
> ### September-December 2016/2017
> > - "We're smart enough to solve this for everyone" is not realistic. (c) BrianGallew
> The rationale for having ndots=5 is explained at length at #33554
> -
> dnsConfig:
> -
> -
> ### June-August 2018 ... April 2019
> People are struggling with this issue as upstream/downstream projects
> updated (or switched to) their Alpine base distro to 3.4 (or higher with musl >= 1.1.13).
> ndots breaks DNS resolving
> -
> Kubernetes pods /etc/resolv.conf ndots:5 option and why it may negatively affect your application performances
> -
> Docker: drop alpine
> -
> Rebase container images from alpine to debian-base.
> -

I'm not up for reading through all that right now, but thanks for
collecting the relevant information. It's frustrating that, despite
having known way back that what they were doing was wrong, they seem
to have decided to "drop support for Alpine/musl" rather than "fix the
stuff that's obviously wrong and hurting performance even on glibc
based dists"...

