Date: Wed, 17 Jul 2013 11:41:29 -0400
From: Rich Felker <dalias@...ifal.cx>
To: musl@...ts.openwall.com
Subject: Re: Request for volunteers

On Tue, Jul 16, 2013 at 06:20:31PM +0200, Szabolcs Nagy wrote:
> * Rich Felker <dalias@...ifal.cx> [2013-07-02 03:49:20 -0400]:
> > What about a mix? Have the makefile include another makefile fragment
> > with a rule to generate that fragment, where the fragment is generated
> > from comments in the source files. Then you have full dependency
> > tracking via make, and self-contained tests.
> 
> i wrote some tests but the build system became a bit nasty
> i attached the current layout with most of the test cases
> removed so someone can take a look and/or propose a better
> build system before i do too much work in the wrong direction

If you really want to do multiple makefiles, what about at least
setting up the top-level makefile so it invokes them via dependencies
rather than a shell for-loop?
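Something like this, as a rough sketch (directory names taken from
your layout; everything else is just illustrative):

DIRS = common functional regression

all: $(DIRS)

$(DIRS):
	$(MAKE) -C $@

# functional/ and regression/ link against common/libtest.a,
# so common/ has to be built first
functional regression: common

.PHONY: all $(DIRS)

With the directories as real dependencies, make -k and make -j work
across them, which you don't get from a shell for-loop.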

> each directory has a separate makefile because they work
> differently
> 
> functional/ and regression/ tests have the same makefile,
> they set up a lot of make variables for each .c file in
> the directory, the variables and rules can be overridden
> by a similarly named .mk file
> (this seems to be more reliable than target specific vars)
> 
> (now it builds both statically and dynamically linked
> binaries; this can be changed)

It's probably useful: we've had plenty of bugs that showed up only
with static or only with dynamic linking. Then again, purely in the
interest of build time and run time, it may be enough to build both
variants just for the cases where we know or expect the linking mode
to matter.

> i use the srcdir variable so it is possible to build
> the binaries into a different directory (so a single
> source tree can be used for glibc and musl test binaries)
> i'm not sure how useful that is (one could use
> several git repos as well)

If it's easy to do, I like it. It makes it easy to try local changes
on both without committing them to a repo.

> another approach would be one central makefile that
> collects all the sources and then you have to build
> tests from the central place
> (but i thought that sometimes you just want to run
> a subset of the tests and that's easier with the
> makefile per dir approach,

I would like this better, but I'm happy to have whatever works. IMO
it's not too bad to support building subsets with a single makefile:
you just have variables containing the names of all tests in a
particular subset, and rules that depend on just those tests. One
thing I also just thought of is that you could have a separate REPORT
file per test, concatenated into the final REPORT file at the end;
that makes it possible to run the tests in parallel. In general, I
think the more declarative/functional and less procedural you make a
makefile, the simpler it is and the better it works.
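As a hypothetical sketch of what I mean (one .c file per test as in
your layout; the .report suffix and target names are just
illustrative, and it relies on make's built-in %: %.c rule to link
each test):

SRCS = $(wildcard functional/*.c) $(wildcard regression/*.c)
REPORTS = $(SRCS:.c=.report)

REPORT: $(REPORTS)
	cat $(REPORTS) > $@

# one report fragment per test, so make -j can run tests in
# parallel without interleaving their output in REPORT
%.report: %
	./$< > $@ 2>&1 || echo FAIL $< >> $@

# subset targets: build and run only one directory's tests
functional: $(filter functional/%,$(REPORTS))
regression: $(filter regression/%,$(REPORTS))

.PHONY: functional regression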

> another issue is that dlopen and ldso tests need
> the .so binary at the right path at runtime, so you
> cannot run the tests from an arbitrary directory)

Perhaps, for these tests, the makefile could pass the directory
containing the test as an argument, so that the test can chdir to its
own location as its first step?
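For instance (a sketch; the argv[1] convention is just an assumption):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	/* the makefile passes the test's own directory as argv[1];
	 * chdir there so dlopen("./foo.so") finds the .so no matter
	 * where the suite was started from */
	if (argc > 1 && chdir(argv[1])) {
		perror("chdir");
		return 1;
	}
	/* ... the actual test, e.g. dlopen("./dso.so", ...) ... */
	return 0;
}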

> yet another approach would be to use a simple makefile
> with explicit rules without fancy gnu make tricks
> but then the makefile needs to be edited whenever a
> new test is added

I like the current approach where you don't have to edit the makefile.
:-)

> i'm not sure what the best way is to handle common/
> code in case of decentralized makefiles; for now i
> collected it into a separate directory that is
> built into a 'libtest.a' that is linked into all
> tests, so you have to build common/ first

That's why I like unified makefiles.

> i haven't yet done proper collection of the reports
> and i'll need some tool to run the test cases:
> i don't know how to report the signal name or number
> (portably) from sh when a test is killed by a signal

Instead of using the shell, run it from your own program that gets the
exit status with waitpid and passes that to strsignal.
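A minimal sketch of such a runner (the 30-second timeout is an
arbitrary choice; since a pending alarm survives exec, it also
answers the kill-after-timeout question below):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
	pid_t pid;
	int status;

	if (argc < 2) return 1;
	pid = fork();
	if (pid < 0) return 1;
	if (pid == 0) {
		alarm(30);  /* SIGALRM kills the test if it hangs */
		execv(argv[1], argv+1);
		_exit(127);
	}
	if (waitpid(pid, &status, 0) < 0) return 1;
	if (WIFSIGNALED(status))
		printf("FAIL %s (killed by %s)\n",
			argv[1], strsignal(WTERMSIG(status)));
	else if (WEXITSTATUS(status))
		printf("FAIL %s (exit status %d)\n",
			argv[1], WEXITSTATUS(status));
	else
		printf("PASS %s\n", argv[1]);
	return 0;
}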

> (the shell prints segfault etc to its stderr which may
> be used) and i don't know how to kill the test reliably
> after a timeout
> 
> i hope this makes sense

Yes. Hope this review is helpful. Again, this is your project and I'm
very grateful that you're doing it, so I don't want to impose my
opinions on how to do stuff, especially if it hinders your ability to
get things done.

Thanks and best wishes,

Rich
