Date: Wed, 17 Dec 2014 20:24:41 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: bleeding-jumbo: make -j random opencl errors

On Wed, Dec 17, 2014 at 07:03:45AM -0900, Royce Williams wrote:
> On two different 64-bit Linux gcc systems using NVIDIA OpenCL and CUDA
> 6.5, a non-parallel make works fine, but parallel makes die randomly
> with errors like the following (the exact error varies between
> attempts).
> 
> $ make -j -s
> opencl_mscash2_fmt_plug.c:457:1: internal compiler error: Segmentation fault
>  };

Why do you use -j, as opposed to -j8 or whatever matches your machine?
Is this a stress test?  Do you even have enough RAM for it?  I think not.
So this looks like a non-issue to me.  It is expected behavior to
receive a SIGSEGV on an out-of-memory condition, as long as some memory
overcommit is allowed in the kernel.
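
For reference, a quick sketch (assuming a typical Linux box where the
sysctl tool is available; the default policy varies by distribution):
you can check the kernel's overcommit setting with

$ sysctl vm.overcommit_memory
# 0 = heuristic overcommit (the usual default), 1 = always, 2 = never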

> IIRC, this was working a few days ago on at least one of the systems,
> and neither has had this failure mode before.

Maybe memory usage by this build, or by something else on your system,
has increased.  Just do not use -j without specifying a limit; use e.g.
-j8, or -j32 on our "super" machine.  We do have a large number of
source files, and plain -j places no limit on the number of parallel
jobs, so it will result in excessive load and likely a slightly slower
build (cache thrashing, extra context switches), and it may very well
run out of memory unless you have lots of RAM installed.
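
For example, a minimal sketch assuming GNU make and coreutils (nproc
prints the number of available CPUs; -l is GNU make's load-average cap):

$ make -s -j"$(nproc)"    # one job per CPU core
$ make -s -j8 -l 8        # also hold off new jobs while load average exceeds 8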

Alexander
