Date: Thu, 18 Dec 2014 16:28:47 +0300
From: Solar Designer <>
Subject: Re: bleeding-jumbo: make -j random opencl errors

On Wed, Dec 17, 2014 at 08:23:16PM -0900, Royce Williams wrote:
> On Wed, Dec 17, 2014 at 8:24 AM, Solar Designer <> wrote:
> > Why do you use -j, as opposed to -j8 or whatever matches your machine?
> > Is this a stress test?  Do you even have enough RAM for it?  I think not.
> Heh.  Point taken.  Call it an inadvertent stress test.  I'd been
> doing this for a while and never had a problem.  I typo'd it one day
> (leaving off the number of cores), and since it worked well and
> finished much faster, I just kept using it.  I realized that it was
> doing a *lot* more "parallelizing" than before, but it seemed to be
> fine until now.

If -j worked faster for you, it means you had not been giving the full
number of your logical CPUs (were you giving the number of cores instead?).
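
A quick way to match -j to the machine is to query the logical CPU count
explicitly. A sketch (assuming `nproc` from coreutils is available, with
the POSIX `getconf` variable as a fallback):

```shell
# Query the number of logical CPUs.  Both nproc and _NPROCESSORS_ONLN
# report logical CPUs (hardware threads), not physical cores.
JOBS=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
# Pass an explicit bound instead of a bare -j, e.g.:
echo "make -j$JOBS"
```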

> > We do have a large number of source
> > files, so plain -j will result in excessive load and likely a slightly
> > slower build (cache thrashing, extra context switches), and it may very
> > well run out of memory unless you have lots installed.
> Thanks -- that helps me understand the root cause.  And "lots" clearly
> must mean something more than the 16G that I have in that system.

Actually, "make -j" succeeds for me on an 8 GB machine, but it is
possible that you have a newer version of gcc that needs more memory.
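
Besides a job cap, GNU make can also hold back new jobs while the system
load average is high (the -l option), which helps bound the thrashing and
memory pressure described above. A small sketch using a throwaway
Makefile (the path and target names here are made up for the demo):

```shell
# Write a trivial two-target Makefile; the recipe lines need real tabs,
# which printf's \t provides.
printf 'all: a b\na:\n\t@echo built-a\nb:\n\t@echo built-b\n' > /tmp/demo.mk
# -j2 bounds concurrent jobs; -l8 additionally stops launching new jobs
# while the 1-minute load average is above 8.
make -f /tmp/demo.mk -j2 -l8
```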

