Date: Wed, 3 Mar 2021 13:02:41 +0100
From: Michał Majchrowicz <sectroyer@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: Splitting mask keyspace

On Wed, Mar 3, 2021 at 12:50 PM Solar Designer <solar@...nwall.com> wrote:
>
> On Tue, Mar 02, 2021 at 10:44:06PM +0100, Michał Majchrowicz wrote:
> > john often needs even few hours to calculate ETA
>
> That's weird.  You might want to open a GitHub issue for it, and include
> reproduction instructions in there (preferably with a reduced testcase).

Sorry, I think I used incorrect wording. The ETA IS updated; it's just
that it often takes a few hours before it becomes the "real" ETA - until
then it keeps on growing, and growing, and growing ;)

> 1. There might be other load on the system.  You mentioned elsewhere
> that you'd rather not use all cores of the machine, and I am guessing
> that other load might be why not.  OpenMP is extremely sensitive to
> other load ("--fork" is not).  When you use OpenMP on a system with
> other load, you need to set the OMP_NUM_THREADS environment variable to
> a thread count low enough that the system isn't overbooked.

Yes, I noticed that, and that was one of the reasons I initially even
disabled OpenMP support, as I assumed it wasn't working at all :D On an
8-core machine, when one core was doing computation for the GPU
(naturally utilising 100% of a single CPU core's power), the OpenMP
version of john dropped to levels comparable to a single-core instance.
However, when I set OMP_NUM_THREADS to 7 it gained a huge boost. That's
one of the reasons I want to limit the number of cores ;) I usually try
to perform performance-specific tests while the other cores are idle,
but they won't always be, so that's why I want to control it :)

> 2. Your different salt count might be low.  With "--fork", each process
> generates its own candidate passwords stream.  With OpenMP, just one
> thread generates candidate passwords for all threads to use, and this is
> synchronous.
> The generated candidate passwords are reused for all salts, so the more
> salts you have the lower the candidate password generation "overhead"
> is.  You might want to see what the speed ratio would be with a higher
> different salt count.  (Some numbers you posted on GitHub suggest you
> were running against only 5 descrypt salts.  Try running against a few
> thousand, up to the maximum of 4096.)

Yes, I noticed that the number of hashes has a significant impact. I am
trying to use real-world (IoT-based) examples for my tests, and that is
my main focus, but I can check how it behaves with a larger number of
generated hashes.

> 3. I assume you're already using the latest bleeding-jumbo off GitHub
> and not our 1.9.0-jumbo-1 release?  I made some enhancements to
> descrypt since 1.9.0-jumbo-1, bringing the comparisons of computed vs.
> loaded hashes from sequential into OpenMP parallel sections.  So with
> bleeding-jumbo you should have higher descrypt OpenMP speeds than with
> 1.9.0-jumbo-1 (but not as high as what "--fork" can provide, indeed).

On some nodes I do, but not on all; I will switch to that and retest.
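The thread-capping approach from point 1 can be sketched as a small shell
snippet (the john invocation and hash file name below are illustrative,
not taken from the thread):

```shell
# Cap OpenMP at 7 threads so one core stays free for the process
# feeding the GPU on an 8-core box; john inherits the limit from
# the environment.
export OMP_NUM_THREADS=7
echo "OpenMP threads capped at: $OMP_NUM_THREADS"
# ./john --format=descrypt hashes.txt   # hypothetical invocation/file
```

Alternatively, per Solar's note that "--fork" is far less sensitive to
other load, running with "--fork=7" instead of OpenMP sidesteps the
overbooking problem entirely.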
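A quick sanity check on the 4096 figure in point 2: descrypt's salt is
two characters, each drawn from a 64-symbol alphabet ("./", digits,
uppercase, lowercase), so the distinct salt count is 64 * 64 (a small
arithmetic sketch, not part of the original mail):

```shell
# descrypt salt: 2 positions, each from a 64-symbol alphabet,
# giving 64 * 64 = 4096 distinct salts -- the maximum cited above.
alphabet_size=64
echo $(( alphabet_size * alphabet_size ))
```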