Date: Thu, 19 Mar 2015 20:59:06 +0800
From: Lei Zhang <zhanglei.april@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: [GSoC] building JtR for MIC

Hi Alexander,

Actually, using your original mic.h with autoconf results in some compile errors. I modified mic.h to include autoconfig.h, and those errors went away.
I didn't encounter them previously because I had mistakenly set ARCH_LINK=autoconf_arch.h; when I set it to mic.h, the errors appeared. The error message is attached below.

I think there are some bugs whose behavior depends on macros defined in autoconfig.h. I'll probably have to deal with those bugs anyway.
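
For example (a rough sketch with made-up names, not code from any real format), something like this would quietly pick the wrong branch if ARCH_LITTLE_ENDIAN never gets defined, since the preprocessor evaluates an undefined identifier as 0:

/* Sketch of the failure mode only: if ARCH_LITTLE_ENDIAN is never
 * defined (e.g. because autoconfig.h was not pulled in), the
 * "#if ARCH_LITTLE_ENDIAN" below evaluates to 0, the byte-swapping
 * branch gets built on MIC, and hashes stop matching the test vectors. */
#include <stdint.h>
#include <string.h>

#if ARCH_LITTLE_ENDIAN
#define TO_HOST32(x) (x)                  /* already in host byte order */
#else
#define TO_HOST32(x) __builtin_bswap32(x) /* taken by mistake if the macro is missing */
#endif

static uint32_t load_le32(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v)); /* read a little-endian 32-bit word */
    return TO_HOST32(v);      /* byte-swapped (wrong) when the macro defaults to 0 */
}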


Lei

> On Mar 18, 2015, at 6:38 PM, Solar Designer <solar@...nwall.com> wrote:
> 
> On Wed, Mar 18, 2015 at 05:27:45PM +0800, Lei Zhang wrote:
>> I tweaked OMP_SCALE in several formats, following magnum's advice, and their memory usage on MIC is much more reasonable now. With OpenMP enabled, john-jumbo no longer aborts when running the benchmark, although 126 of the tests still FAILED.
> 
> That's a huge number.  How many of the failed tests are for dynamic*
> formats?  Perhaps there's one or a handful of bugs that are causing most
> of the failures.
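
For context on why OMP_SCALE drives memory use: jumbo formats typically multiply their per-crypt key count by the thread count and OMP_SCALE in init() and size per-key buffers from the result. A minimal sketch (hypothetical constants, not any specific format) of how that adds up on MIC's ~240 hardware threads:

/* Rough sketch of the usual jumbo scaling pattern (hypothetical values):
 * max_keys_per_crypt is multiplied by omp_get_max_threads() and OMP_SCALE,
 * and per-key buffers are allocated from that product, so a large
 * OMP_SCALE on a ~240-thread MIC card eats a lot of RAM. */
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

#define OMP_SCALE           4    /* the knob being tuned down for MIC */
#define MAX_KEYS_PER_CRYPT  64   /* hypothetical base value */
#define PLAINTEXT_LENGTH    125  /* hypothetical per-key buffer size */

int main(void)
{
    int threads = 1;
#ifdef _OPENMP
    threads = omp_get_max_threads();
#endif
    unsigned long keys = (unsigned long)MAX_KEYS_PER_CRYPT * threads * OMP_SCALE;

    printf("%lu keys per crypt_all() => about %lu KiB just for saved keys\n",
           keys, keys * (PLAINTEXT_LENGTH + 1) / 1024);
    return 0;
}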
> 
> I notice that in jumbo my original mic.h is modified to use autoconf'ed
> ARCH_* settings:
> 
> #if AC_BUILT
> #include "autoconfig.h"
> #else
> #define ARCH_WORD                       long
> #define ARCH_SIZE                       8
> #define ARCH_BITS                       64
> #define ARCH_BITS_LOG                   6
> #define ARCH_BITS_STR                   "64"
> #define ARCH_LITTLE_ENDIAN              1
> #define ARCH_INT_GT_32                  0
> #endif
> 
> This might be wrong since we're cross-compiling.  I doubt it's causing
> trouble now, though, since the host system is almost certainly x86_64,
> for which these settings just happen to be the same.
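
A cheap way to catch such a mismatch when cross-compiling (just a sketch, not something jumbo necessarily does) is a compile-time check against the target compiler's own idea of the types:

/* Hypothetical sanity check: compilation fails for the MIC target if the
 * hardcoded mic.h values disagree with what icc -mmic actually uses. */
#include "arch.h"

typedef char arch_size_ok[sizeof(ARCH_WORD) == ARCH_SIZE ? 1 : -1];
typedef char arch_bits_ok[ARCH_SIZE * 8 == ARCH_BITS ? 1 : -1];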
> 
>> Next I want to figure out how those tests failed, but I have a few questions first (forgive me if they sound stupid):
>> 1. What exactly does it mean for a format to FAIL the test?
> 
> See formats.c: fmt_self_test().  The output usually indicates after a
> call to which of the format methods the test failed, but you'll most
> commonly see get_hash[0](0) or similar, which means the actual failure
> was likely in a preceding crypt_all(), set_key(), or set_salt().  The
> test vectors are in the tests[] array in each format's *_fmt*.c file.
> In some cases, it's possible to narrow the problem down by temporarily
> commenting out some of the test vectors.  But usually not.
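
For reference, a tests[] array looks roughly like this (placeholder hashes and passwords; the exact struct fmt_tests fields are declared in formats.h), and narrowing down means commenting entries out one at a time:

/* Rough sketch of a format's built-in test vectors (placeholders only). */
#include "formats.h"

static struct fmt_tests tests[] = {
    {"$dummy$0123456789abcdef", "password"},
    {"$dummy$fedcba9876543210", "123456"},
/*  {"$dummy$deadbeefdeadbeef", "letmein"},  temporarily commented out */
    {NULL}
};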
> 
>> 2. There are two kinds of numbers in the output, real and virtual. What's the difference between them?
> 
> This is in the FAQ:
> 
> Q: What are the "real" and "virtual" c/s rates as reported by "--test"
> (on Unix-like operating systems)?
> A: These correspond to real and virtual (processor) time, respectively.
> The two results would differ when the system is under other load, with
> the "virtual" c/s rate indicating roughly what you could expect to get
> from the same machine if it were not loaded.
> 
> ... but this FAQ entry is outdated and needs to be revised.  As written,
> it applies to single-threaded builds only (I wrote it before we
> introduced OpenMP support).  Clearly, the numbers also differ greatly in
> multi-threaded builds.  When running a multi-threaded build on an
> otherwise idle system, the "real" speed will be roughly equal to the
> "virtual" speed times the number of threads.
> 
> Alexander
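
To see where that factor of the thread count comes from, here is a toy measurement (just a sketch, not how bench.c does its timing): the "real" rate divides crypts by wall-clock time, the "virtual" rate by process CPU time, and on an otherwise idle machine the latter accumulates roughly (number of threads) times faster.

/* Toy illustration: wall-clock ("real") time versus per-process CPU
 * ("virtual") time around a parallel busy loop.  On an idle machine the
 * ratio comes out close to the number of OpenMP threads.
 * Build with e.g.: gcc -O2 -fopenmp timing.c -o timing
 * (older glibc may also need -lrt for clock_gettime). */
#include <stdio.h>
#include <time.h>

static double elapsed(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec w0, w1, c0, c1;
    unsigned long x = 0;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &w0);          /* wall clock */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &c0); /* CPU time, all threads */

#pragma omp parallel for reduction(^:x)
    for (i = 0; i < 500000000L; i++)
        x ^= (x << 13) ^ (x >> 7) ^ (unsigned long)i;

    clock_gettime(CLOCK_MONOTONIC, &w1);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &c1);

    printf("real %.2fs  virtual %.2fs  ratio %.1f  (result %lx)\n",
           elapsed(&w0, &w1), elapsed(&c0, &c1),
           elapsed(&c0, &c1) / elapsed(&w0, &w1), x);
    return 0;
}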
