Date: Thu, 10 May 2012 03:55:25 +0530
From: SAYANTAN DATTA <std2048@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: Sayantan: Weekly Report #3

> Wow.  20% is excessive and not expected.  We need to figure out why this

> is so - or maybe it is not (that is, part of the speedup might be from
> some side-effect, not directly from the OpenMP parallelization).
>
> You compute two MD4 hashes on host per MSCash2 hash, correct?  At 80k c/s,
> this means you compute 160k MD4s per second, and you say this corresponds
> to over 20% of total running time (you say you saved 20% by
> parallelizing this).  Thus, the total speed would be less than 800k MD4/s -
> sounds too low even for the somewhat non-optimal and non-vectorized code
> that we have there.  For the raw-md4 format in a "make generic" build
> (thus also non-vectorized), I am getting between 5M and 6M c/s on this CPU
> (also on one core, indeed).  (For vectorized code, it's more like 30M c/s
> on one core.)  So there's a factor of 7 difference here, which we don't
> have an explanation for.
>
Well, it's not MD4 alone that consumes CPU time. A deeper look into the
code suggests that the byte-to-hex-string conversion consumes a
significant amount of time, nearly 90% of the total CPU time on the
host side. The remaining 10% or so is consumed by MD4 and other things.


Sayantan.

