Date: Wed, 24 Sep 2014 00:00:54 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: nVidia Maxwell support (was: john-users)

On 2014-09-11 10:57, Solar Designer wrote:
> For now, I think someone with a Maxwell GPU should try building and
> benchmarking our descrypt-opencl on it with the S-boxes that use
> bitselect().

Is it safe to assume that OpenCL's bitselect() will boil down to 
Maxwell's new 3-input logic instruction (LOP3.LUT)? Or is it not that 
simple?

For example, we have many cases like this:

#ifdef USE_BITSELECT
#define F(x, y, z)    bitselect((z), (y), (x)) /* picks y where x is set, z elsewhere */
#define G(x, y, z)    bitselect((y), (x), (z)) /* picks x where z is set, y elsewhere */
#else
#define F(x, y, z)    ((z) ^ ((x) & ((y) ^ (z))))
#define G(x, y, z)    ((y) ^ ((z) & ((x) ^ (y))))
#endif
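
For reference, per the OpenCL spec each bit of bitselect(a, b, c) comes 
from b where the corresponding bit of c is set, and from a where it is 
clear - so bitselect((z), (y), (x)) is exactly the classic "choose" 
function. I convinced myself with a throwaway host-side check; bsel() 
below is just my stand-in for the built-in, not anything from our tree:

#include <assert.h>
#include <stdint.h>

/* emulates OpenCL bitselect() on 32-bit words */
static uint32_t bsel(uint32_t a, uint32_t b, uint32_t c)
{
      return (a & ~c) | (b & c);
}

int main(void)
{
      uint32_t x = 0xDEADBEEFU, y = 0x01234567U, z = 0x89ABCDEFU;

      /* spot-check: bitselect form of F vs. the plain form */
      assert(bsel(z, y, x) == (z ^ (x & (y ^ z))));
      /* same for G */
      assert(bsel(y, x, z) == (y ^ (z & (x ^ y))));
      return 0;
}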

There's also this new one (courtesy of Milen), which I plan to add to a 
bunch of kernels:

#ifdef USE_BITSELECT
#define SWAP32(x) bitselect(rotate(x, 24U), rotate(x, 8U), 0x00FF00FFU)
#else
inline uint SWAP32(uint x)
{
      x = rotate(x, 16U);                       /* swap the 16-bit halves */
      return ((x & 0x00FF00FFU) << 8) + ((x >> 8) & 0x00FF00FFU); /* then the bytes in each half */
}
#endif
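
Both variants compute the same byte swap, by the way: the fallback first 
swaps the 16-bit halves with rotate(x, 16U) and then swaps the bytes 
within each half, while the bitselect version does it in one step - 
rotate-by-24 supplies byte positions 3 and 1 and rotate-by-8 supplies 
positions 2 and 0, selected by the 0x00FF00FFU mask. A quick plain-C 
sanity check (rotl() is again just a local helper):

#include <assert.h>
#include <stdint.h>

static uint32_t rotl(uint32_t x, unsigned n)
{
      return (x << n) | (x >> (32 - n));
}

int main(void)
{
      uint32_t x = 0x11223344U, c = 0x00FF00FFU;

      /* bitselect form: odd bytes from rotl 24, even bytes from rotl 8 */
      uint32_t a = (rotl(x, 24) & ~c) | (rotl(x, 8) & c);

      /* fallback form: swap halves, then bytes within halves */
      uint32_t b = rotl(x, 16);
      b = ((b & c) << 8) + ((b >> 8) & c);

      assert(a == 0x44332211U && b == 0x44332211U);
      return 0;
}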

The thing is, we currently define USE_BITSELECT only for AMD devices. 
In the nvidia case, would it be better to leave the non-bitselect 
versions for the optimizer to consider, or to use bitselect() 
explicitly - or should it not matter at all? It still seems to matter 
on AMD.

If use of bitselect() increases the chance of better low-level code for 
nvidia too, maybe we should always define USE_BITSELECT (I'd still keep 
the #ifdefs, both for quick benchmarks with/without it and for 
reference).
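
If we go that route, the host-side change should be small wherever we 
assemble the kernel build options - roughly like this sketch (the 
device_is_amd and always_bitselect flags are made up for illustration; 
the real vendor detection in our tree looks different):

#include <string.h>
#include <CL/cl.h>

static cl_int build_kernels(cl_program program, cl_device_id device,
                            int device_is_amd, int always_bitselect)
{
      char opts[512] = "";

      /* today: AMD only; the proposal amounts to always taking this branch */
      if (always_bitselect || device_is_amd)
            strcat(opts, "-DUSE_BITSELECT");

      return clBuildProgram(program, 1, &device, opts, NULL, NULL);
}

That keeps the kernel-side #ifdefs intact as reference versions while 
letting a single host-side switch flip all kernels at once for 
benchmarking.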

magnum
