Date: Wed, 12 Nov 2014 01:05:32 +0100
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: Discuss Mask-mode with GPUs and changes required for its support.

On 2014-11-05 16:02, Sayantan Datta wrote:
> Based on my earlier experience with mask-mode, it was necessary to write
> separate kernels for benchmark, mask-mode(password on GPU) and non-mask
> modes. However, as much as I would love to unify them under a common
> kernel, with all of them following the same code path, it is difficult to
> do so without making debatable changes.

Btw here's some food for thought:

The NT kernel could either (among alternatives):
a) Only support ISO-8859-1 (like Hashcat).
b) Transfer base words in UTF-16 to GPU, and use a UTF-16 version of 
GPU-side mask generation.
c) Support UTF-8/codepage conversions on GPU (NTLMv2 and krb5pa-md5 
kernels currently do this). So we transfer base words in UTF-8 or a CP 
to GPU, apply the mask and finally convert to UTF-16 on GPU.
d) Some combination of b and c. For example, transfer base words to GPU 
in UTF-8/CP, convert them to UTF-16 once, and finally apply the mask with 
a UTF-16 version of mask mode.

IMHO we should *definitely* have full UTF-8/codepage support; the 
question is how. We will never be quite as fast as Hashcat with NT 
hashes anyway, so we should beat it with functionality. So in my book, 
option a is totally out of the question.

Option b is simplest but typically needs twice the bandwidth for PCI 
transfers (which is not much of a problem when we run hybrid mask), while 
option c needs somewhat more complex GPU code. I guess option b is 
typically fastest for mask mode. However, option c is fastest when not 
using a mask.

magnum
