Date: Tue, 19 Mar 2013 10:46:38 +0530
From: Sayantan Datta <std2048@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: Idea to increase plaintext length for GPU based hashes

On Tue, Mar 19, 2013 at 6:51 AM, magnum <john.magnum@...hmail.com> wrote:

> Another approach (not necessarily mutually exclusive with yours) would be
> to split the transfer. Let's say we have a work size of 1M. At, say, the
> 256K'th call to set_key(), it could initiate a transfer of this first
> fourth of the keys to the GPU. This transfer will not stall the host side;
> it will take place while we continue with the next 256K keys. And so on.
> If we can balance this properly we should get rid of much of the transfer
> delay. Maybe we should split it in 8 or 16, maybe less.


Yes, maybe we could use multiple (two or more) command queues and
asynchronous memory copies to hide some of the latency. This would be
similar in concept to pipelining. But it would be even better if we didn't
have to transfer anything at all, i.e., if we produced the keys on the GPU
itself.

Regards,
Sayantan

