Date: Mon, 15 Jul 2013 17:34:46 +0530
From: Sayantan Datta <>
Subject: Re: Shared Memory on GPGPU?


On Mon, Jul 15, 2013 at 12:38 PM, marcus.desto <> wrote:

> I am wondering, whether it is possible to use shared data on GPGPU,
> meaning you have some data you push into the GPGPU's RAM and then you start
> the computation in the same program. The first computation finishes, then
> you run a second program that starts another computation, but it does not
> upload new input data to RAM. It uses the data that first program stored in
> GPGPU RAM for its computation. Is that possible?
> If it is, does JtR support that?

Yes, it does. In fact, our split-kernel implementations use this concept.
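
To illustrate the idea (this is a hypothetical sketch, not code from the JtR source tree; the kernel names and sizes are made up, and error checking is omitted), the OpenCL host side can allocate a buffer once, upload the data once, and then launch several kernels that all operate on the same device-resident buffer:

```c
/* Sketch: one host-to-device transfer, two kernel launches sharing
 * the same cl_mem buffer. No re-upload between launches. */
#include <CL/cl.h>

void run_split(cl_context ctx, cl_command_queue q,
               cl_kernel k_init, cl_kernel k_loop,
               const void *host_in, size_t nbytes)
{
    /* One device allocation, one upload of the input data. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, nbytes, NULL, NULL);
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, nbytes, host_in, 0, NULL, NULL);

    size_t gws = 1024;  /* global work size: illustrative value */

    /* First kernel computes on the buffer... */
    clSetKernelArg(k_init, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, k_init, 1, NULL, &gws, NULL, 0, NULL, NULL);

    /* ...and the second kernel reuses the same device-resident data:
     * note there is no clEnqueueWriteBuffer between the two launches. */
    clSetKernelArg(k_loop, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, k_loop, 1, NULL, &gws, NULL, 0, NULL, NULL);

    clFinish(q);
    clReleaseMemObject(buf);
}
```

The buffer's contents persist in GPU memory for the lifetime of the `cl_mem` object, so as long as the second kernel is enqueued on the same context, it sees whatever the first kernel left there.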


