Date: Tue, 2 Jul 2013 17:44:16 +0200
From: magnum <john.magnum@...hmail.com>
To: john-dev@...ts.openwall.com
Subject: Re: bug: GPU use in CPU-only formats

On 2 Jul, 2013, at 16:53 , Claudio André <claudioandre.br@...il.com> wrote:
>> Strange. I'll have a look. I have noticed that my laptop switches from
>> HD4000 to GT650M even when running non-GPU formats, but I thought that
>> was due to some fuzzy logic (with earlier drivers I think such switching
>> was actually made mostly or solely from a whitelist).
>
> The memory allocation makes sense. Two GPU 'pointers' are initialized
> during startup and not released until exit.

Yeah, but 62 MB? I'd love to be able to revert to the previous nvidia driver and compare.

> OK, it is possible to get all the OpenCL information needed (e.g. to
> parse the -dev option) and then release everything GPU-related,
> reopening the GPU stuff during init() only when needed. As it is now,
> any OpenCL-enabled build could cause drivers to keep some memory
> allocated while JtR is running, even for a non-GPU format.
>
> BUT: no real GPU memory allocation is done inside common code, and no
> code execution either. So ghosts and busy devices could not happen.

I'm not so sure they really do. Might be a driver quirk. I can reproduce it like this:

screen 1$ nvidia-smi    (get baseline)
screen 0$ ../run/john -t=10 -form:raw-sha256-ng
screen 1$ nvidia-smi    (64 MB of memory is allocated, temp/fans increase a little)
screen 0$ ../run/john -t=10 -form:raw-sha256-ng -dev=1
screen 1$ nvidia-smi    (no memory allocated, temp/fans increase a little)

The temperature increase (a few degrees) may actually be due to CPU heat, right?

What would be the equivalent of nvidia-smi for AMD? How about running the command-line profiler for AMD (what was it called again?) with a CPU format? And is there anything like that for nvidia?

magnum