Date: Mon, 14 Nov 2016 16:19:59 +0100
From: magnum <john.magnum@...hmail.com>
To: john-users@...ts.openwall.com
Subject: Re: Loading high number of hashes on GPU

On 2016-11-14 15:03, Luis Rocha wrote:
> Not sure if I'm doing something wrong but having a hard time loading a
> high number of hashes on GPU for raw-sha1.
>
> I have 20M sha1 hashes that I'm trying to load. The GPU has 4Gb RAM.
>
> $ ./john
> John the Ripper 1.8.0-jumbo-1-5344-gefae4e5+ OMP [linux-gnu 64-bit AVX2-ac]
>
> $ ./john 20M.hashes --wordlist=uniqwords --pot=20M.pot \
>     --format=raw-sha1-opencl --session=gpu --fork=2 --rules:Jumbo
>
> No dupe-checking performed when loading hashes.

I think the above message is a clue. Did you set NoLoaderDupeCheck in
john.conf? That won't work with Sayantan's bitmap tables.

> Using default input encoding: UTF-8
> Loaded 20000000 password hashes with no different salts (Raw-SHA1-opencl
> [SHA1 OpenCL])
> Remaining 5347239 password hashes with no different salts
> Node numbers 1-2 of 2 (fork)
> Device 0: Hawaii
> Device 1: Hawaii
> Progress is too slow!! trying next table size.
> Progress is too slow!! trying next table size.
> Progress is too slow!! trying next table size.
> (..)
>
> This message keeps going and going (30m at least)...

Sayantan's code tries to make perfect hash tables. Once it sees a dupe
(and it will, due to your NoLoaderDupeCheck setting) it will re-try
forever.

magnum
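To illustrate why the loader spins forever, here is a minimal sketch of perfect-hash table construction: the builder grows the table until every key lands in a distinct slot. The function name, doubling strategy, and size cutoff below are illustrative assumptions, not John the Ripper's actual code; the point is only that duplicate keys collide with themselves at every table size, so the "trying next table size" loop can never terminate.

```python
# Sketch (assumed names/strategy): search for a table size giving a
# collision-free ("perfect") placement of all keys.
def find_perfect_table(keys, max_size=2**20):
    size = max(len(keys), 1)
    while size <= max_size:
        slots = set()
        collision = False
        for k in keys:
            h = hash(k) % size
            if h in slots:          # two keys share a slot
                collision = True
                break
            slots.add(h)
        if not collision:
            return size             # perfect table found
        size *= 2                   # "trying next table size"
    return None                     # never reached if keys are duplicated
                                    # and max_size were unbounded

# Distinct keys: some size eventually works.
print(find_perfect_table([1, 2, 3]))        # → 3
# A duplicated key hashes identically at *every* size, so the search
# can only give up (or, without a cutoff, loop forever):
print(find_perfect_table([1, 2, 1]))        # → None
```

With `NoLoaderDupeCheck` set, duplicate hashes reach this stage unfiltered, which is exactly the [1, 2, 1] case above.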