|
|
Message-ID: <037467f9-0931-49e1-a521-7bf75439cdb6@jeffunit.com>
Date: Sat, 8 Nov 2025 07:10:30 -0800
From: jeff <jeff@...funit.com>
To: john-users@...ts.openwall.com
Subject: 2 questions about cracking 233 million NTLM passwords

I am cracking a large number of NTLM passwords on a Windows 11 machine with 64 cores and 256 GB of RAM. I am using a version of john compiled in 2025, john_jumbo_2025_winX64_1_JtR.7z.

I am using the --fork option. Because of limited memory, I am currently running only 14 fork processes; each process uses about 16 GB of RAM. Is there any way to reduce the memory usage, so I can run more processes?

I have a basic understanding of how john works: a candidate hash is generated and then compared against the list of unknown hashes. I suspect that john may sort the list of unknown hashes. For a small list of unknown hashes, I would guess that a linear search would be efficient. However, with a large number of unknown hashes (like 233 million), I would guess that something like a binary search would be far faster. Does john use a binary search when comparing a candidate hash against the list of unknown hashes?

thanks in advance,
jeff
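The lookup strategies the question contrasts can be sketched as a toy illustration. This is not john's actual code; the target list, words, and MD5 hex digests below are stand-ins (real NTLM is MD4 over UTF-16LE), chosen only to show why per-candidate cost matters at 233 million hashes:

```python
import bisect
import hashlib

# Hypothetical target list: MD5 hex digests stand in for NTLM hashes.
words = ["alpha", "beta", "gamma"]
targets = sorted(hashlib.md5(w.encode()).hexdigest() for w in words)
target_set = set(targets)

def found_linear(candidate):
    # O(n) per candidate: fine for a handful of hashes,
    # hopeless at hundreds of millions.
    return any(candidate == t for t in targets)

def found_binary(candidate):
    # O(log n) per candidate on the sorted list.
    i = bisect.bisect_left(targets, candidate)
    return i < len(targets) and targets[i] == candidate

def found_hashed(candidate):
    # Average O(1) per candidate via a hash set; fast crackers
    # typically use hash tables / bitmap filters rather than
    # binary search for exactly this reason.
    return candidate in target_set

cand = hashlib.md5(b"beta").hexdigest()
print(found_linear(cand), found_binary(cand), found_hashed(cand))
# → True True True
```

All three return the same answer; they differ only in how many comparisons each candidate costs as the target list grows.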