Date: Thu, 12 Aug 2010 23:50:48 +0400
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: team john-users writeup

Hi,

Here's our team's "Crack Me If You Can" DEFCON 2010 contest writeup. I wrote it with input from others on our team. Unfortunately, this resulted in the writeup being somewhat focused on what I did vs. what others did during the contest. This is partially compensated for by Simon (team bartavelle) and Matt having posted their own writeups, which I refer to.

Warning: this writeup is lengthy! Sorry about that. ;-)

---

Thanks (brief).

We'd like to thank:

KoreLogic, and Minga and Hank in particular - for the contest;
Team bartavelle and Frank Dittrich - for their contributions;
Team CrackHeads - for several things (see end of this writeup);
Fyodor - for volunteering to (co-)represent us at DEFCON;
Alain Espinosa - for the NTLM hashing code (in JtR jumbo patch).

We would also like to thank and apologize to team smelly_weigand for failing to use their offered contribution.

Please refer to the full "Thanks" section at the end of this writeup for more detail. (It was too long to start the writeup with it.)

The team and contributors.

Active members (those who uploaded cracked passwords, listed in the order they joined the team):

Solar Designer (Russia)
Rich Rumble (US)
Matt Weir (US)
jmk (US)
Dhiru Kholia (Canada)
elijah (Russia)
bartavelle (person) (France)
websiteaccess (France)
Guth (France)

Other contributors (who did not participate on the team, so there was no coordination with them, yet they sent in cracked passwords to us):

bartavelle (team) (France)
Frank Dittrich (Germany)

With few exceptions, we're unable to reliably determine the effect of individual contributions.
We focused on getting a higher score as a team rather than on keeping track of everyone's individual contributions, and about 10 hours into the contest we started reusing "team-wide" cracked passwords as wordlists, as material for .chr files and the like, and for manual analysis. So each contribution also improved other people's further contributions.

With almost all cracked password uploads, except for a few early ones, there was huge overlap with passwords cracked by others on our team - typically on the order of 90% or more (that is, only around 10% or less of independently-cracked passwords tended to be new/unique rather than already cracked by others on our team). Yet we're grateful to all who have contributed! The small numbers of non-overlapping passwords simply reflect the nature of password cracking and password security, confirming that it makes sense to detect and eliminate weak passwords (then only relatively few of the remaining ones are crackable by another attacker).

Computing resources.

Direct use by the team (not counting contributions by team bartavelle), averages for contest duration (48 hours): approx. 30 CPU cores, approx. 1 GPU.

Solar: otherwise-idle cycles (approx. 90%) of up to 12 CPU cores (over 3 quad-core CPUs in servers), roughly 8 CPU cores in use on average. (Could have used several quad-core machines more, but had no time to reasonably put them to use. Spent time on launching more focused attacks instead.)

Rich: 8 CPU cores (total for 3 computers), mostly in use.

Matt: "2.2 GHz Intel Core 2 Duo Mac Laptop", "Asus EEE Netbook".
Matt's own detailed writeup is at:

http://www.openwall.com/lists/john-users/2010/08/11/1

jmk, own use ("averaging about 3 cores for 35-40 hours"):
Dual Intel Xeon E5410 (2.33 GHz) quad-core processors - 8 cores total

jmk, contributed to bartavelle's cluster ("connected to the JtR server for about 35-40 hours"):
Intel M520 (2.53 GHz) dual core w/ HT
AMD X2 5600+ dual core

Dhiru: Core i5, AMD X3 720, Intel Atom 1.6 GHz (ran JtR and rcracki-mt), ATI 4870 and 5970 (ran ighashgpu), but less than one GPU used on average during contest time. "Most of the cracking was done on Intel i5 CPU, some minor work was done on AMD X3 720 and Intel Atom 1.6 GHz."

elijah: AMD Athlon X2 4200 (JtR with jumbo patch, "lots of rules and a pinch of luck")

bartavelle: see "bartavelle" team writeup:

http://www.openwall.com/lists/john-users/2010/08/10/1

websiteaccess: Core i7 (Mac) in use during part of the contest time

Guth: 3-4 CPU cores during part of the contest time

Frank: a few CPU cores for quick JtR wordlist runs and a one-time contribution of the results (approx. one hour total wall clock running time, so must be no more than a few CPU-hours)

Tools.

John the Ripper, jumbo patch, other patches - used by all on the team.

Matt's probabilistic cracker (uses JtR) - used by Matt only.

rcracki-mt (on LM hashes, mostly overlapping with those cracked by JtR) - used by Dhiru (LM) and Guth (Oracle SYS and SYSTEM usernames).

rcrack - used by jmk to crack the remaining two LM hashes.

ighashgpu (on NTLM hashes, cracking a total of 1732, by far most of which overlapped with hashes cracked by JtR) - used by Dhiru only.

Custom code written or modified during the contest.

Custom Perl scripts, such as revisions of mix.pl to generate double-word lists:

http://www.openwall.com/lists/john-users/2010/02/14/5

JtR wordlist rules and external modes.
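For illustration, here is a self-contained stand-in for the kind of double-word list generation mix.pl does. This awk sketch is an assumption of ours for this writeup, not the actual mix.pl (which is linked above); it simply emits every ordered pair of words from a list, concatenated:

```shell
# Build a tiny sample wordlist, then emit all ordered pairs concatenated.
printf '%s\n' one two three > words.lst

# Read the list twice: first pass stores the words, second pass pairs
# each word with every stored word.
awk 'NR == FNR { w[++n] = $0; next }
     { for (i = 1; i <= n; i++) print $0 w[i] }' words.lst words.lst > double.lst

wc -l < double.lst   # 3 words -> 9 combinations
```

The same cross-product idea extends to mixing two different lists (e.g., contest-specific words with RockYou Top 1000) by naming two different files.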
Custom shell scripts to automate merging of uploaded files, to generate cracked/uncracked password/hash lists and .chr files for the team to possibly reuse, and to make contest submissions (on cron).

Wordlists.

We used the following wordlists:

JtR password.lst.

Previously-cracked passwords (both from contest hashes and others).

Manually-created contest-specific tiny wordlists based on analysis of cracked contest passwords.

RockYou list (and "Top N" sub-lists from it to use with lots of rules).

Combinations of the above (e.g., contest-specific words concatenated with RockYou Top 1000).

JtR Pro revision of all.lst (in addition to words in many languages, includes a little bit of incremental mode and --external=Keyboard output, which thus got subjected to wordlist rules).

Openwall collection revision of all.lst, insidepro_big (used by Dhiru).

"Various InsidePro "From Queue" dictionaries" (used by Matt).

wikipedia-wordlist-sraveau-20090325 (late in the contest and against NTLM hashes only, by Solar and Matt).

A large second-level domain name list, with rules adding com/net/org TLDs after that pattern was identified in contest passwords. (Other TLDs were also probed, including all two-letter ones - no luck - so the focus was made on just com/net/org, which proved to be correct.)

Perhaps many others.

Team building and management.

We (team john-users) did not prepare for the contest at all. Solar decided to participate and invited others to join roughly 12 hours before contest start. Thus, people were joining during the first day of the contest (when some of us were already cracking the hashes). Solar was completing setup of the file exchange server, creating accounts, adding SSH keys, troubleshooting some team members' login issues, etc. during the contest.
We used an OpenVZ container with and on an Owl system (Openwall's Linux distro), with per-person accounts and some shared directories, with file permissions set/adjusted during the contest as needed - and eventually with some scripts on cron. Solar also set up a private mailing list for discussions internal to the team; by the end of the contest, we had 11 subscribers (most of them active team members, some not - we gave everyone a chance).

The coordination was very loose, largely because of the lack of advance planning, because Solar was also one of the primary people to actually attack the hashes (leaving a lot less time for coordination), because it is hard and often unreasonable to avoid overlap (avoiding it costs people's time, so we could as well throw more CPUs at the task instead), and frankly because many of us appeared to be unwilling to coordinate (most went with whatever they liked to do, which is quite natural, yet it resulted in higher overlap in attacks attempted and hashes cracked).

How the different hash types were approached.

One of the first tasks was to determine what hash types we were given, although this proceeded in parallel with some non-focused initial JtR runs on whatever hash types were already identified. Soon we reliably identified all hash types but the Oracle hashes, which we were not 100% sure of.

As suggested by some team members, the hashes list that was provided to us was eventually split into separate files by hash type, and then we also created separate files with admin accounts only (which we assumed were those with a GID of 0 and/or with the "admin" substring in the username). This was not essential for use with JtR (which is why it was not done right away), yet apparently it was convenient to some of us. Closer to the end of the first day, we also had a cron job producing per-hash-type lists of uncracked hashes - for faster further attacks on salted hashes.
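The core operation of such a cron job - subtracting already-cracked hashes from the full list - can be sketched with standard tools. The file names and one-hash-per-line format below are illustrative assumptions for this sketch, not the team's actual scripts:

```shell
# Assumed layout: ntlm.txt holds one hash per line, cracked.pot holds
# hash:plaintext lines in john.pot format.
printf '%s\n' AAA BBB CCC > ntlm.txt
printf '%s\n' 'BBB:password1' > cracked.pot

# Extract the hash field from the pot file, then keep only hashes that
# are not in it (comm requires sorted input; -23 keeps lines unique to
# the first input, i.e. the still-uncracked hashes).
cut -d: -f1 cracked.pot | sort > cracked.lst
sort ntlm.txt | comm -23 - cracked.lst > ntlm-uncracked.txt

cat ntlm-uncracked.txt   # AAA and CCC remain
```

Run per hash type from cron, this keeps the "uncracked" lists fresh so that further attacks on salted hashes do not waste time on salts whose passwords are already known.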
For all hash types, we ran JtR with default settings against them at least briefly to catch the weakest passwords first. The per-hash-type sections below mostly describe other attacks.

NTLM hashes.

These are what the contest was about. Although this was apparent from the number of these hashes and the speed at which they could be computed, we only truly focused on them closer to the end of the first day (and not everyone on our team did). We were concerned that we could be behind other teams due to them getting more hashes of other types if we focused almost solely on NTLM. Clearly, for a chance to win, we did need to attack all other saltless hashes as well (LM, SHA) and attack the salted ones at least lightly (although we ended up doing a lot more than that - maybe too much given that NTLMs could use more of our attention). Also, most admin accounts, which give extra points, corresponded to hashes of other types.

Anyhow, early attacks against NTLM hashes, besides JtR's default settings, included runs of the RockYou list (with duplicates purged) with small rulesets, and runs of smaller wordlists (password.lst, RockYou Top N) with larger rulesets. Specifically, the "single crack" mode ruleset was used with wordlist mode, and its "crazy" section was uncommented and even expanded with 4-digit append/prepend rules.

A further attack was to run rockyou.chr released by Kore against NTLM hashes on a Q6600 machine that Solar temporarily dedicated for the purpose. The workload was distributed across 4 CPU cores by password length: 0-5, 6, 7, and 8. The 0-5 run completed shortly, and was replaced by other quick attacks (sequentially, sometimes leaving this core mostly-idle for quite a while). These included --external=Keyboard for lengths 1 to 10 (and a bit of 11), which only cracked a password of length 7 and lots of length 8 (Kore, this length distribution is not realistic).
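The by-length workload split above was done inside JtR's incremental mode; an analogous split of a candidate list into per-core buckets can be sketched in plain awk (file names and sample words are assumptions for this sketch):

```shell
# Bucket candidates into the same four length ranges used in the
# contest run (0-5, 6, 7, 8+), one output file per CPU core.
printf '%s\n' cat vegas defcon autumns lasvegas > cand.lst

awk '{
    l = length($0)
    if      (l <= 5) f = "len0-5.lst"
    else if (l == 6) f = "len6.lst"
    else if (l == 7) f = "len7.lst"
    else             f = "len8.lst"
    print > f
}' cand.lst
```

Each resulting file can then be fed to a separate cracker process, so the short (fast-to-exhaust) lengths finish early and free their core for other attacks, as happened with the 0-5 run.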
It also included four modified-KnownForce runs to exhaustively search the abcd19nn, abcd20nn, 19nnabcd, and 20nnabcd patterns (identified from previously cracked passwords), where abcd was an arbitrary string of four lowercase letters ("aaaa" through "zzzz") and nn was an arbitrary two-digit string ("00" through "99").

Closer to the second day of the contest, a list of contest-specific words was compiled. Although early revisions of the list varied a bit, ultimately (for our team) it was:

korelogic defcon Defcon blackhat facebook lasvegas LasVegas vegas
whitehat hello 1234
jan feb mar apr may jun jul aug sep oct nov dec
january february march april may june july august september october
november december
janu febr marc apri augu sept octo nove dece
monday tuesday wednesday thursday friday saturday sunday
one two three four five six seven eight nine ten eleven twelve
twenty thirty fourty fifty sixty seventy eighty ninety
hundred thousand million billion
winter spring summer autumn
wintertime springtime summertime autumntime

(although some people on the team had their own lists). Yes, "fourty" was misspelled, and this was not noticed, unfortunately (we should have run a spellchecker over the wordlist). Also, this did not include "fall" as it did not appear to be common enough, but looking at Kore's published rulesets it is there along with the common words above.

This list was run (on a free core of yet another quad-core machine) against NTLM hashes with very large numbers of rules (a few million after preprocessor expansion). Specifically, a very effective ruleset line was:

o[0-9][ -~] o[0-9][ -~]

(overstrike any one or two characters with any other printable ASCII characters). Due to the speed at which NTLM hashes could be computed, we did not really have to identify specific substitutions (or at least not yet), as long as no more than two characters in a (pass)word were substituted at once. Ditto for inserts of up to two characters.
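What a single pass of such an overstrike rule produces can be emulated outside of JtR. The awk sketch below is a stand-in, not the JtR rule engine: it replaces each position of one word with every printable ASCII character, as one oNX rule pass does:

```shell
# Emulate a single-character overstrike pass: replace each position of
# the word with every printable ASCII character (codes 32-126).
awk 'BEGIN {
    w = "vegas"; n = length(w)
    for (i = 1; i <= n; i++)
        for (c = 32; c <= 126; c++)
            print substr(w, 1, i - 1) sprintf("%c", c) substr(w, i + 1)
}' > overstruck.lst

wc -l < overstruck.lst   # 5 positions x 95 characters = 475 candidates
```

Applying the pass twice (as the two-part rule line does) squares the per-word cost, which is why three overstrikes, at roughly 1000x more candidates, was out of reach even for fast NTLM hashes.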
Going to three would be a bit too slow (would take 1000x longer); instead, we caught some of those passwords by applying the same approach to previously-cracked passwords (and eventually by identifying and encoding some specific substitutions, although we did not do much in this area).

We also combined this with case toggling. This was done by outputting the tiny wordlist mangled with one set of rules (such as the above) to "unique" and saving to a file, then applying a case-toggling ruleset (the default "NT" ruleset or the like) to the resulting file.

Besides using the tiny contest-focused wordlist on its own, we were combining it with itself (to form strings such as "sixtysix") and with other common password lists using a variation of mix.pl (mentioned above). And we also ran the above rules (and the dual-application of rules approach) against common password lists on their own. We also used "regular" rulesets (like the "single crack" mode one) against these combined wordlists. All of this was very effective and quite quick.

Another very effective approach was to use all substrings of cracked passwords (rather than cracked passwords in their entirety only) for "wordlists". Substrings were being extracted with:

[List.Rules:substr]
x[0-9][4-9]

invoked with "... --stdout | ./unique cracked-substr". Then cracked-substr was subjected to the kinds of processing described above (mix.pl and/or dual-applied rulesets).

Although the above text uses "we", these things were pretty much done by Solar alone (others were invited to apply similar processes and extend or revise them to reduce overlap, but did not report on having done so). Overall, this cracked thousands of NTLM hashes, but it is impossible to tell exactly how many because of the reuse of previously-cracked passwords for wordlists (including those cracked by others on the team who were using other methods).
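The x[0-9][4-9] rule extracts substrings of lengths 4 through 9 starting at positions 0 through 9; the same extraction can be sketched in plain awk (a stand-in for the JtR rule engine, with a sample word assumed for the demo):

```shell
printf '%s\n' korelogic > cracked.lst

# Emit every substring of length 4-9 starting at positions 0-9
# (1-based here), then deduplicate - the awk analogue of piping
# the rule output through "unique".
awk '{
    n = length($0)
    for (i = 1; i <= 10; i++)      # start positions 0-9 of the rule
        for (l = 4; l <= 9; l++)   # substring lengths 4-9
            if (i + l - 1 <= n)
                print substr($0, i, l)
}' cracked.lst | sort -u > cracked-substr
```

Running this over all cracked passwords yields a wordlist of word fragments, which is what let the team catch "Kore words" embedded inside longer passwords without ever compiling a list of those words.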
At about the same time, Dhiru was running some NTLM hashes through ighashgpu, eventually uploading 1732 cracked passwords (a small number compared to what the approaches described above achieved), most of which overlapped with those cracked with JtR (but there were some unique/new ones as well - mostly those containing more than two uncommon characters).

On the second day of the contest, Solar made a build of JtR capable of applying incremental mode to lengths up to 10:

http://www.openwall.com/lists/john-users/2007/07/04/6

The corresponding .chr file was generated based on contest passwords cracked by that time. It was then run against NTLM hashes separately for length 9 and length 10 on two CPU cores (on a third quad-core machine that was finally put to use). It quickly cracked some new passwords, but then its progress slowed down a lot (as expected). Overall, by contest end this cracked only about 150 new passwords.

Finally, closer to the end of the contest, large wordlists such as wikipedia-wordlist-sraveau-20090325 were run against NTLM hashes with various rulesets created by that point. Some of those runs completed quickly (resulting in hundreds of new cracks), whereas a couple of others were still running by contest end time and slowly cracking more passwords (on one of Solar's machines).

The limiting factor appeared to be the time of people on our team. With more effort (not requiring any more computing power), we could have reverse-engineered more of Kore's rules, patterns, and wordlists, which would have enabled cracking more of the passwords before contest end. Without those sufficiently specific rules and patterns, we had to be using more generic but less efficient rules and wordlists, and we did not run specific attacks on certain patterns that were seen. For example, we never derived the exact list of Kore's character substitutions, although we had the material to do so.
We simply dropped not-so-common "Kore words" such as "stdio" or sports teams because no one on our team would compile a complete list of them anyway (so we were combining very common "Kore words" listed above with pre-existing common (pass)word lists instead). This was largely compensated for by the "all substrings" approach, though, which obviously caught all common and not-so-common words seen in cracked passwords (but it also caught some "noise").

One approach we considered on the second day of the contest was auto-generating rulesets based on cracked passwords. JimF had posted something along these lines to the john-users list in 2009:

http://www.openwall.com/lists/john-users/2009/07/27/3

There was a brief attempt (by Solar) to get someone interested in trying this out - but almost no one was interested/available (it was just 12 hours to go). One person volunteered, but then never reported anything back. It would still be curious to explore this area on contest hashes even after the contest has ended.

LM hashes.

For LM hashes, Solar initially ran a build of JtR with faster DES key setup (john-1.7.6-fast-des-key-setup-3.diff.gz) on a single core of a Core i7 CPU. This completed lengths 0 through 6 promptly, and it also cracked many length 7 password "halves". The intent was to go for an exhaustive search over the printable ASCII space for length 7 by dedicating the Core i7 machine to it the next day (we had several extra quad-core machines available for use anyway). This should have completed in time. However, Dhiru was quicker to go with rainbow tables, and then jmk cracked the two remaining hashes with a different set of rainbow tables. So the "JtR plan" against LM hashes was canceled, thereby saving Solar some time on (not) setting that attack up.

Although we cracked all LM hashes, there was an issue with submitting the corresponding passwords properly (see below).

Netscape LDAP SHA hashes (saltless).
These were similar to NTLM in terms of attacks to run against them; however, their number was substantially smaller, so we did not focus on them - essentially only running simple attacks and variations of previously-cracked passwords (for all hash types) against them.

As an exception, bartavelle actually spent some time on them, re-encoding them from base64 to hex such that he could use his unreleased faster raw SHA-1 code instead of the Netscape LDAP specific code. Then he had to have them re-encoded back to base64 for submission, because our submission script would filter out non-contest hashes, which was in turn caused by one of the team members not cleaning up their pot file for the contest. ;-)

Netscape LDAP SSHA hashes (salted).

These were almost exclusively left for team members other than Solar to attack, as a way to reduce overlap. Probably the usual sets of attacks were run against them (this was not documented by team members). Only very brief attacks were run by Solar (JtR defaults for a little while, trivial variations of cracked contest passwords from all hash types, and not-so-trivial variations against admin accounts with this hash type). Clearly, we could have done better, although this would not have provided a lot of additional points (not enough to make a difference overall).

MD5-based crypt hashes.

For the MD5-based crypt hashes, Solar initially ran JtR with default settings (just to make use of a CPU core while focusing on other things), which was then permitted to run for a long while. The plan was to make a build of JtR with bartavelle's patch for much faster SSE2-enabled support for these hashes and use that. However, when bartavelle himself joined our team closer to the end of the first day, this plan was canceled, and bartavelle was the person to focus on these hashes. Ditto for Markov mode runs.

DES-based crypt hashes.

For the DES-based crypt hashes, some attacks were split over 2 CPU cores with the "--salts=3" and "--salts=-3" options.
That is, salts shared by more hashes were attacked quicker or harder than those with fewer hashes. This was done at least for some incremental mode runs and then for an exhaustive search over abcdYYYY and YYYYabcd patterns, where YYYY was a year number in the range 1959 to 2019 and abcd was an arbitrary string of four lowercase letters ("aaaa" through "zzzz").

The year range was previously determined from incremental mode runs against NTLM hashes and then from the modified-KnownForce run over a much wider year range against the NTLM hashes, which was described above. Somehow this year range is slightly inconsistent with rules published by Kore; we did not investigate whether it was due to an error on our part or due to Kore using only a subset of the hashes that their rulesets could generate. Anyhow, for NTLM hashes restricting the year range did not matter, but for DES-based crypt ones it did (we'd have needed to dedicate more than two CPU cores to the task to complete the exhaustive search in time if the range were wider).

The above was done by Solar, informing others on the team in case anyone would want to do it for other hash types (besides NTLM and DES crypt), but apparently that was never done (which was OK - we did not have all that many other hash types suitable for this attack given the resources and time available... although SHA and SSHA were suitable).

elijah also did incremental mode runs against these hashes using contest-specific .chr files (based on previously cracked passwords), and he tried some custom substitution rules, as well as this ruleset previously posted by Minga:

http://www.openwall.com/lists/john-users/2009/03/28/2

Given its origin, we probably should have played with it more (e.g., updated it for year 2010 and run it against other hash types as well), but we did not. On the other hand, other attacks we performed against NTLM hashes should have covered these patterns.
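The abcdYYYY/YYYYabcd search space described above can also be generated externally and piped into a cracker in stdin/wordlist mode. A sketch, with the letter alphabet cut down to "ab" here so the demo output stays small (the actual run covered all of a-z, i.e. 26^4 x 61 years x 2 orders):

```shell
# Enumerate every 4-letter string over the (reduced) alphabet, paired
# with every year 1959-2019, in both abcdYYYY and YYYYabcd orders.
awk 'BEGIN {
    alpha = "ab"; n = length(alpha)    # demo alphabet; real run used a-z
    for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
    for (k = 1; k <= n; k++)
    for (m = 1; m <= n; m++) {
        s = substr(alpha, i, 1) substr(alpha, j, 1) \
            substr(alpha, k, 1) substr(alpha, m, 1)
        for (y = 1959; y <= 2019; y++) {
            print s y        # abcdYYYY
            print y s        # YYYYabcd
        }
    }
}' > years.lst

wc -l < years.lst   # 2^4 strings x 61 years x 2 orders = 1952 candidates
```

At the full alphabet this is about 55.7 million candidates, which is why the narrowed year range mattered for slow DES-based crypt hashes but was irrelevant for fast NTLM.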
Some others tried cracking the DES-based crypt hashes by various means as well, but overall we did not focus on them as much as we did on NTLM.

Blowfish-based crypt hashes.

Solar (and likely others) initially ran JtR with default settings against all 80 hashes on a single CPU core (which would be otherwise idle anyway). After a couple of hours (mostly spent on tasks unrelated to this specific hash type), it was determined that none of the 80 had very weak passwords, and considering the contest scoring and the slowness of these hashes, it made sense to continue attacking only the 20 admin hashes (out of 80). So that's what was done (by interrupting, editing the .rec file to increment the options count and add "-g=0", and continuing the session). Others on the team were informed of this decision.

Solar's attack on the 20 admin Blowfish hashes remained running for many hours more, eventually being replaced with NTLM attacks when it became convenient to run more of those on that machine. Perhaps others on the team attempted certain attacks as well. None of these hashes were cracked.

Running an OpenMP-enabled build against these hashes (the 20 admin ones only) on a dedicated quad-core machine (and we had some spare ones) was briefly considered (by Solar), but was not attempted - running more focused attacks against NTLM hashes appeared to be a better use of time.

Looking at the plaintext passwords published by Kore after contest end, some of these could probably be cracked by running heavier rules on cracked passwords from other hash types - but even that kind of attack would be very time-consuming (albeit feasible) and hardly worth it against hashes of this type (it would make more sense to focus on uncracked admin hashes of other types), unless the scoring were different.

Oracle 10 hashes.
For a lot of detail on these "mystery" hashes, why no team cracked a single one of them, what we tried against them, and the effect this had on our overall performance, see:

http://www.openwall.com/lists/john-users/2010/08/02/11

Issues with submission of cracked passwords.

No one on our team was verifying whether the cracked passwords we were submitting to Kore were actually correct. Additionally, cracked hashes were being excluded from the "uncracked" lists (that some of us used for further attacks) without verification that the cracked passwords were actually correct. Lacking this kind of verification was obviously wrong, but we did not appear to have the human resources to set it up (it'd have been a distraction from cracking more hashes). Perhaps we should have created such scripts prior to contest start, although for that we would have had to decide to participate in the contest much sooner. Anyhow, we were looking at the contest stats web page, and our score displayed there was only a little bit lower than our own estimate of what it should have been. If it were substantially lower, we would indeed have been forced to verify our stuff.

As it turned out (and was noticed by us during the contest), Kore's passwords contained an abnormally high number of colons. Of the passwords we cracked, about 1000 contained colons. For submissions in john.pot format, this would make no difference (although it could have affected scripts we'd use during password cracking). However, when Kore clarified that they wanted only the plaintexts, and complete ones for LM hashes, Solar quickly hacked together a script that used "cut -d: -f2" on a john.pot format file for non-LM hashes and "john --show ... | cut -d: -f2" for LM hashes, planning to get back to correcting this at a later time.
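The difference between the broken and the fixed cut invocations is easy to see on a pot-format line whose plaintext itself contains a colon (the hash value below is made up for the demo):

```shell
# A john.pot-style line: hash, then the plaintext, which here happens
# to contain a colon of its own.
line='$NT$aabbccddeeff:pass:word'

printf '%s\n' "$line" | cut -d: -f2    # "pass" - plaintext truncated at the colon
printf '%s\n' "$line" | cut -d: -f2-   # "pass:word" - full plaintext
```

With about 1000 cracked passwords containing colons, the -f2 version silently submitted truncated plaintexts for all of them until the -f2- fix went in.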
Unfortunately, this was only recalled 1 hour before contest end, at which time Solar, needing to focus on lots of contest-related tasks at once, only had sufficient time to make the trivial change of "-f2" (field two only) to "-f2-" (fields starting with the second) for non-LM hashes. The same trivial change would not work right for LM hashes due to "john --show" output containing more than two colon-delimited fields. A slightly more complicated fix for LM hashes was introduced and tested only 10 minutes _after_ contest end (so if Kore ever publishes scores for post-contest submissions, this should be seen). This should have affected hundreds of LM hash passwords.

Another issue was with DES-based crypt hashes, which process only 7 bits of each character (ignoring the 8th bit). This means that for a given valid password, many variations of it are possible (with the 8th bit of every character possibly flipped), most of which will not match those on Kore's list of correct passwords, yet all of them are correct. We ended up submitting some of these passwords with the 8th bit set on some characters (just because this is what was tested first in certain attacks run by some members of our team). In many cases (but not in all), we also had the pure 7-bit versions of the same passwords cracked, and these were also being submitted (due to the non-use of "john --show" for this hash type, all variations that we had cracked were being submitted). This was noticed during the contest, and ideally we'd have converted all of these passwords to pure 7-bit, but we didn't have time and arguably this was not worth the bother (it affected maybe around 100 passwords). We do not know whether Kore's scripts were smart enough to count these valid passwords with 8-bit characters towards our team's score or not.

Finally, it appears that some yet-unidentified software in use by two members of the team did not handle backslashes in passwords correctly.
Some of their uploads contained missing, double, or quadruple backslashes in place of single backslashes in passwords. Luckily, most of those passwords were also cracked by others of us, and all variations were being submitted for all hash types but LM. For LM hashes, this might have affected our score somewhat.

Overall, we deserve some penalty on our contest score (which we must in fact have incurred) for not being very careful with our submissions.

Making use of otherwise-idle CPU cycles only.

For Solar's JtR runs during the contest, this was achieved in two ways:

For machines running OpenVZ kernels (two of the three quad-core machines that were put to use), a dedicated OpenVZ container with an x86-64 build of Owl was created (from a recent pre-created template released by Openwall). The container was set to have a relatively low number of cpuunits - specifically, it was set to 100 cpuunits, whereas all others on the system were set to at least 1000 cpuunits each. Then JtR's "Idle" setting was disabled (set to "N"), because OpenVZ uses a two-level CPU scheduler anyway and we only needed to be "nice" to other containers running on the system. Some quick tests proved that this worked as expected.

For a machine running a "regular" (non-OpenVZ) Linux kernel, the "Idle" setting was made use of (kept at its current default of "Y"), which had been tested to work well before (during John development).

Summary.

The contest was fun indeed, but besides being fun it also required a lot of concentration over the 48-hour period (and for a bit longer than that, since there were some preparations to make shortly before contest start). Although we did not incur a direct monetary cost, the cost in people's time was substantial.

The passwords were not real, and the distribution of different kinds of passwords was somewhat non-realistic... but so what.
This meant that part of the challenge was for us (and for other teams) to quickly adapt to attacking these somewhat unusual passwords. This also meant that certain techniques that didn't work very well in this contest would have in fact worked much better on real-world passwords, and vice versa. For example, a few of the passwords hashed with Blowfish-based crypt could be a lot weaker (and would actually get cracked): there exist many systems that use hashes of this type without any password policy enforcement. Most of the Oracle passwords would be weak and would get cracked, and the corresponding usernames would be known reliably. There would be a lot more digits-only passwords. The keyboard-based patterns would not be magically restricted to length 8. Passwords would sometimes be username-based. On the other hand, extensive case toggling would be a bit less effective.

On the bright side, many of us learned new things, and we've identified shortcomings of our approaches and software that are also relevant for real-world password security audits. Certain improvements to John the Ripper will likely be made as a result of this contest.

The files released by KoreLogic will play an important role in testing and tuning of current and future password security software and techniques. It is now possible to derive lists of cracked and uncracked passwords. These passwords, through their hashes, have been tested by many people from many teams using significant cumulative computing resources, as well as many different tools, techniques, and wordlists. This makes them very valuable.

Thanks.

Minga and Hank of KoreLogic did a great job at making this contest possible - thank you!

We'd like to thank team bartavelle and Frank Dittrich for their contributions to our team's cracked passwords pool. We're also grateful to team smelly_weigand for offering their cracked passwords to us, and we're sorry that we never merged those due to a coordination error on our part.
Matt and Fyodor volunteered to represent our team at DEFCON, making our participation official - thank you from the rest of us!

We would also like to thank all other teams that participated and made this contest a real challenge for every team involved. Our special thanks go to team CrackHeads who, while remaining completely separate from us during the contest, also made use primarily of John the Ripper and provided useful feedback in their writeup:

http://www.openwall.com/lists/john-users/2010/08/03/1

Additionally, we'd like to thank KoreLogic for funding the contest, including the cash prizes (and our "3rd eligible place" $100 prize specifically), and CrackHeads for kindly donating $100 out of their $300 cash prize towards JtR development (with the remaining $200 covering their Amazon EC2 costs).

Finally, we'd like to thank Alain Espinosa for contributing his efficient NTLM hashing code (currently in JtR jumbo patch), which was instrumental to efficient use of John the Ripper during the contest. With approval from CrackHeads, we're going to direct the $200 ($100 from team john-users and $100 from CrackHeads) to Alain as a way to thank him in a more tangible way. His contribution is clearly worth it (and more), and not only for the contest!

---
Alexander