Date: Sun, 10 Sep 2006 12:14:59 +1200
From: Russell Fulton <r.fulton@...kland.ac.nz>
To: john-users@...ts.openwall.com
Subject: Re: need advice on setting up cracking system for large
 organisation

First off, thanks Solar for a prompt and very thorough response!  Yes,
there are risks in this approach, and I hope that I have considered them
all.  Thanks for reviewing them for me.  I've pruned some of your
responses and commented inline on others...

Solar Designer wrote:
> 
>> I intend to set up a (hopefully ;) secure drop box where admins can use
>> scp to drop off password files to be cracked.  I will then farm the
>> cracking out to a bunch of 'servers' to do the actual cracking using
>> 'idle' cycles on various big machines in the data centre.
> 
> This is risky.
>
I'm aware of that...
> 
> The "secure drop box" becomes a single point of failure.  You can
> improve its security by actually using a dedicated box with only sshd on
> it (perhaps just install Owl and not enable any other services) and by
> setting a forced command in authorized_keys.  (scp is no good since it
> can also be used to retrieve the files back.  It should not be allowed.)

I am intending to use various strategies like these to mitigate the risk
as much as possible.
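
For the curious, the sort of thing I have in mind for the forced
command (the script name and paths are purely illustrative, not a
tested setup):

    # sketch of ~dropbox/.ssh/authorized_keys, one line per admin key:
    command="/usr/local/bin/receive-pwfile",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... admin@host

    # and /usr/local/bin/receive-pwfile, which only ever writes:
    #!/bin/sh
    umask 077    # spool files readable by the cracking account only
    exec cat > /var/spool/pwdrop/`date +%Y%m%d%H%M%S`.$$

Admins would then submit with "ssh dropbox < passwd.file", and nothing
can be pulled back out over that channel.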

> 
> What's worse, _each_ one of your cracking machines would also be a
> single point of failure since its compromise might result in the
> compromise of password hashes for a large number of other systems.  This
> risk can be mitigated a little bit by having your "secure drop box"
> conceal usernames and hostnames in password files that it passes on for
> cracking.

Yes, I've thought of this too.  In my case these machines will be in
a heavily firewalled part of the network (and yes, I know firewalls are
not a panacea ;).  If any of these machines gets compromised then we
have major problems anyway.  Long term, if this proves valuable, I will
get dedicated resources for the project.  To get it off the ground I
want to leverage those spare CPU cycles.  It is easier to make a
business case when you have some solid evidence ;)
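
Anonymising the files on the drop box should only take a few lines of
awk.  A minimal sketch, assuming colon-separated passwd/shadow input
(the file names are made up):

    # swap usernames for opaque tokens; keep the mapping file private
    awk -F: -v OFS=: '{ print "u" NR, $1 > "user.map"; $1 = "u" NR; print }' \
        shadow.in > shadow.anon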
> 
> Overall, even with the improvements that I've suggested, I think that
> you might be making your network less secure, not more secure.
> 
> How fast are the "big machines" when it comes to password cracking?
> They might have disk arrays, large memories and caches, but this is of
> no use for John the Ripper.  Chances are that you can find a single
> workstation that would be more suitable for the task.

Good point, particularly if JtR is not threaded.  Many of these boxes
are multi-CPU.  I do wish VMware had some way of allowing one VM to
soak up idle cycles from the others.  Then I could set up a VM on each
of our ESX boxes which I could lock down really tightly.  Still not as
secure as a standalone box, but better than sharing an OS with other
users.

> You would need to
> secure it (or just CD-boot or install Owl and disable even sshd with
> "chkconfig sshd off; service sshd stop").  Then you would pull password
> files to that system and run JtR against them locally.

This would work well in a corporate environment, and I may consider it
for machines in central IT (though many of those use 2FA), but for
faculty machines it is much easier to have a drop-box style of
operation.  Ideally admins would do their own cracking on their own
boxes, along with proactive checking when passwords are set or changed
-- I've been suggesting that for years -- everyone agrees it is a good
idea, but it never gets to the top of anyone's priority list.  So I
have decided to do something myself.
> 
> On many Unix systems, you can deploy pam_passwdqc:
> 
> 	http://www.openwall.com/passwdqc/
> 
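
(For reference, deployment is typically a one-line change to the
password stack of the relevant PAM config, e.g. /etc/pam.d/system-auth
on Red Hat-ish systems.  The settings below are passwdqc's documented
defaults, to be tuned per site:

    password required pam_passwdqc.so min=disabled,24,11,8,7 max=40

)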
>> One of the major issues is that we have *many* different hash formats
>> out there (probably one of everything that has been used in the last 10
>> years :).  My understanding is that John will crack only one hash type
>> at a time
> 
> Correct.
> 
>> so I need to sort incoming files by type.
> 
> Not necessarily.  You can run multiple instances of JtR on all of your
> files at once, with different --format=... option settings.
> Essentially, you will let JtR perform this "sorting by hash type".

Ah, thanks, I had not thought of that...
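
So per submitted file the drop box could just spawn one instance per
format, something like this (the format names are the core JtR 1.7
ones; an untested sketch):

    # an instance whose format matches no hashes in the file just
    # exits, so this effectively does the sorting for free
    for fmt in des bsdi md5 bf lm; do
        ./john --format=$fmt --session=$fmt --wordlist=all.lst --rules pw.txt &
    done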

>> My initial idea is to run a single mode crack on the file as soon as it
>> is received and this will find any obvious bloopers and tell me what
>> type of hashes the file contains.  Sensible?
> 
> Yes.  It also lets you filter out hashes of the weakest passwords before
> you transfer the rest to other machines.

Yes, that had also occurred to me, and the submitter will get quick
feedback...
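
Roughly, per submission (I'm assuming --show=LEFT dumps the uncracked
hashes; failing that, grepping out the cracked accounts works too):

    ./john --single pw.txt                # quick pass for the obvious bloopers
    ./john --show pw.txt                  # cracked ones, for immediate feedback
    ./john --show=LEFT pw.txt > pw.rest   # uncracked hashes to farm out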
> 
>> I have tried a run with john using the mangled list against 10 users in
>> a BSD MD5 hash format password file.  It took 15 hours on a fairly fast
>> Intel processor.
> 
> That's because the pre-mangled wordlist is very large and the MD5-based
> hashes are quite slow to compute.  If you intend to be running against a
> substantially larger number of hashes of this type, you can opt to use
> smaller wordlists.

In a university environment we do get people using words from foreign
languages (particularly forms of their names) in the belief that these
are 'secure'.  So that's why I used the full list.
> 
>> Any guesstimates of how this will scale with number of
>> passwords?  Given the 16bit salt I'm guessing that for UNIX systems it
>> will be fairly linear with a slope of 1.
> 
> The FreeBSD-style MD5-based hashes typically use 48-bit salts.  Yes, the
> time for a JtR wordlist run should increase almost linearly with more
> hashes of this type - although in practice many systems are known to
> generate salts poorly, so there will likely be some duplicates despite
> the large salt size.
> 
> With the traditional DES-based hashes, things are different.  These use
> 12-bit salts, so even with perfect salt generation you should expect
> duplicate salts starting with just 100+ hashes.  For 1,000 hashes, you
> should expect under 900 different salts.  For 10,000, it's under 3,750.
> For 100,000, it's all the 4,096 possible salts.
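
Handy figures.  For anyone wanting to extrapolate: with uniformly
random 12-bit salts, the expected number of distinct salts among n
hashes is 4096 * (1 - (4095/4096)^n), which is a one-liner to evaluate:

    awk -v n=1000  'BEGIN { print 4096 * (1 - (4095/4096)^n) }'   # ~887
    awk -v n=10000 'BEGIN { print 4096 * (1 - (4095/4096)^n) }'   # ~3740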
> 
>> If this is the case should I
>> send one file to each machine with the full list -- this is certainly
>> easier.
> 
> Yes, from a performance point of view, it is OK to do that for slow
> hashes.  It might not be reasonable to do the same for faster hashes,
> even when all salts are different, since with those the key setup
> "overhead" is non-negligible and you would unnecessarily duplicate it.
> 
>> OTOH I assume that Windows hashes, which I understand to be unsalted,
>> should scale with a slope of well under 1?
> 
> Yes, for Windows hashes (LM and NTLM), the slope for JtR run time vs.
> number of hashes is close to 0.

Right, the message here is that I should use different strategies
depending on the hash type.  Useful to have this confirmed.
> 
>> The initial aim of this exercise is to find the low hanging fruit that
>> might be broken by brute force attack so I have generated a shorter
>> mangled list with length limited to 6 characters.
> 
> I think that limiting the length is not a good way to shorten your
> wordlist.  Those longer candidate passwords that you're omitting in this
> way are not necessarily any stronger.

True, but most of the passwords I've seen used in the current brute
force attacks are under 6 characters; that was the rationale for
choosing this strategy.  I have access to some wordlists known to have
been used by attackers in the past, and I'll add them as well and run
the result through unique.
> 
> Instead, you can review the list of language-specific wordlist files
> that went into all.lst (this list of files is included in comments at
> the start of all.lst) and generate a new combined wordlist omitting
> many larger language-specific wordlists.  For example, you would include
> languages/English/1-tiny/* and languages/English/2-small/*, but omit
> languages/English/3-large/* and languages/English/4-extra/*.  Then
> run it through "unique" and "john -w=new.lst -rules -stdout | unique".
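
That is, something like this (paths per the layout of the wordlist
collection):

    cat languages/English/1-tiny/* languages/English/2-small/* |
        ./unique new.lst
    ./john -w=new.lst -rules -stdout | ./unique new-mangled.lst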
> 
>> or should I try incremental for some set time/length limit?
> 
> Yes - but in addition to rather than as a replacement for wordlists.
> You limit the time, not the length (leave MaxLen at 8 and let JtR decide
> what lengths are optimal to try - it does so based on statistical
> information found in the .chr files).
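
Time-boxing should be easy given that JtR checkpoints its state; a
sketch (the session name is arbitrary):

    ./john --incremental --session=inc pw.rest &
    JPID=$!
    sleep 86400; kill $JPID   # john traps this and saves its state
    # next maintenance window:
    ./john --restore=inc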
> 
>> Longer term plans are to keep hitting the files until we are fairly sure
>> they conform to policy.  I also intend to diff the files as they are
>> resubmitted and test new passwords to the current level of the whole file.
> 
> Yes.  comm(1) is the tool to use for this - after sort(1) of both
> revisions, of course.
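
For the record, that boils down to:

    sort pw.old > pw.old.s
    sort pw.new > pw.new.s
    comm -13 pw.old.s pw.new.s > pw.changed   # lines only in the new file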
> 
>> Once this set up is up and running all systems that have password based
>> services exposed to the Internet will be required to submit files at
>> least once a month or when they change.  Ditto for all the systems in
>> the Data centre -- we have to do something useful with all those wasted
>> cpu cycles :)
> 
> You might want to relax this policy for systems where you deploy
> pam_passwdqc. 

This is another drum I've been beating for a long time.  We use
cracklib on our central authentication system (and have done so for
many years).  Again, I'm hoping that once I demonstrate that I can
break passwords on their systems, installing proactive checkers will
suddenly be at the top of admins' priority lists instead of on the "it
would be nice" list.

We had one instance of a faculty server account being brute forced
earlier this year.  I pointed the admin at JtR and he broke 10 more
accounts overnight and installed pam_passwdqc in the morning ;)

> You will need to check password files even off those
> systems initially - to ensure that the policy enforcement is working as
> it is supposed to - but then it might not be reasonable to put password
> files off those systems at risk.

Agreed.

> 
> Now, this was a lengthy response.  I hope it helps.
> 
It most certainly has.  Thanks!  And I hope others on this list have
found this discussion useful too.

I've just decided that I desperately need the Pro version ;)  I'll
order it on Monday.

In conclusion: this project is not without risks, but I believe those
risks are outweighed by the potential benefits, and I will be pushing
pam_passwdqc as part of the project.  We have just implemented two
factor authentication for critical systems in the core, so those
systems are not at risk.  Some departments are also using RSA for
their servers, which is great.  This effort is primarily aimed at
faculty systems used by students and staff for shell access.  In fact,
I'm hoping that the project will do itself out of a job as admins move
to proactive checkers once they see what their users are doing.  I
just wish I had a good cheap/free equivalent for Windows.


Cheers, Russell

