Date: Sun, 20 Jul 2014 21:08:41 +0400
From: Solar Designer <solar@...nwall.com>
To: john-dev@...ts.openwall.com
Subject: Re: ZedBoard: bcrypt

Katja,

On Sun, Jul 20, 2014 at 06:52:26PM +0200, Katja Malvoni wrote:
> On 20 July 2014 16:10, Katja Malvoni <kmalvoni@...il.com> wrote:
> >  On 20 July 2014 03:12, Solar Designer <solar@...nwall.com> wrote:
> >
> >> What clock rate?
> >
> > 71 MHz, as it was before (actually, it's a bit more: 71.4). The max
> > frequency reported by the Xilinx tool is 74.2 MHz, but the first frequency
> > above 71 MHz that I can actually use is 76.9 MHz. It might work, but I
> > haven't tried it.
> 
> I generated a bitstream with a 76.9 MHz clock, and the timing score is 0.
> However, it either locks up or reboots the zed system. After I run john,
> the system stops responding.

As I informed you off-list, this triggered the newly added watchdog,
which power-cycled the system.  So unlike with previous voltage drop and
overheating problems, the system didn't reboot on its own - it merely
locked up (stopped notifying the watchdog, which power-cycled it).

I find it puzzling that overclocking bcrypt PL causes PS to lock up.

Anyway, let's not pursue this further for now, and focus on:

> > I'll go for 5 BRAMs/instance, storing the initial S-box values in the
> > unused halves of the 4 BRAMs holding the S-boxes. This way, initialization
> > will require 256 clock cycles. I'm storing the initial values of 2 S-boxes
> > in the higher half of each of the 4 BRAMs. The initialization data is
> > stored twice, but I can copy it in parallel for both instances. I don't
> > use additional BRAMs, and although utilization will be higher, it won't
> > impact the max core count (wider buses were used in the 112-instance
> > approach, and the core count there was limited by available BRAM).
> 
> This is harder than it looks... The first time I run john, the first
> hash is computed correctly and the next one is wrong (fails on
> get_hash[0](1)). After that, it fails on get_hash[0](0), and the
> computed hash is different on every subsequent run. If I reload the
> bitstream, the computed hashes follow the same pattern as in previous
> runs, so it seems as if the initial S-box values have been changed.
> However, in simulation the initial values do not change during
> computation, and every time I store something to BRAM, I set only
> address bits 8 to 0 to make sure only the lower half is modified. Also,
> before anything is assigned to the address bus, it's set to 0, so bit 9
> is zero any time I write to BRAM. I tried two approaches - using the
> readmemh function with my code which infers BRAM, and using Coregen to
> generate IP cores with initial BRAM values.
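For what it's worth, the 256-cycle figure in the plan checks out: 2 S-boxes
per BRAM is 512 words of initialization data, and copying two words per
clock (one per port of a true-dual-port BRAM - my assumption, the actual
copy logic isn't shown here) gives 256 cycles.  A throwaway model:

```python
# Toy model of re-initializing one BRAM's S-boxes from its upper half.
# Assumptions are mine: a 1024-word BRAM, live S-boxes in the lower
# half, pristine initial values in the upper half, and a true-dual-port
# copy moving two words per clock cycle.
DEPTH, HALF = 1024, 512

bram = [0] * HALF + list(range(HALF))   # lower half dirty, upper half pristine

cycles = 0
for addr in range(0, HALF, 2):
    # both ports active in the same cycle: one word each
    bram[addr] = bram[HALF + addr]
    bram[addr + 1] = bram[HALF + addr + 1]
    cycles += 1

assert cycles == 256                    # matches the 256-cycle estimate
assert bram[:HALF] == bram[HALF:]       # lower half fully restored
```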

Weird.  Are you sure you're not leaving bit 9 floating at any time you
access BRAM?  "before anything is assigned to the address bus, it's set
to 0" doesn't necessarily imply "bit 9 is zero any time I write to
BRAM".

Alexander
