Date: Fri, 17 May 2013 14:31:09 -0400
From: Yaniv Sapir <yaniv@...pteva.com>
To: john-dev@...ts.openwall.com
Subject: Re: Parallella: bcrypt

Hi Katja,

On Fri, May 17, 2013 at 12:44 PM, Katja Malvoni <kmalvoni@...il.com> wrote:

> Hello,
>
> I read the Epiphany documentation and went through the bcrypt
> implementation, and now I am thinking about a possible approach. At first
> sight, it seems that having one bcrypt instance per core could work. Since
> each core has 32 KB of memory divided into 4 memory banks, memory isn't a
> problem. What worries me is how big an impact the overhead of writing the
> S-boxes into every core's local memory will have, and whether those 16
> instances running on 1 GHz cores will be enough to still provide a speedup
> when compared to a CPU.
>

I think the right way to think about this is as a technology testbed.
The Epiphany architecture is easily scalable, either on-chip (in
products other than the E16) or as a chip cluster. Once you develop your
code on the Parallella, it should be easy to scale it to as many cores as
are available. Additionally, the Epiphany can serve as an accelerator
working alongside the CPU, much as a GPU does. Thus, comparing this
specific platform against the CPU should not be the benchmark here.



> If I got it right, from this sentence (
> http://www.adapteva.com/wp-content/uploads/2012/12/epiphany_arch_reference_3.12.12.18.pdf
> p. 11)
>
> "Each routing link can transfer up to 8 bytes of data on every clock
> cycle, allowing 64 bytes of data to flow through every routing node on
> every clock cycle, supporting an effective bandwidth of 64 GB/sec at a mesh
> operating frequency of 1GHz."
>
> writing the S-boxes can be pipelined if it is started from the farthest
> core on the chip first. But what I couldn't find is how to do that in
> practice. In theory, the routing nodes could pass on the data in one cycle
> and then receive new data in the next cycle?
>

Each node transmits its data to the next node and receives new data from
the previous node, in each of the four directions, on every cycle.
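
For what it's worth, here is a rough device-side sketch of one way to chain
the copy from core to core in software. This is only an illustration under
my assumptions (e-lib names as I recall them, which may differ slightly
between SDK versions, and no synchronization flags), not a recommended
recipe; the mesh routing itself is transparent to the programmer, so the
chain order is purely a software decision:

#include <stdint.h>
#include <string.h>
#include "e_lib.h"   /* Epiphany device-side library */

#define SBOX_BYTES 4096   /* 4 x 256 32-bit S-box entries */

/* Local copy of the S-boxes, filled either by the host or by the previous
 * core in the chain. */
uint32_t sbox[SBOX_BYTES / 4];

/* Forward this core's S-box copy to the next core in a simple row-major
 * chain. Ready/done flags between the cores are omitted for brevity. */
void forward_sboxes(void)
{
	unsigned row  = e_group_config.core_row;
	unsigned col  = e_group_config.core_col;
	unsigned nrow = row, ncol = col + 1;

	if (ncol == e_group_config.group_cols) {
		ncol = 0;
		nrow = row + 1;
	}
	if (nrow == e_group_config.group_rows)
		return;   /* last core in the chain */

	/* Map the neighbor's sbox[] into the global address space and copy;
	 * the stores themselves are routed over the mesh by the hardware. */
	void *remote = e_get_global_address(nrow, ncol, sbox);
	memcpy(remote, sbox, SBOX_BYTES);
}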


> The e_write() function described in
> http://www.adapteva.com/wp-content/uploads/2013/04/epiphany_sdk_reference.4.13.03.301.pdf
> doesn't provide enough information to draw a conclusion, but it also
> doesn't provide any control over the writing process itself. In the worst
> case it would take 16*512 cycles just to transfer the S-boxes.
>

e_write() is a *host side* API. It is used to write data from the host
application's address space to the Epiphany's address space. This is not
directly related to the discussion of the mesh nodes above. The
e_write()/e_read() speed is far slower than the intra-chip data speeds. If
the S-boxes are 4 KB in size and are available at the chip boundary for
immediate use (which is pretty unrealistic), then spreading the data across
the chip could theoretically (assuming no communication overheads) take
16*512 cycles: 4 KB at 8 bytes per cycle is 512 cycles for one core's copy,
and 16 cores' worth of data must enter through the chip edge. However,
bringing the data *to* the Epiphany chip will be much slower.
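
To make the host-side path concrete, a minimal sketch could look like the
following. The e-hal calls are the ones from the SDK (exact signatures vary
between releases), and the 0x4000 local offset is just a placeholder that
would have to match the device-side memory layout:

#include <stdint.h>
#include <e-hal.h>   /* Epiphany host-side library */

#define SBOX_WORDS 1024      /* 4 x 256 32-bit entries = 4 KB */
#define SBOX_LOCAL 0x4000    /* placeholder offset in core-local SRAM */

int main(void)
{
	e_platform_t platform;
	e_epiphany_t dev;
	uint32_t sbox[SBOX_WORDS] = { 0 };   /* stand-in for the initial S-boxes */
	unsigned row, col;

	e_init(NULL);
	e_reset_system();
	e_get_platform_info(&platform);

	/* Open a workgroup spanning the whole chip (4x4 on the E16). */
	e_open(&dev, 0, 0, platform.rows, platform.cols);

	/* Copy the S-boxes into each core's local memory, one core at a time. */
	for (row = 0; row < platform.rows; row++)
		for (col = 0; col < platform.cols; col++)
			e_write(&dev, row, col, SBOX_LOCAL, sbox, sizeof(sbox));

	e_close(&dev);
	e_finalize();
	return 0;
}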


>
> What is the overhead in case the S-boxes are hard-coded in the device
> code? How much time would e_load() take? I suppose it should be faster
> than using e_write()? I wasn't able to find concrete answers to those
> questions.
>

e_load() actually uses e_write() to load the chip with the program image,
so it should be just as efficient to load the S-boxes as it would be to
write them. Note that if the S-boxes do not change, then you may be able to
keep them in memory in case you need to load a different program.
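
So hard-coding them is mostly a question of data layout rather than speed.
A device-side sketch of that option (the Blowfish constants and the exact
bank placement via the linker script are left out, since those details
depend on your build):

#include <stdint.h>

/* Device-side sketch: the initial bcrypt (Blowfish) S-boxes compiled into
 * the program image, so that e_load() transfers them together with the code.
 * Pinning the array to a specific local-memory bank would be done through
 * the linker script or a section attribute; that detail is omitted here. */
static uint32_t sbox[4][256] = {
	{ 0 }   /* stand-in; a real build uses the Blowfish initial constants */
};

int main(void)
{
	/* Per-core bcrypt work would go here: key the S-boxes from the
	 * candidate password, run the cost loop, report the result back. */
	(void)sbox;
	return 0;
}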


>
> Should I try this approach of one instance per core, or should I take a
> few more hours of thinking and come up with something better? And am I
> missing something in my approach?
>
> Thank you
>
> Katja
>

-- 
===========================================================
Yaniv Sapir
Adapteva Inc.
1666 Massachusetts Ave, Suite 14
Lexington, MA 02420
Phone: (781)-328-0513 (x104)
Email: yaniv@...pteva.com
Web: www.adapteva.com
============================================================
