Message-Id: <F4660FA8-F218-415C-BEFB-9E1C3D9947CE@gmail.com>
Date: Mon, 6 Jul 2015 10:24:14 +0800
From: Lei Zhang <zhanglei.april@...il.com>
To: john-dev@...ts.openwall.com
Subject: Re: extend SIMD intrinsics


> On Jul 5, 2015, at 2:44 PM, Solar Designer <solar@...nwall.com> wrote:
> 
> While in general you're right, for loads and stores in particular an
> alternative approach may be to stop using those intrinsics, and instead
> use simple assignments (or nothing at all, within expressions) at the C
> level.  I don't know whether this will result in any worse code being
> generated on any relevant arch; I think I haven't run into such cases so
> far.  For example, in yescrypt-simd.c, I am not using any load/store
> intrinsics, and the generated code is good.

That makes sense. I think compilers should have no difficulty converting a vector assignment into a proper vector load/store. But code written this way may look a bit messier when pointer dereferencing is involved, e.g.:

void func(uint32_t *buf) {
    vtype32 v = *((vtype32*)buf); // could be vload(buf)
    ...
}
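
For comparison, the intrinsic-based form of the same load on x86/SSE2 would look roughly like the sketch below (I'm using the raw __m128i type and _mm_loadu_si128 directly here, not JtR's vtype/vload macros):

#include <stdint.h>
#include <emmintrin.h>

void func2(uint32_t *buf) {
    // explicit (unaligned) load intrinsic instead of a plain assignment
    __m128i v = _mm_loadu_si128((const __m128i *)buf);
    // ... use v ...
    (void)v;
}

Either form should end up as a single vector load instruction, so the difference is mostly cosmetic.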


> Now, there might or might not be a difference as it relates to C strict
> aliasing rules.  Maybe the load/store intrinsics allow us to perform
> an equivalent of typecasts yet safely circumvent C strict aliasing rules.

If I understand the strict aliasing rule correctly, I don't see how the current use of intrinsics in JtR could violate the rule, as we don't use different vector types to access the same memory address.
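
To make that concrete: the classic violation is accessing the same bytes through unrelated scalar types, whereas the vector casts tend to be safe at least with GCC and Clang, whose headers declare __m128i (and the other intrinsic types) with __attribute__((may_alias)). A rough sketch of the distinction follows; the v4u32 typedef is just for illustration here, not JtR's actual vtype definition:

#include <stdint.h>

// Risky: uint16_t is unrelated to uint32_t, so under the strict
// aliasing rules the compiler may assume this store cannot modify
// any uint32_t object -- this kind of punning is undefined.
void scalar_pun(uint32_t *buf) {
    uint16_t *p = (uint16_t *)buf;
    p[0] = 1;
}

// Illustrative vector type; the may_alias attribute (which GCC's own
// __m128i also carries) lets it legally alias any other type.
typedef uint32_t v4u32 __attribute__((vector_size(16), may_alias));

v4u32 vector_load(const uint32_t *buf) {
    return *(const v4u32 *)buf;
}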


Lei
