Message-ID: <20181010135241.GN17110@brightrain.aerifal.cx>
Date: Wed, 10 Oct 2018 09:52:41 -0400
From: Rich Felker <dalias@...c.org>
To: musl@...ts.openwall.com
Subject: Re: TLSDESC register-preserving mess

On Wed, Oct 10, 2018 at 03:19:26PM +0200, Szabolcs Nagy wrote:
> * Rich Felker <dalias@...c.org> [2018-10-09 21:26:20 -0400]:
> > As written, the aarch64 and arm asm save and restore float/vector
> > registers around the call, but I don't think they're future-proof
> > against ISA extensions that add more such registers; if libc were
>
> at least on aarch64 for now the approach is to add new vector
> registers to the tlsdesc clobber list in gcc (and document
> this in the sysv abi, except that's not published yet).
>
> the reasoning is that it makes it safe to use tlsdesc with
> old dynamic linker (new vector registers overlap with old
> ones so old dynamic linker can clobber them) without much
> practical cost: it's unlikely that vector code needs to
> access tls (in vectorized loops the address is hopefully
> computed outside the loop and vector math code should not
> use tls state in the fast path if it wants to be efficient)

Any idea if other archs are willing to commit to the same?

Even if they are, the second idea of getting rid of __tls_get_new
entirely is still somewhat appealing, in that it makes all dynamic TLS
access faster and reduces the amount of asm needed. But a commitment
not to add new call-saved registers to the TLSDESC ABIs would solve
the immediate problem (albeit with some hwcap fiddling for 32-bit x86,
where mmx, sse, etc. perhaps need to be saved conditionally).

Rich
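
[Editor's illustration of the second idea above: a rough C sketch, not
musl's actual code, of what the dynamic-case TLSDESC resolver could
reduce to once dynamic TLS for every loaded module is installed eagerly
(e.g. at dlopen/pthread_create time), so the resolver never allocates
and never has to call into C code that might clobber vector registers.
The helper names (current_dtv, current_thread_pointer) and the struct
layouts are assumptions made up for the sketch, not real interfaces.]

    #include <stddef.h>

    /* Hypothetical descriptor layouts, for illustration only. */
    struct tls_index { size_t module, offset; };

    struct tlsdesc {
        ptrdiff_t (*entry)(struct tlsdesc *);
        struct tls_index *arg;
    };

    /* Assumed to exist: per-thread DTV that already has a valid slot
     * for every loaded module, and the thread pointer. */
    extern void **current_dtv(void);
    extern char *current_thread_pointer(void);

    /* Returns the offset of the TLS object from the thread pointer;
     * compiler-generated code at the access site adds it back to the
     * thread pointer. With the DTV guaranteed populated, this is the
     * whole slow path: no allocation, no extra register spilling. */
    ptrdiff_t tlsdesc_resolve_dynamic(struct tlsdesc *td)
    {
        void **dtv = current_dtv();
        char *obj = (char *)dtv[td->arg->module] + td->arg->offset;
        return obj - current_thread_pointer();
    }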