From: Andrew Morton on 10 May 2010 17:30

On Mon, 10 May 2010 12:42:34 +0200 "Oskar Schirmer" <os(a)emlix.com> wrote:

> With dma based spi transmission, data corruption
> is observed occasionally. With dma buffers located
> right next to msg and xfer fields, cache lines
> correctly flushed in preparation for dma usage
> may be polluted again when writing to fields
> in the same cache line.
>
> Make sure cache fields used with dma do not
> share cache lines with fields changed during
> dma handling. As both fields are part of a
> struct that is allocated via kzalloc, thus
> cache aligned, moving the fields to the 1st
> position and insert padding for alignment
> does the job.

This sounds odd.  Doesn't it imply that some code somewhere is missing
some DMA synchronisation actions?

> v2: add a comment to explain why alignment is needed
>
> v3: fix the typo in comment and layout (- to end of line)
>
> diff --git a/drivers/input/touchscreen/ad7877.c b/drivers/input/touchscreen/ad7877.c
> index 885354c..9ebb1b4 100644
> --- a/drivers/input/touchscreen/ad7877.c
> +++ b/drivers/input/touchscreen/ad7877.c
> @@ -153,15 +153,29 @@ enum {
>   */
>
>  struct ser_req {
> +        u16             sample;
> +        /*
> +         * DMA (thus cache coherency maintenance) requires the
> +         * transfer buffers to live in their own cache lines.
> +         */
> +        char            __padalign[L1_CACHE_BYTES - sizeof(u16)];

It would be better to use __cacheline_aligned, rather than open-coding
things in this manner.
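To make the two approaches being contrasted here concrete, here is a minimal
sketch. It is not the actual ad7877 structure: the struct names are made up,
and the msg/xfer fields stand in for whatever else the driver keeps next to
the DMA buffer.

    #include <linux/cache.h>        /* L1_CACHE_BYTES, ____cacheline_aligned */
    #include <linux/spi/spi.h>      /* struct spi_message, struct spi_transfer */
    #include <linux/types.h>

    /* Open-coded variant, as in the patch: the DMA'd field comes first
     * and is padded out by hand so msg/xfer land on the next cache line. */
    struct ser_req_padded {
            u16                     sample;  /* written by DMA */
            char                    __padalign[L1_CACHE_BYTES - sizeof(u16)];
            struct spi_message      msg;     /* written by the CPU meanwhile */
            struct spi_transfer     xfer;
    };

    /* Annotated variant: the compiler places the DMA'd field on a cache
     * line boundary instead of the driver sizing the padding itself. */
    struct ser_req_annotated {
            struct spi_message      msg;
            struct spi_transfer     xfer;
            u16                     sample ____cacheline_aligned;  /* written by DMA */
    };

As the rest of the thread points out, the annotation only pins down where the
field starts; any field declared after it in the struct can still share the
tail of that cache line.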
From: Andrew Morton on 11 May 2010 02:30

On Tue, 11 May 2010 02:11:41 -0400 Mike Frysinger <vapier.adi(a)gmail.com> wrote:

> >         unsigned                pending:1;      /* P: lock */
> > +
> > +       /*
> > +        * DMA (thus cache coherency maintenance) requires the
> > +        * transfer buffers to live in their own cache lines.
> > +        */
> > +       u16 conversion_data[AD7877_NR_SENSE] ____cacheline_aligned;
> >  };

(^^stupid gmail)

> i'm not sure this is correct.  the cacheline_aligned attribute makes sure
> it starts on a cache boundary, but it doesnt make sure it pads out to
> one.  so it might work more of the time, but i dont think it's
> guaranteed.

yup.  You'd need to put something like

        int pad ____cacheline_aligned;

_after_ the trashable field.  Then look at the .s file and make sure it
came out right ;)
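A sketch of the layout Andrew describes, with made-up field names and a
placeholder element count standing in for the driver's own: the DMA buffer is
aligned to the start of a cache line, and a dummy aligned member placed after
it keeps any later field off that line.

    #include <linux/cache.h>        /* ____cacheline_aligned */
    #include <linux/types.h>

    #define EXAMPLE_NR_SENSE 8      /* placeholder, not the driver's AD7877_NR_SENSE */

    struct example_state {
            /* fields the CPU keeps writing while the transfer runs */
            unsigned        busy:1;
            unsigned        pending:1;      /* P: lock */

            /* DMA buffer: starts on its own cache line ... */
            u16             conversion_data[EXAMPLE_NR_SENSE] ____cacheline_aligned;
            /* ... and a dummy aligned member after it guarantees that
             * nothing else shares the tail of that line. */
            int             pad ____cacheline_aligned;
    };

Checking offsetof()/sizeof() on the result, or the generated assembly as
suggested above, confirms whether the layout actually came out that way on
the target architecture.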
From: Pekka Enberg on 11 May 2010 02:50

Hi Dmitry,

On Tue, May 11, 2010 at 9:33 AM, Dmitry Torokhov
<dmitry.torokhov(a)gmail.com> wrote:
>> what guarantee exactly do you have for that statement ?
>
> The data is kmalloced, kmalloc aligns on cacheline boundary AFAIK which
> means that next kmalloc data chunk will not share "our" cacheline.

No, there are no such guarantees. kmalloc() aligns on
ARCH_KMALLOC_MINALIGN or ARCH_SLAB_MINALIGN depending on which is
bigger but beyond that, there are no guarantees. You can, of course,
use kmem_cache_create() with SLAB_HWCACHE_ALIGN to align on cacheline
boundary.

                        Pekka
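For reference, a minimal sketch of the kmem_cache_create() + SLAB_HWCACHE_ALIGN
route mentioned above; the cache name, struct contents and init function are
illustrative stand-ins, not code from the driver.

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/slab.h>         /* kmem_cache_create, SLAB_HWCACHE_ALIGN */
    #include <linux/types.h>

    struct dma_req {                /* stand-in for whatever gets DMA'd into */
            u16     sample;
    };

    static struct kmem_cache *dma_req_cache;

    static int __init example_setup(void)
    {
            /*
             * SLAB_HWCACHE_ALIGN asks for objects aligned to a hardware
             * cache line, which plain kmalloc() does not promise beyond
             * ARCH_KMALLOC_MINALIGN / ARCH_SLAB_MINALIGN.
             */
            dma_req_cache = kmem_cache_create("example_dma_req",
                                              sizeof(struct dma_req), 0,
                                              SLAB_HWCACHE_ALIGN, NULL);
            if (!dma_req_cache)
                    return -ENOMEM;
            return 0;
    }

Objects would then come from kmem_cache_zalloc(dma_req_cache, GFP_KERNEL) and
go back via kmem_cache_free(). Note that this only controls sharing between
separate allocations; fields inside one object can still share a cache line,
which is the layout question the rest of the thread is about.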
From: Christoph Lameter on 11 May 2010 10:50

On Tue, 11 May 2010, Pekka Enberg wrote:

> > The data is kmalloced, kmalloc aligns on cacheline boundary AFAIK which
> > means that next kmalloc data chunk will not share "our" cacheline.
>
> No, there are no such guarantees. kmalloc() aligns on
> ARCH_KMALLOC_MINALIGN or ARCH_SLAB_MINALIGN depending on which is
> bigger but beyond that, there are no guarantees. You can, of course,
> use kmem_cache_create() with SLAB_HWCACHE_ALIGN to align on cacheline
> boundary.

Note the difference between kmalloc alignment and the alignment of
manually created slabs. Kmalloc data is often aligned on a cacheline
boundary due to the allocator's methods of placing data in pages, but
there is no guarantee that this will always be the case. In particular,
if slab debugging is on, the alignments become different.
From: Dmitry Torokhov on 11 May 2010 13:00
On Tue, May 11, 2010 at 09:42:03AM +0300, Pekka Enberg wrote:
> Hi Dmitry,
>
> On Tue, May 11, 2010 at 9:33 AM, Dmitry Torokhov
> <dmitry.torokhov(a)gmail.com> wrote:
> >> what guarantee exactly do you have for that statement ?
> >
> > The data is kmalloced, kmalloc aligns on cacheline boundary AFAIK which
> > means that next kmalloc data chunk will not share "our" cacheline.
>
> No, there are no such guarantees. kmalloc() aligns on
> ARCH_KMALLOC_MINALIGN or ARCH_SLAB_MINALIGN depending on which is
> bigger but beyond that, there are no guarantees. You can, of course,
> use kmem_cache_create() with SLAB_HWCACHE_ALIGN to align on cacheline
> boundary.
>

The architectures that we are trying to deal with here should be
forcing kmalloc to the cache boundary already though - otherwise they
would not be able to use kmalloced memory for DMA buffers at all. Or am
I utterly lost here?

--
Dmitry
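As a hedged illustration of the point being raised here (not lifted from any
particular architecture's real headers): on a platform with non-coherent DMA
the arch typically raises the minimum kmalloc() alignment to a whole cache
line, roughly like this.

    #include <linux/cache.h>        /* L1_CACHE_BYTES */

    /* Illustrative only -- an arch-header-style sketch, not actual code. */
    #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES
    #define ARCH_KMALLOC_MINALIGN   ARCH_DMA_MINALIGN

That keeps separate kmalloc() allocations from ever sharing a cache line,
which is what makes them usable as DMA buffers at all. It says nothing about
fields inside a single allocated struct, which is the layout problem the
ad7877 patch is addressing.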