From: Christoph Hellwig on 22 Jun 2010 10:50

On Mon, Jun 21, 2010 at 04:19:39PM -0700, Dan Magenheimer wrote:
> [PATCH V3 3/8] Cleancache: core ops functions and configuration
>
> Cleancache core ops functions and configuration

NACK for code that just adds random hooks all over VFS and even
individual FS code, does an EXPORT_SYMBOL but doesn't actually introduce
any users.

And even if it had users these would have to be damn good ones given how
invasive it is. So what exactly is this going to help us? Given your
affiliation probably something Xen related, so some real use case would
be interesting as well instead of just making Xen suck slightly less.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Nitin Gupta on 22 Jun 2010 11:20

On 06/22/2010 08:13 PM, Christoph Hellwig wrote:
> On Mon, Jun 21, 2010 at 04:19:39PM -0700, Dan Magenheimer wrote:
>> [PATCH V3 3/8] Cleancache: core ops functions and configuration
>>
>> Cleancache core ops functions and configuration
>
> NACK for code that just adds random hooks all over VFS and even
> individual FS code, does an EXPORT_SYMBOL but doesn't actually introduce
> any users.
>
> And even if it had users these would have to be damn good ones given how
> invasive it is. So what exactly is this going to help us? Given your
> affiliation probably something Xen related, so some real use case would
> be interesting as well instead of just making Xen suck slightly less.

One use case of cleancache is to provide transparent page cache
compression support. Currently, I'm working on 'zcache', which provides
hooks for cleancache callbacks to implement the same.

Page cache compression is expected to benefit use cases where memory is
the bottleneck. In particular, I'm interested in the KVM virtualization
case, where this can allow running more VMs per host for a given amount
of RAM.

The zcache code is under active development and a working snapshot can
be found here:
http://code.google.com/p/compcache/source/browse/#hg/sub-projects/zcache
(sorry for lack of code comments in its current state)

Thanks,
Nitin
From: Dan Magenheimer on 22 Jun 2010 12:30

Hi Christoph --

Thanks for the comments... replying to both in one reply.

> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>
> What all this fails to explain is that this actually is useful for?

See FAQ #1 in patch 1/8 (and repeated in patch 0/8). But, in a few
words, it's useful for maintaining a cache of clean pages (for which the
kernel has insufficient RAM) in "other" RAM that's not directly
accessible or addressable by the kernel (such as hypervisor-owned RAM or
kernel-owned RAM that is secretly compressed). Like the kernel's page
cache, use of cleancache avoids lots of disk reads ("refaults"). And
when kernel RAM is scarce but "other" RAM is plentiful, it avoids LOTS
and LOTS of disk reads/refaults.

> Subject: Re: [PATCH V3 3/8] Cleancache: core ops functions and
> configuration
>
> On Mon, Jun 21, 2010 at 04:19:39PM -0700, Dan Magenheimer wrote:
> > [PATCH V3 3/8] Cleancache: core ops functions and configuration
> >
> > Cleancache core ops functions and configuration
>
> NACK for code that just adds random hooks all over VFS and even
> individual FS code, does an EXPORT_SYMBOL but doesn't actually
> introduce any users.

There's a bit of a chicken-and-egg here. Since cleancache touches code
owned by a number of maintainers, it made sense to get that code
reviewed first and respond to the feedback of those maintainers. So if
this is the only remaining objection, we will proceed next with
introducing users. See below for a brief description.

> And even if it had users these would have to be damn good ones given
> how invasive it is.

I need to quibble with your definition of "invasive". The patch adds 43
lines of code (not counting comments and blank lines) in VFS/filesystem
code. These lines have basically stayed the same since 2.6.18, so the
hooks are clearly not in code that is rapidly changing... so maintenance
should not be an issue.

The patch covers four filesystems and implements an interface that
provides both reading/writing to an "external" cache AND coherency with
that cache. And all of these lines of code either compile into
nothingness when CONFIG_CLEANCACHE is off, or become a
compare-function-pointer-to-NULL if no user ("backend") claims the ops
function. I consider that very, very NON-invasive. (And I should credit
Chris Mason for the hook placement and Jeremy Fitzhardinge for the clean
layering.)

> So what exactly is this going to help us? Given your
> affiliation probably something Xen related, so some real use case would
> be interesting as well instead of just making Xen suck slightly less.

As I was typing this reply, I saw Nitin's reply talking about zcache.
That's the non-Xen-related "real" use case... it may even help KVM suck
slightly less ;-)

Making-Xen-suck-slightly-less is another user... Transcendent Memory
("tmem") has been in Xen for over a year now, and distros are already
shipping an earlier version of cleancache that works with Xen tmem. Some
shim code is required between cleancache and Xen tmem, and this shim
will live in the drivers/xen directory. Excellent performance results
for this "user" have been presented at OLS'09 and LCA'10.

And the patch provides a very generic clean interface that will likely
be useful for future TBD forms of "other RAM". While I honestly believe
these additional users will eventually appear, the first two users
(zcache and Xen tmem) should be sufficient to resolve your NACK.

Thanks,
Dan
From: Dave Hansen on 22 Jun 2010 12:40

On Mon, 2010-06-21 at 16:19 -0700, Dan Magenheimer wrote:
> --- linux-2.6.35-rc2/include/linux/cleancache.h	1969-12-31 17:00:00.000000000 -0700
> +++ linux-2.6.35-rc2-cleancache/include/linux/cleancache.h	2010-06-21 14:45:18.000000000 -0600
> @@ -0,0 +1,88 @@
> +#ifndef _LINUX_CLEANCACHE_H
> +#define _LINUX_CLEANCACHE_H
> +
> +#include <linux/fs.h>
> +#include <linux/mm.h>
> +
> +struct cleancache_ops {
> +	int (*init_fs)(size_t);
> +	int (*init_shared_fs)(char *uuid, size_t);
> +	int (*get_page)(int, ino_t, pgoff_t, struct page *);
> +	void (*put_page)(int, ino_t, pgoff_t, struct page *);
> +	void (*flush_page)(int, ino_t, pgoff_t);
> +	void (*flush_inode)(int, ino_t);
> +	void (*flush_fs)(int);
> +};
> +

How would someone go about testing this code? Is there an example
cleancache implementation?

-- Dave
From: Konrad Rzeszutek Wilk on 6 Jul 2010 17:00
On Tue, Jun 22, 2010 at 09:26:28AM -0700, Dave Hansen wrote:
> On Mon, 2010-06-21 at 16:19 -0700, Dan Magenheimer wrote:
> > --- linux-2.6.35-rc2/include/linux/cleancache.h	1969-12-31 17:00:00.000000000 -0700
> > +++ linux-2.6.35-rc2-cleancache/include/linux/cleancache.h	2010-06-21 14:45:18.000000000 -0600
> > @@ -0,0 +1,88 @@
> > +#ifndef _LINUX_CLEANCACHE_H
> > +#define _LINUX_CLEANCACHE_H
> > +
> > +#include <linux/fs.h>
> > +#include <linux/mm.h>
> > +
> > +struct cleancache_ops {
> > +	int (*init_fs)(size_t);
> > +	int (*init_shared_fs)(char *uuid, size_t);
> > +	int (*get_page)(int, ino_t, pgoff_t, struct page *);
> > +	void (*put_page)(int, ino_t, pgoff_t, struct page *);
> > +	void (*flush_page)(int, ino_t, pgoff_t);
> > +	void (*flush_inode)(int, ino_t);
> > +	void (*flush_fs)(int);
> > +};
> > +
>
> How would someone go about testing this code? Is there an example
> cleancache implementation?

Dan,

Can you reference with a link or a git branch the patches that utilize
this? And also mention that in the 0/X patch so that folks can reference
your cleancache implementation?