From: Rik van Riel on 19 Jul 2010 14:50

On 07/19/2010 09:11 AM, Mel Gorman wrote:
> From: Wu Fengguang <fengguang.wu(a)intel.com>
>
> A background flush work may run for ever. So it's reasonable for it to
> mimic the kupdate behavior of syncing old/expired inodes first.
>
> This behavior also makes sense from the perspective of page reclaim.
> File pages are added to the inactive list and promoted if referenced
> after one recycling. If not referenced, it's very easy for pages to be
> cleaned from reclaim context, which is inefficient in terms of IO. If
> background flush is cleaning pages, it's best it cleans old pages to
> help minimise IO from reclaim.
>
> Signed-off-by: Wu Fengguang <fengguang.wu(a)intel.com>
> Signed-off-by: Mel Gorman <mel(a)csn.ul.ie>

Acked-by: Rik van Riel <riel(a)redhat.com>

It can probably be optimized, but we really need something like this...
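A minimal sketch of the idea described in the changelog, in 2.6.35-era terms: background writeback adopts a kupdate-style expiry cut-off so that older dirty inodes are flushed first. The helper name is hypothetical, and the exact wiring in the real patch may differ; only older_than_this, dirty_expire_interval and msecs_to_jiffies() are taken from the kernel of that era.

/*
 * Illustration only, not the actual patch: give background writeback a
 * kupdate-style expiry cut-off so expired inodes are written first.
 * Assumes the 2.6.35-era writeback_control layout, where
 * older_than_this is a pointer to a jiffies cut-off.
 */
#include <linux/jiffies.h>
#include <linux/writeback.h>

static void background_prefer_expired(struct writeback_control *wbc,
				      unsigned long *oldest_jif)
{
	/* dirty_expire_interval is in centisecs, as kupdate uses it */
	*oldest_jif = jiffies -
		msecs_to_jiffies(dirty_expire_interval * 10);

	/* only inodes dirtied before this point are considered first */
	wbc->older_than_this = oldest_jif;
}

Whether the real patch reuses the kupdate expiry interval or introduces its own is not shown in this excerpt.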
From: KOSAKI Motohiro on 25 Jul 2010 06:50

Hi, sorry for the delay.

> Will you be picking it up or should I? The changelog should be more or less
> the same as yours and consider it
>
> Signed-off-by: Mel Gorman <mel(a)csn.ul.ie>
>
> It'd be nice if the original tester is still knocking around and willing
> to confirm the patch resolves his/her problem. I am running this patch on
> my desktop at the moment and it does feel a little smoother but it might be
> my imagination. I had trouble with odd stalls that I never pinned down and
> was attributing to the machine being commonly heavily loaded, but I haven't
> noticed them today.
>
> It also needs an Acked-by or Reviewed-by from Kosaki Motohiro as it alters
> logic he introduced in commit [78dc583: vmscan: low order lumpy reclaim also
> should use PAGEOUT_IO_SYNC]

My review didn't find any bugs. However, I think the original thread contains
too much guesswork; we need a way to reproduce the issue and confirm it.
At least, we need three confirmations:

 o Is the original issue still there?
 o Is DEF_PRIORITY/3 the best value?
 o Does the current approach perform better than Wu's original proposal? (below)

Anyway, please feel free to use my Reviewed-by tag.

Thanks.

--- linux-next.orig/mm/vmscan.c	2010-06-24 14:32:03.000000000 +0800
+++ linux-next/mm/vmscan.c	2010-07-22 16:12:34.000000000 +0800
@@ -1650,7 +1650,7 @@ static void set_lumpy_reclaim_mode(int p
 	 */
 	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
 		sc->lumpy_reclaim_mode = 1;
-	else if (sc->order && priority < DEF_PRIORITY - 2)
+	else if (sc->order && priority < DEF_PRIORITY / 2)
 		sc->lumpy_reclaim_mode = 1;
 	else
 		sc->lumpy_reclaim_mode = 0;
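To make the numbers in the diff (and in the next message) concrete, here is a small standalone sketch, assuming the 2.6.35-era relation that each reclaim pass scans roughly lru_size >> priority pages with DEF_PRIORITY = 12. It is userspace code for illustration only; the LRU size is a made-up figure chosen for scale.

/*
 * Standalone illustration (userspace, not kernel code): how the
 * priority values being debated translate into the fraction of an LRU
 * list scanned per pass, assuming the 2.6.35-era "lru_size >> priority"
 * rule from shrink_zone().
 */
#include <stdio.h>

#define DEF_PRIORITY	12

int main(void)
{
	/* hypothetical LRU of 2^28 pages, roughly 1TB of 4KB pages */
	unsigned long lru_size = 1UL << 28;
	int priorities[] = {
		DEF_PRIORITY - 2,	/* 10, from commit 78dc583's test */
		DEF_PRIORITY / 2,	/*  6, the alternative in the diff above */
		DEF_PRIORITY / 3,	/*  4, the 1/16 = 6.25% figure below */
	};

	for (int i = 0; i < 3; i++) {
		int priority = priorities[i];

		printf("priority %2d: scans 1/%lu of the LRU (%lu pages here)\n",
		       priority, 1UL << priority, lru_size >> priority);
	}
	return 0;
}

With DEF_PRIORITY = 12, the three values correspond to scanning 1/1024, 1/64 and 1/16 of the LRU respectively, which is where the 6.25% figure in the following message comes from.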
From: Rik van Riel on 25 Jul 2010 23:20
On 07/25/2010 11:08 PM, Wu Fengguang wrote:
> We do need some throttling under memory pressure. However a stall time
> of more than 1s is not acceptable. A simple congestion_wait() may be
> better, since it waits on _any_ IO completion (which will likely
> release a set of PG_reclaim pages) rather than one specific IO
> completion. This makes for much smoother stall times.
> wait_on_page_writeback() should really be the last resort.
> DEF_PRIORITY/3 means 1/16=6.25%, which is closer.

I agree with the max 1 second stall time, but 6.25% of memory could be
an awful lot of pages to scan on a system with 1TB of memory :)

Not sure what the best approach is, just pointing out that
DEF_PRIORITY/3 may be too much for large systems...

--
All rights reversed
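A hedged sketch of the two throttling styles being weighed above, in 2.6.35-era terms. The helper and its call shape are made up for illustration; only PageWriteback(), wait_on_page_writeback() and congestion_wait() are real kernel interfaces here, and the real decision lives inside the reclaim path rather than in a standalone function.

/*
 * Illustration only: the two throttling styles discussed above.  The
 * helper is hypothetical; in the kernel this choice sits inside
 * shrink_page_list() and friends.
 */
#include <linux/backing-dev.h>	/* congestion_wait(), BLK_RW_ASYNC */
#include <linux/pagemap.h>	/* wait_on_page_writeback() */

static void throttle_on_writeback(struct page *page, bool sync_reclaim)
{
	if (!PageWriteback(page))
		return;

	if (sync_reclaim) {
		/*
		 * PAGEOUT_IO_SYNC style: block until this one page
		 * completes writeback.  On a congested device this is
		 * where multi-second stalls can come from.
		 */
		wait_on_page_writeback(page);
	} else {
		/*
		 * Wu's suggestion: sleep for at most 100ms, waking when
		 * the backing device clears congestion, i.e. when some
		 * IO (not necessarily this page's) completes.
		 */
		congestion_wait(BLK_RW_ASYNC, HZ / 10);
	}
}

The trade-off, as Wu frames it, is precision (waiting for the exact page) versus bounded latency (waiting for any progress on the device).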