From: Ric Wheeler on 24 May 2010 16:00

On 05/20/2010 08:48 PM, Zan Lynx wrote:
> On 5/20/10 5:48 PM, KOSAKI Motohiro wrote:
>> Hi
>>
>> CC to Nick and Jan
>>
>>> We've seen multiple performance regressions linked to the lower (20%)
>>> dirty_ratio. When performing enough IO to overwhelm the background
>>> flush daemons, the percentage of dirty pagecache memory quickly
>>> climbs to the new/lower dirty_ratio value of 20%. At that point all
>>> writing processes are forced to stop and write dirty pagecache pages
>>> back to disk. This causes performance regressions in several
>>> benchmarks as well as a noticeable overall sluggishness. We all know
>>> that the dirty_ratio is an integrity vs. performance trade-off, but
>>> the file system journaling will cover any devastating effects in the
>>> event of a system crash.
>>>
>>> Increasing the dirty_ratio to 40% will regain the performance loss
>>> seen in several benchmarks. What does everyone think about this?
>>
>> In the past, Jan Kara made exactly the same claim:
>>
>> Subject: [LSF/VM TOPIC] Dynamic sizing of dirty_limit
>> Date: Wed, 24 Feb 2010 15:34:42 +0100
>>
>> > (*) We ended up increasing dirty_limit in SLES 11 to 40% as it used
>> > to be with old kernels because customers running e.g. LDAP (using
>> > BerkeleyDB heavily) were complaining about performance problems.
>>
>> So I'd prefer to restore the default rather than have both Red Hat
>> and SUSE apply exactly the same distro-specific patch, because we can
>> easily imagine other users facing the same issue in the future.
>
> On desktop systems the low dirty limits help maintain interactive
> feel. Users expect applications that are saving data to be slow. They
> do not like it when every application in the system randomly comes to
> a halt because of one program stuffing data up to the dirty limit.
>
> The cause and effect for the system slowdown is clear when the dirty
> limit is low: "I saved data and now the system is slow until it is
> done." When the dirty page ratio is very high, the cause and effect
> are disconnected: "I was just web surfing and the system came to a
> halt."
>
> I think we should expect server admins to do more tuning than desktop
> users, so the default limits should stay low in my opinion.

Have you done any performance testing that shows this? On a laptop the
smaller default would spin up drives more often and greatly decrease
your battery life. Note that both SLES and RHEL default away from the
upstream default.

Ric
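As an aside, the buildup described in the report can be observed from
userspace by polling the Dirty: and Writeback: fields of /proc/meminfo
while a heavy writer runs; a minimal illustrative sketch:

/* dirtywatch.c - print dirty pagecache state once per second.
 * Run alongside a heavy writer (e.g. dd) to watch Dirty: climb
 * toward the dirty_ratio threshold.
 * Build: cc -o dirtywatch dirtywatch.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[256];

	for (;;) {
		FILE *f = fopen("/proc/meminfo", "r");

		if (!f) {
			perror("/proc/meminfo");
			return 1;
		}
		while (fgets(line, sizeof(line), f)) {
			/* Print only the dirty/writeback accounting lines. */
			if (!strncmp(line, "Dirty:", 6) ||
			    !strncmp(line, "Writeback:", 10))
				fputs(line, stdout);
		}
		fclose(f);
		putchar('\n');
		sleep(1);
	}
}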
From: Christoph Hellwig on 8 Jun 2010 14:50

Did this patch get merged somewhere?

On Thu, May 20, 2010 at 07:20:42AM -0400, Larry Woodman wrote:
> We've seen multiple performance regressions linked to the lower (20%)
> dirty_ratio. When performing enough IO to overwhelm the background
> flush daemons, the percentage of dirty pagecache memory quickly climbs
> to the new/lower dirty_ratio value of 20%. At that point all writing
> processes are forced to stop and write dirty pagecache pages back to
> disk. This causes performance regressions in several benchmarks as
> well as a noticeable overall sluggishness. We all know that the
> dirty_ratio is an integrity vs. performance trade-off, but the file
> system journaling will cover any devastating effects in the event of
> a system crash.
>
> Increasing the dirty_ratio to 40% will regain the performance loss
> seen in several benchmarks. What does everyone think about this?
>
> ------------------------------------------------------------------------
>
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index ef27e73..645a462 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -78,7 +78,7 @@ int vm_highmem_is_dirtyable;
>  /*
>   * The generator of dirty data starts writeback at this percentage
>   */
> -int vm_dirty_ratio = 20;
> +int vm_dirty_ratio = 40;
>
>  /*
>   * vm_dirty_bytes starts at 0 (disabled) so that it is a function of

---end quoted text---
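For context: the comment truncated at the end of the quoted diff refers
to vm_dirty_bytes, which, when set nonzero, overrides vm_dirty_ratio.
A simplified model of how the throttle threshold is derived from these
two tunables (a sketch of the logic in mm/page-writeback.c, not the
exact kernel code; dirty_thresh_pages() is an illustrative name):

/* Simplified model: vm_dirty_bytes, when nonzero, takes precedence;
 * otherwise the threshold is vm_dirty_ratio percent of dirtyable
 * memory. Illustrative sketch only, not a kernel function.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long dirty_thresh_pages(unsigned long dirtyable_pages,
					unsigned long dirty_bytes,
					int dirty_ratio)
{
	if (dirty_bytes)
		return (dirty_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
	return dirtyable_pages * dirty_ratio / 100;
}

int main(void)
{
	unsigned long pages = 1UL << 20;	/* 1M 4K pages = 4GB */

	printf("20%%: %lu pages dirty before throttling\n",
	       dirty_thresh_pages(pages, 0, 20));
	printf("40%%: %lu pages dirty before throttling\n",
	       dirty_thresh_pages(pages, 0, 40));
	return 0;
}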
From: Larry Woodman on 8 Jun 2010 15:00
On Tue, 2010-06-08 at 14:49 -0400, Christoph Hellwig wrote:
> Did this patch get merged somewhere?

I don't think it ever did; about half of the responses were for it and
the other half against it.

Larry
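Since the change never went upstream, the same effect can be had
per-system through the existing sysctl, e.g. "sysctl -w
vm.dirty_ratio=40", or persistently by adding "vm.dirty_ratio = 40" to
/etc/sysctl.conf. A minimal sketch doing the equivalent through procfs:

/* set_dirty_ratio.c - raise dirty_ratio at runtime instead of
 * patching the kernel; equivalent to "sysctl -w vm.dirty_ratio=40".
 * Must be run as root.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/dirty_ratio", "w");

	if (!f) {
		perror("/proc/sys/vm/dirty_ratio");
		return 1;
	}
	fprintf(f, "40\n");
	/* fclose() returns 0 on success; report a failed write. */
	return fclose(f) ? 1 : 0;
}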