From: Pekka Enberg on 9 Aug 2010 14:50

On Mon, Aug 9, 2010 at 8:26 PM, Nitin Gupta <ngupta(a)vflare.org> wrote:
> Also remove references to removed stats (ex: good_comress).
>
> Signed-off-by: Nitin Gupta <ngupta(a)vflare.org>

Acked-by: Pekka Enberg <penberg(a)kernel.org>
From: Andrew Morton on 10 Aug 2010 00:40

On Mon, 9 Aug 2010 22:56:49 +0530 Nitin Gupta <ngupta(a)vflare.org> wrote:

> +/*
> + * Individual percpu values can go negative but the sum across all CPUs
> + * must always be positive (we store various counts). So, return sum as
> + * unsigned value.
> + */
> +static u64 zram_get_stat(struct zram *zram, enum zram_stats_index idx)
>  {
> -	u64 val;
> -
> -	spin_lock(&zram->stat64_lock);
> -	val = *v;
> -	spin_unlock(&zram->stat64_lock);
> +	int cpu;
> +	s64 val = 0;
> +
> +	for_each_possible_cpu(cpu) {
> +		s64 temp;
> +		unsigned int start;
> +		struct zram_stats_cpu *stats;
> +
> +		stats = per_cpu_ptr(zram->stats, cpu);
> +		do {
> +			start = u64_stats_fetch_begin(&stats->syncp);
> +			temp = stats->count[idx];
> +		} while (u64_stats_fetch_retry(&stats->syncp, start));
> +		val += temp;
> +	}
>
> +	WARN_ON(val < 0);
>  	return val;
>  }

That reimplements include/linux/percpu_counter.h, poorly.

Please see the June discussion "[PATCH v2 1/2] tmpfs: Quick token
library to allow scalable retrieval of tokens from token jar" for some
discussion.
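For readers unfamiliar with it, the facility Andrew points at lives in
include/linux/percpu_counter.h. The sketch below is illustrative only: the
struct zram_stats fields and helper names are hypothetical (not from the
posted patch), and percpu_counter_init() is shown with the two-argument
signature it had in kernels of that era (later kernels add a gfp_t argument).

	#include <linux/percpu_counter.h>

	/*
	 * Hypothetical layout: one percpu_counter per statistic, instead
	 * of a hand-rolled per-cpu array guarded by u64_stats_sync.
	 */
	struct zram_stats {
		struct percpu_counter num_reads;
		struct percpu_counter compr_size;
	};

	static int zram_stats_init(struct zram_stats *stats)
	{
		int ret;

		ret = percpu_counter_init(&stats->num_reads, 0);
		if (ret)
			return ret;

		ret = percpu_counter_init(&stats->compr_size, 0);
		if (ret)
			percpu_counter_destroy(&stats->num_reads);

		return ret;
	}

	/*
	 * Writers touch only the local per-cpu slot in the common case;
	 * the shared count (and its spinlock) is taken only when the
	 * local delta crosses the batch threshold.
	 */
	static void zram_stat_add(struct percpu_counter *counter, s64 value)
	{
		percpu_counter_add(counter, value);
	}

	/* Readers get an exact sum, clamped to zero. */
	static u64 zram_get_stat(struct percpu_counter *counter)
	{
		return percpu_counter_sum_positive(counter);
	}

The per-counter spinlock and per-cpu slot are the "overhead" Nitin raises
below; the batch threshold (percpu_counter_batch) is what keeps the lock off
the write fast path.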
From: Nitin Gupta on 11 Aug 2010 12:40

On 08/10/2010 10:04 AM, Andrew Morton wrote:
> On Mon, 9 Aug 2010 22:56:49 +0530 Nitin Gupta <ngupta(a)vflare.org> wrote:
>
>> +/*
>> + * Individual percpu values can go negative but the sum across all CPUs
>> + * must always be positive (we store various counts). So, return sum as
>> + * unsigned value.
>> + */
>> +static u64 zram_get_stat(struct zram *zram, enum zram_stats_index idx)
>>  {
>> -	u64 val;
>> -
>> -	spin_lock(&zram->stat64_lock);
>> -	val = *v;
>> -	spin_unlock(&zram->stat64_lock);
>> +	int cpu;
>> +	s64 val = 0;
>> +
>> +	for_each_possible_cpu(cpu) {
>> +		s64 temp;
>> +		unsigned int start;
>> +		struct zram_stats_cpu *stats;
>> +
>> +		stats = per_cpu_ptr(zram->stats, cpu);
>> +		do {
>> +			start = u64_stats_fetch_begin(&stats->syncp);
>> +			temp = stats->count[idx];
>> +		} while (u64_stats_fetch_retry(&stats->syncp, start));
>> +		val += temp;
>> +	}
>>
>> +	WARN_ON(val < 0);
>>  	return val;
>>  }
>
> That reimplements include/linux/percpu_counter.h, poorly.
>
> Please see the June discussion "[PATCH v2 1/2] tmpfs: Quick token
> library to allow scalable retrieval of tokens from token jar" for some
> discussion.
>

I read the discussion you pointed out but still fail to see how
percpu_counters, with all their overhead, are better than the simple
per-cpu variables used in the current version. What is the advantage?

Thanks,
Nitin
From: Andrew Morton on 11 Aug 2010 13:20
On Wed, 11 Aug 2010 22:09:29 +0530 Nitin Gupta <ngupta(a)vflare.org> wrote:

> On 08/10/2010 10:04 AM, Andrew Morton wrote:
> > On Mon, 9 Aug 2010 22:56:49 +0530 Nitin Gupta <ngupta(a)vflare.org> wrote:
> >
> >> +/*
> >> + * Individual percpu values can go negative but the sum across all CPUs
> >> + * must always be positive (we store various counts). So, return sum as
> >> + * unsigned value.
> >> + */
> >> +static u64 zram_get_stat(struct zram *zram, enum zram_stats_index idx)
> >>  {
> >> -	u64 val;
> >> -
> >> -	spin_lock(&zram->stat64_lock);
> >> -	val = *v;
> >> -	spin_unlock(&zram->stat64_lock);
> >> +	int cpu;
> >> +	s64 val = 0;
> >> +
> >> +	for_each_possible_cpu(cpu) {
> >> +		s64 temp;
> >> +		unsigned int start;
> >> +		struct zram_stats_cpu *stats;
> >> +
> >> +		stats = per_cpu_ptr(zram->stats, cpu);
> >> +		do {
> >> +			start = u64_stats_fetch_begin(&stats->syncp);
> >> +			temp = stats->count[idx];
> >> +		} while (u64_stats_fetch_retry(&stats->syncp, start));
> >> +		val += temp;
> >> +	}
> >>
> >> +	WARN_ON(val < 0);
> >>  	return val;
> >>  }
> >
> > That reimplements include/linux/percpu_counter.h, poorly.
> >
> > Please see the June discussion "[PATCH v2 1/2] tmpfs: Quick token
> > library to allow scalable retrieval of tokens from token jar" for some
> > discussion.
> >
>
> I read the discussion you pointed out but still fail to see how percpu_counters,
> with all their overhead,

What overhead? Send numbers.

Then extrapolate those numbers to a machine which has 128 possible CPUs
and 4 present CPUs.

> are better than simple pcpu variable used in current
> version. What is the advantage?

Firstly, they'd have saved all the time you spent duplicating them.

Secondly, getting additional users of the standard facility results in
more testing and perhaps enhancement of that facility, thus benefiting
other users too.

Thirdly, using the standard facility permits your code to leverage
enhancements which others add.

Fourthly, they would result in a smaller kernel.

You didn't really need me to teach you the benefits of code reuse, did you?

Please do not merge this code unless there is a good reason to do so and
it has been shown that the standard facility cannot be suitably fixed or
enhanced to address the deficiency.
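Andrew's "128 possible CPUs and 4 present CPUs" remark is about read-side
scaling: the posted zram_get_stat() walks every possible CPU on each read,
whereas percpu_counter maintains a central count, so a reader can choose
between a cheap approximate value and an exact sum. A hedged sketch of the
two read helpers, reusing the hypothetical struct zram_stats layout from the
earlier sketch:

	#include <linux/percpu_counter.h>

	/*
	 * Hypothetical sysfs read helper: whether to use the O(1)
	 * approximate read or the exact sum is a per-attribute policy
	 * choice.
	 */
	static u64 zram_show_num_reads(struct zram_stats *stats, bool exact)
	{
		if (exact)
			/* Folds every per-cpu slot into the central count
			 * under the counter's spinlock. */
			return percpu_counter_sum_positive(&stats->num_reads);

		/*
		 * O(1): returns only the central count; it may lag the
		 * true value by roughly nr_online_cpus *
		 * percpu_counter_batch.
		 */
		return percpu_counter_read_positive(&stats->num_reads);
	}

At the time, the exact sum iterated only online CPUs, with a CPU-hotplug
notifier folding a departing CPU's contribution back into the central count,
which is where the possible-versus-present distinction bites the open-coded
for_each_possible_cpu() loop.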