From: KAMEZAWA Hiroyuki on 13 Jul 2010 20:30

On Wed, 14 Jul 2010 01:44:23 +0900
Minchan Kim <minchan.kim(a)gmail.com> wrote:

> > If you _really_ can't make the section size smaller, and the vast
> > majority of the sections are fully populated, you could hack something
> > in. We could, for instance, have a global list that's mostly readonly
> > which tells you which sections need to have their sizes closely
> > inspected. That would work OK if, for instance, you only needed to
> > check a couple of memory sections in the system. It'll start to suck if
> > you made the lists very long.
>
> Thanks for the advice. As I said, I hope Russell accepts the 16M section.

It seems what I needed was good sleep....
How about this if a 16M section is not acceptable?

== NOT TESTED AT ALL, EVEN NOT COMPILED ==

Register the address of the mem_section in the pg->private field of each page
that backs the section's memmap. This marks the page as being in use as memmap
for that section; otherwise the page is used for some other purpose and the
memmap has a hole there.

---
 arch/arm/mm/init.c     |   11 ++++++++++-
 include/linux/mmzone.h |   19 ++++++++++++++++++-
 mm/sparse.c            |   37 +++++++++++++++++++++++++++++++++++++
 3 files changed, 65 insertions(+), 2 deletions(-)

Index: mmotm-2.6.35-0701/include/linux/mmzone.h
===================================================================
--- mmotm-2.6.35-0701.orig/include/linux/mmzone.h
+++ mmotm-2.6.35-0701/include/linux/mmzone.h
@@ -1047,11 +1047,28 @@ static inline struct mem_section *__pfn_
 	return __nr_to_section(pfn_to_section_nr(pfn));
 }

+#ifdef CONFIG_SPARSEMEM_HAS_PIT
+void mark_memmap_pit(unsigned long start, unsigned long end, bool valid);
+static inline int page_valid(struct mem_section *ms, unsigned long pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	struct page *__pg = virt_to_page(page);
+	/* pages backing a live memmap record their section in ->private */
+	return __pg->private == (unsigned long)ms;
+}
+#else
+static inline int page_valid(struct mem_section *ms, unsigned long pfn)
+{
+	return 1;
+}
+#endif
+
 static inline int pfn_valid(unsigned long pfn)
 {
+	struct mem_section *ms;
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
-	return valid_section(__nr_to_section(pfn_to_section_nr(pfn)));
+	ms = __nr_to_section(pfn_to_section_nr(pfn));
+	return valid_section(ms) && page_valid(ms, pfn);
 }

 static inline int pfn_present(unsigned long pfn)

Index: mmotm-2.6.35-0701/mm/sparse.c
===================================================================
--- mmotm-2.6.35-0701.orig/mm/sparse.c
+++ mmotm-2.6.35-0701/mm/sparse.c
@@ -615,6 +615,43 @@ void __init sparse_init(void)
 	free_bootmem(__pa(usemap_map), size);
 }

+#ifdef CONFIG_SPARSEMEM_HAS_PIT
+/*
+ * Fill the ->private field of the pages backing a section's memmap with a
+ * pointer to the mem_section.  pfn_valid() will check this later (see
+ * include/linux/mmzone.h).  The caller should call
+ *	mark_memmap_pit(start, end, true)	# for all allocated mem_map
+ * and, after that,
+ *	mark_memmap_pit(start, end, false)	# for all pits in mem_map.
+ * Please see the usage in ARM.
+ */
+void mark_memmap_pit(unsigned long start, unsigned long end, bool valid)
+{
+	struct mem_section *ms;
+	unsigned long pos, next;
+	struct page *pg;
+	void *memmap, *mmend;
+
+	for (pos = start; pos < end; pos = next) {
+		next = (pos + PAGES_PER_SECTION) & PAGE_SECTION_MASK;
+		ms = __pfn_to_section(pos);
+		if (!valid_section(ms))
+			continue;
+		/* walk every page that backs the memmap for [pos, next) */
+		for (memmap = pfn_to_page(pos), mmend = pfn_to_page(next - 1) + 1;
+		     memmap < mmend;
+		     memmap += PAGE_SIZE) {
+			pg = virt_to_page(memmap);
+			if (valid)
+				pg->private = (unsigned long)ms;
+			else
+				pg->private = 0;
+		}
+	}
+}
+#endif
+
 #ifdef CONFIG_MEMORY_HOTPLUG
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 static inline struct page *kmalloc_section_memmap(unsigned long pnum, int nid,

Index: mmotm-2.6.35-0701/arch/arm/mm/init.c
===================================================================
--- mmotm-2.6.35-0701.orig/arch/arm/mm/init.c
+++ mmotm-2.6.35-0701/arch/arm/mm/init.c
@@ -234,6 +234,13 @@ static void __init arm_bootmem_free(stru
 	arch_adjust_zones(zone_size, zhole_size);

 	free_area_init_node(0, zone_size, min, zhole_size);
+
+#ifdef CONFIG_SPARSEMEM
+	for_each_bank(i, mi) {
+		mark_memmap_pit(bank_pfn_start(&mi->bank[i]),
+				bank_pfn_end(&mi->bank[i]), true);
+	}
+#endif
 }

 #ifndef CONFIG_SPARSEMEM
@@ -386,8 +393,10 @@ free_memmap(unsigned long start_pfn, uns
	 * If there are free pages between these,
	 * free the section of the memmap array.
	 */
-	if (pg < pgend)
+	if (pg < pgend) {
+		mark_memmap_pit(pg >> PAGE_SHIFT, pgend >> PAGE_SHIFT, false);
 		free_bootmem(pg, pgend - pg);
+	}
 }

 /*
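A minimal illustration of the intended call order, separate from the patch: the
pfn numbers, the 4K page size, and the 16MB section size below are made up for
the example, and the second call follows the usage described in the
mark_memmap_pit() comment.

/*
 * Illustration only: assume 4K pages, a 16MB section covering pfns
 * [0x60000, 0x61000), and a bank that populates only the first half of it,
 * [0x60000, 0x60800); the rest of the section is a hole.
 */
void __init example_mark_pits(void)
{
	/* pass 1, after free_area_init_node(): mark every bank's memmap valid */
	mark_memmap_pit(0x60000, 0x60800, true);

	/* pass 2, after free_memmap() has released the memmap backing the pit:
	 * clear the mark so the hole's pfns stop being pfn_valid() */
	mark_memmap_pit(0x60800, 0x61000, false);

	/*
	 * Afterwards (assuming the hole's memmap pages really were freed):
	 *   pfn_valid(0x60100) -> true  (memmap page still tagged with its section)
	 *   pfn_valid(0x60900) -> false (memmap freed, pg->private cleared)
	 */
}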
From: Minchan Kim on 14 Jul 2010 02:50

Hi, Kame.

On Wed, Jul 14, 2010 at 9:23 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> On Wed, 14 Jul 2010 01:44:23 +0900
> Minchan Kim <minchan.kim(a)gmail.com> wrote:
>
>> > If you _really_ can't make the section size smaller, and the vast
>> > majority of the sections are fully populated, you could hack something
>> > in. We could, for instance, have a global list that's mostly readonly
>> > which tells you which sections need to have their sizes closely
>> > inspected. That would work OK if, for instance, you only needed to
>> > check a couple of memory sections in the system. It'll start to suck if
>> > you made the lists very long.
>>
>> Thanks for the advice. As I said, I hope Russell accepts the 16M section.
>>
>
> It seems what I needed was good sleep....
> How about this if a 16M section is not acceptable?
>
> == NOT TESTED AT ALL, EVEN NOT COMPILED ==
>
> Register the address of the mem_section in the pg->private field of each page
> that backs the section's memmap. This marks the page as being in use as memmap
> for that section; otherwise the page is used for some other purpose and the
> memmap has a hole there.

It's a very good idea. :)
But can this handle the case where a single memmap page also holds page
descriptors for a hole?
I mean, one page can hold 128 page descriptors (4096 / 32). Suppose 64 of those
descriptors are valid but the remaining 64 describe a hole. In that case,
free_memmap() doesn't free the page.

I think most systems will have memory aligned to 512K (4K * 128), but I am not
sure.

--
Kind regards,
Minchan Kim
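To make the granularity concern concrete, a quick back-of-the-envelope sketch
(not from the thread; the 4K page size and 32-byte struct page are the same
assumptions Minchan uses above):

#include <stdio.h>

/* One page of memmap can only be freed, and hence only become "invalid" to
 * pfn_valid(), as a whole. Compute how much memory one memmap page describes. */
int main(void)
{
	unsigned long page_size  = 4096;	/* assumed 4K pages */
	unsigned long descr_size = 32;		/* assumed sizeof(struct page) */

	unsigned long pfns_per_memmap_page = page_size / descr_size;	/* 128 */
	unsigned long covered_bytes = pfns_per_memmap_page * page_size;

	printf("one memmap page describes %lu pfns = %luK of memory\n",
	       pfns_per_memmap_page, covered_bytes / 1024);
	/* => banks or holes not aligned to 512K keep a partially used memmap
	 *    page, and the hole pfns on it stay pfn_valid() with PG_reserved. */
	return 0;
}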
From: KAMEZAWA Hiroyuki on 14 Jul 2010 03:20

On Wed, 14 Jul 2010 15:44:41 +0900
Minchan Kim <minchan.kim(a)gmail.com> wrote:

> Hi, Kame.
>
> On Wed, Jul 14, 2010 at 9:23 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> > On Wed, 14 Jul 2010 01:44:23 +0900
> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
> >
> >> > If you _really_ can't make the section size smaller, and the vast
> >> > majority of the sections are fully populated, you could hack something
> >> > in. We could, for instance, have a global list that's mostly readonly
> >> > which tells you which sections need to have their sizes closely
> >> > inspected. That would work OK if, for instance, you only needed to
> >> > check a couple of memory sections in the system. It'll start to suck if
> >> > you made the lists very long.
> >>
> >> Thanks for the advice. As I said, I hope Russell accepts the 16M section.
> >>
> >
> > It seems what I needed was good sleep....
> > How about this if a 16M section is not acceptable?
> >
> > == NOT TESTED AT ALL, EVEN NOT COMPILED ==
> >
> > Register the address of the mem_section in the pg->private field of each
> > page that backs the section's memmap. This marks the page as being in use
> > as memmap for that section; otherwise the page is used for some other
> > purpose and the memmap has a hole there.
>
> It's a very good idea. :)
> But can this handle the case where a single memmap page also holds page
> descriptors for a hole?
> I mean, one page can hold 128 page descriptors (4096 / 32).

Yes.

> Suppose 64 of those descriptors are valid but the remaining 64 describe a
> hole. In that case, free_memmap() doesn't free the page.

Yes. But in that case there are valid page descriptors for the 64 pages of the
hole: pfn_valid() should return true for them, and PG_reserved is set.
(This is the usual behavior.)

My intention is:

 - When all 128 page descriptors are unused, free_memmap() will free the page.
   In that case, clear page->private of the page that held the freed
   descriptors.

 - When some of the page descriptors are used, free_memmap() can't free the
   page, and page->private keeps pointing to &mem_section. We may keep a memmap
   for a memory hole, but pfn_valid() is a function to check whether there is a
   memmap or not, so this behavior of pfn_valid() is valid.
   Anyway, you can't free only half of a page.

If my code doesn't work as described above, it's a bug.

Thanks,
-Kame
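In code, the behaviour Kame describes means a walker that wants "usable RAM"
rather than merely "memmap exists" has to combine two checks. A minimal sketch,
assuming the pg->private scheme is in place; the helper name is made up:

/*
 * Sketch only: how a pfn walker distinguishes the cases once the scheme above
 * is in place.
 */
static bool pfn_is_usable_ram(unsigned long pfn)
{
	struct page *page;

	if (!pfn_valid(pfn))
		return false;	/* memmap itself was freed: certainly a hole */

	page = pfn_to_page(pfn);
	if (PageReserved(page))
		return false;	/* memmap kept, but the pfn is a hole
				   (or otherwise reserved, unusable RAM) */

	return true;		/* ordinary page backed by real memory */
}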
From: Minchan Kim on 14 Jul 2010 03:40

On Wed, Jul 14, 2010 at 4:10 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> On Wed, 14 Jul 2010 15:44:41 +0900
> Minchan Kim <minchan.kim(a)gmail.com> wrote:
>
>> Hi, Kame.
>>
>> On Wed, Jul 14, 2010 at 9:23 AM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
>> > On Wed, 14 Jul 2010 01:44:23 +0900
>> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
>> >
>> >> > If you _really_ can't make the section size smaller, and the vast
>> >> > majority of the sections are fully populated, you could hack something
>> >> > in. We could, for instance, have a global list that's mostly readonly
>> >> > which tells you which sections need to have their sizes closely
>> >> > inspected. That would work OK if, for instance, you only needed to
>> >> > check a couple of memory sections in the system. It'll start to suck if
>> >> > you made the lists very long.
>> >>
>> >> Thanks for the advice. As I said, I hope Russell accepts the 16M section.
>> >>
>> >
>> > It seems what I needed was good sleep....
>> > How about this if a 16M section is not acceptable?
>> >
>> > == NOT TESTED AT ALL, EVEN NOT COMPILED ==
>> >
>> > Register the address of the mem_section in the pg->private field of each
>> > page that backs the section's memmap. This marks the page as being in use
>> > as memmap for that section; otherwise the page is used for some other
>> > purpose and the memmap has a hole there.
>>
>> It's a very good idea. :)
>> But can this handle the case where a single memmap page also holds page
>> descriptors for a hole?
>> I mean, one page can hold 128 page descriptors (4096 / 32).
> Yes.
>
>> Suppose 64 of those descriptors are valid but the remaining 64 describe a
>> hole. In that case, free_memmap() doesn't free the page.
>
> Yes. But in that case there are valid page descriptors for the 64 pages of
> the hole: pfn_valid() should return true for them, and PG_reserved is set.
> (This is the usual behavior.)
>
> My intention is:
>
>  - When all 128 page descriptors are unused, free_memmap() will free the
>    page. In that case, clear page->private of the page that held the freed
>    descriptors.
>
>  - When some of the page descriptors are used, free_memmap() can't free the
>    page, and page->private keeps pointing to &mem_section. We may keep a
>    memmap for a memory hole, but pfn_valid() is a function to check whether
>    there is a memmap or not, so this behavior of pfn_valid() is valid.
>    Anyway, you can't free only half of a page.

Okay. I missed PageReserved.
Your idea seems to be good. :)

I looked at pagetypeinfo_showblockcount_print(). It doesn't check PageReserved;
instead it does the ugly memmap_valid_within(). Can't we remove that and replace
it with a PageReserved check?

--
Kind regards,
Minchan Kim
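A rough sketch of what Minchan's suggestion might look like, loosely modeled on
pagetypeinfo_showblockcount_print() in mm/vmstat.c of that era; the function
name and the counter plumbing here are simplified, and this is not an actual
patch:

/* Count pageblocks per migratetype, skipping reserved pages instead of
 * calling memmap_valid_within(). */
static void count_pageblocks_by_migratetype(struct zone *zone,
					    unsigned long *count)
{
	unsigned long pfn;
	unsigned long start_pfn = zone->zone_start_pfn;
	unsigned long end_pfn = start_pfn + zone->spanned_pages;

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		struct page *page;
		int mtype;

		if (!pfn_valid(pfn))
			continue;

		page = pfn_to_page(pfn);

		/* was: if (!memmap_valid_within(pfn, page, zone)) continue; */
		if (PageReserved(page))
			continue;

		mtype = get_pageblock_migratetype(page);
		if (mtype < MIGRATE_TYPES)
			count[mtype]++;
	}
}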
From: KAMEZAWA Hiroyuki on 14 Jul 2010 03:50
On Wed, 14 Jul 2010 16:35:22 +0900
Minchan Kim <minchan.kim(a)gmail.com> wrote:

> On Wed, Jul 14, 2010 at 4:10 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> > On Wed, 14 Jul 2010 15:44:41 +0900
> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
> >
> >> Hi, Kame.
> >>
> >> On Wed, Jul 14, 2010 at 9:23 AM, KAMEZAWA Hiroyuki
> >> <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> >> > On Wed, 14 Jul 2010 01:44:23 +0900
> >> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
> >> >
> >> >> > If you _really_ can't make the section size smaller, and the vast
> >> >> > majority of the sections are fully populated, you could hack something
> >> >> > in. We could, for instance, have a global list that's mostly readonly
> >> >> > which tells you which sections need to have their sizes closely
> >> >> > inspected. That would work OK if, for instance, you only needed to
> >> >> > check a couple of memory sections in the system. It'll start to suck if
> >> >> > you made the lists very long.
> >> >>
> >> >> Thanks for the advice. As I said, I hope Russell accepts the 16M section.
> >> >>
> >> >
> >> > It seems what I needed was good sleep....
> >> > How about this if a 16M section is not acceptable?
> >> >
> >> > == NOT TESTED AT ALL, EVEN NOT COMPILED ==
> >> >
> >> > Register the address of the mem_section in the pg->private field of each
> >> > page that backs the section's memmap. This marks the page as being in use
> >> > as memmap for that section; otherwise the page is used for some other
> >> > purpose and the memmap has a hole there.
> >>
> >> It's a very good idea. :)
> >> But can this handle the case where a single memmap page also holds page
> >> descriptors for a hole?
> >> I mean, one page can hold 128 page descriptors (4096 / 32).
> > Yes.
> >
> >> Suppose 64 of those descriptors are valid but the remaining 64 describe a
> >> hole. In that case, free_memmap() doesn't free the page.
> >
> > Yes. But in that case there are valid page descriptors for the 64 pages of
> > the hole: pfn_valid() should return true for them, and PG_reserved is set.
> > (This is the usual behavior.)
> >
> > My intention is:
> >
> >  - When all 128 page descriptors are unused, free_memmap() will free the
> >    page. In that case, clear page->private of the page that held the freed
> >    descriptors.
> >
> >  - When some of the page descriptors are used, free_memmap() can't free the
> >    page, and page->private keeps pointing to &mem_section. We may keep a
> >    memmap for a memory hole, but pfn_valid() is a function to check whether
> >    there is a memmap or not, so this behavior of pfn_valid() is valid.
> >    Anyway, you can't free only half of a page.
>
> Okay. I missed PageReserved.
> Your idea seems to be good. :)
>
> I looked at pagetypeinfo_showblockcount_print(). It doesn't check PageReserved;
> instead it does the ugly memmap_valid_within(). Can't we remove that and
> replace it with a PageReserved check?
>

Maybe. But I'm not sure how many architectures use
CONFIG_ARCH_HAS_HOLES_MEMORYMODEL. Because my idea requires adding an
arch-dependent hook, the enhanced pfn_valid() only takes effect when an
architecture supports it, so you may still need a conservative path.

Anyway, I can't test the patch by myself, so I pass the ball to the ARM guys.
Feel free to reuse my idea if you like.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu(a)jp.fujitsu.com>

Thanks,
-Kame
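One possible shape for the "conservative path" mentioned above, as a sketch
only: CONFIG_SPARSEMEM_HAS_PIT is the symbol from the earlier patch, the
memmap_entry_valid() helper is hypothetical, and memmap_valid_within() keeps
covering architectures without the new hook.

#ifdef CONFIG_SPARSEMEM_HAS_PIT
/* pfn_valid() already consulted pg->private of the page backing the memmap,
 * so a reserved-page check is enough for walkers. */
static inline bool memmap_entry_valid(unsigned long pfn, struct page *page,
				      struct zone *zone)
{
	return !PageReserved(page);
}
#else
/* conservative path: architectures without the mark_memmap_pit() hook keep
 * relying on the existing memmap_valid_within() sanity check */
static inline bool memmap_entry_valid(unsigned long pfn, struct page *page,
				      struct zone *zone)
{
	return memmap_valid_within(pfn, page, zone);
}
#endif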