From: Milton Miller on 27 Jul 2010 04:20

On Tue Jul 27 2010 about 02:11:22 Minchan Kim wrote:
> > [Sorry if I missed or added anyone on cc, patchwork.kernel.org LKML is
> > not working and I'm not subscribed to the list]
>
> Readd them. :)

Changed linux-mmc at vger to linux-mm at kvack.org, from my poor use of
grep on MAINTAINERS.

> On Tue, Jul 27, 2010 at 2:55 PM, <miltonm(a)xxxxxxx> wrote:
> > On Mon Jul 26 2010 about 12:47:37 EST, Christoph Lameter wrote:
> > > On Tue, 27 Jul 2010, Minchan Kim wrote:
> > >
> > > > This patch registers the address of the mem_section in the
> > > > pg->private field of the page struct backing the memmap itself.
> > > > This means the page is used for the memmap of the section.
> > > > Otherwise, the page is used for another purpose and the memmap
> > > > has a hole.
> > >
> > > > +void mark_valid_memmap(unsigned long start, unsigned long end);
> > > > +
> > > > +#ifdef CONFIG_ARCH_HAS_HOLES_MEMORYMODEL
> > > > +static inline int memmap_valid(unsigned long pfn)
> > > > +{
> > > > +        struct page *page = pfn_to_page(pfn);
> > > > +        struct page *__pg = virt_to_page(page);
> > > > +        return page_private(__pg) == (unsigned long)__pg;
> > >
> > > What if page->private just happens to be the value of the page
> > > struct? Even if that is not possible today, someday someone may add
> > > new functionality to the kernel where page->private == page is used
> > > for some reason.
> > >
> > > Checking for PG_reserved won't work?
> >
> > I had the same thought and suggest setting it to the memory section
> > block, since that is a unique value (unlike PG_reserved),
>
> You mean setting pg->private to the mem_section address?
> I hope I understand your point.
>
> Actually, KAMEZAWA tried that in the first version but I changed it.
> That's because I want this mechanism to support ARM FLATMEM.
> (It doesn't have mem_section.)
>
> > .. and we already have computed it when we use it so we could pass it
> > as a parameter (to both _valid and mark_valid).
>
> I hope this can support FLATMEM which has holes (e.g., ARM).

If we pass a void * to this helper we should be able to find another
symbol. Looking at the pfn_valid() in arch/arm/mm/init.c I would
probably choose &meminfo as it is already used nearby, and using a
single symbol would avoid issues if a more specific symbol (e.g. a
bank) were chosen and changed at a pfn boundary that is not a multiple
of PAGE_SIZE / sizeof(struct page). Similarly the asm-generic/page.h
version could use &max_mapnr.

This function is a validation helper for pfn_valid, not the only check.

Something like:

static inline int memmap_valid(unsigned long pfn, void *validate)
{
        struct page *page = pfn_to_page(pfn);
        struct page *__pg = virt_to_page(page);

        return page_private(__pg) == (unsigned long)validate;
}

static inline int pfn_valid(unsigned long pfn)
{
        struct mem_section *ms;

        if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                return 0;
        ms = __nr_to_section(pfn_to_section_nr(pfn));
        return valid_section(ms) && memmap_valid(pfn, ms);
}

> > > > +/*
> > > > + * Fill pg->private on valid mem_map with the page itself.
> > > > + * pfn_valid() will check this later. (see include/linux/mmzone.h)
> > > > + * Every arch that supports holes in mem_map should call
> > > > + * mark_valid_memmap(start, end). Please see the usage in ARM.
> > > > + */
> > > > +void mark_valid_memmap(unsigned long start, unsigned long end)
> > > > +{
> > > > +        struct mem_section *ms;
> > > > +        unsigned long pos, next;
> > > > +        struct page *pg;
> > > > +        void *memmap, *mapend;
> > > > +
> > > > +        for (pos = start; pos < end; pos = next) {
> > > > +                next = (pos + PAGES_PER_SECTION) & PAGE_SECTION_MASK;
> > > > +                ms = __pfn_to_section(pos);
> > > > +                if (!valid_section(ms))
> > > > +                        continue;
> > > > +
> > > > +                for (memmap = (void *)pfn_to_page(pos),
> > > > +                     /* the last page in the section */
> > > > +                     mapend = pfn_to_page(next - 1);
> > > > +                     memmap < mapend; memmap += PAGE_SIZE) {
> > > > +                        pg = virt_to_page(memmap);
> > > > +                        set_page_private(pg, (unsigned long)pg);
> > > > +                }
> > > > +        }
> > > > +}

Hmm, this loop would need to change for sections. And PAGE_SIZE %
sizeof(struct page) may not be 0, so we want a global symbol for
sparsemem too. Perhaps the mem_section array. Using a symbol that is
part of the model's pre-checks can remove a global symbol lookup and
has the side effect of making sure our pfn_valid is for the right
model.

milton
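[Editor's note: Milton sketches the check side but not the matching mark
side. Below is a minimal sketch, under his proposal, of how
mark_valid_memmap() could take the same cookie that memmap_valid() later
compares against (ms for sparsemem, &meminfo for ARM FLATMEM). The
validate parameter and this variant of the function are illustrations,
not code from the thread:]

void mark_valid_memmap(unsigned long start, unsigned long end,
                       void *validate)
{
        unsigned long pos, next;

        for (pos = start; pos < end; pos = next) {
                struct mem_section *ms;
                void *memmap, *mapend;

                next = (pos + PAGES_PER_SECTION) & PAGE_SECTION_MASK;
                ms = __pfn_to_section(pos);
                if (!valid_section(ms))
                        continue;

                memmap = (void *)pfn_to_page(pos);
                mapend = pfn_to_page(next - 1); /* last page in section */
                for (; memmap < mapend; memmap += PAGE_SIZE) {
                        struct page *pg = virt_to_page(memmap);

                        /* store the caller's cookie instead of pg itself */
                        set_page_private(pg, (unsigned long)validate);
                }
        }
}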
From: Minchan Kim on 27 Jul 2010 06:00

On Tue, Jul 27, 2010 at 5:12 PM, Milton Miller <miltonm(a)bga.com> wrote:
> If we pass a void * to this helper we should be able to find another
> symbol. Looking at the pfn_valid() in arch/arm/mm/init.c I would
> probably choose &meminfo as it is already used nearby, and using a
> single symbol would avoid issues if a more specific symbol (e.g. a
> bank) were chosen and changed at a pfn boundary that is not a multiple
> of PAGE_SIZE / sizeof(struct page).

If we use pg itself and PG_reserved, we can remove &meminfo in FLATMEM.

> Similarly the asm-generic/page.h version could use &max_mapnr.

I didn't consider NOMMU. I am not sure NOMMU has this problem.

> This function is a validation helper for pfn_valid, not the only check.
>
> Something like:
>
> static inline int memmap_valid(unsigned long pfn, void *validate)
> {
>         struct page *page = pfn_to_page(pfn);
>         struct page *__pg = virt_to_page(page);
>
>         return page_private(__pg) == (unsigned long)validate;
> }

I am not sure what benefit we get from the validate argument.

> static inline int pfn_valid(unsigned long pfn)
> {
>         struct mem_section *ms;
>
>         if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>                 return 0;
>         ms = __nr_to_section(pfn_to_section_nr(pfn));
>         return valid_section(ms) && memmap_valid(pfn, ms);
> }
>
> Hmm, this loop would need to change for sections. And PAGE_SIZE %
> sizeof(struct page) may not be 0, so we want a global symbol for
> sparsemem too.

I can't understand your point. What is the problem with PAGE_SIZE %
sizeof(struct page)? AFAIK, sizeof(struct page) is always 32 bytes on
a 32-bit machine and PAGE_SIZE is usually 4K. What problem can happen?

> Perhaps the mem_section array. Using a symbol that is part of the
> model's pre-checks can remove a global symbol lookup and has the side
> effect of making sure our pfn_valid is for the right model.

A global symbol lookup? Hmm. Please let me know how your approach
improves this patch. :)

Thanks for the careful review, milton.

--
Kind regards,
Minchan Kim
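[Editor's note: the arithmetic behind Milton's point, which Minchan
questions above, worked through with hypothetical sizes; the actual
sizeof(struct page) depends on arch and config:]

/*
 * Suppose PAGE_SIZE = 4096 and sizeof(struct page) = 56 (hypothetical).
 *
 *         4096 / 56 = 73 struct pages per memmap page, remainder 8
 *
 * Because the division is not exact, struct page boundaries drift
 * against page boundaries within the memmap, so a single physical page
 * of memmap can hold the last struct pages of section N and the first
 * struct pages of section N+1. That page has only one pg->private
 * slot and cannot carry two different per-section cookies, which is
 * why a single global symbol (e.g. the mem_section array) is safer
 * for sparsemem.
 */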
From: Minchan Kim on 27 Jul 2010 06:10

Hi, Kame.

On Tue, Jul 27, 2010 at 5:13 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
>> Perhaps the mem_section array. Using a symbol that is part of the
>> model's pre-checks can remove a global symbol lookup and has the side
>> effect of making sure our pfn_valid is for the right model.
>
> But yes, maybe it's good to make use of a fixed (magic) value.

A fixed magic value? Yes, it can be good for some debugging. But as
Christoph pointed out, we need some strict check (e.g., PG_reserved) to
prevent some future code from legitimately storing the magic value and
matching by accident.

But in fact I have a concern about using PG_reserved, since it can
normally be used after pfn_valid() to check for holes on a non-hole
system. So I think it's redundant. Hmm..

--
Kind regards,
Minchan Kim
From: Christoph Lameter on 27 Jul 2010 10:40

On Tue, 27 Jul 2010, Minchan Kim wrote:

> But in fact I have a concern about using PG_reserved, since it can
> normally be used after pfn_valid() to check for holes on a non-hole
> system. So I think it's redundant.

PG_reserved is already used to mark pages not handled by the page
allocator (see memmap_init_zone).
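[Editor's note: for context on Christoph's pointer, the zone init path of
that era set PG_reserved on every page it initialized, which is why
PG_reserved alone cannot single out memmap pages. A paraphrased sketch of
the relevant loop, not a verbatim excerpt of mm/page_alloc.c:]

/* paraphrased from memmap_init_zone(), mm/page_alloc.c, circa 2.6.35 */
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
        struct page *page = pfn_to_page(pfn);

        set_page_links(page, zone, nid, pfn);
        init_page_count(page);
        reset_page_mapcount(page);
        SetPageReserved(page);  /* cleared only when the page is later
                                 * released to the buddy allocator */
        /* ... */
}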
From: Minchan Kim on 27 Jul 2010 18:40
On Tue, Jul 27, 2010 at 11:34 PM, Christoph Lameter
<cl(a)linux-foundation.org> wrote:
> On Tue, 27 Jul 2010, Minchan Kim wrote:
>
>> But in fact I have a concern about using PG_reserved, since it can
>> normally be used after pfn_valid() to check for holes on a non-hole
>> system. So I think it's redundant.

Ignore me. I got confused.

> PG_reserved is already used to mark pages not handled by the page
> allocator (see memmap_init_zone).

I will resend with the approach below.

static inline int memmap_valid(unsigned long pfn)
{
        struct page *page = pfn_to_page(pfn);
        struct page *__pg = virt_to_page(page);

        return page_private(__pg) == MAGIC_MEMMAP && PageReserved(__pg);
}

Thanks, all.

--
Kind regards,
Minchan Kim
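[Editor's note: a sketch of the marking side this check implies, adapting
the inner loop of mark_valid_memmap() from earlier in the thread.
MAGIC_MEMMAP is Minchan's placeholder constant and is not defined anywhere
in the thread:]

/* inner loop of mark_valid_memmap(), adjusted to match the check above */
for (memmap = (void *)pfn_to_page(pos),
     /* the last page in the section */
     mapend = pfn_to_page(next - 1);
     memmap < mapend; memmap += PAGE_SIZE) {
        pg = virt_to_page(memmap);
        set_page_private(pg, MAGIC_MEMMAP);     /* hypothetical constant */
        SetPageReserved(pg);
}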