From: Avi Kivity on 12 Apr 2010 04:30

On 04/12/2010 11:01 AM, Xiao Guangrong wrote:
> - calculate zapped page number properly in mmu_zap_unsync_children()
> - calculate freed page number properly in kvm_mmu_change_mmu_pages()
> - restart list walking if any child pages were zapped
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong(a)cn.fujitsu.com>
> ---
>  arch/x86/kvm/mmu.c |    7 ++++---
>  1 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index a23ca75..8f4f781 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1483,8 +1483,8 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
>  		for_each_sp(pages, sp, parents, i) {
>  			kvm_mmu_zap_page(kvm, sp);
>  			mmu_pages_clear_parents(&parents);
> +			zapped++;
>  		}
> -		zapped += pages.nr;
>  		kvm_mmu_pages_init(parent, &parents, &pages);
>  	}

This looks correct; I don't understand how this worked before. Marcelo?

> @@ -1540,7 +1540,7 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages)
>
>  		page = container_of(kvm->arch.active_mmu_pages.prev,
>  				    struct kvm_mmu_page, link);
> -		kvm_mmu_zap_page(kvm, page);
> +		used_pages -= kvm_mmu_zap_page(kvm, page);
>  		used_pages--;
>  	}

This too. Wow.

>  		kvm->arch.n_free_mmu_pages = 0;
> @@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
>  		    && !sp->role.invalid) {
>  			pgprintk("%s: zap %lx %x\n",
>  				 __func__, gfn, sp->role.word);
> -			kvm_mmu_zap_page(kvm, sp);
> +			if (kvm_mmu_zap_page(kvm, sp))
> +				nn = bucket->first;
>  		}
>  	}

I don't understand why this is needed.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Xiao Guangrong on 12 Apr 2010 05:00

Avi Kivity wrote:
>>
>>>  		kvm->arch.n_free_mmu_pages = 0;
>>> @@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
>>>  		    && !sp->role.invalid) {
>>>  			pgprintk("%s: zap %lx %x\n",
>>>  				 __func__, gfn, sp->role.word);
>>> -			kvm_mmu_zap_page(kvm, sp);
>>> +			if (kvm_mmu_zap_page(kvm, sp))
>>> +				nn = bucket->first;
>>>  		}
>>>  	}
>>
> I don't understand why this is needed.

There is this code segment in mmu_unshadow():

| hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
| 	if (sp->gfn == gfn && !sp->role.direct
| 	    && !sp->role.invalid) {
| 		pgprintk("%s: zap %lx %x\n",
| 			 __func__, gfn, sp->role.word);
| 		kvm_mmu_zap_page(kvm, sp);
| 	}
| }

In the loop, if nn is zapped, hlist_for_each_entry_safe() will access nn
and crash. The same case is checked in other functions, such as
kvm_mmu_zap_all() and kvm_mmu_unprotect_page()...

Thanks,
Xiao
From: Avi Kivity on 12 Apr 2010 05:10

On 04/12/2010 11:53 AM, Xiao Guangrong wrote:
> There is this code segment in mmu_unshadow():
>
> | hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
> | 	if (sp->gfn == gfn && !sp->role.direct
> | 	    && !sp->role.invalid) {
> | 		pgprintk("%s: zap %lx %x\n",
> | 			 __func__, gfn, sp->role.word);
> | 		kvm_mmu_zap_page(kvm, sp);
> | 	}
> | }
>
> In the loop, if nn is zapped, hlist_for_each_entry_safe() will access nn
> and crash. The same case is checked in other functions, such as
> kvm_mmu_zap_all() and kvm_mmu_unprotect_page()...

hlist_for_each_entry_safe() is supposed to be safe against removal of
the element that is pointed to by the iteration cursor.
From: Xiao Guangrong on 12 Apr 2010 05:30

Hi Avi,

Avi Kivity wrote:
> hlist_for_each_entry_safe() is supposed to be safe against removal of
> the element that is pointed to by the iteration cursor.

If we destroy the next node, hlist_for_each_entry_safe() is unsafe.

Look at hlist_for_each_entry_safe()'s code:

| #define hlist_for_each_entry_safe(tpos, pos, n, head, member)		  \
| 	for (pos = (head)->first;					  \
| 	     pos && ({ n = pos->next; 1; }) &&				  \
| 	     ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });	  \
| 	     pos = n)

If n is destroyed, then 'pos = n; n = pos->next' on the next iteration
accesses the freed n again, which is unsafe/illegal for us.

Xiao
From: Avi Kivity on 12 Apr 2010 06:30
On 04/12/2010 12:22 PM, Xiao Guangrong wrote:
> Hi Avi,
>
> Avi Kivity wrote:
>> hlist_for_each_entry_safe() is supposed to be safe against removal of
>> the element that is pointed to by the iteration cursor.
>
> If we destroy the next node, hlist_for_each_entry_safe() is unsafe.
>
> Look at hlist_for_each_entry_safe()'s code:
>
> | #define hlist_for_each_entry_safe(tpos, pos, n, head, member)	   \
> | 	for (pos = (head)->first;					   \
> | 	     pos && ({ n = pos->next; 1; }) &&				   \
> | 	     ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });	   \
> | 	     pos = n)
>
> If n is destroyed, then 'pos = n; n = pos->next' on the next iteration
> accesses the freed n again, which is unsafe/illegal for us.

But kvm_mmu_zap_page() will only destroy sp == tpos == pos; n points at
pos->next already, so it's safe.