From: Andy Shevchenko on 14 Jan 2010 08:10

From: Andy Shevchenko <ext-andriy.shevchenko(a)nokia.com>

Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko(a)nokia.com>
---
 arch/x86/mm/gup.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 71da1bc..738e659 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -18,7 +18,7 @@ static inline pte_t gup_get_pte(pte_t *ptep)
 #else
 /*
  * With get_user_pages_fast, we walk down the pagetables without taking
- * any locks. For this we would like to load the pointers atoimcally,
+ * any locks. For this we would like to load the pointers atomically,
  * but that is not possible (without expensive cmpxchg8b) on PAE. What
  * we do have is the guarantee that a pte will only either go from not
  * present to present, or present to not present or both -- it will not
--
1.5.6.5
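For context, the comment being corrected describes the lock-free pte read used
by get_user_pages_fast on PAE: a 64-bit pte cannot be loaded with one ordinary
32-bit instruction, so the two halves are read separately and the low half is
re-checked to detect a concurrent change. Below is a minimal, self-contained
sketch of that idea only, not the code in gup.c; struct pae_pte,
read_pte_nonatomic() and the use of __sync_synchronize() as a stand-in for
smp_rmb() are illustrative assumptions.

    /*
     * Illustrative sketch (not kernel code): read a 64-bit pte-like value
     * as two 32-bit halves and retry if the low half changed while the
     * high half was being read.
     */
    #include <stdint.h>

    struct pae_pte {                    /* hypothetical stand-in for pte_t */
            volatile uint32_t pte_low;
            volatile uint32_t pte_high;
    };

    static inline uint64_t read_pte_nonatomic(const struct pae_pte *ptep)
    {
            uint32_t low, high;

            do {
                    low = ptep->pte_low;
                    __sync_synchronize();   /* stand-in for smp_rmb() */
                    high = ptep->pte_high;
                    __sync_synchronize();
                    /* retry if the low half changed under us */
            } while (low != ptep->pte_low);

            return ((uint64_t)high << 32) | low;
    }

The re-check is enough here because of the guarantee stated in the comment:
a pte only transitions between present and not-present while the walker is
reading it, so a torn read shows up as a change in the low half.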