From: Phil Carmody on 15 Jun 2010 08:50

With several sections per module, and dozens of modules, the searches
down the linked list would dominate the lookup time, dwarfing any
savings from the binary search within the section. A simple
move-to-front optimisation exploits the commonality of the code paths
taken, and in simple real-world tests reduces the number of steps in
the search to barely more than 1.

Signed-off-by: Phil Carmody <ext-phil.2.carmody(a)nokia.com>
---
 arch/arm/kernel/unwind.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index dd81a91..2e88abf 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -146,6 +146,8 @@ static struct unwind_idx *unwind_find_idx(unsigned long addr)
 		    addr < table->end_addr) {
 			idx = search_index(addr, table->start,
 					   table->stop - 1);
+			/* MTF with 50 modules: 80 steps becomes ~1 */
+			list_move(&table->list, &unwind_tables);
 			break;
 		}
 	}
--
1.6.0.4
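For readers unfamiliar with the trick, here is a minimal standalone C
sketch of move-to-front on a doubly-linked list. It is not the kernel
code above (struct node, find_mtf and the address ranges are
illustrative assumptions), but it shows why a repeated lookup for the
same entry drops to a single step once that entry has been spliced to
the head.

#include <stddef.h>
#include <stdio.h>

/* Toy doubly-linked list with a sentinel head, loosely mirroring the
 * kernel's struct list_head; all names here are illustrative. */
struct node {
	struct node *prev, *next;
	unsigned long begin_addr, end_addr;
};

static void list_init(struct node *head)
{
	head->prev = head->next = head;
}

static void list_add_front(struct node *head, struct node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void list_del_node(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* Linear search from the head; on a hit, splice the match to the front
 * so the next lookup for the same range terminates in one step. */
static struct node *find_mtf(struct node *head, unsigned long addr)
{
	struct node *n;

	for (n = head->next; n != head; n = n->next) {
		if (addr >= n->begin_addr && addr < n->end_addr) {
			list_del_node(n);		/* unlink from current spot */
			list_add_front(head, n);	/* move-to-front */
			return n;
		}
	}
	return NULL;
}

int main(void)
{
	struct node head;
	struct node a = { .begin_addr = 0x1000, .end_addr = 0x2000 };
	struct node b = { .begin_addr = 0x2000, .end_addr = 0x3000 };

	list_init(&head);
	list_add_front(&head, &a);
	list_add_front(&head, &b);	/* list order is now: b, a */

	/* First lookup for 0x1800 walks past b to reach a ... */
	find_mtf(&head, 0x1800);
	/* ... after which a sits at the front, so a repeat is one step. */
	printf("front covers 0x1800: %d\n",
	       head.next->begin_addr <= 0x1800 && 0x1800 < head.next->end_addr);
	return 0;
}

The kernel patch gets the same splice from list_move(), which unlinks
the table from its current position and re-adds it at the head of
unwind_tables in a single call.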