lib/find_bit.c: micro-optimise find_next_*_bit

This saves 32 bytes on my x86-64 build, mostly due to alignment
considerations and sharing more code between find_next_bit and
find_next_zero_bit, but it does save a couple of instructions.

There are two parts to this commit (a standalone sketch after the list illustrates both):
 - First, the first half of the test, !nbits, is trivially a subset of the
   second half of (!nbits || start >= nbits): since nbits and start are
   both unsigned, nbits == 0 implies that start >= nbits is always true.
 - Second, while looking at the disassembly, I noticed that GCC was
   predicting the branch taken. Since this is a failure case, it's
   clearly the less likely of the two branches, so add an unlikely() to
   override GCC's heuristics.
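
Both points can be exercised outside the kernel tree. The sketch below is
not the kernel code: old_test()/new_test() are illustrative names, and the
local unlikely() wrapper around GCC's __builtin_expect() merely mirrors the
kernel's definition for the purposes of this demo. It checks, over a
handful of sample values, that dropping the !nbits clause never changes the
result for unsigned operands.

/*
 * Standalone sketch (not kernel code): verify that, for unsigned values,
 * "!nbits || start >= nbits" and "start >= nbits" always agree, and show
 * how unlikely() maps onto GCC's __builtin_expect() hint.
 */
#include <assert.h>
#include <limits.h>
#include <stdio.h>

#define unlikely(x)     __builtin_expect(!!(x), 0)

static int old_test(unsigned long nbits, unsigned long start)
{
        return !nbits || start >= nbits;
}

static int new_test(unsigned long nbits, unsigned long start)
{
        return unlikely(start >= nbits);
}

int main(void)
{
        static const unsigned long v[] = { 0, 1, 63, 64, 65, ULONG_MAX };
        size_t i, j;

        for (i = 0; i < sizeof(v) / sizeof(v[0]); i++)
                for (j = 0; j < sizeof(v) / sizeof(v[0]); j++)
                        /* nbits == 0 forces start >= nbits for unsigned start */
                        assert(old_test(v[i], v[j]) == new_test(v[i], v[j]));

        printf("old and new range checks agree on all sampled inputs\n");
        return 0;
}

The hunks below make the equivalent change at each of the three sites in
the diff, replacing the redundant two-clause test with the single annotated
comparison.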

[mawilcox@microsoft.com: v2]
  Link: http://lkml.kernel.org/r/1483709016-1834-1-git-send-email-mawilcox@linuxonhyperv.com
Link: http://lkml.kernel.org/r/1483709016-1834-1-git-send-email-mawilcox@linuxonhyperv.com
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Yury Norov <ynorov@caviumnetworks.com>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit e4afd2e556 (parent 55ded9551f)
Author: Matthew Wilcox, 2017-02-24 15:00:58 -08:00, committed by Linus Torvalds
2 files changed, 3 insertions(+), 3 deletions(-)

--- a/lib/find_bit.c
+++ b/lib/find_bit.c

@@ -33,7 +33,7 @@ static unsigned long _find_next_bit(const unsigned long *addr,
 {
 	unsigned long tmp;
 
-	if (!nbits || start >= nbits)
+	if (unlikely(start >= nbits))
 		return nbits;
 
 	tmp = addr[start / BITS_PER_LONG] ^ invert;
@@ -151,7 +151,7 @@ static unsigned long _find_next_bit_le(const unsigned long *addr,
 {
 	unsigned long tmp;
 
-	if (!nbits || start >= nbits)
+	if (unlikely(start >= nbits))
 		return nbits;
 
 	tmp = addr[start / BITS_PER_LONG] ^ invert;

--- a/tools/lib/find_bit.c
+++ b/tools/lib/find_bit.c

@@ -34,7 +34,7 @@ static unsigned long _find_next_bit(const unsigned long *addr,
 {
 	unsigned long tmp;
 
-	if (!nbits || start >= nbits)
+	if (unlikely(start >= nbits))
 		return nbits;
 
 	tmp = addr[start / BITS_PER_LONG] ^ invert;