zsmalloc: micro-optimize zs_object_copy()

A micro-optimization.  Avoid additional branching and reduce (a bit)
register pressure (e.g.  s_off += size; d_off += size; may be calculated
twice: first for the >= PAGE_SIZE check and later for the offset update in
the "else" clause).

scripts/bloat-o-meter shows some improvement

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-10 (-10)
function                          old     new   delta
zs_object_copy                    550     540     -10

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 495819ead5 (parent 1ec7cfb13a)
Author: Sergey Senozhatsky, 2015-04-15 16:16:15 -07:00; committed by Linus Torvalds

@@ -1537,7 +1537,12 @@ static void zs_object_copy(unsigned long src, unsigned long dst,
 		if (written == class->size)
 			break;
 
-		if (s_off + size >= PAGE_SIZE) {
+		s_off += size;
+		s_size -= size;
+		d_off += size;
+		d_size -= size;
+
+		if (s_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
 			kunmap_atomic(s_addr);
 			s_page = get_next_page(s_page);
@@ -1546,21 +1551,15 @@ static void zs_object_copy(unsigned long src, unsigned long dst,
 			d_addr = kmap_atomic(d_page);
 			s_size = class->size - written;
 			s_off = 0;
-		} else {
-			s_off += size;
-			s_size -= size;
-		}
+		}
 
-		if (d_off + size >= PAGE_SIZE) {
+		if (d_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
 			d_page = get_next_page(d_page);
 			BUG_ON(!d_page);
 			d_addr = kmap_atomic(d_page);
 			d_size = class->size - written;
 			d_off = 0;
-		} else {
-			d_off += size;
-			d_size -= size;
-		}
+		}
 	}