Fix:
 ------------[ cut here ]------------
 WARNING: at arch/x86/mm/init.c:342 free_init_pages+0x4c/0xfa()
 free_init_pages: range [0x40daf000, 0x40db5c24] is not aligned
 Modules linked in:
 Pid: 0, comm: swapper Not tainted 2.6.34-rc2-tip-03946-g4f16b23-dirty #50
 Call Trace:
  [<40232e9f>] warn_slowpath_common+0x65/0x7c
  [<4021c9f0>] ? free_init_pages+0x4c/0xfa
  [<40881434>] ? _etext+0x0/0x24
  [<40232eea>] warn_slowpath_fmt+0x24/0x27
  [<4021c9f0>] free_init_pages+0x4c/0xfa
  [<40881434>] ? _etext+0x0/0x24
  [<40d3f4bd>] alternative_instructions+0xf6/0x100
  [<40d3fe4f>] check_bugs+0xbd/0xbf
  [<40d398a7>] start_kernel+0x2d5/0x2e4
  [<40d390ce>] i386_start_kernel+0xce/0xd5
 ---[ end trace 4eaa2a86a8e2da22 ]---
Comments in vmlinux.lds.S already said:
 |        /*
 |         * smp_locks might be freed after init
 |         * start/end must be page aligned
 |         */
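free_init_pages() warns and skips the free when either bound of the range is
not page aligned; with __smp_locks_end defined before the ALIGN(PAGE_SIZE),
the end symbol lands mid-page and the warning above fires. Below is a minimal
standalone sketch of that bounds check, not the kernel's actual code: the
helper name and the PAGE_SIZE of 4096 are assumptions, the addresses come
straight from the warning above.

        #include <stdio.h>

        #define PAGE_SIZE 4096UL
        #define PAGE_MASK (~(PAGE_SIZE - 1))

        /* Roughly the sanity check free_init_pages() performs: both bounds of
         * the to-be-freed range must sit on page boundaries, since whole pages
         * are the smallest unit that can be handed back to the allocator. */
        static int range_is_page_aligned(unsigned long begin, unsigned long end)
        {
                return (begin & ~PAGE_MASK) == 0 && (end & ~PAGE_MASK) == 0;
        }

        int main(void)
        {
                unsigned long begin = 0x40daf000UL;     /* __smp_locks (from the warning)     */
                unsigned long end   = 0x40db5c24UL;     /* __smp_locks_end (from the warning) */

                if (!range_is_page_aligned(begin, end))
                        printf("range [0x%lx, 0x%lx] is not aligned\n", begin, end);

                return 0;
        }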
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1269830604-26214-2-git-send-email-yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
        .smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
                __smp_locks = .;
                *(.smp_locks)
-               __smp_locks_end = .;
                . = ALIGN(PAGE_SIZE);
+               __smp_locks_end = .;
        }
 
 #ifdef CONFIG_X86_64
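
The effect of the hunk can be illustrated with a small standalone program that
mimics the linker's location-counter arithmetic; align_up() is a made-up helper
and PAGE_SIZE of 4096 is assumed, while the start address and the 0x6c24
payload size are taken from the range in the warning above. Defining
__smp_locks_end before ". = ALIGN(PAGE_SIZE);" leaves it mid-page; defining it
after the ALIGN puts it on the page boundary, so [__smp_locks, __smp_locks_end)
covers whole pages and can be freed without tripping the check.

        #include <stdio.h>

        #define PAGE_SIZE 4096UL

        /* Round the location counter up to the next page boundary, like
         * ". = ALIGN(PAGE_SIZE);" does in the linker script. */
        static unsigned long align_up(unsigned long dot)
        {
                return (dot + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
        }

        int main(void)
        {
                unsigned long smp_locks = 0x40daf000UL; /* section start, page aligned      */
                unsigned long payload   = 0x6c24UL;     /* size of *(.smp_locks) per warning */
                unsigned long dot       = smp_locks + payload;

                unsigned long end_before_align = dot;           /* old symbol placement */
                unsigned long end_after_align  = align_up(dot); /* new symbol placement */

                printf("__smp_locks_end before ALIGN: 0x%lx (aligned: %s)\n",
                       end_before_align, (end_before_align % PAGE_SIZE) ? "no" : "yes");
                printf("__smp_locks_end after  ALIGN: 0x%lx (aligned: %s)\n",
                       end_after_align, (end_after_align % PAGE_SIZE) ? "no" : "yes");
                return 0;
        }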