Should I check for parity of a number in a different way?

Problem Description

In Java, as well as in other languages, I always use the N % 2 == 0 trick to obtain the parity of a number, but I'm pretty sure there is a better solution in terms of time complexity.

It's probably already better to use (N & 1) == 0 (the parentheses are needed in Java, since == binds tighter than &), but does this run in constant time? That is, does it just compare the rightmost bit, or does it go over all of the log(N) bits composing N?

Is it possible to somehow just read the last bit of the number, which would be in constant time for sure?

Tags: java, time-complexity, bit-manipulation, bit, parity

Solution


Typically, the & operator is implemented in hardware circuitry. The circuitry runs over all the bits of the number in parallel in one single step. It does not look at bits one by one.

Integer division and remainder operations, on the other hand, are a lot more complex. Popular CPUs don't have dedicated single-cycle circuitry for them; division is instead implemented in microcode or as a multi-cycle iterative unit. It's hard to find a primary source, but various sources on the internet put integer division at roughly 20-40 times as expensive as bitwise operators like AND.

Compiler writers are well aware of how expensive the division and remainder instructions are, and do everything possible to avoid generating code that uses them - see for example Why does GCC use multiplication by a strange number in implementing integer division? It is very likely, but not guaranteed, that N % 2 == 0 is compiled to the equivalent of (N & 1) == 0. To make sure, you would have to examine the machine code generated by the compiler. In Java, the compiler you should care most about is the JIT compiler, which generates native machine code on the fly. Examining its output is possible, but not easy. Also, the optimizations it performs may change from one version to the next, or vary between CPU models.
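To make the equivalence concrete, here is a minimal sketch (the method names are mine) showing that the two checks agree for every int, including negatives, where Java's % can return -1:

```java
public class Parity {
    // In Java, == binds tighter than &, so the parentheses in (n & 1) == 0
    // are required; n & 1 == 0 would not even compile.
    static boolean isEvenMod(int n) {
        return n % 2 == 0;   // n % 2 is -1, 0, or 1 in Java
    }

    static boolean isEvenAnd(int n) {
        return (n & 1) == 0; // tests the lowest bit; works for negatives
    }                        // too, thanks to two's complement

    public static void main(String[] args) {
        for (int n : new int[] {-3, -2, 0, 7, Integer.MIN_VALUE}) {
            System.out.println(n + " -> mod: " + isEvenMod(n)
                    + ", and: " + isEvenAnd(n));
        }
    }
}
```

Because the two methods are observably equivalent, the JIT is free to compile the % version down to a bit test.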

If you really want to know which is faster in your application with your hardware, you should create a benchmark and test it yourself.
