Implement the powering operation with time complexity O(1)

0 votes
asked Mar 4, 2015 by xin

I am trying to understand how the power operation in the following post is implemented with time complexity O(1).

The algorithm:

The constant-time function uses logarithms, and is subject to floating-point error:

(define (pow-constant b e)
  (exp (* (log b) e)))

> (trace pow-constant)
> (pow-constant 2 16)
|(pow-constant 2 16)

Can anyone explain why this algorithm works? And why it is O(1)?

2 Answers

0 votes
answered Mar 4, 2015 by user448810

I am the author of Programming Praxis. Thanks for reading! You can ask questions here at Stack Overflow, of course, but you are also welcome to ask questions in the comments section of Programming Praxis. To answer your questions:

First, it works because that's the mathematical definition of logarithms. To multiply two numbers, take their logarithms, add the logarithms together, then take the anti-logarithm of the sum; in programming terms: x × y = exp(log(x) + log(y)). The powering operation takes the logarithm of the base, multiplies the logarithm by the exponent, then takes the anti-logarithm of the product; in programming terms: x^y = exp(log(x) × y).
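Both identities are easy to check numerically. Here is a small sketch in Python (the original code is Scheme, but the arithmetic is identical):

```python
import math

def pow_constant(b, e):
    """b**e via exp(log(b) * e): one log, one multiply, one exp."""
    return math.exp(math.log(b) * e)

# Multiplication through logarithms: x * y = exp(log(x) + log(y))
product = math.exp(math.log(6) + math.log(7))   # ~42.0, up to float error

# Powering through logarithms: x**y = exp(log(x) * y)
power = pow_constant(2, 16)                      # ~65536.0, up to float error

print(round(product), round(power))              # 42 65536
```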

Second, it is O(1) only if you cheat. There is only a single (constant) operation involving the exponent, so the operation is constant-time only with respect to the exponent. But arithmetic takes time; in fact, arithmetic on a number n takes time proportional to log(n), the number of bits in n, but we ignore that. As a practical matter, computers only provide logarithms up to some limit, usually 3.4 × 10^38 for single-precision floats and 1.8 × 10^308 for double-precision floats. Within that range, the functions used internally are close enough to constant-time that we say taking logarithms and exponentials is O(1). If you use an arbitrary-precision library such as GMP, with floats that are very much larger, it would be incorrect to say that the arithmetic is done in constant time. So I guess it depends on how precise you want to be.
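The contrast described here can be made concrete by comparing the fixed number of float operations against exact exponentiation by squaring, whose work grows with the exponent (a Python sketch; function names are my own):

```python
import math

def pow_constant(b, e):
    # A fixed number of float operations regardless of e --
    # but limited to the range and precision of a double.
    return math.exp(math.log(b) * e)

def pow_exact(b, e):
    """Exact integer power by repeated squaring: O(log e) multiplications."""
    result, mults = 1, 0
    while e > 0:
        if e & 1:
            result *= b
            mults += 1
        b *= b
        mults += 1
        e >>= 1
    return result, mults

exact, mults = pow_exact(2, 1000)    # exact 302-digit integer, O(log e) multiplies
approx = pow_constant(2, 100)        # fine: 2**100 is within double range
# pow_constant(2, 2000) would overflow a double (max ~1.8e308),
# while pow_exact(2, 2000) still returns the exact integer.
```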

0 votes
answered Dec 23, 2018 by foon

How it works:
A = exp(ln(A)) and ln(A^B) = B × ln(A), so A^B (as in A to the B) is equal to exp(ln(A^B)), which is exp(B × ln(A)); rewriting that to use Scheme/Lisp's prefix notation yields the function shown. If memory serves, this is the same technique slide rules used (although they also took advantage of ln(a × b) = ln(a) + ln(b), the idea being that addition is easier than multiplication). My exposure to slide rules is limited to an eccentric high school physics teacher and a cryptography teacher who used one to explain certain tricks with big-number math.
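The chain of identities above can be traced step by step; a minimal Python check (the values A = 3, B = 5 are my own choice):

```python
import math

A, B = 3.0, 5.0
direct = A ** B                  # A to the B, computed directly (~243.0)
step1 = math.log(A ** B)         # ln(A^B)
step2 = B * math.log(A)          # B * ln(A) -- the same value
via_logs = math.exp(step2)       # exp(B * ln(A)) recovers A^B

assert math.isclose(step1, step2)
assert math.isclose(direct, via_logs)
```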

Why is it O(1):
I disagree with the claim that this is O(1) for arbitrarily sized numbers. It's been too long since I've looked at the O complexity of big-number logarithms and exponentials, but I'm fairly sure they're not O(1) with respect to the size of the numbers; on the other hand, those operations are for lossless math. For numbers constrained to the size of a floating-point number, these operations are reasonably good approximations of O(1), with the limitation that you lose precision.
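That precision loss is easy to observe. A small Python sketch (the base and exponent are my own choice):

```python
import math

def pow_constant(b, e):
    return math.exp(math.log(b) * e)

exact = 3 ** 50                   # lossless big-integer arithmetic: 24 digits
approx = pow_constant(3, 50)      # float path: only ~15-16 significant digits

rel_err = abs(approx - exact) / exact
# Tiny relative error, but the low-order digits of the exact answer are lost.
```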
