What polynomial multiplication algorithms compete with FFT?


When it comes to multiplying two polynomials of degree n, there are several methods available, each with its own trade-offs:

  1. Classic/Brute Force Multiplication: This is the most straightforward approach, where we multiply every pair of terms and sum the results. The time complexity is O(n²), and while it’s simple, it’s generally only practical for small n due to its inefficiency.
  2. FFT (Fast Fourier Transform): This is a more advanced technique that reduces the complexity to O(n log n). While it’s much faster than brute force, FFT works over the complex numbers, which can lead to floating-point precision issues. If you are dealing with integer polynomials, a better alternative can be the NTT (Number-Theoretic Transform), which avoids floating point entirely but only works in specific rings (modulo suitable primes). For arbitrary polynomial multiplication, combining NTT with other techniques (e.g. the Chinese Remainder Theorem over several primes) may be required.
  3. Karatsuba Algorithm: This algorithm is a sub-quadratic approach that’s easier to understand than FFT. It’s based on divide-and-conquer and has a time complexity of O(n^log₂3), or roughly O(n^1.585). While not as fast as FFT, it’s a great option for integer polynomials because it avoids precision issues. In fact, Python uses Karatsuba for large integer multiplication.
  4. Large Integer Multiplication: A related problem is large integer multiplication, where large numbers are represented as polynomials. For example, each large number can be represented as:

    P(10) = xₙ · 10ⁿ + xₙ₋₁ · 10ⁿ⁻¹ + … + x₀

    The multiplication of such large numbers is similar to polynomial multiplication, and here are some algorithms commonly used for it:

    • Toom-Cook Algorithm: This is a generalized version of the Karatsuba algorithm that splits each number into k parts; Toom-2 is exactly Karatsuba. As k increases, the exponent in the running time approaches 1 (Toom-3, for instance, runs in about O(n^1.465)), but the constant factors grow quickly, so larger k is not always the most efficient choice in practice.
    • Schönhage-Strassen Algorithm: This FFT-based algorithm overtakes Toom-Cook once the operands reach roughly 10,000 to 100,000 decimal digits. Its time complexity is O(n log n log log n), which makes it highly efficient for very large numbers.
    • Harvey–van der Hoeven Algorithm: Announced in 2019, this algorithm builds on number-theoretic and Fourier-transform techniques and achieves O(n log n), which is conjectured to be optimal. It is so far a theoretical result: the hidden constants are so large that it is not used in practice.
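To make the trade-offs concrete, here is a minimal sketch of the classic O(n²) method, representing a polynomial as a plain Python list of coefficients, lowest degree first (the function name poly_mul_naive is my own):

```python
def poly_mul_naive(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b  # x^i * x^j contributes to x^(i+j)
    return result

# (1 + 2x)(3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3
print(poly_mul_naive([1, 2], [3, 4, 5]))  # [3, 10, 13, 10]
```

The double loop is exactly the "multiply every pair of terms" of item 1, which is why the cost is quadratic in the degree.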
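For comparison, here is a sketch of the FFT approach from item 2: a textbook recursive radix-2 Cooley–Tukey transform in pure Python (the names fft and poly_mul_fft are my own). The final round() call is exactly where the floating-point caveat mentioned above shows up — for large coefficients the rounding can go wrong, which is what motivates the NTT:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    result = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        result[k] = even[k] + w * odd[k]
        result[k + n // 2] = even[k] - w * odd[k]
    return result

def poly_mul_fft(p, q):
    """Multiply coefficient lists via pointwise product in the Fourier domain."""
    n = 1
    while n < len(p) + len(q) - 1:  # pad to a power of two
        n *= 2
    fp = fft([complex(c) for c in p] + [0] * (n - len(p)))
    fq = fft([complex(c) for c in q] + [0] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    # Divide by n (inverse transform) and round away floating-point noise.
    return [round((v / n).real) for v in prod[:len(p) + len(q) - 1]]

print(poly_mul_fft([1, 2], [3, 4, 5]))  # [3, 10, 13, 10]
```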
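The Karatsuba idea of item 3 — trade one of the four sub-multiplications for a few additions — can be sketched for polynomials as follows (the names karatsuba_poly and poly_add are my own; inputs are coefficient lists, lowest degree first):

```python
def poly_add(p, q):
    """Elementwise sum of two coefficient lists of possibly different lengths."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def karatsuba_poly(p, q):
    """Karatsuba multiplication: 3 recursive products instead of 4."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    if n == 1:
        return [p[0] * q[0]]
    m = n // 2
    p0, p1 = p[:m], p[m:]              # p = p0 + p1 * x^m
    q0, q1 = q[:m], q[m:]
    z0 = karatsuba_poly(p0, q0)        # low * low
    z2 = karatsuba_poly(p1, q1)        # high * high
    # (p0 + p1)(q0 + q1) - z0 - z2 yields the middle term with one product.
    z1 = karatsuba_poly(poly_add(p0, p1), poly_add(q0, q1))
    z1 = [a - b - c for a, b, c in
          zip(z1, z0 + [0] * len(z1), z2 + [0] * len(z1))]
    result = [0] * (2 * n - 1)
    for i, c in enumerate(z0):
        result[i] += c
    for i, c in enumerate(z1):
        result[i + m] += c
    for i, c in enumerate(z2):
        result[i + 2 * m] += c
    return result

print(karatsuba_poly([1, 2, 3], [4, 5, 6]))  # [4, 13, 28, 27, 18]
```

Replacing four half-size products with three is what drives the exponent down from 2 to log₂3 ≈ 1.585.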
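Finally, the large-integer connection from item 4 can be illustrated directly: treat the decimal digits as polynomial coefficients, multiply the polynomials, then evaluate at x = 10, which performs all the carries. This toy sketch (names int_to_coeffs, coeffs_to_int, and bigmul are my own) uses the naive product inside, but any of the algorithms above could be substituted:

```python
def int_to_coeffs(n):
    """Decimal digits of n, lowest power first: 1234 -> [4, 3, 2, 1]."""
    return [int(d) for d in str(n)][::-1]

def coeffs_to_int(coeffs):
    """Evaluate the polynomial at x = 10; carries happen automatically."""
    total, power = 0, 1
    for c in coeffs:
        total += c * power
        power *= 10
    return total

def bigmul(a, b):
    """Multiply two integers via their digit polynomials."""
    pa, pb = int_to_coeffs(a), int_to_coeffs(b)
    prod = [0] * (len(pa) + len(pb) - 1)
    for i, x in enumerate(pa):
        for j, y in enumerate(pb):
            prod[i + j] += x * y
    return coeffs_to_int(prod)

print(bigmul(1234, 5678))  # 7006652, same as 1234 * 5678
```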

These methods each have their advantages and are selected based on the specific problem at hand. For integer polynomials, you might prefer Karatsuba or NTT, while FFT and Schönhage-Strassen shine with very large numbers.
