The Power of Vocabulary:
The Case of Cyclotomic Polynomials
Research begun while both authors were on sabbatical at the David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario.
Abstract
We observe that the vocabulary used to construct the “answer” to problems in computer algebra can have a dramatic effect on the computational complexity of solving that problem. We recall a formalization of this observation and explain the classic example of sparse polynomial arithmetic. For this case, we show that it is possible to extend the vocabulary so as to reap the benefits of conciseness whilst avoiding the obvious pitfall of repeating the problem statement as the “solution”.
It is possible to extend the vocabulary either by the irreducible cyclotomics $\Phi_n$ or by the binomials $x^n-1$: we look at the options and suggest that the pragmatist might opt for both.
1 Introduction
While sparse polynomials are a natural data structure for human beings (who would write out thousands of zero terms?) and computer algebra systems, algorithms to do more than add and multiply them are thin on the ground, and most texts slip silently from considering sparse polynomials to considering dense ones [8]. This is partly because of the existence of examples showing that the output can be exponentially larger than the input, and hence “nothing can be done”. We contend that these examples are basically all cases of the cyclotomic polynomials in disguise, and that, by admitting these to the output language, as Schinzel’s operator [18] effectively does, these examples cease to be absolute barriers to efficient algorithms. Cyclotomic factors can often be recognised relatively efficiently [6], though the worst case is NP-hard by the following result.
Theorem 1 ([16, Theorem 6.1])
It is NP-hard to solve the problem, given a polynomial $f$, of determining whether $f$ has a root of modulus 1.
This is a paradigmatic example of a more general thesis: solving problems in computer algebra requires the concurrent design of the most appropriate vocabulary and algorithms which are polynomial in the size of the output so encoded. Naturally, unconstrained multiplication of new vocabulary is not a viable solution, and a methodology for costing this was proposed in [7].
A common problem in computer algebra is “factorize this polynomial”. The algorithms commonly used first compute factorizations $p$-adically, and then deduce the “true” factorization over $\mathbb{Q}$. The traditional approaches [23] are theoretically exponential in the number of $p$-adic factors, though in practice the exponential aspect can be “controlled” [1]. Polynomial-time (in the degree, and a fortiori in the number of $p$-adic factors) algorithms are known [12], but in practice tend to be slower. The most recent progress is in [20], whose algorithm is faster in practice, and whose deduction phase is heuristically polynomial time in the number of $p$-adic factors.
Of course, for a sparse polynomial such as $x^n-1$, the size of the polynomial is $O(\log n)$, and so an algorithm polynomial in the degree $n$ is still exponential in the size of the input. Are there algorithms which are polynomial in the size of the input? If the output represents the factors as expanded polynomials, this is impossible. There is however a conjecture that the only cases which cause exponential blowups are cyclotomic factors – we will return to this later.
Notation 1
We define the following for a polynomial $f = \sum_{i=0}^{n} a_i x^i$:

- $t(f)$, the number of nonzero terms in $f$;

- $H(f) = \max_i |a_i|$, the height of $f$;

- the even part of $f$, the sum of the terms $a_i x^i$ with $i$ even;

- the odd part of $f$, the sum of the terms $a_i x^i$ with $i$ odd;

- the root-square, or Graeffe, of $f$: the polynomial $g$ with $g(x^2) = \pm f(x)f(-x)$, whose roots are the squares of the roots of $f$ (there are various conventions in the literature as to how one handles the sign arising from the parity of $\deg f$);

- the Mahler measure of $f$, $M(f) = |a_n| \prod_i \max(1, |\alpha_i|)$, where the $\alpha_i$ are the roots of $f$.
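The root-square can be computed without knowing the roots, since $g(x^2) = \pm f(x)f(-x)$. A minimal sketch (coefficient lists in ascending order; we adopt the convention of simply keeping the even part of $f(x)f(-x)$):

```python
def graeffe(f):
    """One Graeffe root-squaring step: return g with g(x^2) = f(x) * f(-x).

    f is a list of coefficients in ascending order; the roots of g are
    the squares of the roots of f (up to the sign convention noted above).
    """
    f_minus = [(-1) ** i * a for i, a in enumerate(f)]  # coefficients of f(-x)
    h = [0] * (2 * len(f) - 1)                          # f(x) * f(-x)
    for i, a in enumerate(f):
        for j, b in enumerate(f_minus):
            h[i + j] += a * b
    return h[::2]                                       # only even powers survive
```

For $f = x^2 - 3x + 2$ (roots 1 and 2), one step gives $x^2 - 5x + 4$ (roots 1 and 4); a cyclotomic such as $x^2 + x + 1$ is mapped to itself, which is the idea behind the tests of [6] used in Example 3 below.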
Table 1: the first $n$ at which each height $H(\Phi_n)$ is attained, with $\phi(n) = \deg\Phi_n$.

height    2    3    4     5     6     7     8=9   14     23     25     27     59     359
first n   105  385  1365  1785  2805  3135  6545  10465  11305  17225  20615  26565  40755
phi(n)    48   240  576   768   1280  1440  3840  6336   6912   10752  12960  10560  17280

[A larger version, independently computed, is in [2].]
2 Cyclotomic Polynomials
Notation 2
Let $d(n)$ denote the number of divisors of the number $n$ (including 1 and $n$ itself).
Theorem 2
It is known [22] that, in the worst case,
(1) $d(n) = 2^{(1+o(1))\frac{\log n}{\log\log n}}$, i.e. $\limsup_{n\to\infty} \frac{\log d(n)\,\log\log n}{\log n} = \log 2$.
However, we should note the caveats about the distribution of $d(n)$ given at [11, Theorem 432], in particular that it has average order $\log n$ but normal order roughly $(\log n)^{\log 2}$.
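To see these orders of magnitude concretely, here is a naive computation (our illustration, not from the original) of $d(n)$ and of its average:

```python
import math

def d(n):
    """Number of divisors of n, including 1 and n itself."""
    return sum(1 for i in range(1, n + 1) if n % i == 0)

# d(n) is very irregular: d(p) = 2 for any prime p, while highly
# composite n do far better; on average d(n) behaves like log n.
average = sum(d(n) for n in range(1, 1001)) / 1000
```

For instance $d(105) = 8$ and $d(1024) = 11$, while the average over $n \le 1000$ is close to $\log 1000 \approx 6.9$.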
Definition 1
We will say that a polynomial is cyclotomic if all its roots are roots of unity. Many authors reserve this term for irreducible polynomials, but we will explicitly say “irreducible” when we need to.
Notation 3
Let $\Phi_n$ be the $n$th irreducible cyclotomic polynomial:
(2) $\Phi_n(x) = \prod_{1 \le k \le n,\ \gcd(k,n)=1} \left(x - e^{2\pi i k/n}\right)$, of degree $\phi(n)$.
We will also make use of the cyclotomic polynomial with all $n$th roots of unity, i.e. $x^n - 1 = \prod_{d \mid n} \Phi_d(x)$.
We should note that it is not the case that the coefficients of $\Phi_n$ are always 0 or $\pm 1$. The first counterexample is $\Phi_{105}$, which contains the terms $-2x^7$ and $-2x^{41}$; $\Phi_{385}$ is the first with a coefficient of absolute value 3. The growth rate is in fact greater than one might expect: $105 = 3\cdot5\cdot7$ and $385 = 5\cdot7\cdot11$, and products of several distinct odd primes look like the recipe (confirmed in [4]) to make large values, but in fact a height of 23 is first attained at $n = 11305$, as shown in table 1. We note the spectacular leap at $n = 40755$, where the height reaches 359. [2, Table 3] shows more such leaps for (much) larger $n$.
Theorem 3
[21, Theorem 1] shows that, for infinitely many $n$, the height $H(\Phi_n)$ (the largest coefficient in absolute value) satisfies
(3) $\log H(\Phi_n) \ge \exp\left(\frac{(\log 2)\log n}{\log\log n}\right)$,
and indeed this is precisely the right order of (worst-case) growth [3], perhaps better expressed as
(4) $\log\log H(\Phi_n) = (\log 2 + o(1))\,\frac{\log n}{\log\log n}$ in the worst case.
Proposition 4
$x^n - 1 = \prod_{d \mid n} \Phi_d(x)$, and these factors are irreducible.
Proposition 5
$\Phi_n(x) = \prod_{d \mid n} \left(x^d - 1\right)^{\mu(n/d)}$, where $\mu$ is the Möbius function.
Proposition 6
$x^n - 1$ has $d(n)$ irreducible factors.
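Proposition 5 gives a practical way to compute $\Phi_n$ by exact polynomial multiplication and division. A short sketch (ascending coefficient lists, exact integer arithmetic throughout):

```python
def poly_mul(a, b):
    """Product of two polynomials given as ascending coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def poly_div_exact(num, den):
    """Exact quotient num / den for monic den (ascending lists)."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(q))):
        c = num[i + len(den) - 1]          # den is monic
        q[i] = c
        for j, b in enumerate(den):
            num[i + j] -= c * b
    return q

def mobius(m):
    """Möbius function mu(m)."""
    res, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0                   # square factor
            res = -res
        p += 1
    return -res if m > 1 else res

def cyclotomic(n):
    """Phi_n via Proposition 5: prod over d | n of (x^d - 1)^mu(n/d)."""
    num, den = [1], [1]
    for d in range(1, n + 1):
        if n % d == 0:
            xd = [-1] + [0] * (d - 1) + [1]    # x^d - 1
            if mobius(n // d) == 1:
                num = poly_mul(num, xd)
            elif mobius(n // d) == -1:
                den = poly_mul(den, xd)
    return poly_div_exact(num, den)
```

Thus cyclotomic(6) returns [1, -1, 1], i.e. $x^2 - x + 1$, and cyclotomic(105) exhibits the coefficient $-2$ at $x^7$ mentioned above.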
Cyclotomic polynomials are the bugbear of anyone who tries to deal with sparse polynomials.
Example 1
Asking for the factorization, or even the degrees of the factors, of $x^n - 1$ is tantamount to factoring $n$, since for every prime $p$ dividing $n$, there is a $\Phi_p$ in the factorization of $x^n - 1$.
Example 2
Similarly, asking for the degree of $\Phi_n$ is, if $n$ is $pq$ ($p$, $q$ distinct primes), tantamount to factoring $n$, since $\deg\Phi_{pq} = \phi(pq) = (p-1)(q-1)$ and so $p + q = n - \phi(n) + 1$, from which $p$ and $q$ are easily recovered.
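Example 2’s reduction is effectively constructive (a sketch, using only the standard identity): from $n = pq$ and $\phi(n)$ we get $p + q = n - \phi(n) + 1$, so $p$ and $q$ are the roots of $x^2 - (p+q)x + n$.

```python
import math

def recover_primes(n, phi_n):
    """Given n = p*q (p, q distinct primes) and phi(n), recover p and q."""
    s = n - phi_n + 1            # p + q
    disc = s * s - 4 * n         # (p - q)^2
    r = math.isqrt(disc)
    assert r * r == disc, "n is not a product of two distinct primes"
    return (s - r) // 2, (s + r) // 2
```

For instance recover_primes(143, 120) returns (11, 13): anyone who can tell us that $\deg\Phi_{143} = 120$ has factored 143.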
Cyclotomic polynomials are frequently used as examples.
Example 3
[20, p. 185] gives an example on which, he states, his algorithm sped up Maple by a factor of 500. From a cyclotomic-aware point of view, such as [6], this polynomial is easy: four applications of Graeffe’s root-squaring process yield a polynomial which a further application takes to itself, whence that polynomial, and so the original polynomial, is cyclotomic, and its roots, being roots of unity, give the factorization directly.
Example 4
If $p$ is prime and $f = x^p - 1$, then $t(f) = 2$ but $f = (x-1)\Phi_p(x)$: two factors with 2 and $p$ terms respectively.
Example 5
If $p$, $q$ are distinct primes and $f = (x^p - 1)(x^q - 1)$, then $t(f) = 4$ but $f = (x-1)^2\,\Phi_p(x)\,\Phi_q(x)$. The squarefree decomposition of $f$ is therefore a repeated factor $(x-1)$ and a factor $\Phi_p\Phi_q$, with 2 and $p+q-1$ terms respectively. The largest coefficient in $\Phi_p\Phi_q$ is $\min(p,q)$, and its degree is $p+q-2$.
Obviously, a squarefree decomposition of $f$ was a bad idea in this case: however previously-proposed algorithms, e.g. [5, p. 69], tend to do this.
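The sizes discussed in Example 5 are easy to check by brute force (our illustration): for prime $p$, $\Phi_p = 1 + x + \cdots + x^{p-1}$, so $\Phi_p\Phi_q$ is the convolution of two all-ones sequences.

```python
def phi_p_times_phi_q(p, q):
    """Expand Phi_p * Phi_q for primes p, q (ascending coefficients)."""
    prod = [0] * (p + q - 1)              # degree (p-1) + (q-1)
    for i in range(p):                    # Phi_p = 1 + x + ... + x^(p-1)
        for j in range(q):
            prod[i + j] += 1
    return prod
```

For $p = 5$, $q = 7$ this gives a fully dense polynomial with $p+q-1 = 11$ terms, largest coefficient $\min(p,q) = 5$, and all coefficients positive.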
It could be argued that the problem in this case is the ‘cofactor’, but life is not that simple.
Example 6
If $p$, $q$ are distinct primes and $f = \left((x^p - 1)(x^q - 1)\right)^2$, then $t(f) = 9$ but the squarefree factorization is
$f = \left(\Phi_p(x)\,\Phi_q(x)\right)^2 (x-1)^4$,
and we are forced to write out the large squared factor. The largest coefficient of $(\Phi_p\Phi_q)^2$ is larger still, so we had also better not compute it and then take its square root.
It is the contention of this paper that all these difficulties except the first are caused by an inadequate vocabulary: the first seems to be intrinsic, in that the factorization of numbers can be encoded as a problem of factorization of polynomials. All we can do is recognise that fact.
3 Representational Complexity
Information theory, whether in the guise of Kolmogorov Complexity [13] or Minimum Description Length [10], tells us that good representations of structured objects are two-part codes: a model and an encoding of data using that model. In other words, the proper “length” of an object consists in counting the length of the representation of the model as well as the representation of the data encoded using this model.
In [7], these results from information theory are rephrased so as to apply more directly to Computer Algebra Systems, and applications to simplification are outlined. The basic result is that for large enough structured expressions, it is always worthwhile to first formalize the “structure”, and then encode the data in such a way as to abstract out that structure. Note that, if model extensions are not allowed, then simplification reduces simply to length reduction. It is exactly the confusion between issues of the (background) model class and its use in model reduction which caused Moses [15] to argue that “simplification” was impossible to formalize.
When tackling a particular situation, [7] boils down to finding the right vocabulary in which to express one’s result. In some cases, the right vocabulary is somewhat counterintuitive. For example, in the case of algebraic numbers, it was long ago discovered that using minimal polynomials to “encode” an algebraic number was best – although this can seem puzzling in the setting of “solving” a polynomial, as then the answer to the problem is just an encoding of the question. We will return to this issue later.
For the particular case of factoring of polynomials, what does this tell us? All the theoretical results point in the same direction: cyclotomic polynomials are somehow “special cases”, especially when one is factoring sparse polynomials. Conventional wisdom already tells us that both dense and sparse polynomials are useful model classes, and that we should have both at our disposal. The mathematical theory of factoring polynomials (as outlined in the rest of this paper) informs us that cyclotomics are undeniably part of the domain of discourse. Combining these together tells us that they should also be part of our model classes. The only remaining question then is whether adding this particular vocabulary actually leads to a simplification. To evaluate this, we need to actually display some data structures designed with this new vocabulary, and then evaluate if we have made any real gains.
4 Data Structures
Since we will be arguing on the size of data structures, we will need to define our data representations. The precise details might vary, though in practice the conclusions will not. For definiteness, we describe our choices according to the unaligned packed encoding of ASN.1 [19]: note that their SEQUENCE is what C programmers would think of as struct. Our encodings are intended to be practical, though we ignore issues of alignment to word boundaries, and indeed a number of operators are also omitted.
In theory, one needs to have arbitrary-sized data fields, which means one needs fields for the length of the size fields. To avoid this, we will assume that $D$ and $C$ are global parameters for the size of the object, bounding the degrees and the sizes of the coefficients respectively. Since a polynomial’s factors always have smaller degree, the same $D$ serves for the factors. It is not so easy for the coefficients [14], but we will assume a single field of $\log_2 C$ bits associated with each outermost data structure, giving the size of all the coefficients stored in that structure. Since there is one of these per structure, we can ignore its cost.
We next give explicit representations for dense polynomials, sparse polynomials, factored polynomials, $\Phi$-aware factorizations and $(x^n-1)$-aware factorizations (explained fully below).
4.1 A single dense polynomial
We choose a dense representation with a uniform size bound for all the coefficients. A single dense polynomial of degree $d$ requires $\log_2 D$ bits to represent the degree, and then there are $d+1$ coefficients. Hence if each of them requires $c$ bits, we need a $\log_2 C$-bit field telling us this, and then the coefficients require $(d+1)(c+1)$ bits (including sign), giving in total
(5) $\log_2 D + \log_2 C + (d+1)(c+1)$.
In pseudocode (essentially ASN.1, except that we allow ourselves to write mathematics in the pseudocode), this might be represented as follows.
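A sketch of the corresponding size count (the field widths are our reconstruction of the scheme above: a degree field bounded by the global $D$, a coefficient-width field bounded by $C$, then $d+1$ signed coefficients of $c$ bits each):

```python
import math

def dense_size_bits(d, c, D, C):
    """Bit count of a dense degree-d polynomial with c-bit coefficients."""
    lg = lambda x: math.ceil(math.log2(x))
    return lg(D) + lg(C) + (d + 1) * (c + 1)   # (c + 1): sign bit included
```

For $x^{1000000} - 1$ stored densely ($d = 10^6$, $c = 1$) this is about $2 \times 10^6$ bits, however sparse the polynomial really is.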
4.2 A single sparse polynomial
We choose a sparse representation with a uniform size bound for all the coefficients. Furthermore, we assume that there are $t$ nonzero coefficients, i.e. $t$ terms to be represented, and that $t$ is bounded by the same bound $D$ as the degree. A single sparse term from a polynomial of degree at most $D$ requires $\log_2 D$ bits to represent its exponent, plus $c+1$ bits for its coefficient. Hence the total space is given by
(6) $\log_2 D + \log_2 C + t\left(\log_2 D + c + 1\right)$.
In pseudocode, this might be represented as follows.
Using a Horner scheme might save a few more bits in the representation of the exponent, but rarely appreciably so.
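The sparse analogue of the count above (field widths again our reconstruction): a term-count field, then for each of the $t$ terms an exponent of $\log_2 D$ bits and a signed $c$-bit coefficient.

```python
import math

def sparse_size_bits(t, c, D, C):
    """Bit count of a t-term sparse polynomial with c-bit coefficients."""
    lg = lambda x: math.ceil(math.log2(x))
    return lg(D) + lg(C) + t * (lg(D) + c + 1)
```

For $x^{1000000} - 1$ ($t = 2$, $c = 1$) this is some 70 bits, against roughly $2 \times 10^6$ densely: the gap the Introduction alludes to.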
4.3 Representing Factorizations
We will use the same structure for squarefree or complete factorizations: the number of (distinct) factors followed by pairs (multiplicity, factor), where each factor is one of DensePoly or SparsePoly. Hence with $k$ factors, the overhead (i.e. the cost over and above that of storing the distinct factors themselves) is
(7) $(k+1)\log_2 D$
bits: one count field and $k$ multiplicity fields, each of at most $\log_2 D$ bits.
4.4 Representing $\Phi$-aware Factorizations
We will use the same structure for squarefree or complete factorizations: the number of (distinct) factors followed by pairs (multiplicity, factor). However, we do this twice: once for the factors that are cyclotomic $\Phi_n$, and once for those that are not. The factors that are $\Phi_n$ are stored as $n$ followed by $\phi(n)$ (the inclusion of $\phi(n)$ avoids the problem in example 2). In pseudocode, this might be represented as before, where each non-cyclotomic factor is one of DensePoly or SparsePoly.
Hence with $k$ cyclotomic factors, the cost of storing them is three $\log_2 D$-sized fields per factor ($n$, $\phi(n)$ and the multiplicity). Any $\Phi_n$ stored this way is cheaper than in any of the previous representations (dense, sparse or factored), so the worst case is when there are no $\Phi_n$ in the factorization, when the overhead is merely the one field PhiFactorCount.
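A sketch of the $\Phi$-aware bookkeeping (our reconstruction of the record described in this section): each cyclotomic factor is the triple $(n, \phi(n), \text{multiplicity})$, with $\phi(n)$ stored explicitly precisely so that degree queries do not run into example 2.

```python
import math

def phi_aware_factors(n):
    """Phi-aware factorization of x^n - 1: one (d, phi(d), multiplicity)
    triple per divisor d of n, by Proposition 4."""
    def phi(m):
        # Naive Euler totient; a real system would do better.
        return sum(1 for k in range(1, m + 1) if math.gcd(k, m) == 1)
    return [(d, phi(d), 1) for d in range(1, n + 1) if n % d == 0]
```

For instance phi_aware_factors(360) is 24 small triples with degrees summing to 360: three integers per factor, against an essentially dense expansion of each $\Phi_d$.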
4.5 Representing $(x^n-1)$-aware Factorizations
Instead of storing the $\Phi_n$, we could store the complete cyclotomic polynomials $x^n - 1$. Again, we will use the same structure for squarefree or complete factorizations: the number of (distinct) factors followed by pairs (multiplicity, factor). Also in this case, we do this twice: once for the factors that are of the form $x^n - 1$, and once for those that are not. One might think to store the factors that are $x^n - 1$ simply as $n$. However, as pointed out in example 1, this will not allow us to answer questions such as “how many factors” in a reasonable time. Hence we store $n$ followed by its prime factorization.
We should also note that Proposition 5 means that we now need negative multiplicities as well. In pseudocode, this might be represented as before, where each non-cyclotomic factor is one of DensePoly or SparsePoly.
Hence with $k$ factors of the form $x^n - 1$ involved, the overhead (i.e. the cost over and above that of storing the distinct non-cyclotomic factors themselves) is
(8) $O(k \log D)$
bits, since $n$ and its prime factorization each occupy $O(\log n)$ bits.
An alternative formulation might store the factors of $n$ with multiplicity: there does not seem to be a great deal to choose between them. In either case, we should note that asking for the number of irreducible factors is no longer trivial, since the number of irreducible factors corresponding to a single $x^n - 1$ is $d(n)$. Furthermore, since we are allowed negative multiplicities, representing for example
$\Phi_{pq}(x) = \frac{(x^{pq}-1)(x-1)}{(x^p-1)(x^q-1)}$,
we have to note that not all the irreducible factors of $x^{pq}-1$ are actually factors of the left-hand side. Nevertheless, since we have stored the prime factorization of $n$, the problem is efficiently soluble (certainly polynomial time in the size of the representation).
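In this encoding, “how many irreducible factors does $x^n - 1$ contribute?” is answered from the stored prime factorization alone, since that number is $d(n) = \prod_i (e_i + 1)$. A sketch, with the factorization given as (prime, exponent) pairs:

```python
def irreducible_factor_count(prime_factorization):
    """d(n), the number of Phi_d with d | n, from n = prod p_i^e_i."""
    count = 1
    for _, e in prime_factorization:
        count *= e + 1
    return count
```

For $x^{360}-1$, stored as $360 = 2^3 \cdot 3^2 \cdot 5$, this gives 24 without any factoring; with negative multiplicities one must still subtract the factors cancelled by a denominator, but that too needs only the stored factorizations.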
Table 2: sizes of the four representations (dense, sparse, $\Phi$-aware, $(x^n-1)$-aware) of the factorization of section 5.1, fully expanded, as a squarefree factorization, and factored; in each representation the squarefree factorization has the same size as the factored form. The entries are derived in section 5.1.
5 Representing some cyclotomic polynomials
Table 3: sizes of the four representations of the polynomial of section 5.2, fully expanded, as a squarefree factorization, and factored. The entries are derived in section 5.2.
We now use each of our representations and compute the size of the results.
5.1 Factorization of $x^n - 1$
The sizes of the expanded polynomial in dense and sparse encodings are obvious. The $\Phi$-aware and $(x^n-1)$-aware versions are within an additive constant of the DensePoly / SparsePoly sizes. For definiteness, we will use SparsePoly-based counts.
To understand the size of the factored forms, we need to study their sizes a little more closely. In this case, the factored form and the squarefree form coincide. There are $d(n)$ factors, of total degree $n$, hence at most $n + d(n)$ terms. By Theorem 3, we can bound the size $c$ of the coefficients (in theory, not all the factors can have coefficients this large, but the gain from exploiting this is relatively small) by taking logarithms in (3), viz.
(9) $c \le \log_2 H(\Phi_n) \approx 2^{\log n/\log\log n}$.
In general ($\Phi_{2^k}(x) = x^{2^{k-1}} + 1$ is an obvious counterexample), the factors will be essentially dense, so a sparse encoding will save nothing, but has to pay for the cost of storing the degrees with each coefficient, adding roughly $n\log_2 n$ bits, to give
(10) approximately $n\left(\log_2 n + c + 1\right)$ bits in all.
We should note that the asymptotically dominant term in this model is the coefficient storage, which is contrary to intuition, and even to the experimental data in table 1; but this merely shows that the asymptotics will take time to become visible.
The results of this section are summarised in Table 2.
5.2 A squarefree factorization
Let us now consider $f = (x^p - 1)(x^q - 1)$, with $p$, $q$ distinct primes and $n = pq$. The “fully expanded” versions are again obvious. The squarefree factorization of $f$ involves multiplying out $\Phi_p\Phi_q$. This gives us $p + q - 1$ coefficients of size up to $\min(p,q)$, in fact, assuming that $p$, $q$ are balanced, taking roughly $\frac{1}{2}\log_2 n$ bits to represent the magnitude (in this case, they are all positive, but we can’t count on this in general).
In the factored representation, we have three factors, of degrees 1, $p-1$ and $q-1$, i.e. total degree $p + q - 1$. All coefficients are bounded by 1. Hence the total is
(11) approximately $2(p+q)$ bits, plus the fixed overheads.
The results of this section are summarised in Table 3.
6 Implementation notes
6.1 Cyclotomic-free polynomials
It is important to review the encodings of the previous section and notice that, for cyclotomic-free cases, these encodings involve constant overhead, independent of the degree and of the number of factors. In fact, by using a bit or two in a header word (which modern computer algebra systems always seem to use in their internal representations), one can choose between these encodings as necessary. In other words, cyclotomic-free polynomials do not have to bear any extra representation cost for this vocabulary extension.
We can also construct various mixed cases, in other words sparse polynomials which factor into a cyclotomic part and a small dense cofactor. The difference in encoding cost is correspondingly mixed, although the end result is similar: adding cyclotomics asymptotically wins.
6.2 Which to choose?
We have posited two encodings for “cyclotomic-aware” representations of factorizations: one in terms of the irreducible cyclotomics $\Phi_n$ (section 4.4) and one in terms of the ‘complete’ cyclotomics $x^n - 1$ (section 4.5). Tables 2 and 3 make it clear that adding cyclotomics to one’s vocabulary is certainly representationally efficient. But which one should be used? We first summarize some of the advantages and disadvantages.
- Pro $x^n - 1$: clearly polynomial.
In the $\Phi_n$ representation, the factorization of $x^n - 1$ is $\prod_{d \mid n}\Phi_d$, whereas in the $(x^n-1)$ representation it is just $x^n - 1$ itself. Functions meant to extract information from products of polynomials (degrees, multiplicities, etc.) still function ‘easily’, whereas spotting that the representation is not squarefree is cheap, but not ‘obvious’.
- Anti $\Phi_n$: worst-case blowup.
- Notes
The pragmatist would probably choose $\Phi_n$. The theoretician would be swayed by the complexity argument and want $x^n - 1$. Possibly the best answer is to admit both, with an additional extension to the vocabulary as in (12).
To further illustrate one particular difficulty, let us consider the factoring of $x^{pq} - 1$, with $p$, $q$ distinct primes. It factors into
$\Phi_1(x)\,\Phi_p(x)\,\Phi_q(x)\,\Phi_{pq}(x)$.
But if all we have at our disposal is the irreducible cyclotomics $\Phi_n$, then the best we can do (which is still better than using sparse polynomials) is to write out that product of four factors. An even more succinct representation is
(12) $x^{pq} - 1$, stored together with the prime factorization of $pq$,
at the cost of another vocabulary extension. This is why, in the representation of $x^n - 1$ in section 4.5, we list the factors of $n$: this allows us to recover the factorization into irreducibles relatively easily. This still means that, in the $\Phi$-aware encoding, obtaining the number of factors, the degrees of each of the factors, or the multiplicity of each of the factors is straightforward, while for the $(x^n-1)$-aware encoding these “simple” questions now require some (small) amount of computation. In other words, adding $\Phi_n$ to our vocabulary is a very minor change with clear efficiency gains, while adding $x^n - 1$ is slightly disruptive but with even greater asymptotic efficiency gains.
7 Conclusion
In section 5 we have shown how some “troublesome” factorizations, i.e. examples 1 and 5, cease to consume inordinate space when the cyclotomics are represented explicitly. But what of arbitrary polynomials and their factorizations?
The answer is that we do not know, but there are some tantalizing results. [9] states that, provided $f$ is nonreciprocal (i.e. $f(x) \ne \pm x^{\deg f} f(1/x)$), $f$ has a factor whose number of terms is bounded in terms of $t(f)$ alone, independent of the degree, and quotes [17] to say that if the polynomial has no reciprocal factors, then all its irreducible factors have a similarly bounded number of terms. This is a significant step towards controlling the dependence of the output size on the degree, though it falls short of saying that the output is polynomial in the input size in the following ways:

there is no guarantee that this bound depends polynomially on $t(f)$, and indeed its behaviour is still rather mysterious to the authors [8, Challenge 4];

nothing is said about the size of the coefficients, and all known technology makes them depend exponentially on the degree (i.e. the output size depends linearly on the degree).
We note, though, that the known examples of such growth [14] depend on cyclotomic polynomials, so one could hope that the second problem does not occur in practice.
We are convinced, and we hope to have convinced the reader, that regardless of whether using cyclotomics ultimately makes the problem of sparse polynomial factorization tractable in the input size (it is not guaranteed polynomial; see Theorem 1), they are most definitely worth having in the basic vocabulary used for the output of factoring. We strongly recommend that the specification of what it means to factor a polynomial be thus amended.
References
 [1] J.A. Abbott, V. Shoup, and P. Zimmermann. Factorization in $\mathbb{Z}[x]$: The Searching Phase. In C. Traverso, editor, Proceedings ISSAC 2000, pages 1–7, 2000.
 [2] A. Arnold and M. Monagan. Calculating cyclotomic polynomials of very large height. http://www.cecm.sfu.ca/~ada26/cyclotomic/CalcCycloPolys.pdf, 2008.
 [3] P.T. Bateman. Note on the coefficients of the cyclotomic polynomial. Bull. AMS, 55:1180–1181, 1949.
 [4] P.T. Bateman, C. Pomerance, and R.C. Vaughan. On the size of the coefficients of the cyclotomic polynomial. Colloq. Math. Soc. J. Bolyai, 34:171–202, 1984.
 [5] F. Beukers and C.J. Smyth. Cyclotomic points on curves. Number theory for the millennium, pages 67–85, 2002.
 [6] R.J. Bradford and J.H. Davenport. Effective Tests for Cyclotomic Polynomials. In P. Gianni, editor, Proceedings ISSAC 1988, pages 244–251, 1989.
 [7] J. Carette. Understanding Expression Simplification. In J. Gutierrez, editor, Proceedings ISSAC 2004, pages 72–79, 2004.
 [8] J.H. Davenport and J. Carette. The Sparsity Challenges. To appear in Proc. SYNASC 2009, 2010.
 [9] M. Filaseta, A. Granville, and A. Schinzel. Irreducibility and greatest common divisor algorithms for sparse polynomials. Number Theory and Polynomials, pages 155–176, 2008.
 [10] Peter D. Grünwald. The Minimum Description Length Principle, volume 1 of MIT Press Books. The MIT Press, December 2007.
 [11] G.H. Hardy and E.M. Wright. An Introduction to the Theory of Numbers (5th. ed.). Clarendon Press, 1979.
 [12] A.K. Lenstra, H.W. Lenstra, Jun., and L. Lovász. Factoring Polynomials with Rational Coefficients. Math. Ann., 261:515–534, 1982.
 [13] Ming Li and Paul Vitanyi. An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, Berlin, 1997.
 [14] M. Mignotte. Some Inequalities About Univariate Polynomials. In Proceedings SYMSAC 81, pages 195–199, 1981.
 [15] J. Moses. Algebraic Simplification — A Guide for the Perplexed. Comm. ACM, 14:527–537, 1971.
 [16] D.A. Plaisted. Sparse Complex Polynomials and Irreducibility. J. Comp. Syst. Sci., 14:210–221, 1977.
 [17] A. Schinzel. Reducibility of Lacunary Polynomials, I. Acta Arith., 16:123–159, 1969.
 [18] A. Schinzel. Selected Topics on Polynomials. University of Michigan Press, 1982.
 [19] International Telecommunications Union. Information technology — ASN.1 encoding rules: Specification of Packed Encoding Rules (PER). Standard X.691, 2002.
 [20] M. van Hoeij. Factoring polynomials and the knapsack problem. J. Number Theory, 95:167–189, 2002.
 [21] R.C. Vaughan. Bounds for the Coefficients of Cyclotomic Polynomials. Michigan Math. J., 21:289–295, 1974.
 [22] S. Wigert. Sur l’ordre de grandeur du nombre des diviseurs d’un entier. Arkiv för Mat. Astr. Fys., 3:1–9, 1907.
 [23] H. Zassenhaus. On Hensel Factorization I. J. Number Theory, 1:291–311, 1969.