---
format:
html:
html-math-method: katex
---
Exploring Finite Fields: Preliminaries
======================================
[Fields](https://en.wikipedia.org/wiki/Field_%28mathematics%29) are one of the basic structures in abstract algebra. Roughly, a field is a collection of elements paired with two operations, addition and multiplication, along with particular rules about how they interact. The most important elements of a field are 0 (the additive identity), 1 (the multiplicative identity), and -1 (multiplication by which yields additive inverses). Moreover, every nonzero element must have a multiplicative inverse.
Many people already know about some fields, such as the rational numbers $\mathbb{Q}$ and the complex numbers $\mathbb{C}$. Finite fields also exist, the most familiar being $\mathbb{F}_2 = \text{GF}(2)$, the field with two elements. This field contains only the elements 0 and 1, with -1 identical to 1. The addition and multiplication tables are consequently the simplest possible.
| + | 0 | 1 |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 0 |

| × | 0 | 1 |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
This field expresses the parity of sums and products of two integers, since:
- even + even = even
- even + odd = odd
- odd + even = odd
- odd + odd = even
And
- even × even = even
- even × odd = even
- odd × even = even
- odd × odd = odd
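These parity rules are exactly arithmetic mod 2, which can be spot-checked directly. A quick sketch (not code from this post; `parity` here just means reduction mod 2):

```{haskell}
-- | eval: false
-- Reduce an integer to its parity, i.e., its image in GF(2)
parity :: Int -> Int
parity n = n `mod` 2

-- Addition and multiplication mod 2 agree with the parity rules above,
-- regardless of which integer representatives we pick
parityRespectsOps :: Bool
parityRespectsOps = and
  [ parity (x + y) == (parity x + parity y) `mod` 2 &&
    parity (x * y) == (parity x * parity y) `mod` 2
  | x <- [-10..10], y <- [-10..10] ]
```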
In One's Prime
--------------
Two is not unique as the order of a finite field -- every prime number is also a candidate. In general, such a field inherits the properties of integer arithmetic mod *p*. The role of -1 is taken up by *p* - 1, where *p* is the order of the field, and the field is referred to as GF(*p*).
The additive inverse of an element *x* can be viewed in two ways:
- Multiplying -1 in the field with x, (i.e., $(p - 1)x \mod p$)
- Counting backwards from *p*, which is congruent to zero (i.e., $p - x$)
The product of any two nonzero elements of the field can never be a multiple of *p* (these are all congruent to 0). If it were, then *p* would share a factor with one of the two terms; but *p* is prime, so this is impossible. More strongly, multiplicative inverses [can be found algorithmically](https://en.wikipedia.org/wiki/Modular_multiplicative_inverse#Extended_Euclidean_algorithm), although it is a somewhat tricky task.
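A minimal sketch of that algorithmic approach, using the standard extended Euclidean algorithm (the names `egcd` and `modInverse` are invented here, not part of this post):

```{haskell}
-- | eval: false
-- Extended Euclid: egcd a b = (g, s, t) with s*a + t*b == g == gcd a b
egcd :: Int -> Int -> (Int, Int, Int)
egcd a 0 = (a, 1, 0)
egcd a b = (g, t, s - (a `div` b) * t)
  where (g, s, t) = egcd b (a `mod` b)

-- Multiplicative inverse of x in GF(p); defined whenever x is nonzero mod p
modInverse :: Int -> Int -> Int
modInverse p x = case egcd (x `mod` p) p of
  (1, s, _) -> s `mod` p
  _         -> error "not invertible"
```

For example, `modInverse 7 3` gives 5, since 3 × 5 = 15 ≡ 1 mod 7.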
Polynomials
-----------
Before we look at other finite fields, we have to look at polynomials. For a given field *K*, we can consider polynomials with coefficients drawn from the field, denoted K\[*x*\]. The structure polynomials fit into is called a [*ring*](https://en.wikipedia.org/wiki/Ring_%28mathematics%29). Rings are slightly weaker than fields, since multiplicative inverses do not generally exist. The zero polynomial and the constant polynomial 1 are the additive and multiplicative identities, respectively.
Since GF(*p*) has a finite number of elements to consider, there are only so many choices for polynomial coefficients. Each degree has a finite number of polynomials, so it's much easier to list them out than it would be for the integers. Again, looking at polynomials over GF(2), we have:
Degree | Polynomial *q(x)* | List of coefficients of *q(x)* (ascending) | *q*(2) | *q*(2) (Binary)
-------|-------------------|--------------------------------------------|--------|----------------
1 | x | [0, 1] | 2 | 10
1 | 1 + x | [1, 1] | 3 | 11
2 | x^2 | [0, 0, 1] | 4 | 100
2 | 1 + x^2 | [1, 0, 1] | 5 | 101
2 | x + x^2 | [0, 1, 1] | 6 | 110
2 | 1 + x + x^2 | [1, 1, 1] | 7 | 111
3 | x^3 | [0, 0, 0, 1] | 8 | 1000
3 | 1 + x^3 | [1, 0, 0, 1] | 9 | 1001
3 | x + x^3 | [0, 1, 0, 1] | 10 | 1010
3 | 1 + x + x^3 | [1, 1, 0, 1] | 11 | 1011
... | ... | ... | ... | ...
### The Base-ics
There is a very close correspondence between binary expansions and polynomials over GF(2). This is evident by comparing the list of coefficients in the polynomial (column 3) with the binary expansions of the polynomial evaluated at 2 (column 5). This gives a handy way of referring to polynomials (mod *p*) without having to write out each individual "x" or "+". In fact, this is commonly used to compactly compute with and refer to [polynomials used in cyclic redundancy checks](https://en.wikipedia.org/wiki/Mathematics_of_cyclic_redundancy_checks).
Again, 2 is not unique among primes. Polynomials over any prime field GF(*p*) can be expressed as integers in base *p*.
Haskell implementation of duality between polynomials mod *p* and base *p* expansions of integers
This implementation actually works for any base *b*, which is not necessarily prime. The only difference is that the coefficients lose "field-ness" for composite *b*.
```{haskell}
-- | eval: false
import Data.List (unfoldr)
import Data.Tuple (swap)

-- A polynomial is its ascending list of coefficients (of type a)
data Polynomial a = Poly { coeffs :: [a] }

-- Interpret a number's base-b expansion as a polynomial
asPoly :: Int -> Int -> Polynomial Int
-- Build a list with f, which returns either Nothing
-- or Just (next element of list, next argument to f)
asPoly b = Poly . unfoldr f where
  -- Divide x by b. Emit the remainder and recurse with the quotient.
  f x | x /= 0    = Just $ swap $ divMod x b
      -- If there's nothing left to divide out, terminate
      | otherwise = Nothing

-- Horner evaluation of a polynomial at the integer b
evalPoly :: Int -> Polynomial Int -> Int
-- Start with the highest coefficient:
-- multiply by b at each step and add the coefficient of the next term
evalPoly b (Poly p) = foldr (\y acc -> acc*b + y) 0 p

-- evalPoly n . asPoly n = id :: Int -> Int
```
An interesting detail here is that the duality is expressed through `foldr` (using multiplication and addition) and `unfoldr` (using `divMod`).
### Mono, not Stereo
With respect to their roots (which will soon become of primary interest), polynomials are *projective*. More directly, any scalar multiple of the polynomial has the same roots. For GF(2), this is insignificant, but over GF(5) for example, the following polynomials have the same roots:
$$
\begin{align*}
x^2 + 2 \mod 5 \quad &\longleftrightarrow \quad {_5 27} \\
2x^2 + 4 \mod 5 \quad &\longleftrightarrow \quad {_5 54} \\
3x^2 + 1 \mod 5 \quad &\longleftrightarrow \quad {_5 76} \\
4x^2 + 3 \mod 5 \quad &\longleftrightarrow \quad {_5 103}
\end{align*}
$$
Only the first polynomial has a leading coefficient of 1, a condition which makes it a *monic* polynomial. It is preferable to work with monic polynomials since the product of two monic polynomials is also monic.
An equivalent condition is that the integer the polynomial corresponds to falls between 5^2^ = 25 and 2×5^2^ - 1 = 49. In general, monic polynomials of degree *d* mod *p* correspond as integers to the range *p*^*d*^ to 2*p*^*d*^ - 1.
Haskell implementation of monic polynomials mod *p*
Again, nothing about this definition depends on the base being prime.
```{haskell}
-- | eval: false
-- All monic polynomials of degree d with coefficients mod n
monics :: Int -> Int -> [Polynomial Int]
monics n d = map (asPoly n) [n^d..2*(n^d) - 1]
-- All monic polynomials with coefficients mod n, ordered by degree
allMonics :: Int -> [Polynomial Int]
allMonics n = concat [monics n d | d <- [1..]]
```
As an aside, one can also read out monics by counting normally using the digit alphabet {1, 0, -1, ..., -*p* + 2}. Unfortunately, these base-*p* expansions are more difficult to obtain algorithmically, and I'll leave this as an exercise to the reader.
### Irreducibles
Over the integers, we can factor a number into primes. To decide if a number is prime, we just divide it (using an algorithm like long division) by numbers less than it and see if we get a nonzero remainder.
Similarly, we can factor polynomials into *irreducible* polynomials, which have no "smaller" polynomial factors other than constants. More precisely, by "smaller", we mean those of lesser degree. For example, over the integers, the polynomial $x^2 - 1$ (degree 2) factors into $(x + 1)(x - 1)$ (both degree 1), but $x^2 + 1$ is irreducible.
In general, a factorization of a polynomial over the integers implies a factorization of one over GF(*p*), since the coefficients for each factor may be taken mod *p*. However, the converse does not hold. Over GF(2),
$$
(x + 1)^2 = x^2 + 2x + 1 \equiv x^2 + 1 \mod 2
$$
...but as just mentioned, the right-hand side is irreducible over the integers.
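This collapse can be checked by brute force. Below, `polyMul` is a naive convolution on ascending coefficient lists, written here just for illustration:

```{haskell}
-- | eval: false
-- Naive polynomial multiplication on ascending coefficient lists
polyMul :: [Int] -> [Int] -> [Int]
polyMul ps qs =
  [ sum [ ps !! i * qs !! (k - i)
        | i <- [0 .. k], i < length ps, k - i < length qs ]
  | k <- [0 .. length ps + length qs - 2] ]

-- (x + 1)^2 over the integers is [1, 2, 1]; reducing mod 2 gives
-- [1, 0, 1], i.e., x^2 + 1
squareMod2 :: [Int]
squareMod2 = map (`mod` 2) (polyMul [1, 1] [1, 1])
```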
Since we can denote polynomials by numbers, it may be tempting to freely switch between primes and irreducibles. However, irreducibles depend on the chosen modulus and do not generally correspond to the base-*p* expansions of primes.
Irreducible over GF(2), *q(x)* | *q*(2) ([OEIS A014580](https://oeis.org/A014580)) | Prime
-------------------------------|---------------------------------------------------|------
$x$ | 2 | 2
$x + 1$ | 3 | 3
$x^2 + x + 1$ | 7 | 5
$x^3 + x + 1$ | 11 | 7
$x^3 + x^2 + 1$ | 13 | 11
$x^4 + x + 1$ | 19 | 13
$x^4 + x^3 + 1$ | 25 | 17
$x^4 + x^3 + x^2 + x + 1$ | 31 | 19
$x^5 + x^2 + 1$ | 37 | 23
$x^5 + x^3 + 1$ | 41 | 29
$x^5 + x^3 + x^2 + x + 1$ | 47 | 31
The entry 25 in column 2 is not prime. Dually, the primes 5, 17, 23, and 29 in column 3 have binary expansions which do not correspond to irreducible polynomials over GF(2).
### Dividing and Sieving
Just like integers, we can use [polynomial long division](https://en.wikipedia.org/wiki/Polynomial_long_division) with these objects to decide if a polynomial is irreducible. [Synthetic division](https://en.wikipedia.org/wiki/Synthetic_division) is an alternative which is slightly easier to implement (especially mod 2, where it is, again, used in CRCs). It only works for monic polynomials, but this is all we need.
Haskell implementation of synthetic division
The algorithm is similar to table-less algorithms for CRCs, but we don't have the luxury of working at the bit level with XOR for addition. We also have to watch out for negation and coefficients other than 1 when not working mod 2.
```{haskell}
-- | eval: false
-- Divide the polynomial ps by qs (coefficients in descending order by degree)
synthDiv' :: (Eq a, Num a) => [a] -> [a] -> ([a], [a])
synthDiv' ps qs
  | head qs /= 1 = error "Cannot divide by non-monic polynomial"
  | otherwise    = splitAt deg $ doDiv ps deg
  where
    -- Negate the denominator and ignore the leading term
    qNeg = map negate $ tail qs
    -- The degree of the result, based on the degrees of the numerator and denominator
    deg = max 0 (length ps - length qs + 1)
    -- Add two coefficient lists pointwise, keeping the tail of the longer one
    zipAdd (x:xs) (y:ys) = (x + y) : zipAdd xs ys
    zipAdd xs     []     = xs
    zipAdd []     ys     = ys
    -- Pluck off the head of the list and add a shifted and scaled version of
    -- qs to the tail of the list. Repeat this deg times
    doDiv xs 0 = xs
    doDiv (x:xs) d = x : doDiv (zipAdd xs $ map (*x) qNeg) (d - 1)

-- Use Polynomial (coefficients in ascending degree order) instead of lists
synthDiv :: (Eq a, Num a) => Polynomial a -> Polynomial a -> (Polynomial a, Polynomial a)
synthDiv (Poly p) (Poly q) = (Poly $ reverse quot, Poly $ reverse rem) where
  (quot, rem) = synthDiv' (reverse p) (reverse q)
```
Then, using our list of monic polynomials, we can use the same strategy for sieving out primes to find (monic) irreducibles.
Haskell implementation of an irreducible polynomial (mod *p*) sieve
```{haskell}
-- | eval: false
-- All irreducible monic polynomials with coefficients mod n
irreducibles :: Int -> [Polynomial Int]
irreducibles n = go [] $ allMonics n where
  -- Divide the polynomial x by i, then take the remainder mod n
  remModN x i = Poly $ map (`mod` n) $ coeffs $ snd $ synthDiv x i
  -- Find remainders of x divided by every irreducible in "is".
  -- If any give the zero polynomial, then x is a multiple of an irreducible
  notMultiple x is = and [not $ all (==0) $ coeffs $ remModN x i | i <- is]
  -- Sieve out by notMultiple
  go is (x:xs)
    | notMultiple x is = x : go (x:is) xs
    | otherwise        = go is xs

-- Example: coefficient lists of the first 10 irreducibles over GF(2)
-- map coeffs $ take 10 $ irreducibles 2
```
Matrices
--------
Just like polynomials over a finite field, we can also look at matrices. The most interesting matrices are square, since the product of two square matrices is another square matrix. With the zero matrix ($\bf 0_n$) as the additive identity and the identity matrix ($\bf 1_n$) as the multiplicative one, square matrices over the field *K* also form a ring, denoted $K^{n \times n}$.
Square matrices are associated to a [determinant](https://en.wikipedia.org/wiki/Determinant), which is an element from the underlying field. Determinants are nice, since the determinant of the product of two matrices is the product of the determinants. The determinant can be implemented using [Laplace expansion](https://en.wikipedia.org/wiki/Laplace_expansion), which is also useful for inductive proofs.
Haskell implementation of Laplace expansion
Laplace expansion is ludicrously inefficient compared to other algorithms, and is only shown here for its "straightforward" implementation and its use in proofs. Floating-point computation is avoided to keep the arithmetic exact.
```{haskell}
-- | eval: false
import Data.Array

newtype Matrix a = Mat { unMat :: Array (Int, Int) a }

determinant :: (Num a, Eq a) => Matrix a -> a
determinant (Mat xs) = determinant' xs where
  -- Evaluate (-1)^i without repeated multiplication
  parity i = if even i then 1 else -1
  -- Map new array addresses to old ones when eliminating row 0, column i
  rowMap i (x,y) = (x+1, if y >= i then y+1 else y)
  -- Recursive determinant on the underlying Array
  determinant' xs
    -- Base case: 1x1 matrix
    | n == 0 = xs!(0,0)
    -- Sum of cofactor expansions along row 0
    | otherwise = sum $ map cofactor [0..n] where
        -- Produce the cofactor of row 0, column i
        cofactor i
          | xs!(0,i) == 0 = 0
          | otherwise = parity i * xs!(0,i) * determinant' (minor i)
        -- Furthest extent of the bounds, i.e., the size of the matrix
        (_,(n,_)) = bounds xs
        -- Build a new Array by eliminating row 0 and column i
        minor i = ixmap ((0,0),(n-1,n-1)) (rowMap i) xs
```
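The multiplicativity of the determinant mentioned earlier can be spot-checked without any of this machinery. A standalone 2×2 sketch (names invented here), reducing mod 5 to stay in GF(5):

```{haskell}
-- | eval: false
-- A 2x2 matrix as a tuple (a, b, c, d) representing [[a, b], [c, d]]
type M2 = (Int, Int, Int, Int)

det2 :: M2 -> Int
det2 (a, b, c, d) = a*d - b*c

mul2 :: M2 -> M2 -> M2
mul2 (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- det (A*B) and det A * det B agree mod 5
detMultiplicative :: M2 -> M2 -> Bool
detMultiplicative m n =
  det2 (mul2 m n) `mod` 5 == (det2 m * det2 n) `mod` 5
```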
### Back to Polynomials
The [characteristic polynomial](https://en.wikipedia.org/wiki/Characteristic_polynomial) is a stronger invariant which follows from the determinant (and conveniently plays into the prior description of polynomials). It is defined as, for *λ* a scalar variable:
$$
\text{charpoly}(A) = p_A(\lambda) = \left| \lambda I - A \right| \\ ~ \\
= \left|
\begin{matrix*}
\lambda - a_{00} & -a_{01} & ... & -a_{0n} \\
-a_{10} & \lambda - a_{11} & ... & -a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
-a_{n0} & -a_{n1} & ... & \lambda - a_{nn} \\
\end{matrix*}
\right|
$$
Laplace expansion never gives *λ* a coefficient before recursing, so the characteristic polynomial is always monic.
Haskell implementation of the characteristic polynomial
Since `determinant` was defined for all `Num` and `Eq`, it can immediately be applied if these instances are defined for polynomials.
```{haskell}
-- | eval: false
-- Num instance for polynomials omitted
-- instance (Num a, Eq a) => Num (Polynomial a) where
--   ...
-- Also assumed: a Functor instance for Matrix, elementwise matrix
-- addition (|+|), and an identity matrix constructor (eye)
instance Eq a => Eq (Polynomial a) where
  (==) (Poly xs) (Poly ys) = xs == ys

charpoly :: Matrix Int -> Polynomial Int
charpoly xs = determinant $ eyeLambda |+| negPolyXs where
  -- Furthest extent of the bounds, i.e., the size of the matrix
  (_,(n,_)) = bounds $ unMat xs
  -- Negative of input matrix, after being converted to polynomials
  negPolyXs :: Matrix (Polynomial Int)
  negPolyXs = fmap (\x -> Poly [-x]) xs
  -- Identity matrix times lambda (encoded as Poly [0, 1])
  eyeLambda :: Matrix (Polynomial Int)
  eyeLambda = fmap (\x -> Poly [x] * Poly [0, 1]) $ eye (n+1)
```
Computation using this definition is only good for illustrative purposes. The [Faddeev-LeVerrier algorithm](https://en.wikipedia.org/wiki/Faddeev%E2%80%93LeVerrier_algorithm) circumvents Laplace expansion entirely and happens to generate the determinant along the way. However, it has some problems:
- It inverts the order in which the determinant and characteristic polynomial are defined
- It introduces division, which makes it unsuitable for use directly on mod *p* matrices
Fortunately, we can just work with a mod *p* matrix over the integers and mod out at the end instead, as the following diagram conveys:
$$
\begin{matrix}
\mathbb{F}_p ^{n \times n} & \textcolor{green}{\hookrightarrow} &
\normalsize \mathbb{Z}^{n \times n} &
\overset{\mod p ~~}{\longrightarrow} &
\mathbb{F}_p^{n \times n} & \scriptsize \phantom{\text{charpoly}}
\\ \\ &
\scriptsize \textcolor{green}{\text{charpoly (FL)}} &
\textcolor{green}{\downarrow} & &
\textcolor{red}{\downarrow} & \scriptsize \textcolor{red}{\text{charpoly (LE)}}
\\ \\ & &
\mathbb{Z}[\lambda] &
\textcolor{green}{\underset{\mod p ~~}{\longrightarrow}} &
\mathbb{F}_p[\lambda] & \scriptsize \phantom{\text{charpoly}}
\end{matrix}
$$
The top row contains matrices and the bottom row polynomials. To reach the bottom-right corner, which contains the characteristic polynomials of mod *p* matrices, we can avoid the red arrow and follow the green path instead.
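For 2×2 matrices the commuting diagram can be verified with the closed form $p_A(\lambda) = \lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A)$. This sketch (not part of the post's code) follows both paths and compares:

```{haskell}
-- | eval: false
-- Characteristic polynomial of [[a, b], [c, d]] as ascending coefficients:
-- det(A), -tr(A), 1
charpoly2 :: (Int, Int, Int, Int) -> [Int]
charpoly2 (a, b, c, d) = [a*d - b*c, -(a + d), 1]

-- Reducing the matrix entries mod p first, or reducing the coefficients
-- mod p last, gives the same polynomial over GF(p)
diagramCommutes :: Int -> (Int, Int, Int, Int) -> Bool
diagramCommutes p (a, b, c, d) =
  map (`mod` p) (charpoly2 (a `mod` p, b `mod` p, c `mod` p, d `mod` p))
    == map (`mod` p) (charpoly2 (a, b, c, d))
```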
### Friends Among Matrices
In the reverse direction, a matrix with a specific characteristic polynomial can be constructed from a polynomial. The matrix is called the [companion matrix](https://en.wikipedia.org/wiki/Companion_matrix), and is defined as
$$
p(\lambda) = \lambda^n + p_{n-1}\lambda^{n-1} + ... + p_1 \lambda + p_0 \\ ~ \\
C_{p(\lambda)} = \left( \begin{matrix}
0 & 1 & 0 & ... & 0 \\
0 &0 & 1 & ... & 0 \\
\vdots &\vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & ... & 1 \\
-p_0 & -p_1 & -p_2 & ... & -p_{n-1}
\end{matrix} \right) =
\left( \begin{matrix}
\overrightharpoon 0_{n-1} & \bold{1}_{n-1} \\
-p_0& -(\overrightharpoon{p}_{1:n-1})^T
\end{matrix} \right) \\ ~ \\
\text{charpoly}(C_{p}) = p_{C_{p}}(\lambda) = p(\lambda)
$$
The definition of the companion matrix only depends on elements having an additive inverse, which is always true in a field. Therefore, there are always matrices over a field that have a monic polynomial as their characteristic polynomial.
Proving that the companion matrix has the characteristic polynomial it was constructed from can be done via Laplace expansion:
$$
p_{0:n-1}(\lambda) = \left| \begin{matrix}
\textcolor{red}{\lambda} & -1 & 0 & ... & 0 \\
0 & \lambda & -1 & ... & 0 \\
\vdots &\vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & ... & -1 \\
\textcolor{green}{p_0} & p_1 & p_2 & ... & \lambda + p_{n-1}
\end{matrix} \right|
\\ ~ \\ =
\textcolor{green}{p_0} \cdot (-1)^{n-1}
\left| \begin{matrix}
-1 & 0 & ... & 0 \\
\lambda & -1 & ... & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & ... & -1
\end{matrix} \right|
+ \textcolor{red}{\lambda}
\left| \begin{matrix}
\lambda & -1 & ... & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & ... & -1 \\
p_1 & p_2 & ... & \lambda + p_{n-1}
\end{matrix} \right|
\\ ~ \\ =
\textcolor{green}{p_0} \cdot (-1)^{n-1} \cdot (-1)^{n-1} +
\textcolor{red}{\lambda} \cdot p_{1:n-1}(\lambda)
\\ ~ \\ =
p_0 + \lambda(p_1 + \lambda(\cdots(p_{n-1} + \lambda)\cdots))
$$
Pleasantly, this yields the Horner form, which was used above to evaluate polynomials.
Haskell implementation of the companion matrix
```{haskell}
-- | eval: false
import Data.Array

companion :: Polynomial Int -> Matrix Int
companion (Poly ps)
  | last ps' /= 1 = error "Cannot find companion matrix of non-monic polynomial"
  | otherwise = Mat $ array ((0,0), (n-1,n-1)) $ lastRow ++ shiftI where
      -- The degree of the polynomial, as well as the size of the matrix
      n = length ps' - 1
      -- Remove trailing 0s from ps
      ps' = reverse $ dropWhile (==0) $ reverse ps
      -- Address/value tuples for a shifted identity matrix:
      -- 1s on the diagonal just above the main diagonal, 0s elsewhere
      shiftI = map (\p@(x,y) -> (p, if y == x + 1 then 1 else 0)) $ range ((0,0),(n-2,n-1))
      -- Address/value tuples for the last row of the companion matrix:
      -- the negated coefficients, in ascending order of degree
      lastRow = zipWith (\x y -> ((n-1, x), y)) [0..n-1] $ map negate ps'

-- (charpoly . companion) = id :: Polynomial Int -> Polynomial Int
```
Field Extensions
----------------
Aside from those of degree 1, the irreducible polynomials over a field cannot be factored into linear factors over the field. In other words, irreducibles have roots which do not exist as elements of the field. A *field extension* formalizes how one can build a larger field from a smaller one by adjoining those roots.
Using $x^2 + 1$ (over the integers) again, even over an actual field like $\mathbb{R}$, the polynomial is still irreducible. On the other hand, it can be factored into $(x + i)(x - i)$ over $\mathbb{C}$. We can construct the latter field from the former by supposing that an extra number *i* exists alongside everything in $\mathbb{R}$ such that *i*^2^ = -1. Then, form all possible products and sums by taking linear combinations of powers of *i* less than the degree (in this case, 0 and 1).
The equation that *i* obeys can be rewritten as $i^2 + 1 = 0$, which is the original polynomial, evaluated at *i*. In order to refer explicitly to the construction of the bigger field from the polynomial, we write $\mathbb{R}[x] / (x^2 + 1) \cong \mathbb{C}$. *Technically*, the left hand side refers to something else (cosets of polynomials, from which we extract the canonical member *i*), but this description is good enough.
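Concretely, an element of $\mathbb{R}[x] / (x^2 + 1)$ can be stored as a coefficient pair $(a, b)$ standing for $a + bx$, with $x^2$ rewritten to $-1$ during multiplication. A hypothetical sketch (not the post's code):

```{haskell}
-- | eval: false
-- An element a + b·x of R[x]/(x^2 + 1), i.e., the complex number a + b·i
type Cx = (Double, Double)

addC :: Cx -> Cx -> Cx
addC (a, b) (c, d) = (a + c, b + d)

-- (a + bx)(c + dx) = ac + (ad + bc)x + bd·x^2, then substitute x^2 = -1
mulC :: Cx -> Cx -> Cx
mulC (a, b) (c, d) = (a*c - b*d, a*d + b*c)
```

The adjoined root really does square to -1: `mulC (0, 1) (0, 1)` gives `(-1, 0)`.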
### The Power of Primes
We can extend a finite field in the same way. Over GF(2), the smallest irreducible of degree 2 is $x^2 + x + 1$. Using the same logic as before, we construct $\mathbb{F}_2[x] / (x^2 + x + 1) \cong \mathbb{F}_2[\alpha]$. The new element α is a root of the polynomial and obeys the relations:
$$
\alpha^2 + \alpha + 1 = 0 \\
\alpha^2 = -\alpha - 1 \equiv \alpha + 1 \mod 2 \\
\alpha^3 = \alpha^2 + \alpha= (\alpha +1) + \alpha \equiv 1 \mod 2
$$
Just like *i*, only powers of α less than 2 (again, 0 and 1) are necessary to express elements of the field. Skipping a few steps, we can accumulate all possible sums and products over this new field into two new tables:
| + | 0 | 1 | *α* | *α* + 1 |
|---------|---------|---------|---------|---------|
| 0 | 0 | 1 | *α* | *α* + 1 |
| 1 | 1 | 0 | *α* + 1 | *α* |
| *α* | *α* | *α* + 1 | 0 | 1 |
| *α* + 1 | *α* + 1 | *α* | 1 | 0 |

| × | 0 | 1 | *α* | *α* + 1 |
|---------|---------|---------|---------|---------|
| 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | *α* | *α* + 1 |
| *α* | 0 | *α* | *α* + 1 | 1 |
| *α* + 1 | 0 | *α* + 1 | 1 | *α* |
As you might expect, the resulting field has 4 elements, so it's called $\mathbb{F}_4 = \text{GF}(4)$. In general, when adjoining a root of an irreducible of degree *d* to GF(*p*), the resulting field has *p*^*d*^ elements, naturally denoted $\mathbb{F}_{p^d} = \text{GF}(p^d)$. *p* is called the *characteristic* of the field, and says how many repeated additions are needed to get to 0. From the above table, it's clear that the characteristic is 2, since 1 + 1 = *α* + *α* = (*α* + 1) + (*α* + 1) = 0.
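The tables above can also be generated mechanically: store each element as a coefficient pair $(a, b)$ standing for $a + b\alpha$, multiply out, and rewrite $\alpha^2$ as $\alpha + 1$, with everything mod 2. A sketch along those lines (names invented here):

```{haskell}
-- | eval: false
-- An element a + b·α of GF(4), with coefficients mod 2
type GF4 = (Int, Int)

addGF4 :: GF4 -> GF4 -> GF4
addGF4 (a, b) (c, d) = ((a + c) `mod` 2, (b + d) `mod` 2)

-- (a + bα)(c + dα) = ac + (ad + bc)α + bd·α², with α² = α + 1
mulGF4 :: GF4 -> GF4 -> GF4
mulGF4 (a, b) (c, d) =
  ((a*c + b*d) `mod` 2, (a*d + b*c + b*d) `mod` 2)

-- All four elements: 0, 1, α, α + 1
gf4 :: [GF4]
gf4 = [(0, 0), (1, 0), (0, 1), (1, 1)]
```

For instance, `mulGF4 (0, 1) (0, 1)` gives `(1, 1)`, i.e., *α*^2^ = *α* + 1, matching the table.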
### ...and beyond?
All of this is manageable when you're adjoining a root of a degree 2 polynomial like *α* or *i*, but things get difficult when you start to work with higher degrees. The powers of the root form the basis for a *d*-dimensional vector space over GF(*p*) (hence the order of the field being *p*^*d*^). Proceeding as before, we'd have to be able to:
- recognize equality in the new field based on sums of powers of roots (times elements of the field)
- have a canonical method of expressing other elements after adjoining a root
- ideally, handle both with an algorithm that gives canonical forms from noncanonical ones
- know when we've found every element of the new field
These problems make it difficult to study prime power fields on a computer without the use of a CAS like Maple or Mathematica. They're capable of taking care of these issues symbolically, working with the expressions in the same way we have (or at least appearing to do so). As someone who likes to do things himself, implementing a CAS from scratch seemed a little too cumbersome. Furthermore, even a more direct approach using the previously-mentioned "canonical members of cosets of polynomials" was more annoying than I was willing to put up with.
Fortunately, there's a detour that makes it much easier to dodge all of these problems, and it has some interesting consequences. Join me in [the next post]() for a direct, non-symbolic way to work with prime power fields.