add extra post as a child of finite-field.2

queue-miscreant 2025-08-05 04:05:36 -05:00
parent 46128385fc
commit 5c6795163f
3 changed files with 67 additions and 64 deletions

View File

@@ -1,10 +1,33 @@
---
title: "Exploring Finite Fields, Part 2 (Extra)"
description: |
  Additional notes about polynomial evaluation.
format:
  html:
    html-math-method: katex
date: "2024-01-15"
date-modified: "2025-07-16"
categories:
- algebra
- finite field
- haskell
---
In the [second post in this series](../), we briefly discussed alternate means
of evaluating polynomials by "plugging in" different structures.

Different Kinds of Polynomials
------------------------------

Rather than redefining evaluation for each of these cases,
we can map the polynomial into a structure compatible with how it should be evaluated.
Essentially, this means that from a polynomial in the base structure,
we can derive polynomials in these other structures.
In particular, there is a distinction between a matrix of polynomials and a polynomial in matrices:

:::: {layout-ncol="2"}
::: {}
$$
@@ -22,7 +45,7 @@ $$
$x$ is a scalar indeterminate

```haskell
p :: Polynomial k
```
:::
::::
@@ -45,9 +68,9 @@ $x$ is a scalar indeterminate, $P(x I)= p(x) I$ is a matrix of polynomials in $x
```haskell
asPolynomialMatrix
  :: Polynomial k -> Matrix (Polynomial k)

pMat :: Matrix (Polynomial k)
pMat = asPolynomialMatrix p
```
:::
@@ -71,59 +94,44 @@ $X$ is a matrix indeterminate, $\hat P(X)$ is a polynomial over matrices
```haskell
asMatrixPolynomial
  :: Polynomial k -> Polynomial (Matrix k)

pHat :: Polynomial (Matrix k)
pHat = asMatrixPolynomial p
```
:::
::::
It's easy to confuse the latter two, but the Haskell types make the difference clearer.
There exists a natural isomorphism between the two, which is discussed further
in the [fourth post in this series](../../4/).
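
For intuition, here is a toy sketch of what these two conversions could look like.
It uses throwaway list-backed types and an explicit dimension argument; none of this
is the actual `Polynomial` or `Matrix` machinery used in this series.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Toy stand-ins for the real types: a polynomial is a list of coefficients
-- (constant term first), a matrix is a list of rows.
newtype Polynomial a = Polynomial [a]  deriving (Show, Functor)
newtype Matrix a     = Matrix [[a]]    deriving (Show, Functor)

-- n-by-n identity matrix
identityM :: Num a => Int -> Matrix a
identityM n = Matrix [ [ if i == j then 1 else 0 | j <- [1 .. n] ] | i <- [1 .. n] ]

-- scale a polynomial (resp. a matrix) by a scalar
scaleP :: Num a => a -> Polynomial a -> Polynomial a
scaleP c = fmap (c *)

scaleM :: Num a => a -> Matrix a -> Matrix a
scaleM c = fmap (c *)

-- P(xI) = p(x) I: a matrix whose entries are polynomials
asPolynomialMatrix :: Num k => Int -> Polynomial k -> Matrix (Polynomial k)
asPolynomialMatrix n p = fmap (\entry -> scaleP entry p) (identityM n)

-- p-hat(X): a polynomial whose coefficients are scalar matrices
asMatrixPolynomial :: Num k => Int -> Polynomial k -> Polynomial (Matrix k)
asMatrixPolynomial n = fmap (\c -> scaleM c (identityM n))
```

With the real types, the dimension would presumably come from the `Matrix` type
itself rather than an explicit argument.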

Cayley-Hamilton Theorem, Revisited
----------------------------------

As a reminder, the
[Cayley-Hamilton theorem](https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton_theorem)
says that a matrix satisfies its own characteristic polynomial.
In a type-stricter sense, it says the following relationship holds:

```haskell
-- evaluating a polynomial at a value of the same type as its coefficients
evalPoly :: a -> Polynomial a -> a

mA :: Matrix a

-- the characteristic polynomial of mA, with scalar coefficients
charpolyA :: Polynomial a
charpolyA = charpoly mA

-- the same polynomial, with each coefficient embedded as a scalar matrix
matCharpolyA :: Polynomial (Matrix a)
matCharpolyA = asMatrixPolynomial charpolyA

-- Cayley-Hamilton: evaluating at mA itself gives the zero matrix
evalPoly mA matCharpolyA == (0 :: Matrix a)
```
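
As a quick numeric sanity check, here is a self-contained sketch (plain integer
pairs rather than the `Matrix` type above) showing that the companion matrix of
$x^2 + x + 1$ from the previous post really is a root of its characteristic polynomial:

```haskell
-- A standalone check, independent of the post's types: verify that
-- C^2 + C + I is the zero matrix for the companion matrix C of x^2 + x + 1.
type M2 = ((Integer, Integer), (Integer, Integer))

mul :: M2 -> M2 -> M2
mul ((a, b), (c, d)) ((e, f), (g, h)) =
  ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

add :: M2 -> M2 -> M2
add ((a, b), (c, d)) ((e, f), (g, h)) = ((a + e, b + f), (c + g, d + h))

identity2, companion :: M2
identity2 = ((1, 0), (0, 1))
companion = ((0, 1), (-1, -1))   -- companion matrix of x^2 + x + 1

main :: IO ()
main = print $
  (companion `mul` companion) `add` companion `add` identity2
    == ((0, 0), (0, 0))
```

Running `main` prints `True`, matching the type-level statement above.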

Due to the aforementioned isomorphism, factoring a polynomial "inside" a matrix turns
out to give the same answer as factoring a polynomial over matrices.

:::: {layout-ncol="2"}
::: {}
@@ -200,7 +208,7 @@ Of course, choosing one root affects the other matrix roots.
### Moving Roots
All matrices commute with the identity and zero matrices.
A less obvious fact is that, for a given matrix factorization, all of the roots *also* commute with one another.
By the Fundamental Theorem of Algebra,
[Vieta's formulas](https://en.wikipedia.org/wiki/Vieta%27s_formulas) state:
@@ -242,15 +250,14 @@ $$
\pi \circ \hat P(X) = \prod_{\pi ([i]_n)} (X - \Xi_i)
\\ \\
= X^n
- \sigma_1 \left(\pi ([\Xi]_n) \vphantom{^{1}} \right)X^{n-1}
+ \sigma_2 \left(\pi ([\Xi]_n) \vphantom{^{1}} \right)X^{n-2} + ...
+ (-1)^n \sigma_n \left(\pi ([\Xi]_n) \vphantom{^{1}} \right)
\\[10pt]
\pi_{(0 ~ 1)} \circ \hat P(X) = (X - \Xi_{1}) (X - \Xi_0)(X - \Xi_2)...(X - \Xi_{n-1})
\\ \\
= X^n + ... + \sigma_2(\Xi_1, \Xi_0, \Xi_2, ...,\Xi_{n-1})X^{n-2} + ...
\\[10pt]
\begin{array}{}
e & (0 ~ 1) & (1 ~ 2) & ... & (n-2 ~~ n-1)
\\ \hline
@@ -273,9 +280,8 @@ $$
\end{gather*}
$$

The "[path swaps](/posts/permutations/1/)" shown commute only the adjacent elements.
By contrast, the permutation $(0 ~ 2)$ commutes *Ξ*~0~ past both *Ξ*~1~ and *Ξ*~2~.
But since we already know *Ξ*~0~ and *Ξ*~1~ commute by the above list,
we learn at this step that *Ξ*~0~ and *Ξ*~2~ commute.
This can be repeated until we reach the permutation $(0 ~ n-1)$ to prove commutativity between all pairs.
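
To spell out the $(0 ~ 2)$ step mentioned above, here is a sketch of the coefficient
bookkeeping, comparing the $X^{n-2}$ coefficients of $\hat P(X)$ and $\pi_{(0 ~ 2)} \circ \hat P(X)$
in the same notation as before:

$$
\begin{gather*}
\sigma_2(\Xi_2, \Xi_1, \Xi_0, \Xi_3, ..., \Xi_{n-1})
= \Xi_2 \Xi_1 + \Xi_2 \Xi_0 + \Xi_1 \Xi_0 + (\text{terms involving } \Xi_3, ..., \Xi_{n-1})
\\ \\
\sigma_2(\Xi_0, \Xi_1, \Xi_2, \Xi_3, ..., \Xi_{n-1})
= \Xi_0 \Xi_1 + \Xi_0 \Xi_2 + \Xi_1 \Xi_2 + (\text{terms involving } \Xi_3, ..., \Xi_{n-1})
\end{gather*}
$$

The trailing terms are identical, and $\Xi_0 \Xi_1 = \Xi_1 \Xi_0$ and $\Xi_1 \Xi_2 = \Xi_2 \Xi_1$
are already known from the adjacent swaps, so equating the two expressions leaves exactly
$\Xi_2 \Xi_0 = \Xi_0 \Xi_2$.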

View File

@@ -78,11 +78,7 @@ plotDigraph x = MPLI.mplotString $
\ sys.stdout = original_stdout"
```

In the [last post](../1/), we discussed finite fields, polynomials and matrices over them, and the typical,
symbolic way of extending fields with polynomials.
This post will focus on circumventing symbolic means with numeric ones.
@@ -924,5 +920,7 @@ It seems to be possible to get the non-primitive sequences by looking at the sub
But this means that the entire story about polynomials and finite fields can be forgone entirely,
and the problem instead becomes one of number theory.

This post has [an addendum](./extra/) with some additional notes on matrix roots and the
Cayley-Hamilton theorem.
The [next post](../3/) will focus on an "application" of matrix roots to other areas of abstract algebra.

Diagrams made with Geogebra and NetworkX (GraphViz).

View File

@@ -478,7 +478,6 @@ It's a relatively simple matter to move between determinants, since it's straigh
However, a natural question to ask is whether there's a way to reconcile or coerce
the matrix polynomial into the "forgotten" one.

First, let's formally establish a path from matrix polynomials to a matrix of polynomials.
We need only use our friend from the [second post](../2) -- polynomial evaluation.
Simply evaluating a matrix polynomial *r* at *λI* converts our matrix indeterminate (*Λ*)