Note: These are notes live-tex’d from a graduate course in Algebraic Groups taught by Dan Nakano at the University of Georgia in Fall 2020. As such, any errors or inaccuracies are almost certainly my own.
Last updated: 2021-07-16
Carter’s “Finite Groups of Lie Type”[1]
Humphreys’ “Linear Algebraic Groups”[2]
Course Videos: https://www.youtube.com/playlist?list=PLYKpGBt1pueCLzs_b-VYKe_bX9My-3nUt
Guide for Math 8190 Videos:
9-14-20: A survey on the representation theory of finite-dimensional algebras
9-25-20: Frobenius kernels, simple G_r-modules, Steinberg’s Tensor Product Theorem, Connections with Lusztig’s Conjecture
9-28-20: Kempf’s Vanishing Theorem, Good and Weyl filtrations, Cohomology
10-2-20: Good and Weyl Filtrations
10-5-20: Highest Weight Categories
10-7-20: Examples: Schur algebras, tilting modules, Weyl modules, Bott-Borel-Weil Theorem
10-9-20: Bott-Borel-Weil Theory
10-12-20: Bott-Borel-Weil Theorem with applications
10-14-20: Basic Properties of Characters, Weyl’s Character Formula
10-16-20: Weyl’s Character Formula
10-19-20: Linkage, Strong Linkage, Examples
10-21-20: Strong Linkage Principle, Translation Functors (intro)
10-23-20: Translation Functors
10-26-20: Translation Functors and Characters
10-28-20: Properties of Translation Functors
11-2-20: Translation, Wall Crossing, Lusztig’s Conjecture
11-4-20: Representations of G_rT and G_rB
11-6-20: Representations of G_rT and G_rB, Good (p,r)-filtrations, Donkin’s (p,r)-Filtration Conjecture
11-9-20: Strong Linkage for G_rT-modules, Extensions for G_rT-modules, Steinberg Modules
11-11-20: Filtrations and Reciprocity for G_rT and G_r Modules
11-13-20: G_rT-modules and reciprocity, Injective G_rT-modules, Humphreys-Verma Conjecture, Donkin’s Tilting Module Conjecture
11-16-20: G-structures for injective G_r-modules, Donkin’s Tilting Module Conjecture
11-18-20: Injective G_rT-modules and Injective G-modules
11-20-20: Cohomology of Frobenius kernels, finite generation of cohomology
11-23-20: Cohomology of Frobenius kernels, restricted nullcone, Jantzen’s Conjecture on support varieties, conjectures for low primes
Let \(k=\mkern 1.5mu\overline{\mkern-1.5muk\mkern-1.5mu}\mkern 1.5mu\) be algebraically closed (e.g. \(k = {\mathbb{C}}, \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}_p\mkern-1.5mu}\mkern 1.5mu\)). A variety \(V\subseteq k^n\) is an affine \(k{\hbox{-}}\)variety iff \(V\) is the zero set of a collection of polynomials in \(k[x_1, \cdots, x_n]\).
Here \({\mathbb{A}}^n\mathrel{\vcenter{:}}= k^n\) with the Zariski topology, so the closed sets are varieties.
An affine algebraic \(k{\hbox{-}}\)group is an affine variety with the structure of a group, where the multiplication and inversion maps \begin{align*} \mu: G\times G &\to G \\ \iota: G&\to G \end{align*} are morphisms of varieties.
\(G = {\mathbb{G}}_a \subseteq k\) the additive group of \(k\) is defined as \({\mathbb{G}}_a \mathrel{\vcenter{:}}=(k, +)\). We then have a coordinate ring \(k[{\mathbb{G}}_a] = k[x] / I = k[x]\).
\(G = \operatorname{GL}(n, k)\), which has coordinate ring \(k[x_{ij}, T] / \left\langle{\operatorname{det}(x_{ij})\cdot T - 1}\right\rangle\).
Setting \(n=1\) above, we have \({\mathbb{G}}_m \mathrel{\vcenter{:}}=\operatorname{GL}(1, k) = (k^{\times}, \cdot)\). Here the coordinate ring is \(k[x, T] / \left\langle{xT - 1}\right\rangle\).
\(G = {\operatorname{SL}}(n, k) \leq \operatorname{GL}(n, k)\), which has coordinate ring \(k[G] = k[x_{ij}] / \left\langle{\operatorname{det}(x_{ij}) - 1}\right\rangle\).
A variety \(V\) is irreducible iff \(V\) can not be written as a finite union \(V = \cup_{i=1}^n V_i\) with each \(V_i \subsetneq V\) a proper closed subvariety.
There exists a unique irreducible component of \(G\) containing the identity \(e\). Notation: \(G^0\).
\(G\) is the union of translates of \(G^0\), i.e. there is a decomposition \begin{align*} G = {\textstyle\coprod}_{g\in \Gamma} \, g\cdot G^0 ,\end{align*} where we let \(G\) act on itself by left-translation and define \(\Gamma\) to be a set of representatives of distinct orbits.
One can define solvable and nilpotent algebraic groups in the same way as they are defined for finite groups, i.e. as having a terminating derived or lower central series respectively.
There is a maximal connected normal solvable subgroup \(R(G)\), called the radical of \(G\).
An element \(u\) is unipotent \(\iff\) \(u = 1+n\) where \(n\) is nilpotent \(\iff\) its only eigenvalue is \(\lambda = 1\).
For any \(G\), there exists a closed embedding \(G\hookrightarrow\operatorname{GL}(V) = \operatorname{GL}(n , k)\) and for each \(x\in G\) a unique decomposition \(x=su\) where \(s\) is semisimple (diagonalizable) and \(u\) is unipotent.
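The multiplicative Jordan decomposition \(x = su\) can be verified concretely. A minimal sketch (my own illustration, not from lecture) for a \(2\times 2\) example over the rationals, where \(x\) has the single eigenvalue \(2\), so the semisimple part is the scalar matrix \(s = \operatorname{diag}(2,2)\):

```python
from fractions import Fraction as F

def matmul(A, B):
    """2x2 matrix product over the rationals."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# x has the single eigenvalue 2, so its semisimple (diagonalizable) part is
# s = diag(2, 2), and the unipotent part is u = s^{-1} x.
x = [[F(2), F(1)], [F(0), F(2)]]
s = [[F(2), F(0)], [F(0), F(2)]]
s_inv = [[F(1, 2), F(0)], [F(0), F(1, 2)]]
u = matmul(s_inv, x)                      # u = [[1, 1/2], [0, 1]]

assert matmul(s, u) == x                  # x = su
n = [[u[i][j] - (1 if i == j else 0) for j in range(2)] for i in range(2)]
assert matmul(n, n) == [[0, 0], [0, 0]]   # n^2 = 0, so u = 1 + n is unipotent
assert matmul(s, u) == matmul(u, s)       # s and u commute
```

In general \(s\) and \(u\) commute and are uniquely determined by \(x\); this small check only illustrates the definitions.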
Define \(R_u(G)\) to be the subgroup of unipotent elements in \(R(G)\).
Suppose \(G\) is connected, so \(G = G^0\), and nontrivial, so \(G\neq \left\{{e}\right\}\). Then \(G\) is called semisimple iff \(R(G) = \left\{{e}\right\}\), and reductive iff \(R_u(G) = \left\{{e}\right\}\).
\(G = \operatorname{GL}(n, k)\), then \(R(G) = Z(G) = k^{\times} I\) the scalar matrices, and \(R_u(G) = \left\{{e}\right\}\). So \(G\) is reductive, but not semisimple since \(R(G) \neq \left\{{e}\right\}\).
\(G = {\operatorname{SL}}(n , k)\), then \(R(G) = \left\{{I}\right\}\).
Is this semisimple? Reductive? What is \(R_u(G)\)?
Let \(G\) be an algebraic group. A torus \(T\subseteq G\) is a commutative algebraic subgroup consisting of semisimple elements.
Let \begin{align*} T \mathrel{\vcenter{:}}= \left\{{ \begin{bmatrix} a_1 & & \mathbf 0\\ & \ddots & \\ \mathbf 0 & & a_n \end{bmatrix} }\right\} \subseteq \operatorname{GL}(n ,k) .\end{align*}
Why are tori useful? For \({\mathfrak{g}}= \mathrm{Lie}(G)\), we obtain a root space decomposition \begin{align*} {\mathfrak{g}}= \qty{\bigoplus_{\alpha \in \Phi^- }g_\alpha} \oplus t \oplus \qty{\bigoplus_{\alpha \in \Phi^+ }g_\alpha} .\end{align*}
When \(G\) is a simple algebraic group, there is a classification/correspondence: \begin{align*} (G, T) \iff (\Phi, W) .\end{align*} where \(\Phi\) is an irreducible root system and \(W\) is a Weyl group.
Split: \(T\cong \bigoplus {\mathbb{G}}_m\).
We’ll associate to this a root system, not necessarily irreducible, yielding a correspondence \begin{align*} (G, T) \iff (\Phi, W) \end{align*} with \(W\) a Weyl group.
This will be accomplished by looking at \({\mathfrak{g}}= \mathrm{Lie}(G)\). If \(G\) is simple, then \({\mathfrak{g}}\) is “simple,” and \(\Phi\) irreducible will correspond to a Dynkin diagram.
There is thus a 1-to-1 correspondence \begin{align*} G \text{ simple}/\sim \iff A_n, B_n, C_n, D_n, E_6, E_7, E_8, F_4, G_2 \end{align*} where \(\sim\) denotes isogeny.
Taking the Zariski tangent space at the identity “linearizes” an algebraic group, yielding a Lie algebra.
We have the coordinate ring \(k[G] = k[x_1, \cdots, x_n] / \mathcal{I}(G)\), where \(\mathcal{I}(G)\) is the ideal of polynomials vanishing on \(G\). This is the ring of regular functions \(\left\{{f:G\to k}\right\}\).
Define left translation by \begin{align*} \lambda_x: k[G] &\to k[G] \\ f &\mapsto \qty{y \mapsto f(x^{-1} y)} .\end{align*}
Define derivations as the \(k{\hbox{-}}\)linear maps \begin{align*} \mathrm{Der} ~k[G] = \left\{{D: k[G] \to k[G] {~\mathrel{\Big|}~}D(fg) = D(f) g + f D(g) }\right\} .\end{align*}
We can then realize the Lie algebra as \begin{align*} {\mathfrak{g}}= \mathrm{Lie}(G) = \left\{{D\in \mathrm{Der} k[G] {~\mathrel{\Big|}~}\lambda_x \circ D = D\circ \lambda_x}\right\} ,\end{align*} the left-invariant derivations.
Let \(G\) be reductive and \(T\) be a split torus. Then \(T\) acts on \({\mathfrak{g}}\) via an adjoint action. (For \(\operatorname{GL}_n, {\operatorname{SL}}_n\), this is conjugation.)
There is a decomposition into eigenspaces for the action of \(T\), \begin{align*} {\mathfrak{g}}= \qty{\bigoplus_{\alpha\in \Phi} g_\alpha} \oplus t \end{align*} where \(t = \mathrm{Lie}(T)\) and \(g_\alpha \mathrel{\vcenter{:}}=\left\{{x\in {\mathfrak{g}}{~\mathrel{\Big|}~}t.x = \alpha(t) x\,\, \forall t\in T}\right\}\) with \(\alpha: T\to k^{\times}\) a rational character (a root).
In general, take \(\alpha\in\hom_{\text{AlgGrp}}(T, {\mathbb{G}}_m)\).
Let \(G = \operatorname{GL}(n, k)\) and \begin{align*} T = \left\{{ \begin{bmatrix} a_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & a_n \end{bmatrix} {~\mathrel{\Big|}~}a_j\in k^{\times} }\right\} .\end{align*}
Then check the following action on a matrix unit: \begin{align*} t \cdot \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = a_1 a_2^{-1} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} ,\end{align*} which indeed acts by a rational function.
Then \begin{align*} g_\alpha = {\operatorname{span}}\left\{{ \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix} }\right\} = g_{(1, -1, 0)} .\end{align*}
For \({\mathfrak{g}}= {\mathfrak{gl}}(3, k)\), we have \begin{align*} {\mathfrak{g}}= t &\oplus g_{(1, -1, 0)} \oplus g_{(-1, 1, 0)} \\ &\oplus g_{(0, 1, -1)} \oplus g_{(0, -1, 1)} \\ &\oplus g_{(1, 0, -1)} \oplus g_{(-1, 0, 1)} .\end{align*}
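This eigenspace decomposition can be checked by direct computation. The following sketch (my own check, with sample numerical values for the torus entries) verifies that conjugation by \(t = \operatorname{diag}(a_1, a_2, a_3)\) scales each matrix unit \(E_{ij}\) by \(a_i a_j^{-1}\):

```python
import numpy as np

# Conjugation action of the diagonal torus on gl_3: for t = diag(a1, a2, a3)
# and the matrix unit E_ij, we expect  t E_ij t^{-1} = (a_i / a_j) E_ij,
# i.e. E_ij is a weight vector of weight e_i - e_j.
a = np.array([2.0, 3.0, 5.0])          # sample torus element diag(2, 3, 5)
t = np.diag(a)
t_inv = np.diag(1.0 / a)

for i in range(3):
    for j in range(3):
        if i == j:
            continue
        E = np.zeros((3, 3)); E[i, j] = 1.0
        conj = t @ E @ t_inv
        # the eigenvalue is the character value a_i * a_j^{-1}
        assert np.allclose(conj, (a[i] / a[j]) * E)
```

For example, \(E_{12}\) has weight \((1,-1,0)\): here \(t\) acts on it by \(a_1 a_2^{-1} = 2/3\).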
Let \(\rho: G\to \operatorname{GL}(V)\) be a homomorphism of algebraic groups; then equivalently \(V\) is a (rational) \(G{\hbox{-}}\)module.
For \(T\subseteq G\), \(T\) acts on \(V\) semisimply, so we can simultaneously diagonalize these operators to obtain a weight space decomposition \(V = \bigoplus_{\lambda \in X(T)} V_\lambda\), where \begin{align*} V_\lambda &\mathrel{\vcenter{:}}=\left\{{v\in V{~\mathrel{\Big|}~}t.v = \lambda(t)v\,\, \forall t\in T}\right\} \\ X(T) &\mathrel{\vcenter{:}}=\hom(T, {\mathbb{G}}_m) .\end{align*}
Let \(G = \operatorname{GL}(n, k)\) and \(V\) the \(n{\hbox{-}}\)dimensional natural representation as column vectors, \begin{align*} V = \left\{{{\left[ {v_1, \cdots, v_n} \right]} {~\mathrel{\Big|}~}v_j \in k}\right\} .\end{align*}
Then \begin{align*} T = \left\{{ \begin{bmatrix} a_1 & 0 & 0 \\ 0 & \ddots & 0\\ 0 & 0 & a_n \end{bmatrix} {~\mathrel{\Big|}~}a_j \in k^{\times} }\right\} .\end{align*}
Consider the basis vectors \(\mathbf{e}_j\), then \begin{align*} \begin{bmatrix} a_1 & 0 & 0 \\ 0 & \ddots & 0\\ 0 & 0 & a_n \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = a_j \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = a_1^0 a_2^0 \cdots a_j^1 \cdots a_n^0 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} .\end{align*}
Here the weights are of the form \({\varepsilon}_j\mathrel{\vcenter{:}}={\left[ {0, 0, \cdots, 1, \cdots, 0} \right]}\) with a \(1\) in the \(j\)th spot, so we have \begin{align*} V = V_{{\varepsilon}_1} \oplus V_{{\varepsilon}_2} \oplus \cdots \oplus V_{{{\varepsilon}_n}} .\end{align*}
For \(V = {\mathbb{C}}\), we have \(t.v = (a_1^0 \cdots a_n^0)v\) and \(V = V_{(0, 0, \cdots, 0)}\).
Let \(G\) be a simple algebraic group, i.e. a nonabelian group with no closed, connected, normal subgroups other than \(\left\{{e}\right\}\) and \(G\).
Let \(G = {\operatorname{SL}}(3, k)\). Then \begin{align*} T = \left\{{ t = \begin{bmatrix} a_1 & 0 & 0 \\ 0 & a_1^{-1} a_2 & 0\\ 0 & 0 & a_2^{-1} \end{bmatrix} {~\mathrel{\Big|}~} a_1, a_2\in k^{\times} }\right\} \end{align*} and \begin{align*} t. \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix} = a_1^2 a_2^{-1} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix} .\end{align*} and \(\alpha_1 = (2, -1)\).
Then \begin{align*} {\mathfrak{g}}= {\mathfrak{g}}_{(2, -1)} \oplus {\mathfrak{g}}_{(-2, 1)} \oplus {\mathfrak{g}}_{(-1, 2)} \oplus {\mathfrak{g}}_{(1, -2)} \oplus {\mathfrak{g}}_{(1, 1)} \oplus {\mathfrak{g}}_{(-1, -1)} .\end{align*}
Then \(\alpha_2 = (-1, 2)\) and \(\alpha_1 + \alpha_2 = ( 1, 1)\).
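These root values can be double-checked with exact rational arithmetic. A small sketch of my own, parameterizing the torus as \(t = \operatorname{diag}(a_1,\, a_1^{-1}a_2,\, a_2^{-1})\) with arbitrary sample values for \(a_1, a_2\):

```python
from fractions import Fraction as F

# Torus of SL_3 parameterized as t = diag(a1, a1^{-1} a2, a2^{-1})  (det = 1).
# Conjugating the root vectors E_12 and E_23 should produce the characters
# a1^2 a2^{-1} and a1^{-1} a2^2, i.e. the simple roots
# alpha_1 = (2, -1) and alpha_2 = (-1, 2) in the coordinates (a1, a2).
a1, a2 = F(2), F(3)
d = [a1, a2 / a1, 1 / a2]              # diagonal entries of t
assert d[0] * d[1] * d[2] == 1         # t lands in SL_3

# conjugation scales E_ij by d[i] / d[j]
alpha1 = d[0] / d[1]                   # action on E_12
alpha2 = d[1] / d[2]                   # action on E_23
assert alpha1 == a1**2 / a2            # weight (2, -1)
assert alpha2 == a2**2 / a1            # weight (-1, 2)
# alpha_1 + alpha_2 acts on E_13 by a1 * a2, i.e. weight (1, 1)
assert d[0] / d[2] == a1 * a2
```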
This gives the root space decomposition for \({\mathfrak{sl}}_3\):
Then the Weyl group will be generated by reflections through these hyperplanes.
\(A_n\) corresponds to \({\mathfrak{sl}}(n+1, k)\) (mnemonic: \(A_1\) corresponds to \({\mathfrak{sl}}(2)\))
We have representations \(\rho: G\to \operatorname{GL}(V)\), i.e. \(V\) is a \(G{\hbox{-}}\)module
For \(T\subseteq G\), we have a weight space decomposition: \(V = \bigoplus_{\lambda \in X(T)} V_\lambda\) where \(X(T) = \hom(T, {\mathbb{G}}_m)\).
Note that \(X(T) \cong {\mathbb{Z}}^n\), where \(n\) is the number of copies of \({\mathbb{G}}_m\) in \(T\).
Let \(\Phi = A_2\), then we have the following root system:
In general, we’ll have \(\Delta = \left\{{\alpha_1, \cdots, \alpha_n}\right\}\) a basis of simple roots.
Every root \(\alpha\in \Phi\) can be expressed as either a nonnegative or a nonpositive integer linear combination of simple roots.
For any \(\alpha\in \Phi\), let \(s_\alpha\) be the reflection across \(H_\alpha\), the hyperplane orthogonal to \(\alpha\). Then define the Weyl group \(W = \left\langle{s_\alpha {~\mathrel{\Big|}~}\alpha\in \Phi}\right\rangle\), the group generated by these reflections.
Here the Weyl group is \(S_3\):
\(W\) acts transitively on bases.
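One can generate the Weyl group computationally. A small sketch (my own illustration) realizing \(W(A_2) = S_3\), with \(\alpha_1 = e_1 - e_2\) and \(\alpha_2 = e_2 - e_3\) in \({\mathbb{R}}^3\), by closing the simple reflections under composition:

```python
# For Phi = A_2 realized in R^3 with alpha_1 = e1 - e2 and alpha_2 = e2 - e3,
# the simple reflections act by swapping adjacent coordinates; the group they
# generate should be the full symmetric group S_3 (the Weyl group, order 6).
def s1(v): return (v[1], v[0], v[2])   # reflection across H_{alpha_1}
def s2(v): return (v[0], v[2], v[1])   # reflection across H_{alpha_2}

# generate the group by closing under the generators, tracking the orbit of a
# generic point (a point with distinct coordinates, so its stabilizer is trivial)
start = (1, 2, 3)
orbit = {start}
frontier = {start}
while frontier:
    new = {f(v) for v in frontier for f in (s1, s2)} - orbit
    orbit |= new
    frontier = new

assert len(orbit) == 6                 # |W(A_2)| = |S_3| = 6
```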
\({\mathbb{Z}}\Phi \subseteq X(T)\), recalling that \(X(T) = \hom(T, {\mathbb{G}}_m) = {\mathbb{Z}}^n\) for some \(n\). Denote \({\mathbb{Z}}\Phi\) the root lattice and \(X(T)\) the weight lattice.
Let \({\mathfrak{g}}= {\mathfrak{sl}}(2, {\mathbb{C}})\). There is one simple root \(\alpha\), and identifying \(\alpha = 2\), the root lattice is \({\mathbb{Z}}\Phi = 2{\mathbb{Z}}\). The weight lattice is \(X(T) = {\mathbb{Z}}\omega = {\mathbb{Z}}\), where \(\omega = 1\). So the root and weight lattices are not equal in general.
There is a partial ordering on \(X(T)\) given by \(\lambda \geq \mu \iff \lambda - \mu = \sum_{\alpha\in \Delta} n_\alpha \alpha\) where \(n_\alpha \geq 0\). (We say \(\lambda\) dominates \(\mu\).)
We extend scalars for the weight lattice to obtain \(E \mathrel{\vcenter{:}}= X(T) \otimes_{\mathbb{Z}}{\mathbb{R}}\cong {\mathbb{R}}^n\), a Euclidean space with an inner product \({\left\langle {{-}},~{{-}} \right\rangle}\).
For \(\alpha\in \Phi\), define its coroot \(\alpha {}^{ \vee }\mathrel{\vcenter{:}}={2\alpha \over {\left\langle {\alpha},~{\alpha} \right\rangle}}\). Define the simple coroots as \(\Delta {}^{ \vee }\mathrel{\vcenter{:}}=\left\{{\alpha_i {}^{ \vee }}\right\}_{i=1}^n\), which has a dual basis \(\Omega \mathrel{\vcenter{:}}=\left\{{\omega_i}\right\}_{i=1}^n\) the fundamental weights. These satisfy \({\left\langle {\omega_i},~{\alpha_j {}^{ \vee }} \right\rangle} = \delta_{ij}\).
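The duality \(\left\langle \omega_i, \alpha_j^\vee \right\rangle = \delta_{ij}\) can be verified directly for \(A_2\). In the sketch below, the coordinates for the fundamental weights are my own choice of realization in \({\mathbb{R}}^3\) (orthogonal to \((1,1,1)\)), not from lecture:

```python
from fractions import Fraction as F

# A_2 in R^3: alpha_1 = e1 - e2, alpha_2 = e2 - e3.  Since <alpha, alpha> = 2,
# the coroots coincide with the roots.  The fundamental weights (in this
# realization) are
#   omega_1 = (2/3, -1/3, -1/3),  omega_2 = (1/3, 1/3, -2/3),
# and we check the defining duality <omega_i, alpha_j^vee> = delta_ij.
def dot(u, v): return sum(x * y for x, y in zip(u, v))

alpha = [(1, -1, 0), (0, 1, -1)]
# coroot = 2*alpha / <alpha, alpha>, which equals alpha here since <alpha, alpha> = 2
covee = [tuple(F(2 * x, dot(a, a)) for x in a) for a in alpha]
omega = [(F(2, 3), F(-1, 3), F(-1, 3)), (F(1, 3), F(1, 3), F(-2, 3))]

for i in range(2):
    for j in range(2):
        assert dot(omega[i], covee[j]) == (1 if i == j else 0)
```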
Important because we can index irreducible representations by fundamental weights.
A weight \(\lambda\in X(T)\) is dominant iff \(\lambda \in {\mathbb{Z}}^{\geq 0} \Omega\), i.e. \(\lambda = \sum n_i \omega_i\) with \(n_i \in {\mathbb{Z}}^{\geq 0}\).
If \(G\) is simply connected, then \(X(T) = \bigoplus {\mathbb{Z}}\omega_i\).
See Jantzen for the definition of simply connected; \({\operatorname{SL}}(n+1)\) is simply connected, but its adjoint quotient \(\operatorname{PGL}(n+1)\) is not.
When doing representation theory, we look at the Verma modules \(Z(\lambda) = U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}}^+)} \lambda \twoheadrightarrow L(\lambda)\).
\(L(\lambda)\) is finite-dimensional as a \(U({\mathfrak{g}}){\hbox{-}}\)module \(\iff\) \(\lambda\) is dominant, i.e. \(\lambda \in X(T)_+\).
Thus the representations are indexed by lattice points in a particular region:
Question 1:
Suppose \(G\) is a simple (simply connected) algebraic group. How do you parameterize irreducible representations?
For \(\rho: G\to \operatorname{GL}(V)\), \(V\) is a simple module (an irreducible representation) iff the only proper \(G{\hbox{-}}\)submodules of \(V\) are trivial.
Answer 1: They are also parameterized by \(X(T)_+\). We’ll show this using the induction functor \(\mathop{\mathrm{ind}}_B^G \lambda =H^0(G/B, \mathcal{L}(\lambda))\) (sheaf cohomology of the flag variety with coefficients in some line bundle).
We’ll define what \(B\) is later, essentially upper-triangular matrices.
Question 2: What are the dimensions of the irreducible representations for \(G\)?
Answer 2: Over \(k={\mathbb{C}}\) using Weyl’s dimension formula.
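For type \(A_2\), Weyl's dimension formula reduces to a closed form. A small sketch of my own (the formula \(\dim L(m\omega_1 + n\omega_2) = \tfrac{1}{2}(m+1)(n+1)(m+n+2)\) is the standard specialization):

```python
from fractions import Fraction as F

# Weyl's dimension formula for sl_3 (type A_2), in the basis of fundamental
# weights: for lambda = m*omega_1 + n*omega_2,
#   dim L(lambda) = prod_{alpha > 0} <lambda+rho, alpha^vee> / <rho, alpha^vee>
# with positive coroots alpha_1, alpha_2, alpha_1 + alpha_2 and rho = omega_1 + omega_2.
def dim_A2(m, n):
    # <lambda, alpha_i^vee> reads off the coefficient of omega_i, so the
    # pairings with the three positive coroots are m, n, and m + n.
    num = (m + 1) * (n + 1) * (m + n + 2)   # <lambda + rho, alpha^vee> factors
    den = 1 * 1 * 2                         # <rho, alpha^vee> factors
    d = F(num, den)
    assert d.denominator == 1               # the formula always yields an integer
    return int(d)

assert dim_A2(0, 0) == 1    # trivial module
assert dim_A2(1, 0) == 3    # natural representation of SL_3
assert dim_A2(0, 1) == 3    # its dual
assert dim_A2(1, 1) == 8    # adjoint representation sl_3
```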
For \(k = \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\): Lusztig’s conjecture predicted the answer for \(p\geq h\) (the Coxeter number), but Williamson (2013) produced counterexamples. Current work being done!
Review: let \({\mathfrak{g}}\) be a semisimple Lie algebra \(/{\mathbb{C}}\). There is a decomposition \({\mathfrak{g}}= {\mathfrak{b}}^+ \oplus {\mathfrak{n}}^- = {\mathfrak{n}}^+ \oplus t\oplus {\mathfrak{n}}^-\), where \(t\) is a Cartan subalgebra. We associate to \({\mathfrak{g}}\) the universal enveloping algebra \(U({\mathfrak{g}})\), and representations of \({\mathfrak{g}}\) correspond with representations of \(U({\mathfrak{g}})\).
Let \(\lambda \in X(T)\) be a weight, then \(\lambda\) is a \(U({\mathfrak{b}}^+){\hbox{-}}\)module. We can write \(Z(\lambda) = U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}}^+)} \lambda\).
There exists a unique maximal submodule of \(Z(\lambda)\), say \(RZ(\lambda)\) where \(Z(\lambda)/RZ(\lambda) \cong L(\lambda)\) is an irreducible representation of \({\mathfrak{g}}\).
Let \(L = L(\lambda)\) be a finite-dimensional irreducible representation for \({\mathfrak{g}}\). Then
Let \(G\) be an algebraic group \(/k\) with \(k = \mkern 1.5mu\overline{\mkern-1.5muk\mkern-1.5mu}\mkern 1.5mu\), and let \(H \leq G\) be a closed subgroup. Let \(M\) be an \(H{\hbox{-}}\)module; we’ll eventually want to produce a \(G{\hbox{-}}\)module.
Step 1: Make \(M\) into a \(G\times H{\hbox{-}}\)module where the first component \((g, 1)\) acts trivially on \(M\).
Taking the coordinate algebra \(k[G]\), this is a \((G-G){\hbox{-}}\)bimodule, and thus becomes a \(G\times H{\hbox{-}}\)module. Let \(f\in k[G]\), so \(f:G\to k\), and let \(y\in G\). The explicit action is \begin{align*} [(g, h) f] (y) \mathrel{\vcenter{:}}= f(g^{-1} y h) .\end{align*}
Note that we can identify \(H\cong 1\times H \leq G\times H\). We can form \((M\otimes_k k[G])^H\), the \(H{\hbox{-}}\)fixed points.
Let \(N\) be an \(A{\hbox{-}}\)module and \(B{~\trianglelefteq~}A\), then \(N^B\) is an \(A/B{\hbox{-}}\)module.
Hint: the action of \(B\) is trivial on \(N^B\). Here \(N^B \mathrel{\vcenter{:}}=\left\{{n\in N {~\mathrel{\Big|}~}b.n = n\, \forall b\in B}\right\}\)
The induced module is defined as \begin{align*} \mathop{\mathrm{ind}}_H^G(M) \mathrel{\vcenter{:}}=(M\otimes k[G])^H .\end{align*}
\(\qty{({-})\otimes_k k[G]}^H = \hom_H(k, ({-})\otimes_k k[G])\) is only left-exact, i.e. \begin{align*} \qty{0\to A\to B\to C\to 0}\mapsto \qty{0\to FA \to FB \to FC} .\end{align*}
By taking right-derived functors \(R^jF\), you can take cohomology.
Note that in this category, we won’t have enough projectives, but we will have enough injectives.
This functor commutes with direct sums and direct limits.
(Important) Frobenius Reciprocity: there is an adjoint, restriction, satisfying \begin{align*} \hom_G(N, \mathop{\mathrm{ind}}_H^G M) = \hom_H(N\downarrow_H, M) .\end{align*}
(Tensor Identity) If \(M\in {\mathsf{Mod}}(H)\) and additionally \(M \in {\mathsf{Mod}}(G)\), then \(\mathop{\mathrm{ind}}_H^G M = M \otimes_k \mathop{\mathrm{ind}}_H^G k\).
If \(V_1, V_2 \in {\mathsf{Mod}}(G)\) then \(V_1 \otimes_k V_2 \in {\mathsf{Mod}}(G)\) with the action given by \(g(v_1\otimes v_2) = gv_1 \otimes gv_2\).
I.e., equivariant wrt the \(H{\hbox{-}}\)action.
Then \(G\) acts on \(\mathop{\mathrm{ind}}_H^G M\) by left-translation: \((gf)(y) = f(g^{-1} y)\).
This is an \(H{\hbox{-}}\)module morphism. Why? We can check \begin{align*} {\varepsilon}(h.f) &\mathrel{\vcenter{:}}=(h.f)(1) \\ &= f(h^{-1} ) \\ &= hf(1) \\ &= h({\varepsilon}(f)) .\end{align*}
We can write the isomorphism in Frobenius reciprocity explicitly: \begin{align*} \hom_G(N, \mathop{\mathrm{ind}}_H^G M) &\xrightarrow{\cong} \hom_H(N, M) \\ \phi & \mapsto {\varepsilon}\circ \phi .\end{align*}
Suppose \(G\) is a connected reductive algebraic group \(/k\) with \(k = \mkern 1.5mu\overline{\mkern-1.5muk\mkern-1.5mu}\mkern 1.5mu\).
Let \(G = \operatorname{GL}(n, k)\). There is a decomposition:
Step 1: Getting modules for \(U\).
Then there’s a general fact: \(U^+ T U \hookrightarrow G\) is dense in the Zariski topology (the “big cell”) for any reductive algebraic group.
Suppose we have a \(U{\hbox{-}}\)module, i.e. a representation \(\rho: U \to \operatorname{GL}(V)\). We can find a basis such that \(\rho(u)\) is upper triangular with ones on the diagonal. In this case, there is a composition series with 1-dimensional quotients, and the composition factors are all isomorphic to \(k\).
Moral: for unipotent groups, there are only trivial representations, i.e. the only simple \(U{\hbox{-}}\)modules are isomorphic to \(k\).
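This triangularization can be seen concretely. A small sketch (my own example matrix) checking that a unipotent \(u\) preserves the standard flag \(V_1 < V_2 < V_3\) and acts trivially on each quotient, so every composition factor is \(k\):

```python
import numpy as np

# A unipotent matrix u = 1 + n (n strictly upper triangular) preserves the
# standard flag V_1 < V_2 < ... < V_n and acts as the identity on each
# quotient V_j / V_{j-1}: the composition factors are all trivial.
u = np.array([[1, 2, 5],
              [0, 1, 3],
              [0, 0, 1]], dtype=float)
n = u - np.eye(3)

for j in range(3):
    e = np.zeros(3); e[j] = 1.0
    v = u @ e
    # u e_j = e_j + (terms in e_1 .. e_{j-1}):  u e_j - e_j lies in V_{j-1}
    assert np.allclose(v[j], 1.0)
    assert np.allclose(v[j + 1:], 0.0)

# and n is nilpotent: n^3 = 0
assert np.allclose(np.linalg.matrix_power(n, 3), 0)
```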
Step 2: Getting modules for \(B\).
\(B\) is solvable, so for a \(B{\hbox{-}}\)module we can find a flag. In this case, \(\rho(b)\) embeds into upper triangular matrices, where the diagonal action may now not be trivial (i.e. the diagonal is now arbitrary).
Thus simple \(B{\hbox{-}}\)modules arise by taking \(\lambda \in X(T) = \hom(T, {\mathbb{G}}_m) = \hom(T, \operatorname{GL}(1, k))\), then letting \(u\) act trivially on \(\lambda\), i.e. \(u.v = v\). Here we have \(B \to B/U = T\), so any \(T{\hbox{-}}\)module can be pulled back to a \(B{\hbox{-}}\)module.
Step 3: Getting modules for \(G\).
Let \(\lambda \in X(T)\), then \(H^0(\lambda) = \mathop{\mathrm{ind}}_B^G \lambda = \nabla(\lambda)\).
Take \(R\) a ring, then consider \(M\) an \(R{\hbox{-}}\)module to be a “vector space” over \(R\). Note that \(M\) is an \(R{\hbox{-}}\)module \(\iff\) there exists a ring morphism \(\rho: R\to \hom_{\text{AbGrp}}(M, M)\).
Now let \(G\) be a group and consider \(G{\hbox{-}}\)modules \(M\). Then a \(G{\hbox{-}}\)module will be defined by taking \(M/k\) a vector space and a \(G{\hbox{-}}\)action on \(M\). This is equivalent to having a group morphism \(\rho: G\to \operatorname{GL}(M)\).
For \(M\) a \(G{\hbox{-}}\)module, given a group action, define \begin{align*} \rho: G&\to \operatorname{GL}(M) \\ \rho(g)(m) &= g.m \end{align*} where \(\rho(h): M\to M\).
Similarly, for \(\rho: G\to \operatorname{GL}(M)\) a group morphism, define the group action \(g.m \mathrel{\vcenter{:}}=\rho(g)m\). Thus representations of \(G\) and \(G{\hbox{-}}\)modules are equivalent.
Let \(M\) be a \(G{\hbox{-}}\)module.
\(M\) is a simple \(G{\hbox{-}}\)module (equivalently an irreducible representation) \(\iff\) the only \(G{\hbox{-}}\)submodules (equiv. \(G{\hbox{-}}\)invariant subspaces) are \(0, M\).
\(M\) is indecomposable \(\iff\) \(M\) can not be written as \(M = M_1 \oplus M_2\) with \(M_i < M\) proper submodules.
For \(G = {\operatorname{SL}}(n, {\mathbb{C}})\), there is a natural \(n{\hbox{-}}\)dimensional representation \(M = V\), and this is irreducible.
Let \(R = {\mathbb{Z}}\), so we’re considering \({\mathbb{Z}}{\hbox{-}}\)modules. For \(M={\mathbb{Z}}\), \(M\) is not simple since \(2{\mathbb{Z}}< {\mathbb{Z}}\) is a proper submodule. However \(M\) is indecomposable.
Recall from last time: we defined a functor \(\mathop{\mathrm{ind}}_H^G({-}): H{\hbox{-}}\text{mod} \to G{\hbox{-}}\text{mod}\), where \(\mathop{\mathrm{ind}}_H^G M = \qty{M \otimes k[G]}^H\), the \(H{\hbox{-}}\)invariants. This functor is left-exact but not right-exact, so we have cohomology \(R^j \mathop{\mathrm{ind}}_H^G\) by taking right-derived functors.
Goal: classify simple \(G{\hbox{-}}\)modules for \(G\) a reductive connected algebraic group.
For \(G = \operatorname{GL}(n , k)\), we have a decomposition
We have
For \(U{\hbox{-}}\)modules: \(k\) is the only simple \(U{\hbox{-}}\)module. Importantly, if \(V\neq 0\) is a \(U{\hbox{-}}\)module, then the fixed points are nonzero, i.e. \(V^U = \hom_{U{\hbox{-}}\text{Mod}}(k, V) \neq 0\).
For \(B{\hbox{-}}\)modules: let \(X(T) \mathrel{\vcenter{:}}=\hom(T, {\mathbb{G}}_m) = \hom(T, \operatorname{GL}(1, k))\). These are the simple representations for the torus \(T\). Thus \(\lambda \in X(T)\) represents a simple \(T{\hbox{-}}\)module.
We have a map \(B \to B/U = T\), so we can pullback \(T{\hbox{-}}\)representations to \(B{\hbox{-}}\)representations (“inflation”), since we have a map \(T\to \operatorname{GL}(1, k)\) and we can just compose. So \(\lambda\) is a 1-dimensional (simple) \(B{\hbox{-}}\)module where \(U\) acts trivially.
Lie’s theorem (Lie–Kolchin): all irreducible representations for \(B\) are one-dimensional. Thus these are the simple \(B{\hbox{-}}\)modules.
For \(G{\hbox{-}}\)modules: define \(\nabla(\lambda) \mathrel{\vcenter{:}}=\mathop{\mathrm{ind}}_B^G(\lambda) = H^0(\lambda)\).
Questions:
Known in characteristic zero, wildly open in positive characteristic.
Another interpretation: look at the flag variety \(G/B\) and take global sections, then \(H^0(\lambda) = H^0(G/B, \mathcal{L}(\lambda))\) where \(\mathcal{L}\) is given by projecting the fiber product \(G \times_B \lambda \twoheadrightarrow G/B\) onto the first factor.
Goal: classify simple \(G{\hbox{-}}\)modules. Strategy: use dominant highest weights.
As opposed to Verma modules, the irreducibles will be a dual situation where they sit at the bottom of the module. Indicated by the notation \(\nabla\) pointing down!
Let \(\lambda \in X(T)\) with \(H^0(\lambda) \neq 0\).
Note that in fact \(\ell(w_0) = {\left\lvert {\Phi^+} \right\rvert}\).
Take \(A_2\) with simple reflections \(s_{\alpha_1}, s_{\alpha_2}\) and \(\Delta = \left\{{\alpha_1, \alpha_2}\right\}\).
We can write \begin{align*} H^0(\lambda) = \left\{{f\in k[G] {~\mathrel{\Big|}~}f(gb) = \lambda(b)^{-1} f(g) \,\,\forall b\in B,\, g\in G}\right\} .\end{align*}
Suppose \(f\in H^0(\lambda)^{U^+}\) and \(u_+ \in U^+, t\in T, u\in U\). Then \begin{align*} \qty{ u_+^{-1} f} (tu) &= f(tu) \\ &= \lambda(t)^{-1} f(1) .\end{align*} On the other hand, \begin{align*} \qty{ u_+^{-1} f} (tu) &= f(u_+ t u) .\end{align*}
So by density, \(f\) is determined by its value \(f(1)\), and thus \(\dim H^0(\lambda)^{U^+} \leq 1\). But since this space can’t be zero, the dimension must be equal to 1.
Let \begin{align*} {\varepsilon}: H^0(\lambda) \to \lambda \end{align*} be the evaluation morphism.
This is a morphism of \(B{\hbox{-}}\)modules, and in particular is a morphism of \(T{\hbox{-}}\)modules. Thus the image of any weight vector of weight \(\mu \neq \lambda\) is zero.
We have \begin{align*} f(u_+ t u) = \lambda(t)^{-1} f(1) = \lambda(t)^{-1} {\varepsilon}(f) .\end{align*}
Suppose \(f\in H^0(\lambda)^{U^+}\) and \({\varepsilon}(f) = 0\). Then \(f(u_+ t u) = 0\), and by density \(f\equiv 0\), showing injectivity.
Therefore \(H^0(\lambda)^{U^+}\subset H^0(\lambda)_\lambda\). Suppose \(\mu\) is maximal among weights in \(H^0(\lambda)\). Then \begin{align*} H^0(\lambda)_{\mu} \subseteq H^0(\lambda)^{U^+} \end{align*} because \(U^+\) raises weights.
But \(H^0(\lambda)^{U^+} \subseteq H^0(\lambda)_\lambda\) implies \(\mu = \lambda\). Thus the maximal weight in \(H^0(\lambda)\) is \(\lambda\).
Recall the situation for Lie algebras: \(g_\alpha v \in V_{\lambda + \alpha}\) when \(v\in V_{\lambda}\).
Since \(\lambda\) is maximal, any other weight \(\mu\) satisfies \(\mu \leq \lambda\). Thus \begin{align*} H^0(\lambda)_\lambda \subseteq H^0(\lambda)^{U^+} \subseteq H^0(\lambda)_\lambda ,\end{align*} forcing these to be equal and finishing part 1.
Some concepts used in the proof of other theorems: Let \(G\) be a reductive algebraic group and \({\mathfrak{g}}\) its Lie algebra. There is an associative algebra \(U({\mathfrak{g}})\) which reflects the representation theory of \(G\).
Fact: \({\mathfrak{g}}{\hbox{-}}\)mod \(\equiv U({\mathfrak{g}}){\hbox{-}}\)modules which are unital, i.e. \(1.m = m\).
We can write a basis \begin{align*} {\mathfrak{g}}= \left\langle{e_\alpha, h_i, f_\beta {~\mathrel{\Big|}~}\alpha\in\Phi^+,\, \beta\in\Phi^-,\, i = 1,2,\cdots,n}\right\rangle ,\end{align*} the Chevalley basis. It turns out that the structure constants are all in \({\mathbb{Z}}\).
Take \({\mathfrak{g}}= {\mathfrak{sl}}(2, k)\), then \begin{align*} e = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \quad f = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} .\end{align*}
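The Chevalley basis relations and their integral structure constants can be checked directly. A quick sketch of my own, using the convention that \(e\) is the upper triangular root vector:

```python
import numpy as np

# Chevalley basis relations in sl_2:  [e, f] = h, [h, e] = 2e, [h, f] = -2f,
# with all structure constants in Z.
e = np.array([[0, 1], [0, 0]], dtype=float)
f = np.array([[0, 0], [1, 0]], dtype=float)
h = np.array([[1, 0], [0, -1]], dtype=float)

def bracket(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

assert np.array_equal(bracket(e, f), h)
assert np.array_equal(bracket(h, e), 2 * e)
assert np.array_equal(bracket(h, f), -2 * f)
```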
We want to form a \({\mathbb{Z}}{\hbox{-}}\)lattice in \(U({\mathfrak{g}})\), denoted \begin{align*} U({\mathfrak{g}})_{\mathbb{Z}} = \left\langle{ e_\alpha^{[n]} = {e_\alpha^n \over n!},\, f_\beta^{[n]} = {f_\beta^n \over n!}, {h_i \choose m} }\right\rangle .\end{align*}
We then form the distribution algebra (or hyperalgebra in earlier literature) as \(\mathrm{Dist}(G) \mathrel{\vcenter{:}}= U({\mathfrak{g}})_{\mathbb{Z}}\otimes_{\mathbb{Z}}k\) for \(k\) any field (e.g. \(\operatorname{ch}(k) = p\)).
\(G{\hbox{-}}\)modules \(\equiv \mathrm{Dist}(G){\hbox{-}}\)modules which are
In characteristic zero, \(\mathrm{Dist}(G) = U({\mathfrak{g}})\). Thus there is a correspondence \begin{align*} \left\{{\substack{G{\hbox{-}}\text{modules}}}\right\} \iff \left\{{\substack{U({\mathfrak{g}}){\hbox{-}}\text{modules}}}\right\} .\end{align*}
If \(\operatorname{ch}(k) = p\), e.g. \(k = \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\), let \(F:G\to G\) be the Frobenius map and \(G_1\mathrel{\vcenter{:}}=\ker F\) (thinking of \(G_1\) as a group scheme). Then \(\mathrm{Dist}(G_1) < \mathrm{Dist}(G)\) is a proper subalgebra. In this case, \(\mathrm{Dist}(G_1) \supseteq {\mathfrak{g}}\) is a finite dimensional Hopf algebra, and \(k[G_1] = \mathrm{Dist}(G_1) {}^{ \vee }\). Importantly, the Lie algebra does not generate \(\mathrm{Dist}(G)\) if \(k = \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\).
Take \(G = {\mathbb{G}}_a\), then \(\mathrm{Dist}({\mathbb{G}}_a) = \left\langle{T^k {~\mathrel{\Big|}~}k=0,1,\cdots}\right\rangle\) is an infinite dimensional algebra. In this case, \(T^k T^\ell = {k+\ell \choose \ell}T^{k+\ell}\). For \(k={\mathbb{C}}\), \(\mathrm{Dist}({\mathbb{G}}_a) = \left\langle{T^1}\right\rangle\) has one generator.
In the case \(k = \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\), we have \(\mathrm{Dist}(\qty{{\mathbb{G}}_a}_1) = \left\langle{T^k {~\mathrel{\Big|}~}0\leq k \leq p-1}\right\rangle\).
Note that taking duals yields a truncated polynomial algebra: \(k[\qty{{\mathbb{G}}_a}_1] = k[x] / \left\langle{x^p}\right\rangle\).
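The divided-power relations can be verified with integer arithmetic. A sketch of my own (with \(p = 5\) as a sample prime) showing why \(T^{[1]}\) fails to generate \(\mathrm{Dist}({\mathbb{G}}_a)\) in characteristic \(p\):

```python
from math import comb, factorial

# Divided powers T^[k] = T^k / k! multiply by  T^[k] T^[l] = C(k+l, l) T^[k+l].
# Over F_p the algebra Dist((G_a)_1) has basis T^[0], ..., T^[p-1], and T^[1]
# no longer generates:  (T^[1])^p = p! * T^[p] = 0  in characteristic p.
p = 5

# the structure constant C(k+l, l) agrees with (k+l)! / (k! l!)
for k in range(6):
    for l in range(6):
        assert comb(k + l, l) == factorial(k + l) // (factorial(k) * factorial(l))

# (T^[1])^p = p! T^[p]: iterating T^[1] * T^[m] = (m+1) T^[m+1] accumulates the
# coefficient p!, which vanishes mod p -- so T^[p] is not generated by T^[1]
coeff = 1
for m in range(p):
    coeff *= comb(m + 1, 1)      # multiply by (m+1) at each step
assert coeff == factorial(p)
assert coeff % p == 0
```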
Recall that \(H^0(\lambda) \mathrel{\vcenter{:}}=\mathop{\mathrm{ind}}_B^G \lambda\). Proved in last (missed) class:
Let \(H^0(\lambda) \neq 0\). Then
\(\dim H^0(\lambda)_\lambda = 1\), where \(H^0(\lambda)_\lambda = H^0(\lambda)^{U^+}\).
Each weight \(\mu\) of \(H^0(\lambda)\) satisfies \(w_0 \lambda \leq \mu \leq \lambda\), where \(w_0\) is the longest Weyl group element.
If \(H^0(\lambda) \neq 0\), then \(L(\lambda) \mathrel{\vcenter{:}}=\mathop{\mathrm{Soc}}_G H^0(\lambda)\) is simple.
If \(\mu\) is a weight of \(L(\lambda)\), then \(w_0 \lambda \leq \mu \leq \lambda\), \(\dim L(\lambda)_\lambda = 1\), and \(L(\lambda)_\lambda = L(\lambda)^{U^+}\).
Any simple \(G{\hbox{-}}\)module is isomorphic to \(L(\lambda)\) where \(H^0(\lambda) \neq 0\).
Goal: We now want to classify simple \(G{\hbox{-}}\)modules. So we need an iff criterion for when \(H^0(\lambda) \neq 0\).
We look at the set of dominant weights \begin{align*} X(T)_+ &= \left\{{\lambda \in X(T) {~\mathrel{\Big|}~}{\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}\geq 0 \,\,\forall \alpha\in\Delta}\right\} \\ &= \left\{{\lambda \in X(T) {~\mathrel{\Big|}~}\lambda = \sum_{i=1}^\ell n_i \omega_i,\, n_i \geq 0}\right\} .\end{align*}
TFAE:
\(1\implies 2\): Suppose (1), then consider a simple reflection \(s_\alpha\) for some \(\alpha \in \Delta\). We know \(H^0(\lambda)_\lambda \neq 0\), thus \(H^0(\lambda)_{s_\alpha \lambda} \neq 0\). Therefore \begin{align*} s_\alpha \lambda = \lambda - {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}\alpha \leq \lambda \\ \implies 0 \leq {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}\alpha \\ \implies {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \geq 0 \qquad \forall \alpha\in \Delta .\end{align*}
\(2\implies 1\): For a detailed proof, see Jantzen 2.6 in Part II.
Let \(\lambda \in X(T)_+\), then (by the intro Lie algebras course) there exists an \(L(\lambda)\): a simple finite dimensional \(U({\mathfrak{g}}){\hbox{-}}\)module over \({\mathbb{C}}\).
\(L(\lambda)\) has an integral basis which is compatible with \(U({\mathfrak{g}})_{\mathbb{Z}}\) (Kostant’s \({\mathbb{Z}}{\hbox{-}}\)form).
Thus we can base change to get \(\tilde L(\lambda) \mathrel{\vcenter{:}}= L(\lambda)_{\mathbb{Z}} \otimes_{\mathbb{Z}}k\), which is a \(\mathrm{Dist}(G){\hbox{-}}\)module. Note that \(\tilde L(\lambda)\) still has highest weight \(\lambda\), so consider \(\hom_B(\tilde L(\lambda), \lambda) \neq 0\).
Apply Frobenius reciprocity: \(\hom_B(\tilde L(\lambda), \lambda) = \hom_G(\tilde L(\lambda), \mathop{\mathrm{ind}}_B^G \lambda) = \hom_G(\tilde L(\lambda), H^0(\lambda))\). But then \(H^0(\lambda) \neq 0\) (since otherwise this would imply the original hom was zero).
Let \(G\) be a reductive connected algebraic group over \(k\). Then there exists a 1-to-1 correspondence between dominant weights and irreducible \(G{\hbox{-}}\)representations: \begin{align*} \left\{{\substack{\text{Dominant weights: } X(T)_+}}\right\} \iff \left\{{\substack{\text{Irreducible representations: }\left\{{L(\lambda) {~\mathrel{\Big|}~}\lambda \in X(T)_+}\right\} }}\right\} .\end{align*}
Let \(G\) be reductive, so (importantly) it has a maximal torus \(T\). Let \(M\in G{\hbox{-}}\mathrm{mod}\), so (importantly) \(M\in T{\hbox{-}}\mathrm{mod}\).
Then there is a weight space decomposition \(M = \bigoplus_{\lambda \in X(T)} M_\lambda\). We then write the character of \(M\) as \begin{align*} \operatorname{ch}M \mathrel{\vcenter{:}}=\sum_{\lambda \in X(T)} \qty{\dim M_\lambda} e^{\lambda} \in {\mathbb{Z}}[X(T)] .\end{align*}
Next time: more characters, and Weyl’s dimension formula.
Todo
Let \(F: {\mathsf{Alg}_{/k} }\to {\mathsf{Set}}\) be a functor; then \(F\) is representable iff \(F(R)\) corresponds to “solutions to equations in \(R\).”
Let \(F({-}) = {\operatorname{SL}}(2, {-})\), then the corresponding equation is \(\operatorname{det}(x_{ij}) = 1\).
If \(F\) is representable, there is a correspondence \(F(R) \cong \hom_{{\mathsf{Alg}_{/k} }}(A, R)\). In the above example, \begin{align*}A = k[x_{11}, x_{12}, x_{21}, x_{22}] / \left\langle{x_{11} x_{22} - x_{12}x_{21} - 1}\right\rangle,\end{align*} which is exactly the coordinate algebra.
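To see the “solutions to equations” point of view concretely: for the finite rings \(R = {\mathbb{Z}}/q\), the \(R{\hbox{-}}\)points of \(F = {\operatorname{SL}}(2, {-})\) are exactly the tuples solving the determinant equation. A small enumeration sketch (the function name `SL2_points` is ours):

```python
from itertools import product

def SL2_points(q):
    """R-points of the functor F = SL(2, -) at R = Z/q:
    the 4-tuples (x11, x12, x21, x22) solving x11*x22 - x12*x21 = 1 in R."""
    return [m for m in product(range(q), repeat=4)
            if (m[0] * m[3] - m[1] * m[2]) % q == 1]

# For a field F_q, |SL(2, F_q)| = q(q^2 - 1):
print(len(SL2_points(2)))  # 6
print(len(SL2_points(3)))  # 24
```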
An affine group scheme is a representable functor \(F:{\mathsf{Alg}_{/k} }\to{\mathsf{Group}}\).
Suppose \(G\) is an affine group scheme, and let \(A = k[G]\) be the representing object. Then there is a correspondence \begin{align*} G{\hbox{-}}\text{modules} \iff k[G] {}^{ \vee }{\hbox{-}}\text{modules} .\end{align*}
For \(G\) reductive, the RHS is equivalent to \(\operatorname{Dist}(G){\hbox{-}}\)modules.
\(G\) is a finite group scheme iff \(k[G]\) is finite dimensional.
If \(G\) is finite, then \(A {}^{ \vee }\cong k[G] {}^{ \vee }\) is a cocommutative Hopf algebra. Thus representations for finite group schemes are equivalent to representations for finite-dimensional cocommutative Hopf algebras.
On the group scheme side: reduction arguments, spectral sequences, conceptual arguments. On the algebra side: bases, an underlying vector space, concrete computations. Can take \(\operatorname{Spec}\qty{k[G]} {}^{ \vee }\) to recover a group scheme.
For \(A \in {\mathsf{Alg}_{/k} }\), we have a multiplication and a unit, which can be defined in terms of diagrams. To categorically reverse arrows, we can ask for a comultiplication and a counit. \begin{align*} \Delta: A &\to A^{\otimes 2} \\ \epsilon: A &\to k .\end{align*}
We’ll want another map, an antipode \begin{align*} s: A\to A .\end{align*}
The comultiplication should satisfy
The counit should satisfy
And the antipode should satisfy
Let \(A\) be a Hopf algebra.
For \(A{\hbox{-}}\)modules \(M, N\), we can form the \(A{\hbox{-}}\)module \(M\otimes_k N\) with \begin{align*} \Delta(a) &= \sum a_1 \otimes a_2 \\ a(m\otimes n) &= \sum a_1 m \otimes a_2 n .\end{align*}
If \(M\) is finite-dimensional over \(A\), then \(M {}^{ \vee }= \hom_k(M, k) \ni f\) is an \(A{\hbox{-}}\)module, and we can define \((af)(x) \mathrel{\vcenter{:}}= f(s(a)x)\) for \(a\in A, x\in M\).
\(A = kG\) the group algebra on a group is a Hopf algebra: \begin{align*} \Delta: A &\to A^{\otimes 2} \\ g &\mapsto g\otimes g .\end{align*}
The module action is diagonal, namely \(g(m\otimes n) = gm \otimes gn\). The antipode is given by \(s(g) = g^{-1}\), and the counit is \({\varepsilon}(g) = 1\) for all \(g\in G\).
Let \(A = U({\mathfrak{g}})\), the universal enveloping algebra for \({\mathfrak{g}}\) a Lie algebra. Recall that \({\mathfrak{g}}{\hbox{-}}\)modules are equivalent to \(U({\mathfrak{g}}){\hbox{-}}\)modules (unital modules over a big associative algebra). Then \(A\) is a Hopf algebra, with \(\Delta(\ell) = \ell\otimes 1 + 1\otimes\ell\) for \(\ell \in {\mathfrak{g}}\). The counit is \({\varepsilon}(\ell) = 0\), and the antipode is \(s(\ell) = -\ell\).
Take the additive group \({\mathbb{G}}_a\), then \(A = k[{\mathbb{G}}_a] \cong k[x]\) is a commutative Hopf algebra with \(\Delta(x) = x\otimes 1 + 1\otimes x\), \({\varepsilon}(x) = 0, s(x) = -x\).
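Since \(\Delta\) is an algebra map, \(\Delta(x^n) = (x\otimes 1 + 1\otimes x)^n = \sum_k \binom{n}{k}\, x^k \otimes x^{n-k}\). A small sketch of these coefficients (the helper name `Delta_power` is ours); note that over \({\mathbb{F}}_p\) we have \(\binom{p}{k} \equiv 0\) for \(0 < k < p\), so \(x^p\) is again primitive, which is the shadow of the Frobenius below:

```python
from math import comb

def Delta_power(n):
    """Coefficients of Delta(x^n) in k[G_a] = k[x]: since Delta is an
    algebra map, Delta(x^n) = (x⊗1 + 1⊗x)^n = sum_k C(n,k) x^k ⊗ x^{n-k}.
    Returns {(k, n-k): C(n,k)}."""
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

print(Delta_power(3))
# In characteristic p, C(p,k) = 0 for 0 < k < p, so x^p is primitive:
print([comb(5, k) % 5 for k in range(6)])  # [1, 0, 0, 0, 0, 1]
```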
For \({\mathbb{G}}_m\), we have \(A = k[{\mathbb{G}}_m] \cong k[x, x^{-1}]\), with \(\Delta(x) = x \otimes x\), \({\varepsilon}(x) = 1\), \(s(x) = x^{-1}\).
Let \(G\) be an algebraic group (scheme) over \(k\), where \(\operatorname{ch}(k) = p\). Let \(F:G\to G\) be the Frobenius, where e.g. \begin{align*} F:\operatorname{GL}(n, {-}) &\to \operatorname{GL}(n, {-})\\ (x_{ij}) & \mapsto (x_{ij}^p) .\end{align*}
Then \(F\) is a map of group schemes.
\(G_r \mathrel{\vcenter{:}}=\ker F^r\), where \(F^r \mathrel{\vcenter{:}}= F\circ F \circ \cdots \circ F\) is the \(r{\hbox{-}}\)fold composition of the Frobenius.
This yields a nesting \(G_1 {~\trianglelefteq~}G_2 {~\trianglelefteq~}G_3 \cdots \leq G\).
Recall that \begin{align*} \operatorname{Dist}(G) = \left\langle{ {x_\alpha^n \over n!}, {y_\beta^m \over m!}, {H_i \choose k} }\right\rangle .\end{align*}
We get a chain of finite dimensional algebras \begin{align*} \operatorname{Dist}(G_1) \leq \operatorname{Dist}(G_2) \leq \cdots \leq \operatorname{Dist}(G) \end{align*} where \begin{align*} \operatorname{Dist}(G_1) = \left\langle{ {x_\alpha^n \over n!}, {y_\beta^m \over m!}, {H_i \choose k} {~\mathrel{\Big|}~}0\leq n,m,k \leq p-1 }\right\rangle ,\end{align*}
where in general \(\operatorname{Dist}(G_\ell)\) goes up to \(p^{\ell} - 1\). Recall that \(G_r\) representations were equivalent to \(\operatorname{Dist}(G_r)\) representations.
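The divided powers \(\gamma_n \mathrel{\vcenter{:}}= x^n/n!\) multiply by \(\gamma_a \gamma_b = \binom{a+b}{a}\gamma_{a+b}\) (clear from the definition), so reducing the structure constant mod \(p\) shows when a product of generators of \(\operatorname{Dist}(G_1)\) vanishes. A small sketch (the function name is ours):

```python
from math import comb

def gamma_product(a, b, p):
    """Structure constant c in gamma_a * gamma_b = c * gamma_{a+b}, mod p,
    where gamma_n = x^n / n! is a divided power."""
    return comb(a + b, a) % p

p = 3
print(gamma_product(1, 1, p))  # C(2,1) = 2 mod 3
print(gamma_product(1, 2, p))  # C(3,1) = 3, which is 0 mod 3
```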
Some basic questions (Curtis, Steinberg, 1960s):
What are the simple modules for Frobenius kernels? I.e., what are the irreducible representations for \(G_r\)?
How are the representations for \(G_r\) related to those for \(G\)?
It turns out the representations for \(G_r\) will lift to representations of \(G\). Use “twisted tensor product” (Steinberg).
It turns out that \(G_1\) is special. \begin{align*} \operatorname{Dist}(G_1) \cong u({\mathfrak{g}}) \mathrel{\vcenter{:}}= U({\mathfrak{g}}) / \left\langle{x^p - x^{[p]}}\right\rangle ,\end{align*} where \({\mathfrak{g}}= \mathrm{Lie}(G)\) is a restricted Lie algebra (N. Jacobson). Note that for \(D\in {\mathfrak{g}}\) a derivation, \(D^{[p]} \mathrel{\vcenter{:}}= D\circ \cdots \circ D\) is the \(p{\hbox{-}}\)fold composition.
\(G_1{\hbox{-}}\)modules are equivalent to \({\mathfrak{g}}{\hbox{-}}\)modules \(\rho: {\mathfrak{g}}\to {\mathfrak{gl}}(V)\) which are restricted in the sense that \begin{align*} \rho(x^{[p]}) = \rho(x)^p \qquad \forall x \in {\mathfrak{g}} .\end{align*}
Let \(\operatorname{ch}(k) = p > 0\) and let \(G\) be an algebraic group scheme. We have a Frobenius map \(F:G\to G\) given by \(F((x_{ij})) = (x_{ij}^p)\), which we can iterate to get \(F^r\) for \(r\in {\mathbb{N}}\). Setting \(G_r = \ker F^r\), the \(r\)th Frobenius kernel, we get a normal series of group schemes \begin{align*} G_1 {~\trianglelefteq~}G_2 {~\trianglelefteq~}\cdots {~\trianglelefteq~}G .\end{align*}
There is an associated chain of finite dimensional Hopf algebras \begin{align*} \operatorname{Dist}(G_1) \leq \operatorname{Dist}(G_2) \leq \cdots \leq \operatorname{Dist}(G) .\end{align*}
Then \(k[G_r] {}^{ \vee }\cong \operatorname{Dist}(G_r)\), and we get an equivalence between representations for \(G_r\) and representations for \(\operatorname{Dist}(G_r)\).
A special case will be when \(G\) is a reductive algebraic group scheme. We’ll start by finding a basis for \(\operatorname{Dist}(G_r)\).
Recall the PBW theorem: we have a basis for \({\mathfrak{g}}\) given by \begin{align*} \left\{{x_\alpha {~\mathrel{\Big|}~}\alpha\in \Phi^+ }\right\} &\text{ positive root vectors} \\ \left\{{h_i {~\mathrel{\Big|}~}i=1,\cdots, n}\right\} &\text{ a basis for } {\mathfrak{t}} \\ \left\{{x_\alpha {~\mathrel{\Big|}~}\alpha\in \Phi^- }\right\} &\text{ negative root vectors} .\end{align*}
We can then obtain a basis for \(U({\mathfrak{g}})\): \begin{align*} U({\mathfrak{g}}) = \left\langle{ \prod_{\alpha\in\Phi^+} x_\alpha^{n(\alpha)} \prod_{i=1}^n h_i^{k_i} \prod_{\alpha\in\Phi^+} x_{-\alpha}^{m(\alpha)} }\right\rangle .\end{align*}
We can similarly obtain a basis for the distribution algebra \begin{align*} \operatorname{Dist}(G) = \left\langle{ \prod_{\alpha\in\Phi^+} { x_{\alpha}^{n(\alpha)} \over n(\alpha)!} \prod_{i=1}^n {h_i \choose k_i} \prod_{\alpha\in\Phi^+} { x_{-\alpha}^{m(\alpha)} \over m(\alpha)!} }\right\rangle ,\end{align*} where the \(n(\alpha), k_i, m(\alpha)\) range over all nonnegative integers, and we similarly get \(\operatorname{Dist}(G_r)\) by restricting to \(0\leq n(\alpha), k_i, m(\alpha) \leq p^r - 1\). This yields a triangular decomposition \begin{align*} \operatorname{Dist}(G_r) = \operatorname{Dist}(U_r^+) \operatorname{Dist}(T_r) \operatorname{Dist}(U_r^-) ,\end{align*} where we’ll denote the product of the first two factors by \(\operatorname{Dist}(B_r^+)\) and of the last two by \(\operatorname{Dist}(B_r)\).
Goal: Classify simple \(G_r{\hbox{-}}\)modules. We know the classification of simple \(G{\hbox{-}}\)modules, so we’ll follow similar reasoning. We started by realizing \(L(\lambda) \hookrightarrow\mathop{\mathrm{ind}}_B^G \lambda\) as a submodule (the socle) of some “universal” module.
Let \(M\) be a \(B_r{\hbox{-}}\)module; we can then define \begin{align*} \mathop{\mathrm{ind}}_{B_r}^{G_r}M = \qty{k[G_r] \otimes M }^{B_r} ,\end{align*} where we’re now taking the \(B_r{\hbox{-}}\)invariants. We get a decomposition as vector spaces, \begin{align*} k[G_r] = k[U_r^+] \otimes_k k[B_r] \end{align*} and thus an isomorphism \begin{align*} \mathop{\mathrm{ind}}_{B_r}^{G_r}M = \qty{k[G_r] \otimes M }^{B_r} \cong k[U_r^+] \otimes\qty{ k[B_r] \otimes M}^{B_r} \cong k[U_r^+] \otimes M \end{align*} since \(k[B_r]\otimes M \cong \mathop{\mathrm{ind}}_{B_r}^{B_r} M \cong M\).
We then define \begin{align*} \mathop{\mathrm{coInd}}_{B_r}^{G_r} M = \operatorname{Dist}(G_r) \otimes_{\operatorname{Dist}(B_r)} M ,\end{align*} which is an analog of \(U({\mathfrak{g}})\otimes_{U({\mathfrak{b}})} M\).
We have \(\operatorname{Dist}(U_r^+) \otimes\operatorname{Dist}(B_r) \cong \operatorname{Dist}(G_r)\), so
\begin{align*} \mathop{\mathrm{coInd}}_{B_r}^{G_r} M = \operatorname{Dist}(G_r) \otimes_{\operatorname{Dist}(B_r)} M \cong \operatorname{Dist}(U_r^+) \otimes_k \operatorname{Dist}(B_r) \otimes_{\operatorname{Dist}(B_r)} M \cong \operatorname{Dist}(U_r^+) \otimes_k M ,\end{align*} which we’ll define as the coinduced module.
We can compute the dimension: \begin{align*} \dim \mathop{\mathrm{ind}}_{B_r}^{G_r} M = \dim \mathop{\mathrm{coInd}}_{B_r}^{G_r} M = \qty{\dim M} p^{r{\left\lvert {\Phi^+} \right\rvert}} .\end{align*}
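A quick computational sketch of this dimension formula (the function name and example groups are ours):

```python
def dim_induced(dim_M, num_pos_roots, p, r):
    """dim ind_{B_r}^{G_r} M = dim coInd_{B_r}^{G_r} M = (dim M) * p^{r*|Phi^+|}."""
    return dim_M * p ** (r * num_pos_roots)

# |Phi^+| = n(n-1)/2 in type A_{n-1}, e.g. SL(2) has 1 positive root, SL(3) has 3:
print(dim_induced(1, 1, 3, 1))  # SL(2), p=3, r=1: 3
print(dim_induced(1, 3, 2, 2))  # SL(3), p=2, r=2: 64
```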
Open: don’t know how to compute composition factors.
\begin{align*}\mathop{\mathrm{coInd}}_{B_r}^{G_r} M \cong \mathop{\mathrm{ind}}_{B_r}^{G_r} \qty{M\otimes 2(p^r - 1)\rho},\end{align*} where the last factor is a one-dimensional \(B_r{\hbox{-}}\)module and \(\rho\) is the Weyl weight.
\begin{align*}\mathop{\mathrm{coInd}}_{B_r^+}^{G_r} M \cong \mathop{\mathrm{ind}}_{B_r^+}^{G_r} \qty{M \otimes-2\qty{p^r-1}\rho}\end{align*}
where \begin{align*} \rho = {1\over 2}\sum_{\alpha\in\Phi^+} \alpha = \sum_{i=1}^n w_i .\end{align*}
Since the tensor product satisfies a universal property, we have a map
We need to find a \(B_r\) morphism \(f:M\to N\).
We need to show that \(f\) generates \(N\) as a \(G_r{\hbox{-}}\)module.
Note that if (1) and (2) hold, then \(\psi\) is surjective, but since \(\dim \mathop{\mathrm{coInd}}_{B_r}^{G_r} M= \dim N\) this forces \(\psi\) to be an isomorphism.
We can write \begin{align*} \mathop{\mathrm{ind}}_{B_r}^{G_r} M\otimes 2(p^r-1) \rho &= \qty{ k[G_r] \otimes M \otimes 2(p^r-1) \rho }^{B_r} \\ &\cong \hom_{B_r}\qty{\operatorname{Dist}(G_r), M\otimes 2(p^r-1)\rho } .\end{align*}
Let \(g_m(x) \mathrel{\vcenter{:}}= m\otimes 2(p^r-1)\rho\) for \(x =\prod_{\alpha\in\Phi^+} {x_\alpha^{p^r-1} \over \qty{p^r-1}! }\), and \(g_m(x) = 0\) for all other basis elements \(x\).
Now define \(f(m) = g_m\), and check that \(\operatorname{im}f\) generates \(N\).
Recall that \(W(\lambda) \mathrel{\vcenter{:}}= U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}}^+)} \lambda\) were the Verma modules for Lie algebras.
Let \(\lambda \in X(T)\); we have \(T_r \leq T\), and restriction yields a map \(X(T) \to X(T_r)\). Given a weight \(\lambda\), we can write it \(p{\hbox{-}}\)adically as \begin{align*} \lambda = \lambda_0 + \lambda_1 p + \lambda_2 p^2 + \cdots + \lambda_{r-1} p^{r-1} + \cdots .\end{align*}
This yields an exact sequence \begin{align*} 0 \to p^r X(T) \to X(T) \to X(T_r) \to 0 ,\end{align*}
and thus \(X(T) / p^r X(T) \cong X(T_r)\).
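In coordinates, passing to \(X(T)/p^r X(T)\) amounts to taking base-\(p\) digits mod \(p^r\); a small sketch (the helper name is ours):

```python
def p_digits(n, p, r):
    """Base-p digits (lambda_0, ..., lambda_{r-1}) of n mod p^r, so that
    n is congruent to sum_i lambda_i * p^i (mod p^r)."""
    return [(n // p ** i) % p for i in range(r)]

print(p_digits(7, 3, 2))       # 7 = 1 + 2*3  ->  [1, 2]
print(p_digits(7 + 9, 3, 2))   # 16 = 7 mod 3^2: same class, same digits
```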
Let \(\lambda \in X(T_r)\); then \(\lambda\) becomes a \(B_r{\hbox{-}}\)module by letting \(U_r\) act trivially, since we have \begin{align*} 1 \to U_r \to B_r \twoheadrightarrow T_r \to 1 .\end{align*}
Set \(Z_r(\lambda) = \mathop{\mathrm{coInd}}_{B_r}^{G_r} \lambda\), and set \(Z_r'(\lambda) = \mathop{\mathrm{ind}}_{B_r}^{G_r} \lambda\). Then \(\dim Z_r(\lambda) = \dim Z_r'(\lambda) = p^{r{\left\lvert {\Phi^+} \right\rvert}}\). We’ll then think of
Note that the dimensions aren’t known, nor are the projective covers or injective hulls.
We have a form of translation invariance, namely \begin{align*} Z_r(\lambda + p^r\nu) = Z_r(\lambda) \qquad &\forall \nu \in X(T) \\ Z_r'(\lambda + p^r\nu) = Z_r'(\lambda) \qquad &\forall \nu \in X(T) .\end{align*}
Let \(\lambda \in X(T)\).
Let \(G\) be a reductive algebraic group scheme, \(k=\mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\) with \(p>0\), equipped with the Frobenius map \(F:G\to G\) with \(F^r\) its \(r{\hbox{-}}\)fold composition. We defined Frobenius kernels \(G_r \mathrel{\vcenter{:}}=\ker F^r\), which are in correspondence with the cocommutative Hopf algebras \(\operatorname{Dist}(G_r)\).
Goal: We want to classify simple \(G_r{\hbox{-}}\)modules, and to do this we’ll use socles.
We have a maximal torus \(T\subseteq G\), and thus \(T_r \subseteq G_r\) after taking Frobenius kernels. This yields a SES \begin{align*} 0 \to p^r X(T) \to X(T) \to X(T)/p^r X(T) = X(T_r) \to 0 .\end{align*}
How to think about this: take \(\lambda \in X(T)\); then \(\lambda\) and \(\lambda + p^r \sigma\) have the same image in \(X(T_r)\) for any \(\sigma \in X(T)\). We’ll define the “baby Verma modules” \begin{align*} Z_r(\lambda) \mathrel{\vcenter{:}}=\mathop{\mathrm{coInd}}_{B_r^+}^{G_r} \lambda \\ Z_r'(\lambda) \mathrel{\vcenter{:}}=\mathop{\mathrm{ind}}_{B_r^+}^{G_r} \lambda ,\end{align*}
and we have \(\dim Z_r(\lambda) = \dim Z_r'(\lambda) = p^{r {\left\lvert {\Phi^+} \right\rvert}}\).
Let \(\lambda\in X(T)\) be a weight.
\(Z_r(\lambda)\downarrow_{B_r}\) is the projective cover of \(\lambda\) and the injective hull of \(\lambda - 2 (p^r-1) \rho\).
\(Z_r'(\lambda)\downarrow_{B_r^+}\) is the injective hull of \(\lambda\) and the projective cover of \(\lambda - 2 (p^r-1) \rho\).
Note the latter are \(T_r{\hbox{-}}\)modules, so we let \(U^+\) act trivially.
What we need to do:
For (1), we can write \begin{align*} \operatorname{Dist}(G_r) = \operatorname{Dist}(U_r^+) \operatorname{Dist}(B_r) = \operatorname{Dist}(B_r^+) \operatorname{Dist}(U_r) ,\end{align*} and so \begin{align*} Z_r(\lambda) &= \mathop{\mathrm{coInd}}_{B_r}^{G_r} \lambda \\ &= \qty{\operatorname{Dist}(G_r) \otimes_{\operatorname{Dist}(B_r)} \lambda} \downarrow_{B_r^+} \\ &= \operatorname{Dist}(U_r^+)\otimes\lambda \\ &= \operatorname{Dist}(B_r^+) \otimes_{\operatorname{Dist}(T_r)} \lambda \\ &= \mathop{\mathrm{coInd}}_{T_r}^{B_r^+} \lambda .\end{align*}
Why is this projective? Look at cohomology; it suffices to show that higher Exts vanish. So consider \begin{align*} \operatorname{Ext} _{B_r^+}^n(\mathop{\mathrm{coInd}}_{T_r}^{B_r^+} \lambda, M) &= \operatorname{Ext} _{T_r}^n (\lambda, M) \qquad\text{by Frobenius reciprocity} \\ &= 0 \qquad \text{for } n \geq 1 ,\end{align*} since representations for \(T_r\) are completely reducible, and we’ve used the fact that \(\mathop{\mathrm{coInd}}_{T_r}^{B_r^+}({-})\) is exact.
Note: general algebra fact that higher exts vanish for projective modules.
For (2), we can write \begin{align*} \hom_{B_r^+}(Z_r(\lambda), \mu) &= \hom_{B_r^+}(\mathop{\mathrm{coInd}}_{T_r}^{B_r^+} \lambda, \mu) \\ &= \hom_{T_r} (\lambda, \mu) \qquad\text{by Frobenius reciprocity} \\ &= \begin{cases} k & \lambda = \mu \\ 0 & \text{else}. \end{cases} \end{align*}
Thus \(Z_r(\lambda) / \mathop{\mathrm{Rad}}Z_r(\lambda) \downarrow_{B_r^+} = \lambda\).
If we now write \(A= \operatorname{Dist}(B_r^+)\) and \({\mathfrak{g}}= {\mathfrak{n}}^+ \oplus {\mathfrak{t}} \oplus {\mathfrak{n}}\) with \({\mathfrak{b}}^+ \mathrel{\vcenter{:}}={\mathfrak{n}}^+ \oplus {\mathfrak{t}}\), \begin{align*} \sum_S \qty{\dim P(S)} \qty{\dim(S)} &= \sum_{\lambda \in X(T_r)} \qty{\dim Z_r(\lambda)} \qty{\dim \lambda} \\ &= \sum_{\lambda \in X(T_r)} p^{r{\left\lvert {\Phi^+} \right\rvert}} \cdot 1 \\ &= {\left\lvert {X(T_r)} \right\rvert} p^{r{\left\lvert {\Phi^+} \right\rvert}} \\ &= p^{rn} p^{r{\left\lvert {\Phi^+} \right\rvert}} \qquad n = \dim {\mathfrak{t}}\\ &= p^{r \dim {\mathfrak{b}}^+} \\ &= \dim A \end{align*}
We know that after taking fixed points, \(Z_r(\lambda)^{U_r}\) and \(Z_r'(\lambda)^{U_r^+}\) are one-dimensional, and thus \begin{align*} Z_r(\lambda) / \mathop{\mathrm{Rad}}Z_r(\lambda) \cong L_r(\lambda) \qquad \mathop{\mathrm{Soc}}_{G_r} Z_r'(\lambda) = L_r(\lambda) \end{align*} following the same argument as for \(H^0(\lambda)\).
For any \(\lambda \in X(T_r)\) we have \(0\neq L_r(\lambda) = \mathop{\mathrm{Soc}}_{G_r} Z_r'(\lambda)\). By the one-dimensionality above, we know \begin{align*} L_r(\mu) = L_r(\lambda) \iff \lambda = \mu \in X(T_r) .\end{align*}
Letting \(N\) be a simple \(G_r{\hbox{-}}\)module, we can consider it as a \(B_r{\hbox{-}}\)module, and the simple \(B_r{\hbox{-}}\)modules are one dimensional and obtained from simple \(T_r{\hbox{-}}\)modules. We then know that for some \(\lambda \in X(T_r)\), \begin{align*} 0 \neq \hom_{B_r}(N, \lambda) = \hom_{G_r}(N, \mathop{\mathrm{ind}}_{B_r}^{G_r} \lambda) ,\end{align*} which implies that \(N\hookrightarrow\mathop{\mathrm{ind}}_{B_r}^{G_r} \lambda = Z_r'(\lambda)\) as a submodule, and thus \(N = L_r(\lambda)\).
Let \(\Lambda\) be a set of representatives of \(X(T) / p^r X(T) \cong X(T_r)\). Then there exists a one-to-one correspondence \begin{align*} \Lambda \iff \left\{{L_r(\lambda) {~\mathrel{\Big|}~}\lambda \in \Lambda}\right\} ,\end{align*} where the RHS are simple \(G_r{\hbox{-}}\)modules.
How to think about this: restricted regions. Choose dominant weights as representatives \begin{align*} X_r(T) &= \left\{{\lambda \in X(T)_+ {~\mathrel{\Big|}~}0\leq {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} < p^r\, \forall \alpha\in \Delta }\right\} \\ &= \left\{{\lambda \in X(T)_+ {~\mathrel{\Big|}~}\lambda = \sum_{i=1}^\ell n_i w_i,\, 0\leq n_j \leq p^r-1\, \forall j}\right\} .\end{align*}
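In fundamental-weight coordinates, \(X_r(T)\) is just a box of side \(p^r\); a minimal enumeration sketch (names ours):

```python
from itertools import product

def restricted_weights(rank, p, r):
    """Coordinates (n_1, ..., n_l) of X_r(T) in the fundamental weights:
    0 <= n_i <= p^r - 1 in each coordinate."""
    return list(product(range(p ** r), repeat=rank))

X1 = restricted_weights(2, 3, 1)  # rank 2 (e.g. SL(3)), p = 3, r = 1
print(len(X1))  # p^(r * rank) = 9
```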
Pictures:
Some facts:
If \(\lambda \in X(T)_+\), then \(L(\lambda)\) is a simple \(G{\hbox{-}}\)module.
Question 1: What happens when we restrict \(L(\lambda)\downarrow_{G_r}\)?
Answer: This remains irreducible over \(G_r\) iff \(\lambda \in X_r(T)\); i.e. \(L(\lambda)\downarrow_{G_r} \cong L_r(\lambda)\) when \(\lambda \in X_r(T)\).
Question 2: Given \(L(\lambda)\) for \(\lambda \in X(T)_+\), can we express \(L(\lambda)\) in terms of simple \(G_r{\hbox{-}}\)modules?
Answer: Yes, can be formulated in terms of Steinberg’s twisted tensor product.
From last time: Steinberg’s tensor product.
Let \(G\) be a reductive algebraic group scheme over \(k\) with \(\operatorname{ch}(k) > 0\). We have a Frobenius \(F:G\to G\), we iterate to obtain \(F^r\) and examine the Frobenius kernels \(G_r\mathrel{\vcenter{:}}=\ker F^r\).
If we have a representation \(\rho: G\to \operatorname{GL}(M)\), we can “twist” by \(F^r\) to obtain \(\rho^{(r)}: G \to \operatorname{GL}(M^{(r)})\). We have
Here \(M^{(r)}\) has the same underlying vector space as \(M\), but a new module structure coming from \(\rho^{(r)}\). Note that \(G_r\) acts trivially on \(M^{(r)}\).
Note that \(L(\lambda)\downarrow_{G_r}\) is simple, equal to \(L_r(\lambda)\), for \(\lambda \in X_r(T)\).
1960’s, Curtis and Steinberg.
Let \(\lambda \in X_r(T)\) and \(\mu \in X(T)_+\). Then \begin{align*} L(\lambda + p^r \mu) \cong L(\lambda) \otimes L(\mu)^{(r)} .\end{align*}
Recall the socle formula: letting \(M\) be a \(G{\hbox{-}}\)module, we have an isomorphism of \(G{\hbox{-}}\)modules: \begin{align*} \mathop{\mathrm{Soc}}_{G_r} M \cong \bigoplus_{\lambda \in X_r(T)} L(\lambda) \otimes\hom_{G_r}(L(\lambda), M) .\end{align*}
Let \(M = L(\lambda + p^r \mu)\). Then from the socle formula, only one summand is nonzero, and thus \(\hom_{G_r}(L(\lambda), M)\) must be simple. Then there exists a \(\tilde \lambda \in X_r(T)\) and a \(\tilde \mu \in X(T)_+\) such that \begin{align*} M = L(\tilde \lambda) \otimes L(\tilde\mu)^{(r)} .\end{align*}
We now compare highest weights: \begin{align*} \lambda + p^r \mu = \tilde \lambda + p^r \tilde \mu \implies \lambda = \tilde \lambda {\quad \operatorname{and} \quad} \mu = \tilde \mu .\end{align*}
Let \(\lambda \in X(T)_+\), with a \(p{\hbox{-}}\)adic expansion \begin{align*} \lambda = \lambda_0 + \lambda_1 p + \cdots + \lambda_m p^m .\end{align*} where \(\lambda_j \in X_1(T)\) for all \(j\). Then \begin{align*} L(\lambda) = L(\lambda_0) \otimes\bigotimes_{j=1}^m L(\lambda_j)^{(j)} .\end{align*}
In order to know \(\dim L(\lambda)\) for \(\lambda \in X(T)_+\), it is enough to know \(\dim L_1(\mu)\) for \(\mu \in X_1(T)\). Schematic:
Recall that simple \(G_1{\hbox{-}}\)modules correspond to simple \(\operatorname{Dist}(G_1){\hbox{-}}\)modules, and \(\operatorname{Dist}(G_1) \cong u({\mathfrak{g}})\).
1980: Lusztig conjectured: \(\operatorname{ch}L(\lambda)\) for \(\lambda \in X_1(T)\) is given by KL polynomials, for \(p \geq 2(h-1)\).
Kato extended this to \(p> h\), where \(h\) is the Coxeter number satisfying \(h = {\left\langle {\rho},~{\alpha_0 {}^{ \vee }} \right\rangle} + 1\) where \(\alpha_0\) is the highest short root.
1990’s: A relation to representations of quantum groups \(U_q\) and affine lie algebras \(\widehat{{\mathfrak{g}}}\):
The first map is due to Andersen-Jantzen-Soergel for \(p\gg 0\) with no effective lower bounds, and the equivalence is due to Kazhdan-Lusztig, where the L conjecture holds for \(\widehat{{\mathfrak{g}}}\).
2000’s: Fiebig showed the L conjecture holds for \(p>N\) where \(N\) is an effective (but large) lower bound.
2013: Geordie Williamson shows L conjecture is false, with infinitely many counterexamples, and no lower bounds that are linear in \(h\).
See Donkin’s Tilting Module conjecture: expected that characters may come from \(p{\hbox{-}}\)KL polynomials instead.
Let \(G= {\operatorname{SL}}(2)\), so \(\dim T =1\). Here the restricted region of weights is given by \(X_1(T) = \left\{{0,1,\cdots, p-1}\right\}\). Then \(H^0(\lambda) = S^\lambda(V)\) for \(\lambda \in X(T)_+ = {\mathbb{Z}}_{\geq 0}\) and \(L(\lambda) \subseteq H^0(\lambda)\).
\begin{align*} L(\lambda) = H^0(\lambda) {\quad \operatorname{for} \quad} \lambda \in X_1(T) .\end{align*}
\begin{align*} \dim L(\lambda) = \lambda + 1 {\quad \operatorname{for} \quad} \lambda \in X_1(T) .\end{align*}
Take \(p=3\). Then \(\dim L(0) = 1\), \(\dim L(1) = 2\) (the natural representation), and \(\dim L(2) = 3\) (the adjoint representation). For \(\lambda=4\), we have to use the twisted tensor product formula. Taking the 3-adic expansion \(4 = 1\cdot 3^0 + 1\cdot 3^1\), we have \begin{align*} L(4) = L(1) \otimes L(1)^{(1)} .\end{align*}
Since \(\dim L(1) = 2\), we get \(\dim L(4) = 4\).
Similarly, considering \(7 = 1\cdot 3^0 + 2\cdot 3^1\), we get \begin{align*} L(7) \cong L(1) \otimes L(2)^{(1)} \end{align*} and so \(\dim L(7) = 6\).
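For \({\operatorname{SL}}(2)\), the twisted tensor product formula turns \(\dim L(\lambda)\) into a product over base-\(p\) digits, using \(\dim L(\mu) = \mu + 1\) for \(\mu \in X_1(T)\). A sketch reproducing the examples above (the function name is ours):

```python
def dim_L_SL2(lam, p):
    """dim L(lambda) for SL(2) in characteristic p: write lambda in base p
    and apply Steinberg, using dim L(mu) = mu + 1 for mu in X_1(T)."""
    d = 1
    while True:
        d *= (lam % p) + 1
        lam //= p
        if lam == 0:
            return d

print(dim_L_SL2(4, 3))  # L(4) = L(1) ⊗ L(1)^{(1)}: 2*2 = 4
print(dim_L_SL2(7, 3))  # L(7) = L(1) ⊗ L(2)^{(1)}: 2*3 = 6
```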
Take \(p=5\), then
What is \(H^0(5)\)? We know \(L(5)\) is a submodule, and we can write the character \begin{align*} \operatorname{ch}H^0(5) = e^5 + e^3 + e^1 + e^{-1} + e^{-3} + e^{-5} .\end{align*}
We know \(\operatorname{ch}(L(1)) = e^1 + e^{-1}\) and \(L(5) = L(1)^{(1)}\), so we can write \(\operatorname{ch}L(5) = e^{5} + e^{-5}\). By quotienting, we have \(\operatorname{ch}H^0(5) - \operatorname{ch}L(5) = e^3 + e^1 + e^{-1} +e^{-3} = \operatorname{ch}L(3)\). Thus the composition factors of \(H^0(5)\) are \(L(5)\) and \(L(3)\).
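The character bookkeeping for the \(p=5\) example can be checked mechanically by recording \(\operatorname{ch}M\) as a multiset of exponents of \(e\) (names ours; we use \(L(3) = H^0(3)\) since \(3\in X_1(T)\)):

```python
from collections import Counter

def ch_H0_SL2(lam):
    """ch H^0(lambda) for SL(2): e^lam + e^{lam-2} + ... + e^{-lam},
    stored as a Counter keyed by the exponent of e."""
    return Counter(range(-lam, lam + 1, 2))

ch_L5 = Counter({5: 1, -5: 1})     # ch L(5), since L(5) = L(1)^{(1)} for p = 5
diff = ch_H0_SL2(5) - ch_L5
print(diff == ch_H0_SL2(3))        # remaining factor is L(3) = H^0(3): True
```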
These correspond to an action of the affine Weyl group:
There is a strong linkage principle which describes the possible composition factors of \(H^0(\lambda)\).
We can thus find the socle/head structure:
Thus \(\operatorname{Ext} _G^1(L(5), L(3)) \cong k\).
Note that in other types, we don’t know the characters of the irreducibles in the restricted region, so we don’t necessarily know the composition factors.
Next topic: Kempf’s Vanishing Theorem. Proof in Jantzen’s book involving ampleness for sheaves.
Setup:
We have
along with the weights \(X(T)\).
We can consider derived functors of induction, yielding \(R^n \mathop{\mathrm{ind}}_B^G \lambda = \mathcal{H}^n(G/B, \mathcal{L}(\lambda)) \mathrel{\vcenter{:}}= H^n(\lambda)\) where \(\mathcal{L}(\lambda)\) is a line bundle and \(G/B\) is the flag variety.
Recall that
If \(\lambda \in X(T)_+\) a dominant weight, then \(H^n(\lambda) = 0\) for \(n> 0\).
In \(\operatorname{ch}(k) = 0\), \(H^n(\lambda)\) is known by the Bott-Borel-Weil theorem. In positive characteristic, this is not known: the characters \(\operatorname{ch}H^n (\lambda)\) are not known, and it’s not even known if or when they vanish. Wide open problem!
Could be a nice answer when \(p>h\) the Coxeter number.
We define two classes of distinguished modules for \(\lambda \in X(T)_+\):
We have \begin{align*} L(\lambda) &\hookrightarrow\nabla(\lambda) \\ \Delta(\lambda) &\twoheadrightarrow L(\lambda) .\end{align*}
We define the category \(\text{Rat}{\hbox{-}}G\) of rational \(G{\hbox{-}}\)modules. This is a highest weight category (as is e.g. Category \({\mathcal{O}}\)).
A (possibly infinite) ascending chain of \(G{\hbox{-}}\)modules \begin{align*} 0 = V_0 \subseteq V_1 \subseteq V_2 \subseteq \cdots \subseteq V \end{align*} is a good filtration of \(V\) iff
\(V = \cup_{i\geq 0} V_i\)
\(V_i/V_{i-1} \cong H^0(\lambda_i)\) for some \(\lambda_i \in X(T)_+\).
In characteristic zero, the modules \(H^0(\lambda)\) are irreducible and this recovers a composition series. Since we don’t have semisimplicity in this category, this is the next best thing.
With the same conditions as for a good filtration, a chain is a Weyl filtration of \(V\) iff
\(V = \cup_{i\geq 0} V_i\)
\(V_i/V_{i-1} \cong V(\lambda_i)\) for some \(\lambda_i \in X(T)_+\).
I.e. the difference is now that the quotients are standard modules.
\(V\) is a tilting module iff \(V\) has both a good filtration and a Weyl filtration.
Let \(\lambda \in X(T)_+\) be a dominant weight. Then there is a unique indecomposable highest weight tilting module \(T(\lambda)\) with highest weight \(\lambda\).
We have the following situation for type \(A_2\):
And thus a decomposition:
The picture to keep in mind is the following: 4 types of modules, all indexed by dominant weights:
We’ll take cohomology in the following way: let \(G\) be an algebraic group scheme, and define \begin{align*} H^n(G, M) \mathrel{\vcenter{:}}= \operatorname{Ext} _G^n(k, M) \end{align*}
where to compute \(\operatorname{Ext} _G^n(M, N)\) we take an injective resolution \(N \hookrightarrow I_*\), apply \(\hom_G(M, {-})\), and take kernels mod images.
Letting \(\lambda \in {\mathbb{Z}}\Phi\) be integral, so \(\lambda = \sum_{\alpha\in\Delta} n_\alpha \alpha\), define the height \begin{align*} \operatorname{ht}(\lambda) = \sum_{\alpha\in\Delta} n_\alpha .\end{align*}
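A trivial but useful sketch of the height function in simple-root coordinates (name ours):

```python
def height(coeffs):
    """ht(lambda) = sum of the n_alpha, for lambda = sum n_alpha * alpha
    written in the simple roots."""
    return sum(coeffs)

# In type A_2 the highest root is alpha_1 + alpha_2, of height 2:
print(height([1, 1]))    # 2
print(height([-2, -1]))  # -3, so ht(-lambda) = 3 for lambda = -(2a_1 + a_2)
```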
There exists an injective resolution of \(B{\hbox{-}}\)modules \begin{align*} 0\to k\to I_0 \to I_1 \to \cdots \end{align*} where
\begin{align*} k[U] \text{ an injective $B{\hbox{-}}$module} \\ k\hookrightarrow\mathop{\mathrm{ind}}_T^B k \mathrel{\vcenter{:}}= I_0 = k[U] .\end{align*}
We thus get a diagram of the form
Let \(H\leq G\), then there exists a spectral sequence \begin{align*} E^{i, j}_2 = \operatorname{Ext} _G^i(N, R^j \mathop{\mathrm{ind}}_H^G M) \implies \operatorname{Ext} _H^{i+j}(N, M) \end{align*} for \(N\in {\mathsf{Mod}}(G), M\in {\mathsf{Mod}}(H)\).
Let \(H=B\) and take \(G=G\) itself, and let \(N = k\) the trivial module and \(M\in {\mathsf{Mod}}(G)\) be any rational \(G{\hbox{-}}\)module. We have \begin{align*} E_2^{i, j} = \operatorname{Ext} ^{i}_B(k, R^j \mathop{\mathrm{ind}}_B^G M) \implies \operatorname{Ext} ^{i+j}_B(k, M) .\end{align*}
Observations:
\(R^0 \mathop{\mathrm{ind}}_B^G k = \mathop{\mathrm{ind}}_B^G k = k\).
The tensor identity works here, i.e. \(R^j \mathop{\mathrm{ind}}_B^G M = \qty{R^j \mathop{\mathrm{ind}}_B^G k} \otimes M\).
\(R^j \mathop{\mathrm{ind}}_B^G k = 0\) for \(j> 0\) since the trivial weight \(0\) is dominant (Kempf vanishing).
The spectral sequence thus collapses on \(E_2\):
Thus \begin{align*} H^i(G, M) = E_2^{i, 0} \cong \operatorname{Ext} ^i_B(k, M) = H^i(B, M) .\end{align*}
Let \(G \supseteq P \supseteq B\) where \(P\) is a parabolic subgroup and let \(M\) be a rational \(G{\hbox{-}}\)module. Then \(H^n(G, M) = H^n(P, M) = H^n(B, M)\) for all \(n \geq 0\).
Fix a Dynkin diagram and take a subset \(J\subseteq \Delta\).
Then \(P_J = P = L_J \ltimes U_J\), and we have a decomposition like
Let \(M\in {\mathsf{Mod}}(P)\) with \(P\supseteq B\).
If \(\dim M < \infty\) then \(\dim H^n(P, M) < \infty\) for all \(n\).
If \(H^j(P, M) \neq 0\) then there exists \(\lambda\) a weight of \(M\) with \(-\lambda \in {\mathbb{N}}\Phi^+\) and \(\operatorname{ht}(-\lambda) \geq j\).
Recall that we had a dominant weight \(\lambda \in X(T)_+\) with
where we have a module with both a good and a Weyl filtration.
If \(B\subseteq P \subseteq G\) with \(P\) parabolic and \(M\in {\mathsf{Mod}}(G)\), we have a “transfer theorem”: maps \begin{align*} H^n(G; M) \xrightarrow{\mathop{\mathrm{res}}} H^n(P; M) \xrightarrow{\mathop{\mathrm{res}}} H^n(B; M) \end{align*} induced by restrictions which are isomorphisms.
Let \(M\in {\mathsf{Mod}}(P)\) with \(P\supseteq B\).
If \(\dim M < \infty\) then \(\dim H^n(P; M) < \infty\).
If \(H^j(P; M) \neq 0\) then there exists a weight \(\lambda\) of \(M\) such that \(-\lambda \in {\mathbb{N}}\Phi^+\) and \(\operatorname{ht}(-\lambda) \geq j\).
Part (a) is proved in the book, we won’t show it here.
Suppose \(H^j(P; M) \neq 0\); then we have an injective resolution \(I_*\) for \(k\) as a \(B{\hbox{-}}\)module. Tensoring with \(M\) yields an injective resolution for \(M\), \begin{align*} 0 \to M \to I_0\otimes M \to I_1 \otimes M \to \cdots .\end{align*} Since \(H^j(B; M) = H^j(P; M) \neq 0\), we know that the cocycles \(\hom_B(k, I_j\otimes M) \neq 0\) and thus \(\hom_T(k, I_j\otimes M) \neq 0\).
So there exists a weight \(-\lambda\) of \(I_j\) with \(\operatorname{ht}(-\lambda) \geq j\), and \(\lambda\) is a weight of \(M\) by the previous lemma: since the trivial weight occurs in \(I_j \otimes M\), there is a weight \(-\lambda\) of \(I_j\) and a weight \(\lambda\) of \(M\) with \(-\lambda + \lambda = 0\).
Let \(\lambda, \mu \in X(T)_+\), then
The cohomology in the tensor product is zero, except in one special case: \begin{align*} H^i(G, H^0(\lambda) \otimes H^0(\mu)) = \begin{cases} 0 & i>0 \\ k & i=0, \lambda = -w_0\mu \end{cases} .\end{align*}
There are only extensions in one specific situation: \begin{align*} \operatorname{Ext} _G^i(V(\mu), H^0(\lambda)) = \begin{cases} 0 & i> 0 \\ k & i=0, \lambda = \mu \end{cases} .\end{align*}
The following is an important calculation!
Step 1: We’ll use Frobenius reciprocity twice. We can write the term of interest in two ways: \begin{align*} H^i(G, H^0(\lambda) \otimes H^0(\mu)) &= H^i(B, H^0(\lambda) \otimes\mu) \\ H^i(G, H^0(\lambda) \otimes H^0(\mu)) &= H^i(B, \lambda \otimes H^0(\mu)) .\end{align*}
Thus there exists a weight \(\nu\) of \(H^0(\lambda)\) and \(\nu'\) of \(H^0(\mu)\) such that \begin{align*} \mu + \nu, \lambda + \nu' \in - {\mathbb{N}}\Phi^+ \quad \operatorname{ht}(\mu+\nu), \operatorname{ht}(\lambda + \nu') \leq -i .\end{align*}
Since \(w_0\lambda\) (resp. \(w_0\mu\)) is the lowest weight of \(H^0(\lambda)\) (resp. \(H^0(\mu)\)), it follows that \begin{align*} \mu + w_0 \lambda, \lambda + w_0\mu \in -{\mathbb{N}}\Phi^+ .\end{align*}
Since \(w_0^2 = \operatorname{id}\), we can write \(\lambda + w_0\mu = w_0(\mu + w_0 \lambda)\). We know that the LHS is in \(-{\mathbb{N}}\Phi^+\), and the term in parentheses on the RHS is also in \(-{\mathbb{N}}\Phi^+\). Applying \(w_0\) interchanges \(\Phi^\pm\), so the RHS is in \({\mathbb{N}}\Phi^+\). But \({\mathbb{N}}\Phi^+ \cap-{\mathbb{N}}\Phi^+ = \left\{{0}\right\}\), forcing \(\lambda + w_0 \mu = 0\) and thus \(\lambda = -w_0 \mu\).
Since the height of zero is zero, we have \begin{align*} 0 = \operatorname{ht}(\lambda + w_0 \mu) \leq \operatorname{ht}(\lambda + \nu') \leq -i \implies i=0 .\end{align*} This shows cohomological vanishing for \(i>0\), the first case in the theorem statement.
For the remaining case, we can check that \(H^0(\lambda)^{U} = H^0(\lambda)_{w_0 \lambda}\), and so \begin{align*} \qty{H^0(\lambda) \otimes-w_0 \lambda}^{U} = k .\end{align*} This shows that \(H^0(B; H^0(\lambda) \otimes-w_0\lambda ) \cong k\), since \begin{align*} \qty{H^0(\lambda) \otimes-w_0 \lambda}^B = \qty{ \qty{H^0(\lambda) \otimes-w_0 \lambda }^U }^T .\end{align*}
Let \(\lambda, \mu \in X(T)_+\) with \(\lambda \not> \mu\). Then we can calculate the \(i\)th ext by computing the \(i-1\)st: for \(i>0\), \begin{align*} \operatorname{Ext} ^i_G(L(\lambda), L(\mu)) \cong \operatorname{Ext} ^{i-1}_G(L(\lambda), H^0(\mu) / \mathop{\mathrm{Soc}}_G(H^0(\mu))) .\end{align*}
We showed this in a special case. Let \(i=1\) with \(\lambda \not> \mu\), then \begin{align*} \operatorname{Ext} _G^1(L(\lambda), L(\mu)) \cong \mathop{\mathrm{Hom}}_G(L(\lambda), H^0(\mu) / \mathop{\mathrm{Soc}}_G(H^0(\mu))) .\end{align*} Thus it suffices to understand only the previous layer:
Consider the SES \begin{align*} 0 \to L(\mu) \to H^0(\mu) \to H^0(\mu) / \mathop{\mathrm{Soc}}_G(H^0(\mu)) \to 0 ,\end{align*} which yields a LES in cohomology upon applying \(\hom_G(L(\lambda), {-})\). To obtain the statement, it suffices to show \(\operatorname{Ext} _G^i(L(\lambda), H^0(\mu)) = 0\) for \(i>0\), since these are the middle terms in the LES.
We can write \begin{align*} \operatorname{Ext} _G^i(L(\lambda), H^0(\mu)) &= H^i(G, L(\lambda) {}^{ \vee }\otimes H^0(\mu)) \quad\text{taking duals} \\ &= H^i(B, L(\lambda) {}^{ \vee }\otimes\mu) \quad\text{by Frobenius reciprocity} ,\end{align*} so applying the previous lemma we obtain a weight \(\sigma\) of \(L(\lambda) {}^{ \vee }\otimes\mu\) such that \(\sigma \in - {\mathbb{N}}\Phi^+\) and \(\operatorname{ht}(-\sigma) \geq i > 0\). So \(\sigma = \nu + \mu\) for \(\nu\) some weight of \(L(\lambda) {}^{ \vee }\).
In particular \(\sigma \in {\mathbb{N}}\Phi^-\). The lowest weight of \(L(\lambda) {}^{ \vee }\) is \(-\lambda\), so \(\nu \geq -\lambda\) and hence \(\sigma = \nu + \mu \geq -\lambda + \mu\).
But then \(-\lambda + \mu \in {\mathbb{N}}\Phi^-\), i.e. \(\lambda - \mu \in {\mathbb{N}}\Phi^+\), which says precisely that \(\lambda \geq \mu\). If \(\lambda \neq \mu\) this contradicts the assumption that \(\lambda \not> \mu\); the case \(\lambda = \mu\) is handled separately.
We now want criteria for when we can find the following types of lifts:
Let \(V\) be a \(G{\hbox{-}}\)module with \(0\neq \hom_G(L(\lambda), V)\). If
\(\hom_G(L(\mu), V) = 0\) for all \(\mu < \lambda\),
\(\operatorname{Ext} _G^1(V(\mu), V) = 0\) for all \(\mu \in X(T)_+\) with \(\mu < \lambda\),
then \(V\) contains a submodule isomorphic to \(H^0(\lambda)\) and such a lift/extension exists.
The ext criterion will be the most important. The idea is to quotient and continue applying it.
Consider the SES \begin{align*} 0 \to L(\lambda) \hookrightarrow V \to V/L(\lambda) \to 0 \end{align*} as well as \begin{align*} 0 \to L(\lambda) \to H^0(\lambda) \to H^0(\lambda)/L(\lambda) \to 0 .\end{align*}
Applying \(\hom_G({-}, V)\) to the second sequence yields a LES of homs over \(G\): \begin{align*} 0 &\to \mathop{\mathrm{Hom}}(H^0(\lambda)/L(\lambda), V) \to \mathop{\mathrm{Hom}}(H^0(\lambda) , V) \to \mathop{\mathrm{Hom}}(L(\lambda), V) \\ &\to \operatorname{Ext} ^1(H^0(\lambda)/L(\lambda), V) \to \cdots .\end{align*} Thus it suffices to show this \(\operatorname{Ext} ^1\) is zero.
Strategy: show that \(\operatorname{Ext} _G^1(L(\mu), V) = 0\) for every composition factor \(L(\mu)\) of \(H^0(\lambda)/L(\lambda)\). These are all of the form \(L(\mu)\) for \(\mu < \lambda\), so it now suffices to show that \(\operatorname{Ext} _G^1(L(\mu), V) = 0\) when \(\mu < \lambda\).
Observe that we have \begin{align*} 0 \to N \to V(\mu) \to L(\mu) \to 0 ,\end{align*} where the composition factors of \(N\) are of the form \(L(\sigma)\) for \(\sigma < \mu\). So apply \(\hom({-}, V)\): \begin{align*} 0 &\to \mathop{\mathrm{Hom}}(L(\mu), V) \to \mathop{\mathrm{Hom}}(V(\mu), V) \to \mathop{\mathrm{Hom}}(N, V) \\ &\to \operatorname{Ext} ^1(L(\mu), V) \to \operatorname{Ext} ^1(V(\mu), V) \to \cdots .\end{align*}
But we have \(\mathop{\mathrm{Hom}}(N, V) =0\) and \(\operatorname{Ext} ^1(V(\mu), V) = 0\), which squeezes and forces \(\operatorname{Ext} ^1(L(\mu), V) = 0\).
Next time: state and prove a cohomological criterion (Donkin, Scott, proved independently) for a \(G{\hbox{-}}\)module to admit a good filtration. More about when tensor products of induced modules have good filtrations.
Recall that a good filtration is a chain \(\left\{{0}\right\} \subseteq V_1 \subseteq \cdots \subseteq V\) satisfying \(V = \cup V_i\) and \(V_i/V_{i-1} \cong H^0(\lambda_i)\) for some \(\lambda_i \in X(T)_+\).
Let \(V\) be a \(G{\hbox{-}}\)module and \(\lambda \in X(T)_+\) with \(\hom_G(L(\lambda), V) \neq 0\). If \(\hom_G(L(\mu), V) = 0\) for any \(\mu < \lambda\) and \(\operatorname{Ext} _G^1(V(\mu), V) = 0\) for all \(\mu \in X(T)_+\), then \(V\) contains a submodule isomorphic to \(H^0(\lambda)\).
That is, we have a lift of the following form:
Let \(V\) be a \(G{\hbox{-}}\)module.
Analog of Jordan-Holder. Note that \(H^0(\lambda)\) may not be irreducible, but changing the filtration cannot change the multiplicities \([V: H^0(\mu)]\).
Much like measuring projectivity: can check all exts, or just the first.
Suppose \(V\) has a good filtration. Idea: induct on the filtration.
Suppose \(V = H^0(\lambda_1)\), then \begin{align*} [V: H^0(\mu) ] = \begin{cases} 0 & \mu \neq \lambda_1 \\ 1 & \mu = \lambda_1 \end{cases} = \dim \hom_G(V(\lambda_1), V) ,\end{align*} since we know the dimensions of these hom spaces from a previous result.
Suppose now that we have \begin{align*} 0 \to H^0(\mu_1) \to V \to H^0(\mu_2) \to 0 .\end{align*} Applying \(F \mathrel{\vcenter{:}}=\hom_G(V(\lambda), {-})\), the relevant \(\operatorname{Ext} ^1_G\) vanishes, so this yields a SES of hom spaces and the dimensions are additive. The result follows since \(F\) is additive.
\(1\implies 2\): Use the fact that \(\operatorname{Ext} ^i_G(V(\lambda), H^0(\mu)) = 0\) for all \(i>0\) and all \(\mu\).
\(2\implies 3\): Clear!
\(3\implies 1\): Choose a total ordering of weights \(\lambda_0, \lambda_1, \cdots \in X(T)\) such that if \(\lambda_i < \lambda_j\) then \(i<j\). Since \(V\neq 0\), there exists a dominant weight \(\lambda \in X(T)_+\) such that \(\hom_G(V(\lambda), V) \neq 0\), so choose \(i\) minimally in this order to produce such a \(\lambda_i\). Idea: use this to start a filtration.
Then \(\hom(L(\lambda_i), V) \neq 0\), and we have \begin{align*} V(\lambda_i) \twoheadrightarrow L(\lambda_i) \hookrightarrow V .\end{align*}
We know that \begin{align*} \hom_G(V(\mu), V) &= 0 \quad \forall \mu < \lambda_i \\ \hom_G(L(\mu), V) &= 0 \quad \forall \mu < \lambda_i \\ \operatorname{Ext} _G^1(V(\mu), V) &= 0 \quad \forall \mu \in X(T)_+ \text{ by assumption} .\end{align*}
So the following map must be an injection, since there is no socle:
Set \(V_1 = H^0(\lambda_i)\), so \(V_1 \subseteq V\). We then have a SES \begin{align*} 0 \to V_1 \to V \to V/V_1 \to 0 .\end{align*}
Applying \(\hom(V(\lambda), {-})\) we obtain
Now iterate this process to obtain a chain \(V_1 \subseteq V_2 \subseteq \cdots \subseteq V\), and set \(V' \mathrel{\vcenter{:}}=\cup_{i>0} V_i\). Then \(\dim \hom_G(V(\lambda), V') = \dim \hom_G( V(\lambda), V )\) since \(\dim \hom_G(V(\lambda), V) < \infty\). But then taking the SES \begin{align*} 0\to V' \to V \to V/V' \to 0 \end{align*} and applying \(\mathop{\mathrm{Hom}}(V(\lambda), {-})\), we have \(\mathop{\mathrm{Hom}}(V(\lambda), V/V') = 0\) and we get an isomorphism of homs. But then \(\hom(V(\lambda), V/V') = 0\) for all \(\lambda \in X(T)_+\), forcing \(V/V'=0\) and \(V=V'\).
Let \(0\to V_1 \to V \to V_2 \to 0\) be a SES of \(G{\hbox{-}}\)modules with \(\dim \hom_G(V(\lambda), V_2) < \infty\) for all \(\lambda \in X(T)_+\). If \(V_1, V\) have good filtrations, then \(V_2\) also has a good filtration.
Note: this is likely difficult to prove without cohomology! But here we can apply the ext criterion.
Let \(\lambda \in X(T)_+\), then
For \(\lambda \in X(T)_+\), let \(I(\lambda)\) be the injective hull of \(L(\lambda)\), so we have \begin{align*} 0 \to L(\lambda) \hookrightarrow I(\lambda) .\end{align*}
Let \(\lambda \in X(T)_+\) and \(I(\lambda)\) be the injective hull of \(L(\lambda)\).
\(I(\lambda)\) has a good filtration.
The multiplicity \([I(\lambda): H^0(\mu)]\) is equal to \([H^0(\mu): L(\lambda)]\), the composition factor multiplicity.
Brauer-Humphreys Reciprocity. Same idea as in category \({\mathcal{O}}\): multiplicity of Vermas equals multiplicity of irreducibles.
How to check that it has a good filtration? The cohomological criterion! So consider \(\operatorname{Ext} ^1_G( V(\sigma), I(\lambda) )\) for all \(\sigma \in X(T)_+\). We want to show it’s zero, but this follows because \(I(\lambda)\) is injective.
By the previous result, we have \begin{align*} [I(\lambda): H^0(\mu) ] &= \dim \hom_G(V(\mu), I(\lambda)) \\ &= [V(\mu): L(\lambda) ] .\end{align*} Why does this second equality hold? The functor \(\hom_G({-}, I(\lambda))\) is exact, and \(\dim \hom_G(L(\mu), I(\lambda)) = \delta_{\lambda, \mu}\): if \(\lambda = \mu\) the hom space is one-dimensional, since \(L(\lambda) \hookrightarrow I(\lambda)\) and \(\mathop{\mathrm{Soc}}_G I(\lambda) = L(\lambda)\). Finally, \(\operatorname{ch}H^0(\mu) = \operatorname{ch}V(\mu)\), so these two modules have the same composition factors and \([V(\mu): L(\lambda)] = [H^0(\mu): L(\lambda)]\).
Let \(V\) be a \(G{\hbox{-}}\)module.
If \(V\) admits a Weyl filtration, then \begin{align*} [V: V(\lambda)] = \dim \hom_G (V, H^0(\lambda)) .\end{align*}
Suppose that \(\dim \hom_G(V(\lambda), H^0(\lambda)) < \infty\) for all \(\lambda \in X(T)_+\). Then TFAE
Crelle 1988 (CPS: Cline Parshall Scott)
Let HWC denote a highest weight category.
BGG Category \({\mathcal{O}}\)
\(\operatorname{Rat}(G)\) for \(G\) a reductive algebraic group
\(\mathsf{Perv}_W(G/B) \cong {\mathcal{O}}_0\)
See
Donkin: On generalized Schur algebras
Irving: BGG algebras
There is an equivalence between HWCs and QHAs (quasi-hereditary algebras).
Key Points
\(L(\lambda) = \mathop{\mathrm{Soc}}_G \nabla(\lambda)\) and \(\nabla(\lambda) = A(\lambda)\).
All composition factors of \(\nabla(\lambda)\) satisfy \(\mu \leq \lambda\)
We have cohomological vanishing: \begin{align*} \operatorname{Ext} _G^i(\Delta(\lambda), \nabla(\mu)) = \begin{cases} 0 & i >0 \\ 0 & i=0, \lambda \neq \mu \\ k & i=0, \lambda = \mu \end{cases} \end{align*}
Interval finite poset: we’ll have a cone \(\Lambda\) of positive weights:
See handout!
Let \(V, V'\) be rational \(G{\hbox{-}}\)modules admitting good filtrations. Then the tensor product \(V\otimes V'\) also admits a good filtration.
Let \(G = {\operatorname{SL}}(n, k)\) and take the natural representation \(V = H^0(w_1)\). Then \(V^{\otimes d}\) has a good filtration.
Let \(J\subset \Delta\) be a subset of simple roots. If \(V \in {\mathsf{Mod}}(G)\) has a good filtration and \(L_J\) is a Levi factor, then \(V{\downarrow_{L_J}}\) has a good filtration.
Let \({\mathfrak{g}}= \operatorname{Lie}(G)\) and \(p\) be a good prime (one that doesn’t divide any of the coefficients of the highest root). Then the symmetric algebra \(S({\mathfrak{g}})\) has a good filtration.
For \(p\geq 3(h-1)\), the exterior algebra \(\Lambda({\mathfrak{g}})\) also admits a good filtration. Question: Is this true for all primes \(p\)? Or potentially for all good primes \(p\)?
Let \(G = \operatorname{GL}(n, k)\), then a module for \(G\) is polynomial iff the weights \(\lambda = (\lambda_1, \cdots, \lambda_n)\) satisfy \(\lambda_j \geq 0\) for all \(j\).
For \(V\) the natural representation, the weights are the unit vectors \({\varepsilon}_1, \cdots, {\varepsilon}_n\), so \(V\) is a polynomial representation. Then \(V^{\otimes d}\) is again polynomial by a previous remark.
Note that the adjoint representation \({\mathfrak{g}}\cong V\otimes V {}^{ \vee }\) is not a polynomial representation.
There is an equivalence \begin{align*} \mathrm{Poly}(G) \cong \bigoplus_{d\geq 0} {\mathsf{Mod}}(S(n, d)) ,\end{align*} where the Schur algebra \(S(n, d)\) is given by \(\mathop{\mathrm{End}}_{\Sigma_d}(V^{\otimes d})\), where \(\Sigma_d\) is the symmetric group on \(d\) letters.
The theorem is that \({\mathsf{Mod}}(S(n, d))\) is a QHA, and thus a highest weight category.
This is a finite-dimensional algebra, so we should be able to calculate the dimensions, index by highest weights, write the standard/costandard modules, etc. There is a correspondence \begin{align*} \left\{{\substack{\text{Simple modules for }S(n, d)}}\right\} \iff \left\{{\substack{\Lambda^+(n, d) \text{ partitions of $d$ with at most $n$ parts}}}\right\} .\end{align*}
We can compute \begin{align*} \dim S(n, d) = {n^2 + d - 1 \choose n^2 - 1} ,\end{align*} and simple modules correspond to \(L(\lambda)\) for \(\operatorname{GL}_n\) where \(\lambda\) is a polynomial representation.
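Both the dimension formula and the labeling of the simples by partitions are easy to check computationally; a short sketch (the function names are my own):

```python
from math import comb

def dim_schur(n, d):
    """dim S(n, d) = C(n^2 + d - 1, n^2 - 1)."""
    return comb(n * n + d - 1, n * n - 1)

def partitions_at_most_n_parts(d, n):
    """Λ^+(n, d): partitions of d with at most n parts, indexing the
    simple S(n, d)-modules."""
    result = []
    def build(remaining, max_part, prefix):
        if remaining == 0:
            result.append(tuple(prefix))
            return
        if len(prefix) == n:
            return
        for part in range(min(remaining, max_part), 0, -1):
            build(remaining - part, part, prefix + [part])
    build(d, d, [])
    return result

assert dim_schur(2, 3) == 20                                        # C(6, 3)
assert partitions_at_most_n_parts(3, 2) == [(3,), (2, 1)]
assert partitions_at_most_n_parts(3, 3) == [(3,), (2, 1), (1, 1, 1)]
```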
\(S(n, d)\) is semisimple if and only if
\(k = {\mathbb{C}}\) or characteristic zero, or
\(d < p\).
For the latter condition, see Maschke’s theorem.
Consider \(S(2, 3)\) for \(p=2\), so \(G = \operatorname{GL}(2)\). Then \begin{align*} \dim S(2, 3) = {4+3-1 \choose 3} = {6\choose 3} = 20 .\end{align*}
The only admissible partitions are thus
Then \(L(2, 1) = L(w)\) as an \({\operatorname{SL}}(2){\hbox{-}}\)module, so \begin{align*} \dim L(2, 1) = 2 .\end{align*} Then \(L(3, 0) = L(3w)\) as an \({\operatorname{SL}}(2){\hbox{-}}\)module. By Steinberg’s tensor product theorem (since \(3 = 1 + 1\cdot 2\)), \begin{align*} L(3) = L(1)^{(1)} \otimes L(1) ,\end{align*} and since each factor is 2-dimensional, we get \(\dim L(3) = 4\). Thus \(2^2 + 4^2 = 20\).
Note that the sum of the squares of the dimensions of the irreducibles equals the total dimension, which shows that the algebra \(S(2,3)\) is semisimple at \(p=2\). But this contradicts the theorem! So it turns out there is a third condition, namely this exact case.
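The dimension count above can be verified with a few lines of code; this is a sketch using Steinberg's tensor product theorem for \({\operatorname{SL}}_2\) (the helper name is mine):

```python
from math import comb

p = 2

def dim_simple_sl2(m):
    """dim L(m) for SL_2 in characteristic p, via Steinberg's tensor product
    theorem: L(m) = ⊗_i L(m_i)^{(i)} for m = Σ m_i p^i with 0 <= m_i < p,
    and dim L(m_i) = m_i + 1 for restricted weights m_i."""
    dim = 1
    while m > 0:
        dim *= (m % p) + 1
        m //= p
    return dim

# The two simples of S(2, 3) restrict to SL_2-weights 2 - 1 = 1 and 3 - 0 = 3:
dims = [dim_simple_sl2(1), dim_simple_sl2(3)]        # [2, 4]
assert sum(d * d for d in dims) == comb(6, 3) == 20  # = dim S(2, 3)
```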
Next time: look at structure of injective modules, then the theory of Bott-Borel-Weil for higher sheaf cohomology.
Let \(G = \operatorname{GL}(n, k)\), then polynomial representations of \(G\) are equivalent to \(S(n, d)\) modules for all \(d\geq 0\), where we can note that \(S(n, d) = \mathop{\mathrm{End}}_{\Sigma_d}(V^{\otimes d})\). We’ll have a correspondence \begin{align*} \left\{{\substack{L(\lambda) \text{ simple modules for } S(n,d)}}\right\} \iff \Lambda^+(n, d) \text{, partitions of $d$ with at most $n$ parts} ,\end{align*}
Good example, can see all filtrations at work, tilting modules, etc.
Consider \(S(3, 3)\) for \(p=3\), we then have the partitions \(\Lambda^+(3, 3) = \left\{{(3), (2, 1), (1,1,1)}\right\}\). We can think of these in the \({\varepsilon}\) basis as \((3) = (3,0,0), (2,1) = (2,1,0)\). Since \({\operatorname{SL}}(3, k) \subset \operatorname{GL}(3, k)\), we can find the \(SL(3, k)\) weights by taking successive differences to yield \((3, 0), (1, 1), (0, 0)\) with the corresponding picture
We can compute
We have a form of Brauer reciprocity: \begin{align*} [I(\lambda): H^0(\mu)] = [H^0(\mu) : L(\lambda) ] .\end{align*}
We can now compute the injective hulls:
What are the tilting modules? We can use the fact that \(L(1^3) = V(1^3)\). It has a good filtration and a Weyl filtration and thus must be the tilting module for \(L(1^3)\).
Using the following fact:
We can compute the following:
\(k = {\mathbb{C}}\) implies \(L(\lambda) = H^0(\lambda)\) for all \(\lambda \in X(T)_+\)
\(k= \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\) implies \(L(\lambda) = H^0(\lambda)\) if \({\left\langle {\lambda},~{\alpha_0 {}^{ \vee }} \right\rangle} \leq 1\) where \(\alpha_0\) is the highest short root.
Such \(\lambda\) are referred to as minuscule weights.
For type \(A_n\), we have \(\alpha_0 = \sum_{i=1}^n \alpha_i\). For type \(G_2\), we have \(\alpha_0 {}^{ \vee }= 2\alpha_1 {}^{ \vee }+ 3\alpha_2 {}^{ \vee }\).
In type \(A_n\), set \(\lambda = \sum_{j=1}^n c_j w_j\) where \(c_j \geq 0\). Then \({\left\langle {\lambda},~{\alpha_0 {}^{ \vee }} \right\rangle} = \sum c_j \leq 1\), so \(\lambda\) is minuscule iff \(\lambda = 0\) or \(\lambda = w_j\) for some \(j\).
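This is easy to confirm by brute force in a fixed rank, say \(n = 4\) (a sketch under the identification \(\lambda = \sum c_j w_j\); not from the lecture):

```python
from itertools import product

# Type A_n with n = 4: α_0 = α_1 + ... + α_n, so for λ = Σ c_j w_j the
# pairing with the highest coroot is <λ, α_0^∨> = Σ c_j.
n = 4
minuscule = [c for c in product(range(3), repeat=n) if sum(c) <= 1]

# Exactly the zero weight and the n fundamental weights w_1, ..., w_n:
expected = [tuple(0 for _ in range(n))] + \
           [tuple(int(i == j) for i in range(n)) for j in range(n)]
assert sorted(minuscule) == sorted(expected)
```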
Quick timeline:
The simple module equals the induced module, \(L(\lambda) = H^0(\lambda)\), for all \(p\) iff \(\lambda\) is minuscule, or \(\Phi = E_8\) and \(L(\lambda) = {\mathfrak{g}}\).
We can consider the higher right-derived functors of induction applied to \(\lambda\), given by \(H^i(\lambda) = R^i \mathop{\mathrm{ind}}_B^G \lambda\) for \(\lambda \in X(T)\). You can think of this as the higher sheaf cohomology of a line bundle on the flag variety, \(\mathcal{H}^i(G/B, \mathcal{L}(\lambda))\).
We have Kempf Vanishing: \(H^i(\lambda) = 0\) for all \(i>0\) when \(\lambda \in X(T)_+\) is dominant (although other things may happen for non-dominant weights). There is a correspondence \((G, T) \iff (W, \Phi)\), and since \(W\) is generated by simple reflections, we can write any \(w\in W\) as \(w=\prod s_{\alpha_i}\). A reduced expression is one in which the length can not be shortened, and any two reduced expressions necessarily have the same length (number of simple reflections).
For \(\Phi = A_2\), we have \(w_0 = s_{\alpha_1} s_{\alpha_2} s_{\alpha_1} = s_{\alpha_2} s_{\alpha_1} s_{\alpha_2}\).
We can let \(W\) act on \(X(T)\) by reflections via the formula \(s_\alpha \lambda = \lambda - {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}\alpha\). We then shift the action by setting \(w \cdot \lambda = w(\lambda+\rho)-\rho\), where \(\rho = {1\over 2} \sum_{\alpha\in \Phi^+} \alpha = \sum_{j=1}^n w_j\).
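For \(A_1\), identifying \(X(T) \cong {\mathbb{Z}}\) with \(\rho = 1\), the dot action of the simple reflection becomes \(s_\alpha \cdot m = -(m+1) - 1 = -m - 2\), an involution whose unique fixed point is \(-\rho\). A small sketch (names are mine):

```python
# A_1 dot action: identify X(T) with Z (λ = m·w), so <m, α^∨> = m and ρ = 1.
def s_dot(m):
    """s_α · m = s_α(m + 1) - 1 = -(m + 1) - 1."""
    return -(m + 1) - 1

assert s_dot(0) == -2
assert s_dot(-1) == -1                                   # -ρ is the fixed point
assert all(s_dot(s_dot(m)) == m for m in range(-6, 7))   # involution
```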
Let \(G\) be a reductive algebraic group and \(k={\mathbb{C}}\). For \(\lambda \in X(T)_+\), we can describe the sheaf cohomology: \begin{align*} \mathcal{H}^i(w\cdot \lambda) = \begin{cases} H^0(\lambda) & i=\ell(w) \\ 0 & \text{otherwise} \end{cases} .\end{align*}
Moreover, if \(\lambda \not\in X(T)_+\) and \({\left\langle {\lambda+\rho},~{\alpha {}^{ \vee }} \right\rangle} \geq 0\) for all \(\alpha \in \Delta\), then \(\mathcal{H}^i(w\cdot \lambda) = 0\) for all \(w\in W\).
Wide open in characteristic \(p\), can say some things. We’ll prove this in characteristic zero.
Recall that \(k={\mathbb{C}}\) and \(H^0(\lambda) = L(\lambda)\). We’ll want to reduce to \({\operatorname{SL}}(2, {\mathbb{C}})\) parabolics. For \(\alpha\in\Delta\), let \(P_\alpha\) be the associated parabolic \(P_\alpha = L_\alpha \rtimes U_\alpha\), which is parabolic of type \(A_1\).
Idea: \(\alpha\) generates an \({\operatorname{SL}}_2\) subgroup (the Levi factor), like the Borel but sticks out in one dimension:
Then \begin{align*} s_\alpha \cdot \lambda &= s_\alpha(\lambda + \rho) - \rho \\ &= \lambda + \rho - {\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle}\alpha - \rho \\ &= \lambda - {\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle}\alpha .\end{align*}
Next time: proof of Bott-Borel-Weil and its generalization to \(k = \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\). For \(B\subset P_\alpha \subset G\), we’ll have a spectral sequence \begin{align*} E_2^{i, j} = R^i \mathop{\mathrm{ind}}_{P_\alpha}^G R^j \mathop{\mathrm{ind}}_B^{P_\alpha} \Rightarrow R^{i+j} \mathop{\mathrm{ind}}_B^G \lambda = H^{i+j}(\lambda) .\end{align*}
Last time: Bott-Borel-Weil. Stated for characteristic zero, working toward a generalization.
Let \(\Delta\) be the set of simple roots, and \(\alpha\in \Delta\). We can form a Levi decomposition \(P_\alpha \mathrel{\vcenter{:}}= L_\alpha \rtimes U_\alpha\):
We have \(B \subseteq P_\alpha \subseteq G\). The dot action is given by the following: Let \(W\) be the Weyl group, then \(W\) acts on \(X(T)\) by \(w\cdot \lambda = w(\lambda + \rho) - \rho\), where \begin{align*} \rho = {1\over 2} \sum_{\alpha\in \Phi^+} \alpha = \sum_{i=1}^n w_i .\end{align*}
We obtained a formula \begin{align*} s_\alpha \cdot \lambda = \lambda - {\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle} \alpha .\end{align*}
Let \(\alpha\in\Delta\) be simple and \(\lambda \in X(T)\) be an arbitrary weight. Then
\(U_\alpha\) acts trivially on \(\mathop{\mathrm{ind}}_B^{P_\alpha} \lambda\).
(Kempf’s Vanishing for \(P_\alpha\)) If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = r \geq 0\), then \begin{align*} R^i \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0 \qquad \text{for } i > 0 ,\end{align*} and \(\dim \mathop{\mathrm{ind}}_B^{P_\alpha}\lambda = r + 1\).
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = -1\), then \(R^i \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0\) for all \(i\).
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \leq -2\), then
\(R^i \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0\) for \(i \neq 1\), and
\(\dim R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = -{\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} - 1\)
Note: we have \begin{align*} \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = S^r(V) \qquad &\text{when } {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = r \geq 0 \\ R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = S^r(V) {}^{ \vee }\qquad&\text{where $V$ is a 2-dim representation and } {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \leq -2 \\ &\text{and } r = {\left\lvert {{\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}} \right\rvert} - 2 .\end{align*}
This gives us an analog of \(A_1\) or \({\operatorname{SL}}_2\) theory. Also note that we have Serre duality: \begin{align*} H^1(\lambda) = H^0( - (\lambda + 2\rho) ) {}^{ \vee } .\end{align*}
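The rank-1 cases can be packaged into a dimension table, with Serre duality visible as the symmetry \(\dim H^1(\lambda) = \dim H^0(-\lambda - 2)\). A sketch assuming only the standard \({\operatorname{SL}}_2\) fact \(\dim H^0(r) = r+1\) for \(r \geq 0\) (helper name is mine):

```python
# Rank-1 dimension table, r = <λ, α^∨>:
#   r >= 0 : dim H^0 = r + 1, H^1 = 0       (Kempf vanishing)
#   r = -1 : both vanish
#   r <= -2: H^0 = 0, dim H^1 = -r - 1
def h_dims(r):
    h0 = r + 1 if r >= 0 else 0
    h1 = -r - 1 if r <= -2 else 0
    return (h0, h1)

assert h_dims(3) == (4, 0)
assert h_dims(-1) == (0, 0)   # the weight killed in all degrees
assert h_dims(-5) == (0, 4)

# Serre duality H^1(λ) ≅ H^0(-(λ + 2ρ))^∨ with 2ρ = 2 in A_1: the dims agree.
for r in range(-10, 10):
    assert h_dims(r)[1] == h_dims(-r - 2)[0]
```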
Let \(\alpha\in \Delta\) and \(\lambda\in X(T)\), and suppose \(\lambda\) is dominant with respect to \(\alpha\), i.e. \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \geq 0\).
If \(\operatorname{ch}(k) = 0\) then \(\mathop{\mathrm{ind}}_B^{P_\alpha}\lambda = R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} s_\alpha \cdot \lambda\)
If \(\operatorname{ch}(k) = p\) and there exist \(s, m\) with \(0<s<p\) and \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = sp^m - 1\) (Steinberg weights), then \begin{align*} \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} s_\alpha \cdot \lambda .\end{align*}
The proof of this will use a Grothendieck-type spectral sequence of the form \begin{align*} E_2^{i, j} = R^i \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{ R^j \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda} \Rightarrow R^{i+j} \mathop{\mathrm{ind}}_B^G \lambda .\end{align*}
We’ll have a version of Grothendieck vanishing: \begin{align*} R^j \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0 \qquad\text{for } j > \dim P_\alpha/B = 1 .\end{align*}
So the resulting spectral sequence will only be supported on the first two lines, and \(E_3 = E_\infty\). Note the differential will be of bidegree \({\partial}_r \leadsto (r, 1-r)\), and \(E_2\) will look like the following,
Recall that \(H^i(\lambda) \mathrel{\vcenter{:}}= R^i \mathop{\mathrm{ind}}_B^G \lambda\).
Let \(\alpha\in\Delta\) and \(\lambda \in X(T)\).
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = -1\), then \(H^{i}(\lambda) = 0\) for all \(i\).
If \({\left\langle {\lambda},~{ \alpha {}^{ \vee }} \right\rangle} \geq 0\), then \(H^i(\lambda) = R^i \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{ \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda }\) for all \(i\geq 0\).
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \leq -2\), then \begin{align*} H^i(\lambda) = R^{i-1} \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{ R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda } \qquad \forall i .\end{align*}
Suppose \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \geq 0\). If \(\operatorname{ch}(k) = 0\), or \(\operatorname{ch}(k) = p> 0\) and \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = sp^n - 1\), then \begin{align*} H^i(\lambda) = H^{i+1}(s_\alpha\cdot \lambda) .\end{align*}
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} = -1\), then \(R^j\mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0\) for all \(j\). But these are the “coefficients” appearing in the spectral sequence, so \(E_2^{i, j} = 0\) for all \(i, j\), and thus \(H^i(\lambda) = R^i \mathop{\mathrm{ind}}_B^G \lambda = 0\) for all \(i\).
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \geq 0\), then \(R^j \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0\) for all \(j>0\). Thus only the bottom row survives and the spectral sequence degenerates on page 2, giving \(E_2^{i, 0} = R^i \mathop{\mathrm{ind}}_B^G \lambda\), where the LHS is equal to \(R^i \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{\mathop{\mathrm{ind}}_B^{P_\alpha} \lambda }\).
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \leq -2\), then \(R^i \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = 0\) for \(i\neq 1\), so only the row \(j=1\) survives. Then \begin{align*} R^{i-1} \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{ R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda} = R^i \mathop{\mathrm{ind}}_B^G \lambda ,\end{align*} so there is some dimension shifting.
If \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \geq 0\), then \begin{align*} H^i(\lambda) &= R^i \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{ \mathop{\mathrm{ind}}_B^{P_\alpha} \lambda } && \text{by (b)}\\ &= R^i \mathop{\mathrm{ind}}_{P_\alpha}^G \qty{ R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} s_\alpha\cdot \lambda } && \text{by the corollary}\\ &= H^{i+1}(s_\alpha\cdot \lambda) && \text{by (c) applied to } s_\alpha\cdot\lambda .\end{align*}
We can then check that \begin{align*} s_\alpha \cdot \lambda &= \lambda - {\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle}\alpha \\ &= \lambda - \qty{ {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} + 1 }\alpha && \text{using } {\left\langle {\rho},~{\alpha {}^{ \vee }} \right\rangle} = 1 \\ \\ \implies {\left\langle {s_\alpha \cdot \lambda},~{\alpha {}^{ \vee }} \right\rangle} &= {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} - \qty{ {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}+1 }{\left\langle {\alpha},~{\alpha {}^{ \vee }} \right\rangle} \\ &= {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} - \qty{ {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle}+1 }2 \\ &= -{\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} - 2 \\ &\leq -2 .\end{align*}
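The identity \({\left\langle {s_\alpha \cdot \lambda},~{\alpha {}^{ \vee }} \right\rangle} = -{\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} - 2\) can also be verified numerically, e.g. in type \(A_2\) using ε-coordinates where every root has squared length 2 (a sketch, names mine):

```python
# Type A_2 in ε-coordinates: simple roots α_1 = e1 - e2, α_2 = e2 - e3 in R^3;
# every root has (α, α) = 2, so <λ, α^∨> = (λ, α), and ρ = (1, 0, -1).
RHO = (1, 0, -1)
ALPHAS = [(1, -1, 0), (0, 1, -1)]

def pair(lam, alpha):
    return sum(x * y for x, y in zip(lam, alpha))

def s_dot(lam, i):
    """Dot action of s_{α_i}: swap coordinates i, i+1 of λ + ρ, subtract ρ."""
    v = [x + r for x, r in zip(lam, RHO)]
    v[i], v[i + 1] = v[i + 1], v[i]
    return tuple(x - r for x, r in zip(v, RHO))

# <s_α · λ, α^∨> = -<λ, α^∨> - 2, using <ρ, α^∨> = 1 for simple α:
for a in range(-3, 4):
    for b in range(-3, 4):
        lam = (a, b, 0)
        for i, alpha in enumerate(ALPHAS):
            assert pair(s_dot(lam, i), alpha) == -pair(lam, alpha) - 2
```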
Now define \begin{align*} \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{{\mathbb{Z}}} &\mathrel{\vcenter{:}}= \left\{{ \lambda \in X(T) {~\mathrel{\Big|}~}0 \leq {\left\langle {\lambda+\rho},~{\beta {}^{ \vee }} \right\rangle} \,\forall \beta \in \Phi^+ }\right\} \qquad\text{ if } \operatorname{ch}(k) = 0 \\ &\mathrel{\vcenter{:}}= \left\{{ \lambda \in X(T) {~\mathrel{\Big|}~}0 \leq {\left\langle {\lambda+\rho},~{\beta {}^{ \vee }} \right\rangle} \leq \operatorname{ch}(k) \,\forall \beta \in \Phi^+ }\right\} \qquad\text{if } \operatorname{ch}(k) = p .\end{align*}
Idea:
If \(\lambda \in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\) and \(\lambda \not\in X(T)_+\), then \(H^0(w\cdot \lambda) = 0\).
If \(\lambda \in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\cap X(T)_+\), then for all \(w\in W\), \begin{align*} H^i(w\cdot \lambda) = \begin{cases} H^0(\lambda) & i= \ell(w) \\ 0 & \text{otherwise} \end{cases} .\end{align*}
Note that this covers everything in the \(\operatorname{ch}(k) = 0\) case, but only gives the following hexagon in the \(\operatorname{ch}(k) = p\) case:
Open Problem: Determine \(\operatorname{ch}H^i(\lambda)\) for \(\lambda\in X(T)\) in characteristic \(p>0\).
Andersen provided necessary and sufficient conditions for \(H^1(\lambda) \neq 0\) and computed \(\mathop{\mathrm{Soc}}_G H^1(\lambda)\).
Recall the Bott-Borel-Weil theorem: in characteristic zero, we’re looking at the closure of the region containing the fundamental region \(C_{\mathbb{Z}}\):
If \(\lambda \in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\) and \(\lambda \not\in X(T)_+\) then \(H^0(w\cdot \lambda) = 0\).
If \(\lambda\in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\cap X(T)_+\) then for all \(w\in W\), we have \begin{align*} H^i(w\cdot \lambda) = \begin{cases} H^0(\lambda)& i = \ell(w) \\ 0 & \text{otherwise} \end{cases} .\end{align*}
For (a): we use induction on \(\ell(w)\). For \(\ell(w) = 0\), we have \(w = \operatorname{id}\). Let \(\lambda \in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\) and \(\lambda\not\in X(T)_+\), so \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} < 0\) for some \(\alpha \in \Delta\). Then \begin{align*} 0 &\leq {\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle} \\ &= {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} + 1 \\ \implies {\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} &= -1 .\end{align*} Applying the previous proposition, we get \(H^0(\lambda) = 0\).
For the base case \(w=\operatorname{id}\), this follows from Kempf vanishing. Assuming the result holds for any word of length \(l<\ell(w)\), if \(\ell(w) > 0\), there exists some simple reflection \(s_\alpha\) for \(\alpha\in\Delta\) such that \(\ell(s_\alpha w) = \ell(w) - 1\). Moreover, \(w^{-1}(\alpha) \in -\Phi^+\), so set \(\beta = -w^{-1}(\alpha) \in \Phi^+\). We can then make the following computation:
\begin{align*}
{\left\langle {(s_\alpha w) \cdot \lambda},~{\alpha {}^{ \vee }} \right\rangle}
&= {\left\langle {(s_\alpha w)(\lambda+\rho) - \rho},~{\alpha {}^{ \vee }} \right\rangle} \\
&= {\left\langle {(s_\alpha w)(\lambda+\rho)},~{\alpha {}^{ \vee }} \right\rangle} - 1 \\
&= {\left\langle {w(\lambda+\rho)},~{s_\alpha \alpha {}^{ \vee }} \right\rangle} - 1 \\
&= - {\left\langle {w(\lambda+\rho)},~{\alpha {}^{ \vee }} \right\rangle} - 1 \\
&= {\left\langle {\lambda + \rho},~{-w^{-1}\alpha {}^{ \vee }} \right\rangle} - 1 \\
&= {\left\langle {\lambda + \rho},~{\beta {}^{ \vee }} \right\rangle} - 1 \\
&\geq -1
\end{align*}
and \({\left\langle {(s_\alpha w)\cdot \lambda},~{ \alpha {}^{ \vee }} \right\rangle} < p\) since \(\lambda\in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\). Note that we’ve used the fact that the inner product is \(W{\hbox{-}}\)invariant.
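The computation above is purely formal once we set \(\beta = -w^{-1}(\alpha)\); here is a brute-force verification in type \(A_2\), where \(W = S_3\) permutes ε-coordinates (a sketch with my own helper names):

```python
from itertools import permutations

# Type A_2 in ε-coordinates: W = S_3 permutes the coordinates of R^3;
# simple roots α_1 = e1 - e2, α_2 = e2 - e3; ρ = (1, 0, -1); and since
# (α, α) = 2 for every root, <λ, α^∨> = (λ, α) is the ordinary dot product.
RHO = (1, 0, -1)
ALPHAS = [(1, -1, 0), (0, 1, -1)]

def pair(lam, alpha):
    return sum(x * y for x, y in zip(lam, alpha))

def act(w, v):
    # w permutes coordinates: (w v)_{w(j)} = v_j
    out = [0, 0, 0]
    for j in range(3):
        out[w[j]] = v[j]
    return tuple(out)

def inverse(w):
    inv = [0, 0, 0]
    for j in range(3):
        inv[w[j]] = j
    return tuple(inv)

def compose(u, w):
    # (u ∘ w)(j) = u(w(j))
    return tuple(u[w[j]] for j in range(3))

def dot_action(w, lam):
    # w · λ = w(λ + ρ) - ρ
    shifted = act(w, tuple(x + r for x, r in zip(lam, RHO)))
    return tuple(x - r for x, r in zip(shifted, RHO))

def simple_reflection(i):
    perm = [0, 1, 2]
    perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return tuple(perm)

# Check <(s_α w) · λ, α^∨> = <λ + ρ, β^∨> - 1 with β = -w^{-1}(α),
# for every w in S_3, every simple α, and a grid of weights λ:
for w in permutations(range(3)):
    for i, alpha in enumerate(ALPHAS):
        beta = tuple(-x for x in act(inverse(w), alpha))
        for a in range(-2, 3):
            for b in range(-2, 3):
                lam = (a, b, 0)
                lhs = pair(dot_action(compose(simple_reflection(i), w), lam), alpha)
                rhs = pair(tuple(x + r for x, r in zip(lam, RHO)), beta) - 1
                assert lhs == rhs
```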
Now if \({\left\langle {(s_\alpha w)\cdot \lambda},~{ \alpha {}^{ \vee }} \right\rangle} \geq 0\), we can apply the prior proposition part (d). Here we use the fact that \(\mathop{\mathrm{ind}}_B^{P_\alpha}(s_\alpha w)\lambda\) is simple. Applying the inductive hypothesis yields \begin{align*} H^i(s_\alpha - \lambda) = H^{i+1}(w\cdot \lambda) .\end{align*}
Now if \({\left\langle {s_\alpha w \cdot \lambda},~{\alpha {}^{ \vee }} \right\rangle} = -1\), then \begin{align*} -1 &= {\left\langle {\lambda + \rho},~{\beta {}^{ \vee }} \right\rangle} - 1 \\ \implies {\left\langle {\lambda + \rho},~{\beta {}^{ \vee }} \right\rangle} &= 0 .\end{align*}
Then applying (a) yields \(H^1(w\cdot \lambda) = 0\).
Let \(P\) be a parabolic subgroup, i.e. \(P_J = P \mathrel{\vcenter{:}}= L_J \rtimes U_J\) for some \(J\subseteq \Delta\). Set \(n(P) = {\left\lvert {\Phi^+} \right\rvert} - {\left\lvert {\Phi^+_J} \right\rvert}\).
Let \(\Phi = A_4\), which has ten positive roots:
Taking \(J\) with \(\Phi_J\) of type \(A_2\), so \({\left\lvert {\Phi^+_J} \right\rvert} = 3\), we obtain \(n(P) = 10 - 3 = 7\).
\begin{align*} R^i \mathop{\mathrm{ind}}_P^G M = 0 \qquad \text{for } i > n(P) .\end{align*}
\begin{align*} \qty{ R^i \mathop{\mathrm{ind}}_P^G M } {}^{ \vee }\cong R^{n(P) -i} \mathop{\mathrm{ind}}_P^G \qty{ M {}^{ \vee }\otimes(-2\rho_P) } .\end{align*} where \begin{align*} \rho_P \mathrel{\vcenter{:}}={1\over 2}\sum_{\beta \in \Phi^+ \setminus\Phi_J} \beta \end{align*}
Take \(P = B\) and \(M = \lambda\). Then \(\lambda {}^{ \vee }= -\lambda\), so \begin{align*} \qty{ R^i \mathop{\mathrm{ind}}_B^G \lambda } {}^{ \vee }\cong R^{{\left\lvert {\Phi^+} \right\rvert} -i} \mathop{\mathrm{ind}}_B^G (- \lambda - 2\rho) .\end{align*} From this we can conclude \begin{align*} H^i(\lambda) = H^{n-i} (-\lambda - 2\rho) {}^{ \vee } ,\end{align*} where \(n = {\left\lvert {\Phi^+} \right\rvert}\).
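In the rank-one case this recovers the classical duality for line bundles on \(\mathbb{P}^1\). The following is a sketch under the identification \(G/B \cong \mathbb{P}^1\) for \(G = {\operatorname{SL}}_2\), where a weight \(m\in{\mathbb{Z}}\) corresponds to \(\mathcal{O}(m)\) and \(n = 1\):

```latex
% SL_2: n = |\Phi^+| = 1 and \rho = 1, so the duality reads
H^0(m) \cong H^1(-m-2)^\vee \qquad (m \geq 0),
% i.e. H^1(\mathbb{P}^1, \mathcal{O}(-m-2)) \cong H^0(\mathbb{P}^1, \mathcal{O}(m))^\vee,
% with both sides of dimension m + 1; here \mathcal{O}(-2\rho) = \omega_{\mathbb{P}^1}.
```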
Let \(\lambda \in X(T)_+ \cap\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\) be a dominant weight. Then
The irreducible representations are given by \(L(\lambda) = H^0(\lambda)\).
\(\operatorname{Ext} _G^1(L(\lambda), L(\mu)) = 0\) for all \(\lambda, \mu\) in \(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\).
If \(\operatorname{char}(k) = 0\), so \(X(T)_+ \subset \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\), then all \(G{\hbox{-}}\)modules are completely reducible.
Note that the longest element takes positive roots to negative roots, so \(w_0 \rho = - \rho\), and moreover \(-w_0(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}) = \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}}\). We also have \begin{align*} w_0 \cdot ( -w_0 \lambda) &= w_0 (-w_0 \lambda + \rho) - \rho \\ &= -\lambda + w_0 \rho - \rho \\ &= -\lambda - 2\rho .\end{align*} By Serre duality, if we take the Weyl module we obtain \begin{align*} V(-w_0 \lambda) &\mathrel{\vcenter{:}}= H^0(\lambda) {}^{ \vee }\\ &= H^n(-\lambda - 2\rho) \\ &= H^n(w_0 \cdot (-w_0 \lambda)) \\ &= H^0(-w_0 \lambda) \qquad\text{by Bott-Borel-Weil} ,\end{align*} where we’ve used that \(\ell(w_0) = {\left\lvert {\Phi^+} \right\rvert} = n\). We know that \(L(-w_0 \lambda) = \mathop{\mathrm{Soc}}H^0(-w_0 \lambda)\), while \(V(-w_0 \lambda) \twoheadrightarrow L(-w_0 \lambda)\) with \(L(-w_0\lambda)\) the head. Since \(V(-w_0\lambda) = H^0(-w_0\lambda)\), the simple module \(L(-w_0\lambda)\) lies in both the socle and the head, so this splits; by indecomposability we must have \(L(-w_0 \lambda) = H^0(-w_0 \lambda) = V(-w_0 \lambda)\). So we can conclude \begin{align*} L(\mu) = H^0(\mu) = V(\mu) \qquad \forall \mu \in X(T)_+ \cap\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu_{\mathbb{Z}} .\end{align*}
Suppose \(\operatorname{Ext} _G^1(L(\lambda), L(\mu)) \neq 0\). A nonsplit extension would force \(L(\lambda)\) to appear in \(H^0(\mu) / \mathop{\mathrm{Soc}}_G H^0(\mu)\) or \(L(\mu)\) to appear in \(H^0(\lambda) / \mathop{\mathrm{Soc}}_G H^0(\lambda)\); but both quotients vanish by (a), which forces \(\operatorname{Ext} _G^1(L(\lambda), L(\mu)) = 0\), a contradiction.
Part (c) follows from part (b).
Problem: Determine \(\operatorname{ch}H^0 \lambda\) for \(\lambda \in X(T)_+\).
Solution: Let \(A(\lambda) = \sum_{w\in W} \operatorname{sgn}(w) e^{w\lambda} \in {\mathbb{Z}}[X(T)]\), where we sum over the usual Weyl group and not the affine Weyl group, taken as a formal sum in the group algebra on the weight lattice. We can then state Weyl’s character formula: \begin{align*} \operatorname{ch}H^0(\lambda) = {A(\lambda + \rho) \over A(\rho)} \qquad \text{for }\lambda \in X(T)_+ .\end{align*} This is a quotient of formal sums, so it’s surprising that the bottom term even divides the top. But there is a great deal of cancellation; we’ll see this in examples such as \(\operatorname{GL}_3\).
Let \(M\) be a \(T{\hbox{-}}\)module, then define the character \begin{align*} \operatorname{ch}M\mathrel{\vcenter{:}}=\sum_{\mu\in X(T)} \qty{\dim M_\mu} e^\mu \quad \in {\mathbb{Z}}[X(T)] .\end{align*}
We then define the Euler characteristic \begin{align*} \chi(M) \mathrel{\vcenter{:}}=\sum_{i\geq 0} (-1)^i \operatorname{ch}H^i(M) .\end{align*} Note that by Grothendieck vanishing, \(H^i(M) = 0\) for \(i > {\left\lvert {\Phi^+} \right\rvert} = \dim(G/B)\), so this is a finite sum. In fact, if \(M\) is a \(G{\hbox{-}}\)module, then this is \(W{\hbox{-}}\)invariant and thus in fact \(\chi(M) \in {\mathbb{Z}}[X(T)]^W\).
Today:
Weyl’s character formula
Strong linkage
Translation functors
Recall that we defined \begin{align*} \operatorname{ch}(M) &\mathrel{\vcenter{:}}=\sum_{\mu \in X(T)} \qty{\dim M_\mu} e^{\mu} \in {\mathbb{Z}}[X(T)]\\ \chi(M) &\mathrel{\vcenter{:}}=\sum_{i\geq 0} (-1)^i \operatorname{ch}H^i(M) \in {\mathbb{Z}}[X(T)]^W .\end{align*}
where \(H^i(M) = R^i \mathop{\mathrm{ind}}_B^G M\), and \(H^i(M) =0\) for \(i> \dim G/B = {\left\lvert {\Phi^+} \right\rvert}\).
Note that the Euler characteristic is additive on SESs: if \(0\to A\to B\to C\to 0\) is exact, then \(\chi(B) = \chi(A) + \chi(C)\). It is also multiplicative wrt the tensor product: \(\chi(A\otimes B) = \chi(A) \chi(B)\).
If \(\lambda \in X(T)_+\), then \(\chi(\lambda) = \operatorname{ch}H^0(\lambda) = \operatorname{ch}V(\lambda)\).
The set \(\left\{{\operatorname{ch}L(\lambda) {~\mathrel{\Big|}~}\lambda\in X(T)_+}\right\}\) is a basis for \({\mathbb{Z}}[X(T)]^W\).
If \(\lambda \in X(T)\) and \(\sum a_\mu e^\mu \in {\mathbb{Z}}[X(T)]^W\), then there is a formula: \begin{align*} \chi(\lambda) \qty{ \sum_\mu a_\mu e^\mu } = \sum_\mu a_\mu \chi(\lambda + \mu) .\end{align*}
Let \begin{align*} \operatorname{Sym}(\mu) \mathrel{\vcenter{:}}=\sum_{\nu \in W\mu} e^\nu \end{align*} be the sum over the \(W\) orbit of \(\mu\). This is clearly \(W{\hbox{-}}\)invariant, so \(\operatorname{Sym}(\mu) \in {\mathbb{Z}}[X(T)]^W\). Since every \(\nu \in X(T)\) is \(W{\hbox{-}}\)conjugate to a unique dominant weight, the set \(\left\{{\operatorname{Sym}(\mu) {~\mathrel{\Big|}~}\mu \in X(T)_+}\right\}\) is linearly independent and spans, hence is a basis for \({\mathbb{Z}}[X(T)]^W\).
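As a quick sanity check, orbit sums for \(\operatorname{GL}_3\) (with \(W = S_3\) permuting coordinates of weights in \({\mathbb{Z}}^3\)) can be computed directly; this is a minimal sketch in our own encoding, not notation from the course:

```python
from itertools import permutations

def orbit_sum(mu):
    # Sym(mu): the W-orbit sum of a GL_3 weight mu in Z^3, with W = S_3
    # permuting coordinates; formal sums are dicts weight -> coefficient
    return {tuple(mu[i] for i in p): 1 for p in permutations(range(3))}

# A regular weight has a full orbit of size |S_3| = 6 ...
print(len(orbit_sum((2, 1, 0))))     # 6
# ... while a weight with repeated coordinates has a smaller orbit.
print(sorted(orbit_sum((1, 0, 0))))  # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```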
Let \(\lambda \in X(T)_+\), then
\begin{align*} \operatorname{ch}L(\lambda) = \operatorname{Sym}(\lambda) + \sum_{\substack{\mu < \lambda \\ \mu \in X(T)_+} } a_\mu \operatorname{Sym}(\mu) .\end{align*} Thus the transition matrix is unipotent and upper-triangular, so \(\left\{{\operatorname{ch}L(\lambda) {~\mathrel{\Big|}~}\lambda \in X(T)_+}\right\}\) is a basis for \({\mathbb{Z}}[X(T)]^W\).
Since \(\left\{{\operatorname{ch}L(\lambda) {~\mathrel{\Big|}~}\lambda\in X(T)_+}\right\}\) forms a basis for \({\mathbb{Z}}[X(T)]^W\), there is some \(G{\hbox{-}}\)module \(V\) such that \(\sum a_\mu e^\mu = \pm \operatorname{ch}V\). We can consider a composition series of \(V\otimes\lambda\) as a \(B{\hbox{-}}\)module, where the factor \(\mu \otimes\lambda\) appears \(a_\mu = \dim V_\mu\) times. We now compute in two different ways: \begin{align*} \chi(V\otimes\lambda) &= \operatorname{ch}(V) \chi(\lambda) && \text{using the formula from earlier} \\ &= \chi(\lambda) \qty{ \sum_\mu a_\mu e^\mu } .\end{align*}
On the other hand, \begin{align*} \chi(V\otimes\lambda) &= \sum a_\mu \chi(\lambda + \mu) .\end{align*}
The formula used above was \begin{align*} R^i \mathop{\mathrm{ind}}_B^G (V\otimes\lambda) = V\otimes R^i \mathop{\mathrm{ind}}_B^G(\lambda) .\end{align*}
For any \(\alpha\in\Delta\) and \(\lambda \in X(T)\) with \({\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle} \geq 0\), we have an analog of Serre duality: \begin{align*} \operatorname{ch}\mathop{\mathrm{ind}}_B^{P_\alpha} \lambda = \operatorname{ch}R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} s_\alpha \cdot \lambda ,\end{align*} i.e. the induced module coincides with the Weyl module.
By definition of the dot action, we have \begin{align*} s_\alpha \cdot \lambda = s_\alpha(\lambda + \rho) - \rho .\end{align*}
As in previous calculations, we have
\begin{align*} {\left\langle {s_\alpha\cdot\lambda},~{\alpha {}^{ \vee }} \right\rangle} = -{\left\langle {\lambda+\rho},~{\alpha {}^{ \vee }} \right\rangle} - 1 \leq - 1 .\end{align*}
As in the analysis of Bott-Borel-Weil, the spectral sequence collapses to give \begin{align*} H^i(s_\alpha \cdot\lambda) &= R^{i-1} \mathop{\mathrm{ind}}_{P_\alpha}^G \qty( R^1 \mathop{\mathrm{ind}}_B^{P_\alpha} s_\alpha\cdot\lambda ) \\ H^i(\lambda) &= R^{i} \mathop{\mathrm{ind}}_{P_\alpha}^G \qty( \mathop{\mathrm{ind}}_B^{P_\alpha}\lambda ) .\end{align*} Note that the two modules being induced on the RHS have the same character, by the proposition above.
We can thus define a modified Euler characteristic \begin{align*} \phi(N) = \sum_{i\geq 0} (-1)^i \operatorname{ch}R^i \mathop{\mathrm{ind}}_{P_\alpha}^G(N) .\end{align*}
and obtain \(\chi(\lambda) = -\chi(s_\alpha \cdot \lambda)\). The same argument works for \({\left\langle {\lambda + \rho},~{\alpha {}^{ \vee }} \right\rangle} < 0\).
\begin{align*} \lambda \in X(T) \implies \chi(\lambda) = -\chi(s_\alpha \cdot \lambda) .\end{align*}
\begin{align*} \chi(w\cdot \lambda) = \operatorname{sgn}(w) \chi(\lambda) && \operatorname{sgn}(w) \mathrel{\vcenter{:}}=(-1)^{\ell(w)} ,\end{align*} with the convention that \(\chi(0) = e^0 = 1\).
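For example, take \(G = {\operatorname{SL}}_2\) with simple root \(\alpha\), identifying a weight \(\lambda\) with \({\left\langle {\lambda},~{\alpha {}^{ \vee }} \right\rangle} \in {\mathbb{Z}}\), so that \(\rho = 1\):

```latex
% rank one: s_\alpha(\lambda) = -\lambda and \langle \rho, \alpha^\vee \rangle = 1, so
s_\alpha \cdot \lambda = s_\alpha(\lambda + \rho) - \rho = -\lambda - 2 ,
% and the corollary reads \chi(-\lambda - 2) = -\chi(\lambda), consistent with
% H^1(-\lambda - 2) \cong H^0(\lambda) for \lambda \geq 0 (Bott-Borel-Weil).
```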
Let \(\lambda \in X(T)\) and \(\sum a_\mu e^\mu \in {\mathbb{Z}}[X(T)]^W\), so (as we proved) \begin{align*} \chi(\lambda) \qty{ \sum_\mu a_\mu e^\mu } = \sum_\mu a_\mu \chi(\lambda + \mu) .\end{align*} In the special case \(\lambda = 0\), we have \(\chi(\lambda) = \chi(0) = e^0\), so we obtain \begin{align*} \sum_\mu a_\mu e^\mu = \sum_\mu a_\mu \chi(\mu) .\end{align*}
Extend scalars to \({\mathbb{Q}}\) by letting \(\lambda \in X(T) \otimes_{\mathbb{Z}}{\mathbb{Q}}\), then define \begin{align*} A(\lambda) \mathrel{\vcenter{:}}=\sum_{w\in W} \operatorname{sgn}(w) e^{w \lambda} \in {\mathbb{Z}}[ X(T) \otimes{\mathbb{Q}}] .\end{align*}
Then
\(w' A(\lambda) = \operatorname{sgn}(w') A(\lambda)\).
\(A(\mu) A(\lambda) \in {\mathbb{Z}}[X(T) \otimes{\mathbb{Q}}]^W\).
Proof of 1: exercise.
We can compute \begin{align*} w(A(\mu) A(\lambda) ) &= w A(\mu) w A(\lambda) \\ &= \operatorname{sgn}(w) A(\mu) \operatorname{sgn}(w) A(\lambda) \\ &= \operatorname{sgn}(w)^2 A(\mu) A(\lambda) \\ &= A(\mu) A(\lambda) .\end{align*}
Let \(\lambda \in X(T)\) be any weight, then \begin{align*} \chi(\lambda) = { A(\lambda + \rho) \over A(\rho) } ,\end{align*} where \(\rho = {1\over 2} \sum_{\alpha \in \Phi^+} \alpha\).
Note: this says that one formal sum divides another.
A corollary is an analog of Weyl’s dimension formula:

:::{.corollary}
Let \(\lambda \in X(T)_+\) be a dominant weight. Then \begin{align*} \operatorname{ch}H^0(\lambda) = { A(\lambda + \rho) \over A(\rho) } .\end{align*}
:::
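The rank-one case can be checked mechanically. Below is a minimal sketch in our own encoding (formal sums in \({\mathbb{Z}}[X(T)]\) as Python dicts from integer weights to coefficients): for \(A_1\) we have \(W = \{1, s_\alpha\}\) with \(s_\alpha(n) = -n\) and \(\rho = 1\), so \(A(n) = e^n - e^{-n}\), and the formula predicts \(\operatorname{ch}H^0(m) = e^m + e^{m-2} + \cdots + e^{-m}\):

```python
def A(n):
    # A(n) = sgn(1) e^n + sgn(s) e^{s(n)} = e^n - e^{-n} for A_1 (n > 0)
    return {n: 1, -n: -1}

def mult(f, g):
    # multiply formal sums in Z[X(T)]: convolve exponents, drop zero coefficients
    out = {}
    for a, ca in f.items():
        for b, cb in g.items():
            out[a + b] = out.get(a + b, 0) + ca * cb
    return {k: v for k, v in out.items() if v}

def ch_H0(m):
    # predicted character of H^0(m): weights m, m-2, ..., -m, each once
    return {m - 2 * i: 1 for i in range(m + 1)}

# Weyl character formula for A_1: A(m + rho) = ch H^0(m) * A(rho) with rho = 1
for m in range(8):
    assert mult(ch_H0(m), A(1)) == A(m + 1)
```

In particular the denominator \(A(1)\) really does divide \(A(m+1)\), as the formula claims.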
Big question: suppose \(k = \mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu_p\). What is \(\operatorname{ch}L(\lambda)\) for \(\lambda \in X(T)_+\)? We know this for \(p\gg 0\), but in general it’s wide open. There are expressions in terms of “\(p{\hbox{-}}\)bases,” but these are hard to compute. There are only recursive formulas, none that are closed (and closed formulas may not exist).
Next time: the proof of Weyl’s character formula.
Idea of the proof: we’ll have some \(\chi(\lambda) = \sum_\mu a_\mu e^\mu\). We’ll also have \(A(\rho) \qty{ \sum_\mu a_\mu e^\mu } = A(\lambda + \rho)\). This will reduce to equating coefficients of two formal sums, which will result in a system of linear equations.
Review: suppose \(\sum a_\mu e^\mu \in {\mathbb{Z}}[X(T)]^W\), i.e. it is invariant under the Weyl group. In this case, we have an equality \begin{align*} \sum a_\mu e^\mu = \sum a_\mu \chi(\mu) ,\end{align*} where \(\chi(\mu) = \sum_{i\geq 0} (-1)^i \operatorname{ch}H^i(\mu)\). We also had a relation \begin{align*} \chi(w\cdot \mu) = (-1)^{\ell(w)} \chi(\mu) = \operatorname{sgn}(w) \chi(\mu) .\end{align*}
Now let \(\lambda \in X(T) \otimes{\mathbb{Q}}\), then we defined \begin{align*} A(\lambda) = \sum_{w\in W} \operatorname{sgn}(w) e^{w\lambda} \in {\mathbb{Z}}[X(T) \otimes{\mathbb{Q}}] .\end{align*}
We obtain
\(w' A(\lambda) = \operatorname{sgn}(w') A(\lambda)\)
\(A(\mu) A(\lambda) \in {\mathbb{Z}}[X(T) \otimes{\mathbb{Q}}]^W\).
\begin{align*} \lambda\in X(T) \implies \chi(\lambda) = {A(\lambda + \rho) \over A(\rho)} .\end{align*}
As a special case when \(\lambda \in X(T)_+\), all higher sheaf cohomology vanishes and thus \begin{align*} \operatorname{ch}H^0(\lambda) = {A(\lambda + \rho) \over A(\rho)} .\end{align*}
We first perform a reindexing step: \begin{align*} \sum_{w, w'} \operatorname{sgn}(w\cdot w') e^{w(\lambda+\rho) + w'\rho} &= \sum_{w, w'} \operatorname{sgn}(w^{-1} w') e^{w(\lambda+\rho) + w'\rho} \\ &= \sum_{w, y} \operatorname{sgn}(y) e^{w(\lambda+\rho) + wy\rho} && y = w^{-1}w' \implies w' = wy \\ &= \sum_{w, y} \operatorname{sgn}(y) e^{w(\lambda + \rho + y\rho)} .\end{align*}
Now let \(\lambda\in X(T)\), we then compute \begin{align*} A(\lambda + \rho) A(\rho) &= \qty{ \sum_{w} \operatorname{sgn}(w) e^{w(\lambda + \rho)} } \qty{ \sum_{w'} \operatorname{sgn}(w') e^{w'\rho} } \\ &= \sum_{w, w'} \operatorname{sgn}(ww') e^{w(\lambda + \rho) + w'\rho} \\ &= \sum_{w, w'} \operatorname{sgn}(w') e^{w(\lambda + \rho + w'\rho)} && \text{from reindexing above, setting } y\mathrel{\vcenter{:}}= w' \\ &= \sum_{w, w'} \operatorname{sgn}(w') \chi\qty{w(\lambda + \rho + w'\rho)} \\ &= \sum_{w, w'} \operatorname{sgn}(w') \chi\qty{w\cdot (\lambda + w'\rho + w^{-1} \rho)} && \text{definition of dot action}\\ &= \sum_{w, w'} \operatorname{sgn}(ww') \chi\qty{\lambda + w'\rho + w\rho } && \text{swapping } w\leadsto w^{-1} .\end{align*}
Note that \(\chi\) can be introduced since \(A(\lambda + \rho)A(\rho) \in {\mathbb{Z}}[X(T) \otimes{\mathbb{Q}}]^{W}\).
We can now conclude that \begin{align*} A(\rho)^2 = \sum_{w, w'} \operatorname{sgn}(ww') e^{w\rho + w' \rho} .\end{align*} Since this quantity is \(W{\hbox{-}}\)invariant (being a square of an alternating sum), we can move the \(\chi\) inside: \begin{align*} \chi(\lambda) \qty{ \sum a_\mu e^\mu } = \sum a_\mu \chi(\lambda + \mu) \\ \implies \chi(\lambda) A(\rho)^2 = \sum_{w, w'} \operatorname{sgn}(ww') \chi(\lambda + w\rho + w'\rho) ,\end{align*} which is exactly what the first calculation resulted in. So we can conclude \begin{align*} A(\lambda + \rho) A(\rho) = \chi(\lambda) A(\rho)^2 .\end{align*} Note that \(A(\rho) \neq 0\) since \(w\rho \neq \rho\) unless \(w=\operatorname{id}\). Thus we are actually working in \({\mathbb{Z}}[X(T) + {\mathbb{Z}}\rho]\), which is an integral domain, and thus we can apply cancellation laws to obtain \begin{align*} A(\lambda + \rho) = \chi(\lambda) A(\rho) .\end{align*}
Let \(G = \operatorname{GL}_3(k)\), which has a natural 3-dimensional representation \(V\). Let \(\lambda = (1,0,0)\), so \(L(1,0,0) = V\). This is a polynomial representation, so by permuting we can obtain \begin{align*} \operatorname{ch}V = e^{(1,0,0)} + e^{(0,1,0)} + e^{(0,0,1)} = \chi(1,0,0) ,\end{align*} where the last equality holds since \(\lambda\) is dominant.
We can write \(\rho = (2,1,0)\), since the fundamental weights are given by \(\omega_1 = (1,0,0)\) and \(\omega_2 = (1,1,0)\) (we’re in an \(A_2\) situation). We then obtain \(\lambda + \rho = (3,1,0)\), and since \(W= S_3\), \begin{align*} A(\lambda + \rho) = \sum_{w\in W} \operatorname{sgn}(w) e^{w(\lambda + \rho)} = e^{(3,1,0)} - e^{(1,3,0)} + e^{(1,0,3)} - e^{(0,1,3)} + e^{(0,3,1)} - e^{(3,0,1)} .\end{align*}
Similarly, \begin{align*} A(\rho) = e^{(2,1,0)} - e^{(1,2,0)} + e^{(1,0,2)} - e^{(0,1,2)} + e^{(0,2,1)} - e^{(2,0,1)} .\end{align*}
We can then compute \begin{align*} \chi(1,0,0) A(\rho) = &e^{(3,1,0)} - e^{(2,2,0)} + e^{(2,0,2)} -e^{(1,1,2)} + e^{(1,2,1)} - e^{(3,0,1)} + \\ &e^{(2,2,0)} - e^{(1,3,0)} + e^{(1,1,2)} - e^{(0,2,2)} + e^{(0,3,1)} - e^{(2,1,1)} + \\ &e^{(2,1,1)} - e^{(1,2,1)} + e^{(1,0,3)} - e^{(0,1,3)} + e^{(0,2,2)} - e^{(2,0,2)} .\end{align*}
After cancellation, you’ll find that this expression is equal to \(A(\lambda + \rho)\).
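This cancellation can also be verified mechanically; the following is a minimal sketch in our own encoding (weights as tuples in \({\mathbb{Z}}^3\), with \(S_3\) permuting coordinates), checking \(\chi(1,0,0)\, A(\rho) = A(\lambda+\rho)\) as formal sums:

```python
from itertools import permutations

def sign(p):
    # sign of a permutation of (0, 1, 2), via inversion count
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

def A(mu):
    # A(mu) = sum over w in S_3 of sgn(w) e^{w(mu)}, weights as tuples in Z^3
    out = {}
    for p in permutations(range(3)):
        key = tuple(mu[i] for i in p)
        out[key] = out.get(key, 0) + sign(p)
    return {k: v for k, v in out.items() if v}

def mult(f, g):
    # multiply formal sums: convolve exponents componentwise, drop zeros
    out = {}
    for a, ca in f.items():
        for b, cb in g.items():
            key = tuple(x + y for x, y in zip(a, b))
            out[key] = out.get(key, 0) + ca * cb
    return {k: v for k, v in out.items() if v}

chi = {(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1}  # ch of the natural rep of GL_3
assert mult(chi, A((2, 1, 0))) == A((3, 1, 0))    # chi(1,0,0) A(rho) = A(lambda + rho)
```

Of the eighteen terms in the product, twelve cancel in six pairs, leaving the six alternating terms of \(A(\lambda+\rho)\).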
We’ll consider representations in characteristic zero, so we can take \(k={\mathbb{C}}\). Let \(G\) be a complex simple group, \({\mathfrak{g}}= \operatorname{Lie}(G)\), \({\mathfrak{t}}\) a maximal torus, \(X\) the weights, and \(X_+\) the dominant weights. We have a correspondence \begin{align*} \left\{{\substack{({\mathfrak{g}}, {\mathfrak{t}})}}\right\} \iff \left\{{\substack{(\Phi, W)}}\right\} \end{align*}
where \(\Phi\) is an irreducible root system and \(W\) is the Weyl group. We’ll have a set of simple roots \(\Delta\subseteq \Phi^+\). For \(\lambda\in X\), we have \begin{align*} Z(\lambda) = U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}}^+)} \lambda \twoheadrightarrow L(\lambda) .\end{align*}
Then \(\lambda \in X_+ \iff L(\lambda)\) is finite dimensional. We have \(W\) acting on \(X\) via reflections, which we can extend to a dot action \begin{align*} w\cdot \lambda = w(\lambda + \rho) - \rho, \hspace{4em} \rho = {1\over 2}\sum_{\alpha\in\Phi^+} \alpha .\end{align*}
We define Category \({\mathcal{O}}\), whose objects are \({\mathfrak{g}}{\hbox{-}}\)modules that are finitely generated, admit a weight space decomposition, and are locally finite with respect to \({\mathfrak{n}}^+\).
Set \(Z(\lambda) = \Delta(\lambda)\), then \begin{align*} [Z(\lambda) : L(\mu)] \neq 0 \implies \lambda \in W\cdot \mu .\end{align*} The LHS is computed by evaluating certain Kazhdan-Lusztig polynomials at \(x=1\).
Let \(\Phi= A_2\), then