Definitions

List of Notation

Useful Facts

SL2 Theory

Definition The group and the algebra:

\[\begin{align*} \operatorname{SL}(n, {\mathbb{C}}) &= \left\{{M \in \operatorname{GL}(n, {\mathbb{C}}) {~\mathrel{\Big|}~}\det(M) = 1}\right\} \\ {\mathfrak{sl}}(n, {\mathbb{C}}) &= \left\{{M \in {\mathfrak{gl}}(n, {\mathbb{C}}) {~\mathrel{\Big|}~}\operatorname{Tr}(M) = 0}\right\} .\end{align*}\]

Generated by \[\begin{align*} x = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} ,\quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} ,\quad y = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \end{align*}\]

with relations

\[\begin{align*} [hx] &= 2x \\ [hy] &= -2y \\ [xy] &= h .\end{align*}\]

Some identifications: \[\begin{align*} \Phi &= A_1 \\ \dim {\mathfrak{h}}&= 1\\ \Lambda &\cong {\mathbb{Z}}\\ \Lambda_r & \cong 2{\mathbb{Z}}\\ \Lambda^+ &= \left\{{0, 1, 2, 3, \cdots}\right\} \\ W &= \left\{{1, s_\alpha}\right\} \quad \lambda \overset{s_\alpha}\iff -\lambda \\ \chi_\lambda = \chi_\mu &\iff \mu = \lambda, -\lambda-2 {\quad \text{(linked)} \quad}\\ \Pi(M(\lambda)) &= \left\{{\lambda, \lambda-2, \cdots}\right\} \\ \rho &= 1 \\ \alpha &= 2 \\ s_\alpha \cdot \lambda &= - \lambda - 2 .\end{align*}\]

For \(\lambda\) dominant integral \[\begin{align*} N(\lambda) &\cong L(-\lambda - 2) \\ \dim L(\lambda) &= \lambda + 1 \\ \Pi(L(\lambda)) &= \left\{{\lambda, \lambda-2, \cdots, -\lambda}\right\} \\ \dim \qty{L(\lambda)}_\mu &= 1 \quad\quad\forall \mu = \lambda-2i .\end{align*}\]


Finite-dimensional irreducible representations (i.e. simple modules) of \({\mathfrak{sl}}(2, {\mathbb{C}})\) are in bijection with dominant integral weights \(n\in \Lambda\), i.e. \(n\in {\mathbb{Z}}^{\geq 0}\), are denoted \(L(n)\), and each admits a basis \(\left\{{v_i{~\mathrel{\Big|}~}0\leq i \leq n}\right\}\) where \[\begin{align*} h \cdot v_{i} &= (n-2 i) v_{i}\\ x \cdot v_{i} &= (n-i+1) v_{i-1}\\ y \cdot v_{i} &= (i+1)v_{i+1} ,\end{align*}\] setting \(v_{-1} = v_{n + 1}=0\) and letting \(v_0\) be the unique (up to scalar) vector in \(L(n)\) annihilated by \(x\).
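
As a sanity check (a sketch of mine, not from the notes; the choice \(n = 4\) is arbitrary), the following builds the matrices of \(x, h, y\) on \(L(n)\) from the formulas above and verifies the \({\mathfrak{sl}}(2)\) relations.

```python
import numpy as np

def sl2_simple_module(n):
    """Matrices of x, h, y on L(n) in the basis v_0, ..., v_n, using
    h.v_i = (n-2i)v_i, x.v_i = (n-i+1)v_{i-1}, y.v_i = (i+1)v_{i+1}."""
    d = n + 1
    H = np.diag([n - 2 * i for i in range(d)]).astype(float)
    X = np.zeros((d, d))
    Y = np.zeros((d, d))
    for i in range(1, d):
        X[i - 1, i] = n - i + 1   # x lowers the index (raises the weight)
    for i in range(d - 1):
        Y[i + 1, i] = i + 1       # y raises the index (lowers the weight)
    return X, H, Y

def bracket(A, B):
    return A @ B - B @ A

X, H, Y = sl2_simple_module(4)  # the 5-dimensional simple module L(4)
assert np.allclose(bracket(H, X), 2 * X)
assert np.allclose(bracket(H, Y), -2 * Y)
assert np.allclose(bracket(X, Y), H)
print("relations [hx]=2x, [hy]=-2y, [xy]=h hold on L(4)")
```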

Characters: \[\begin{align*} \operatorname{ch}M(\lambda) &= \operatorname{ch}L(\lambda) + \operatorname{ch}L(s_\alpha \cdot \lambda) \\ \operatorname{ch}M(s_\alpha \cdot \lambda) &= \operatorname{ch}L(s_\alpha \cdot \lambda) .\end{align*}\]

We can think of this pictorially as the ‘head’ on top of the socle:

\[\begin{align*} M(\lambda) = \frac{L(\lambda)}{L(s_\alpha \cdot \lambda)} .\end{align*}\]

We can invert these formulas (which corresponds to inverting the unitriangular matrix of composition multiplicities):

\[\begin{align*} \operatorname{ch}L(\lambda) &= \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_\alpha \cdot \lambda) \\ \operatorname{ch}L(s_\alpha \cdot \lambda) &= \operatorname{ch}M(s_\alpha \cdot \lambda) .\end{align*}\]

If \(\lambda \not\in\Lambda^+\), then \(\operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda)\) and \(b_{\lambda, 1} = 1, b_{\lambda, s_\alpha} = 0\) are again independent of \(\lambda \in {\mathfrak{h}}^\vee\setminus \Lambda^+\).

SL3

\({\mathfrak{sl}}(3, {\mathbb{C}})\) has root system \(A_2\):


\[\begin{align*} \Phi &= \left\{{\pm \alpha, \pm \beta, \pm\gamma \coloneqq\alpha + \beta}\right\} \\ \Delta &= \left\{{\alpha, \beta}\right\} \\ \Phi^+ &= \left\{{\alpha, \beta, \gamma}\right\} \\ W &= \left\{{1, s_\alpha, s_\beta, s_\alpha s_\beta, s_\beta s_\alpha, w_0 = s_\alpha s_\beta s_\alpha = s_\beta s_\alpha s_\beta}\right\} .\end{align*}\]
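As an illustration (a sketch, not part of the notes), one can generate \(W\) for \(A_2\) on a computer by writing roots in coordinates over the simple roots \(\alpha, \beta\); the \(2\times 2\) matrices below encode \(s_\alpha(\beta) = \alpha + \beta\) and \(s_\beta(\alpha) = \alpha + \beta\).

```python
import numpy as np

# Simple reflections of A2, acting on coordinates over the simple roots (alpha, beta):
# s_alpha: alpha -> -alpha, beta -> alpha + beta;  s_beta: alpha -> alpha + beta, beta -> -beta.
s_a = np.array([[-1, 1], [0, 1]])
s_b = np.array([[1, 0], [1, -1]])

# Generate the group by closing {1} under left multiplication by the generators.
elements = {tuple(map(tuple, np.eye(2, dtype=int)))}
frontier = [np.eye(2, dtype=int)]
while frontier:
    w = frontier.pop()
    for s in (s_a, s_b):
        new = s @ w
        key = tuple(map(tuple, new))
        if key not in elements:
            elements.add(key)
            frontier.append(new)

print(len(elements))                                     # 6
print(np.array_equal(s_a @ s_b @ s_a, s_b @ s_a @ s_b))  # True: both equal w_0
w0 = s_a @ s_b @ s_a
pos_roots = [np.array(v) for v in [(1, 0), (0, 1), (1, 1)]]   # alpha, beta, alpha+beta
print([(w0 @ r).tolist() for r in pos_roots])            # all negative: w_0(Phi^+) = Phi^-
```

The closure has 6 elements, \(s_\alpha s_\beta s_\alpha = s_\beta s_\alpha s_\beta = w_0\), and \(w_0\) sends every positive root to a negative root.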

For \(\lambda\) regular, integral, and antidominant:

Wednesday January 8

Course Website: https://faculty.franklin.uga.edu/brian/math-8030-spring-2020

Chapter Zero: Review

Material can be found in Humphreys 1972.

Practice writing lowercase mathfrak characters!

In this course, we’ll take \(k = {\mathbb{C}}\).

Recall that a Lie Algebra is a vector space \({\mathfrak{g}}\) with a bracket \[\begin{align*} [{\,\cdot\,}, {\,\cdot\,}]: {\mathfrak{g}}\otimes{\mathfrak{g}}\to {\mathfrak{g}} \end{align*}\] satisfying

  1. Bilinearity,
  2. \([x x] = 0\) for all \(x\in{\mathfrak{g}}\),
  3. The Jacobi identity: \([x[yz]] = [[xy]z] + [y[xz]]\).

Note that the last axiom implies that \(x\) acts as a derivation.

Show that \([x y] = -[y x]\).

Hint: Consider \([x+y, x+y]\). Note that the converse holds iff \(\operatorname{char} k \neq 2\).

Show that Lie Algebras never have an identity.

\({\mathfrak{g}}\) is abelian iff \([x y] = 0\) for all \(x,y\in{\mathfrak{g}}\).

There are also the usual notions (defined as for rings/algebras) of subalgebras, ideals, quotients, and homomorphisms.

Given a vector space (possibly infinite-dimensional) over \(k\), then (exercise) \({\mathfrak{gl}}(V) \mathrel{\vcenter{:}}=\mathrm{End}_k(V)\) is a Lie algebra when equipped with \([f g] = f\circ g - g\circ f\).

A representation of \({\mathfrak{g}}\) is a Lie algebra homomorphism \(\phi: {\mathfrak{g}}\to {\mathfrak{gl}}(V)\) for some vector space \(V\).

The adjoint representation is \[\begin{align*} \operatorname{ad}: {\mathfrak{g}}&\to {\mathfrak{gl}}({\mathfrak{g}}) \\ \operatorname{ad}(x)(y) &\mathrel{\vcenter{:}}=[x y] .\end{align*}\]

Representations give \(V\) the structure of a module over \({\mathfrak{g}}\), where \(x\cdot v \mathrel{\vcenter{:}}=\phi(x)(v)\). All of the usual module axioms hold, where now \[\begin{align*} [x y] \cdot v = x\cdot(y\cdot v) - y\cdot(x\cdot v) .\end{align*}\]

The trivial representation \(V = k\) where \(x\cdot a = 0\).

\(V\) is irreducible (or simple) iff \(V\) has exactly two \({\mathfrak{g}}{\hbox{-}}\)invariant subspaces, namely \(0, V\).

\(V\) is completely reducible iff \(V\) is a direct sum of simple modules, and indecomposable iff \(V\) can not be written as \(V = M \oplus N\), a direct sum of proper submodules.

There are several constructions for creating new modules from old ones: submodules and quotients, direct sums, tensor products \(L\otimes M\), and duals \(M^\vee\).

Semisimple Lie Algebras

The derived ideal is given by \({\mathfrak{g}}^{(1)} \mathrel{\vcenter{:}}=[{\mathfrak{g}}{\mathfrak{g}}] \mathrel{\vcenter{:}}={\operatorname{span}}_k\qty{\left\{{[x y] {~\mathrel{\Big|}~}x,y\in{\mathfrak{g}}}\right\}}\).

This is the analog of the commutator subgroup.

\({\mathfrak{g}}\) is abelian iff \({\mathfrak{g}}^{(1)} = \left\{{0}\right\}\), and 1-dimensional algebras are always abelian.

This terminology matches the associative case: if the bracket comes from an associative product via \([x y] \mathrel{\vcenter{:}}= xy - yx\), then \([x y] = 0 \iff xy = yx\).

A lie algebra \({\mathfrak{g}}\) is simple iff the only ideals of \({\mathfrak{g}}\) are \(0, {\mathfrak{g}}\) and \({\mathfrak{g}}^{(1)} \neq \left\{{0}\right\}\).

Note that this rules out the zero algebra, abelian Lie algebras, and in particular 1-dimensional Lie algebras.

The derived series is defined inductively by \({\mathfrak{g}}^{(i+1)} = [{\mathfrak{g}}^{(i)} {\mathfrak{g}}^{(i)}]\), so e.g. \({\mathfrak{g}}^{(2)} = [{\mathfrak{g}}^{(1)} {\mathfrak{g}}^{(1)}]\). \({\mathfrak{g}}\) is said to be solvable if \({\mathfrak{g}}^{(n)} = 0\) for some \(n\).

Abelian implies solvable.

The lower central series of \({\mathfrak{g}}\) is defined by \({\mathfrak{g}}_0 \mathrel{\vcenter{:}}={\mathfrak{g}}\) and \({\mathfrak{g}}_{j+1} \mathrel{\vcenter{:}}=[{\mathfrak{g}}, {\mathfrak{g}}_j]\). The Lie algebra \({\mathfrak{g}}\) is nilpotent if this series terminates at zero.

Note that an element \(x\) of a Lie algebra is ad-nilpotent iff \(\operatorname{ad}x\) is nilpotent as a matrix, i.e. \(\operatorname{ad}(x)^n =0\) for some \(n\). There is a result, Engel’s theorem, which relates these two notions: a Lie algebra is nilpotent iff all of its elements are ad-nilpotent (with potentially different \(n\)s depending on \(x\)).

\({\mathfrak{g}}\) is semisimple (s.s.) iff \({\mathfrak{g}}\) has no nonzero solvable ideals.

Show that simple implies semisimple.

  1. Semisimple algebras \({\mathfrak{g}}\) will usually have solvable subalgebras.
  2. \({\mathfrak{g}}\) is semisimple iff \({\mathfrak{g}}\) has no nonzero abelian ideals.

The Killing form is given by \(\kappa: {\mathfrak{g}}\otimes{\mathfrak{g}}\to k\) where \(\kappa(x, y) = \operatorname{Tr}(\operatorname{ad}x ~\operatorname{ad}y)\), which is a symmetric bilinear form.

It is also associative (invariant): \[\begin{align*} \kappa([x y], z) = \kappa(x, [y z]) .\end{align*}\]
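
For concreteness (a sketch, not in the notes), here is a computation of \(\kappa\) for \({\mathfrak{sl}}(2, {\mathbb{C}})\) directly from the structure constants \([hx]=2x\), \([hy]=-2y\), \([xy]=h\); it recovers \(\kappa(x,y)=4\) and \(\kappa(h,h)=8\), checks nondegeneracy, and spot-checks the associativity identity.

```python
import numpy as np

# Basis order (x, h, y), with structure constants read off from [hx]=2x, [hy]=-2y, [xy]=h.
# bracket_coords[i][j] = coordinates of [e_i, e_j] in the basis (x, h, y).
X, H, Y = 0, 1, 2
bracket_coords = np.zeros((3, 3, 3))
bracket_coords[H, X] = [2, 0, 0];  bracket_coords[X, H] = [-2, 0, 0]
bracket_coords[H, Y] = [0, 0, -2]; bracket_coords[Y, H] = [0, 0, 2]
bracket_coords[X, Y] = [0, 1, 0];  bracket_coords[Y, X] = [0, -1, 0]

# ad(e_i) as a 3x3 matrix: its j-th column is the coordinate vector of [e_i, e_j].
ad = [bracket_coords[i].T for i in range(3)]

kappa = np.array([[np.trace(ad[i] @ ad[j]) for j in range(3)] for i in range(3)])
print(kappa)                 # [[0,0,4],[0,8,0],[4,0,0]], i.e. kappa(x,y)=4, kappa(h,h)=8
print(np.linalg.det(kappa))  # nonzero, so kappa is nondegenerate

def brk(u, v):   # bracket of coordinate vectors
    return sum(u[i] * v[j] * bracket_coords[i, j] for i in range(3) for j in range(3))

def form(u, v):
    return u @ kappa @ v

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))
print(np.isclose(form(brk(u, v), w), form(u, brk(v, w))))  # associativity: True
```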

If \(\beta: V^{\otimes 2} \to k\) is any symmetric bilinear form, then its radical is defined by

\[\begin{align*} {\operatorname{rad}}\beta = \left\{{v\in V {~\mathrel{\Big|}~}\beta(v, w) = 0 ~\forall w\in V}\right\} .\end{align*}\]

A bilinear form \(\beta\) is nondegenerate iff \(\mathrm{rad}\beta = 0\).

\(\mathrm{rad}\kappa {~\trianglelefteq~}{\mathfrak{g}}\) is an ideal, which follows by the above associative property.

\({\mathfrak{g}}\) is semisimple iff \(\kappa\) is nondegenerate.

The standard example of a semisimple lie algebra is

\[\begin{align*} {\mathfrak{g}}= {\mathfrak{sl}}(n, {\mathbb{C}}) \mathrel{\vcenter{:}}=\left\{{x\in {\mathfrak{gl}}(n, {\mathbb{C}}) {~\mathrel{\Big|}~}\operatorname{Tr}(x) = 0 }\right\} \end{align*}\]

From now on, \({\mathfrak{g}}\) will denote a semisimple lie algebra over \({\mathbb{C}}\).

Every finite dimensional representation of a semisimple \({\mathfrak{g}}\) is completely reducible.

In other words, the category of finite-dimensional representations is relatively uninteresting – there are no extensions, so everything is a direct sum. Thus once you classify the simple algebras (which isn’t terribly difficult), you have complete information.

Friday January 10th

Root Space Decomposition

Let \({\mathfrak{g}}\) be a finite dimensional semisimple Lie algebra over \({\mathbb{C}}\). Recall that this means it has no nonzero solvable ideals. A more useful characterization is that the Killing form \(\kappa: {\mathfrak{g}}\otimes{\mathfrak{g}}\to {\mathbb{C}}\) is a non-degenerate symmetric (associative) bilinear form. The running example we’ll use is \({\mathfrak{g}}= {\mathfrak{sl}}(n, {\mathbb{C}})\), the trace zero \(n\times n\) matrices. Let \({\mathfrak{h}}\) be a maximal toral subalgebra, where \(x\in{\mathfrak{g}}\) is toral if \(x\) is semisimple, i.e. \(\operatorname{ad}x\) is semisimple (i.e. diagonalizable).

\({\mathfrak{h}}\) is the diagonal matrices in \({\mathfrak{sl}}(n, {\mathbb{C}})\).

\({\mathfrak{h}}\) is abelian, so \(\operatorname{ad}{\mathfrak{h}}\) consists of commuting semisimple elements, which (theorem from linear algebra) can be simultaneously diagonalized.

This leads to the root space decomposition,

\[\begin{align*} {\mathfrak{g}}= {\mathfrak{h}}\oplus \bigoplus_{\alpha\in \Phi} {\mathfrak{g}}_\alpha .\end{align*}\]

where \[\begin{align*} {\mathfrak{g}}_\alpha = \left\{{x\in {\mathfrak{g}}{~\mathrel{\Big|}~}[h x] = \alpha(h) x ~\forall h\in {\mathfrak{h}}}\right\} \end{align*}\] where \(\alpha \in {\mathfrak{h}}^\vee\) is a linear functional.

Here \({\mathfrak{h}}= C_{\mathfrak{g}}({\mathfrak{h}})\), so \([h x] = 0\) corresponds to zero eigenvalues, and (fact) it turns out that \({\mathfrak{h}}\) is its own centralizer.

We then obtain a set of roots of \(({\mathfrak{g}}, {\mathfrak{h}})\) given by \[\begin{align*} \Phi = \left\{{\alpha\in{\mathfrak{h}}^\vee{~\mathrel{\Big|}~}\alpha\neq 0, {\mathfrak{g}}_\alpha \neq \left\{{0}\right\}}\right\} \end{align*}\]

In the running example \({\mathfrak{g}}= {\mathfrak{sl}}(n, {\mathbb{C}})\), each \({\mathfrak{g}}_\alpha = {\mathbb{C}}E_{ij}\) for some \(i\neq j\), where \(E_{ij}\) is the matrix with a 1 in the \((i,j)\) position and zeros elsewhere.
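
A quick numerical illustration (a sketch, not from the notes): for \({\mathfrak{g}} = {\mathfrak{sl}}(3, {\mathbb{C}})\) with \({\mathfrak{h}}\) the diagonal matrices, each \(E_{ij}\) with \(i\neq j\) is a simultaneous eigenvector for \(\operatorname{ad}{\mathfrak{h}}\), with eigenvalue \(\alpha(h) = h_i - h_j\).

```python
import numpy as np

n = 3
def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

# A random element of the maximal toral subalgebra: diagonal, trace zero.
rng = np.random.default_rng(1)
d = rng.standard_normal(n); d -= d.mean()
h = np.diag(d)

for i in range(n):
    for j in range(n):
        if i != j:
            lhs = h @ E(i, j) - E(i, j) @ h        # ad(h) applied to E_ij
            rhs = (d[i] - d[j]) * E(i, j)          # alpha(h) E_ij, with alpha = eps_i - eps_j
            assert np.allclose(lhs, rhs)
print("each E_ij (i != j) is a simultaneous eigenvector: g_alpha = C E_ij")
```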

The restriction \({\left.{\kappa}\right|_{{\mathfrak{h}}}}\) is nondegenerate, so we can identify \({\mathfrak{h}}, {\mathfrak{h}}^\vee\) via \(\kappa\) (can always do this with vector spaces with a nondegenerate bilinear form), where \(\kappa\) maps to another bilinear form \(({\,\cdot\,}, {\,\cdot\,})\). We thus get a correspondence \[\begin{align*} {\mathfrak{h}}^\vee\ni \lambda \iff t_\lambda \in {\mathfrak{h}}\\ \lambda(h) = \kappa(t_\lambda, h) \quad\text{where } (\lambda, \mu) = \kappa(t_\lambda, t_\mu) .\end{align*}\]

Facts About \(\Phi\) and Root Spaces

Let \(\alpha, \beta \in \Phi\) be roots.

  1. \(\Phi\) spans \({\mathfrak{h}}^\vee\) and does not contain zero.

  2. If \(\alpha \in \Phi\) then \(-\alpha \in \Phi\), but no other scalar multiple of \(\alpha\) is in \(\Phi\).

  3. \((\beta, \alpha^\vee) \in {\mathbb{Z}}\)

  4. \(s_\alpha(\beta) \mathrel{\vcenter{:}}=\beta - (\beta, \alpha^\vee)\alpha \in \Phi\).


An aside:

If \(\alpha + \beta \in \Phi\), then \([{\mathfrak{g}}_\alpha {\mathfrak{g}}_\beta] = {\mathfrak{g}}_{\alpha+\beta}\).

Example: in \({\mathfrak{sl}}(n, {\mathbb{C}})\), if \(x_\alpha = E_{ij}\) and \(x_\beta = E_{jk}\) where \(k\neq i\), then \([E_{ij}, E_{jk}]= E_{ik}\).

The circled roots below form the root string for \(\beta\):


In general, a subset \(\Phi\) of a real euclidean space \(E\) satisfying conditions (1) through (4) is an (abstract) root system. Note that when \(\Phi\) comes from a \({\mathfrak{g}}\), we define \(E\mathrel{\vcenter{:}}={\mathbb{R}}\Phi\).

The Root System

There exists a subset \(\Delta \subseteq \Phi\) such that

  1. \(\Delta\) is a basis for \(E\), and
  2. every \(\beta \in \Phi\) can be written as \(\beta = \sum_{\alpha\in\Delta} c_\alpha \alpha\) with all \(c_\alpha \in {\mathbb{Z}}^+\) or all \(c_\alpha \in -{\mathbb{Z}}^+\).

\(\Delta\) is called a simple system.

If \(\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}\), then \(\Phi^+\) (the roots whose coefficients over \(\Delta\) are all nonnegative) are the positive roots, and if \(\Phi^+ \ni \beta = \sum_{\alpha \in \Delta} c_\alpha \alpha\), then the height of \(\beta\) is defined as

\[\begin{align*} \text{ht}(\beta) \mathrel{\vcenter{:}}= \sum c_\alpha \in {\mathbb{Z}}_{> 0} \end{align*}\]

Note that \({\mathbb{Z}}\Phi \mathrel{\vcenter{:}}=\Lambda_r\) is a lattice, and is referred to as the root lattice, and \(\Lambda_r \subset E = {\mathbb{R}}\Phi\).

We also have \[\begin{align*} \Phi^\vee= \left\{{\beta^\vee{~\mathrel{\Big|}~}\beta \in \Phi}\right\} ,\end{align*}\] the dual root system, which is a root system with simple system \(\Delta^\vee\).

\[\begin{align*} {\mathfrak{n}}= {\mathfrak{n}}^+ &\mathrel{\vcenter{:}}=\sum_{\beta > 0} {\mathfrak{g}}_\beta && \text{Upper triangular with zero diagonal,} \\ {\mathfrak{n}}^- &\mathrel{\vcenter{:}}=\sum_{\beta > 0} {\mathfrak{g}}_{-\beta} && \text{Lower triangular with zero diagonal,} \\ {\mathfrak{b}}&\mathrel{\vcenter{:}}={\mathfrak{h}}+ {\mathfrak{n}} && \text{Upper triangular (the "Borel" subalgebra),} \\ {\mathfrak{b}}^- &\mathrel{\vcenter{:}}={\mathfrak{h}}+ {\mathfrak{n}}^- && \text{Lower triangular.} .\end{align*}\]

\[\begin{align*} {\mathfrak{g}}= {\mathfrak{n}}^- \oplus {\mathfrak{h}}\oplus {\mathfrak{n}} \end{align*}\]

If \(\beta \in \Phi^+\setminus \Delta\), and if \(\alpha \in \Delta\) such that \((\beta, \alpha^\vee) > 0\), then \(\beta - (\beta,\alpha^\vee)\alpha \in \Phi^+\) has height strictly less than the height of \(\beta\).

By root strings, \(\beta-\alpha\in\Phi^+\) is a positive root of height one less than \(\beta\), yielding a way to induct on heights (a useful technique).

Weyl Groups

For \(\alpha \in \Phi\), define

\[\begin{align*} s_\alpha : {\mathfrak{h}}^\vee&\to {\mathfrak{h}}^\vee\\ \lambda &\mapsto \lambda - (\lambda, \alpha^\vee)\alpha .\end{align*}\]

This is reflection in the hyperplane in \(E\) perpendicular to \(\alpha\):

Reflection through a hyperplane

Note that \(s_\alpha^2 = \text{id}\).

Define \(W\) as the subgroup of \(\operatorname{GL}(E)\) generated by all \(s_\alpha\) for \(\alpha \in \Phi\), this is the Weyl group of \({\mathfrak{g}}\) or \(\Phi\), which is finite and \(W = \left\langle{s_\alpha {~\mathrel{\Big|}~}\alpha\in\Delta}\right\rangle\) is generated by simple reflections.

By (4), \(W\) leaves \(\Phi\) invariant. In fact \(W\) is a finite Coxeter group with generators \(S = \left\{{s_\alpha {~\mathrel{\Big|}~}\alpha\in \Delta}\right\}\) and defining relations \((s_\alpha s_\beta)^{m(\alpha, \beta)} = 1\) for \(\alpha,\beta \in \Delta\) where \(m(\alpha, \beta) \in \left\{{2,3,4,6}\right\}\) when \(\alpha \neq \beta\) and \(m(\alpha, \alpha) = 1\).

When these numerical conditions are met (i.e. \(m(\alpha, \beta) \in \left\{{2,3,4,6}\right\}\)), \(W\) is referred to as a crystallographic group.

Monday January 13th

Lengths

Recall that we have a root space decomposition \({\mathfrak{g}}= {\mathfrak{h}}\oplus \bigoplus_{\beta \in \Phi} {\mathfrak{g}}_\beta\) for finite dimensional semisimple lie algebras over \({\mathbb{C}}\). We have \(s_\beta(\lambda) = \lambda - (\lambda, \beta^\vee)\beta\), for \(\lambda \in {\mathfrak{h}}^\vee\) and the Weyl group \[\begin{align*} W = \left\langle{s_\beta {~\mathrel{\Big|}~}\beta\in\Phi}\right\rangle = \left\langle{s_\alpha {~\mathrel{\Big|}~}\alpha \in \Delta}\right\rangle \end{align*}\] where \(\Delta = \left\{{a_i}\right\}\) are the simple roots.

For \(w\in W\), we can take the reduced expression for \(w\) by writing \(w = s_1 \cdots s_n\) with \(s_i\) simple and \(n\) minimal. The length is uniquely determined, but not the expression. So we define \(\ell(w) \mathrel{\vcenter{:}}= n\) where \(\ell(1) \mathrel{\vcenter{:}}= 0\).

  1. \(\ell(w)\) is the size of the set \(\left\{{\beta\in\Phi^+ {~\mathrel{\Big|}~}w\beta < 0}\right\}\)

    • The above set is equal to \(\Phi^+ \cap w^{-1}\Phi^-\).
    • In particular, for \(\beta \in \Phi^+\), \(\beta\) is simple (i.e. \(\beta \in \Delta\)) iff \(\ell(s_\beta) = 1\).
    • Note: \(\alpha\) is the only root that \(s_\alpha\) sends to a negative root, so \(s_\alpha(\beta) > 0\) for all \(\beta\in\Phi^+\setminus\left\{{\alpha}\right\}\).
  2. \(\ell(w) = \ell(w^{-1})\) for all \(w\in W\), so \(\ell(w)\) is also the size of \(\Phi^+ \cap w\Phi^-\) (replacing \(w^{-1}\) with \(w\))

  3. There exists a unique \(w_0 \in W\) with \(\ell(w_0)\) maximal such that \(\ell(w_0) = {\left\lvert {\Phi^+} \right\rvert}\) and \(w_0(\Phi^+) = \Phi^-\).

    • Also \(\ell(w_0 w) = \ell(w_0) - \ell(w)\)
  4. For \(\alpha \in \Phi^+\), \(w\in W\), we have either

\[\begin{align*} \ell(w s_\alpha) > \ell (w) \iff w(\alpha) > 0 \\ \ell(w s_\alpha) < \ell (w) \iff w(\alpha) < 0 \\ .\end{align*}\]

Taking inverses yields \(\ell(s_\alpha w) > \ell(w) \iff w^{-1}\alpha > 0\).
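
Here is a small sketch (mine, not from the notes) checking property 1 for the \(A_2\) Weyl group: the minimal length of a word for \(w\) in the simple reflections equals \(\#\left\{{\beta \in \Phi^+ {~\mathrel{\Big|}~}w\beta < 0}\right\}\).

```python
import numpy as np
from itertools import product

# A2 data in coordinates over the simple roots (alpha, beta).
s = {"a": np.array([[-1, 1], [0, 1]]), "b": np.array([[1, 0], [1, -1]])}
pos_roots = [np.array(v) for v in [(1, 0), (0, 1), (1, 1)]]

def matrix_of(word):
    m = np.eye(2, dtype=int)
    for letter in word:
        m = m @ s[letter]
    return m

def inversions(m):
    """#{beta in Phi+ : m(beta) is a negative root}."""
    count = 0
    for r in pos_roots:
        image = m @ r
        if image.max() <= 0:   # all coordinates nonpositive, so the image is a negative root
            count += 1
    return count

# Minimal word length for each group element, by trying words of increasing length.
shortest = {}
for length in range(4):
    for word in product("ab", repeat=length):
        key = tuple(map(tuple, matrix_of(word)))
        shortest.setdefault(key, len(word))

for key, ell in shortest.items():
    assert inversions(np.array(key)) == ell
print(sorted(shortest.values()))  # [0, 1, 1, 2, 2, 3]: w_0 has length 3 = |Phi+|
```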

Bruhat Order

Let \(S\) be the set of simple reflections, i.e. \(S = \left\{{s_\alpha {~\mathrel{\Big|}~}\alpha \in \Delta}\right\}\). Then define \[\begin{align*} T \mathrel{\vcenter{:}}=\bigcup_{w\in W} wSw^{-1}= \left\{{s_\beta {~\mathrel{\Big|}~}\beta\in\Phi^+}\right\} .\end{align*}\] This is the set of all reflections in \(W\) through hyperplanes in \(E\).

We’ll write \(w' \xrightarrow{t} w\) to mean \(w=tw'\) and \(\ell(w') < \ell(w)\). Note that in the literature, it’s also often assumed that \(\ell(w') = \ell(w) - 1\); in this case, we say \(w\) covers \(w'\), and refer to this as the covering relation. So \(w' \to w\) means that \(w' \xrightarrow{t} w\) for some \(t\in T\). We extend this to a partial order: \(w' < w\) means that there exists a chain \[\begin{align*} w' = w_0 \to w_1 \to \cdots \to w_n = w. \end{align*}\] This is called the Bruhat-Chevalley order on \(W\).

\(w' < w \implies \ell(w') < \ell(w)\), so \(1\in W\) is the unique minimal element in \(W\) under this order.

It turns out that if we set \(w = w' t\) instead, this results in the same partial order. If you restrict \(T\) to simple reflections, this yields the weak Bruhat order. In this case, the left and right versions differ, yielding the left/right weak Bruhat orders respectively.

Recall that semisimple Lie algebras yield finite crystallographic Coxeter groups.

For \((W, S)\) a coxeter group,

  1. \(w' \leq w\) iff a reduced expression for \(w'\) occurs as a subexpression/subword of every reduced expression \(s_1 \cdots s_n\) for \(w\), where a subexpression is any subcollection of the \(s_i\) taken in the same order.
  2. Adjacent elements \(w', w\) (i.e. \(w' < w\) and there does not exist a \(w''\) such that \(w' < w'' < w\)) in the Bruhat order differ in length by 1.

  3. If \(w' < w\) and \(s\in S\), then \(w' s \leq w\) or \(w's \leq ws\) (or both). I.e., if \(w_1 < w_2\) with \(\ell(w_2) = \ell(w_1) + 2\), then the size of \(\left\{{w\in W {~\mathrel{\Big|}~}w_1 < w < w_2}\right\}\) is either 0 or 2.

Properties of Universal Enveloping Algebras

Let \({\mathfrak{g}}\) be any Lie algebra, and \(\phi: {\mathfrak{g}}\to A\) be any linear map into an associative algebra satisfying \(\phi([x y]) = \phi(x)\phi(y) - \phi(y)\phi(x)\). Then there exists an object \(U({\mathfrak{g}})\) and a map \(i\) such that the following diagram commutes:

where \(\tilde \phi\) is a map in the category of associative algebras.

Moreover any Lie algebra homomorphism \({\mathfrak{g}}_1 \to {\mathfrak{g}}_2\) induces a morphism of associative algebras \(U({\mathfrak{g}}_1) \to U({\mathfrak{g}}_2)\). Note that \({\mathfrak{g}}\) generates \(U({\mathfrak{g}})\) as an algebra.

\(U({\mathfrak{g}})\) can be constructed as \[\begin{align*} U({\mathfrak{g}}) = T({\mathfrak{g}})/ \left\langle{ x\otimes y - y\otimes x - [x,y] {~\mathrel{\Big|}~}x,y\in{\mathfrak{g}}}\right\rangle .\end{align*}\] Note that this ideal is not necessarily homogeneous.

If \(\left\{{x_1, \cdots x_n}\right\}\) is a basis for \({\mathfrak{g}}\), then \(\left\{{x_1^{t_1} \cdots x_n^{t_n} {~\mathrel{\Big|}~}t_i \in {\mathbb{Z}}^+}\right\}\) (noting that \(x^{k} = x\otimes x \otimes\cdots \otimes x\) with \(k\) factors, and that \({\mathbb{Z}}^+\) includes 0) is a basis for \(U({\mathfrak{g}})\).

\(i:{\mathfrak{g}}\to U({\mathfrak{g}})\) is injective, so we can think of \({\mathfrak{g}}\subseteq U({\mathfrak{g}})\). If \({\mathfrak{g}}\) is semisimple, then it admits a triangular decomposition \({\mathfrak{g}}= {\mathfrak{n}}^- \oplus {\mathfrak{h}}\oplus {\mathfrak{n}}\) and choosing a compatible basis for \({\mathfrak{g}}\) yields \(U({\mathfrak{g}}) = U({\mathfrak{n}}^-)\otimes U({\mathfrak{h}}) \otimes U({\mathfrak{n}})\).

If \(\phi: {\mathfrak{g}}\to {\mathfrak{gl}}(V)\) is any Lie algebra representation, it induces an algebra representation of \(U({\mathfrak{g}})\) on \(V\), and vice-versa. It satisfies \[\begin{align*}x\cdot (y \cdot v) - y\cdot (x \cdot v) = [x y] \cdot v\end{align*}\] for all \(x,y \in {\mathfrak{g}}\) and \(v\in V\).

Note that this lets us go back and forth between Lie algebra representations and associative algebra representations, i.e. the theory of modules over rings.

A note on notation: \(\mathcal Z({\mathfrak{g}})\) denotes the center of \(U({\mathfrak{g}})\).

Integral Weights

We have a Euclidean space \(E = {\mathbb{R}}\Phi^+\), the \({\mathbb{R}}{\hbox{-}}\)span of the roots.

We also have the integral weight lattice \[\begin{align*} \Lambda = \left\{{\lambda \in E {~\mathrel{\Big|}~}(\lambda, \alpha^\vee) \in {\mathbb{Z}}~\forall \alpha\in\Phi (\text{or}~ \Phi^+ ~\text{or}~ \Delta)}\right\} .\end{align*}\]

There is a sublattice \(\Lambda_r \subseteq \Lambda\), which is an additive subgroup of finite index. There is a partial order on \(\Lambda\) (and more generally on \(E\) and \({\mathfrak{h}}^\vee\)): we write \[\begin{align*} \mu \leq \lambda \iff \lambda - \mu \in {\mathbb{Z}}^+ \Delta = {\mathbb{Z}}^+ \Phi^+ \end{align*}\]

For a basis \(\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}\), define the dual basis \(\left\{{\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1, \cdots, \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell}\right\}\) by \((\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_i ,\alpha_j^\vee) = \delta_{ij}\); these are the fundamental weights, and they form a \({\mathbb{Z}}{\hbox{-}}\)basis for \(\Lambda\). Then \(\Lambda\) is a free abelian group of rank \(\ell\), and \[\begin{align*} \Lambda^+ = {\mathbb{Z}}^+ \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1 + \cdots + {\mathbb{Z}}^+ \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell \end{align*}\] are the dominant integral weights.

Wednesday January 15th

Review

The Weyl vector is given by \[\begin{align*} \rho = \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1 + \cdots + \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell = \frac 1 2 \sum_{\beta \in \Phi^+} \beta \in \Lambda^+ .\end{align*}\]

Some facts:

  1. If \(\alpha \in \Delta\) then \((\rho, \alpha^\vee) = 1\).
  2. \(s_\alpha(\rho) = \rho - \alpha\).
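
A quick check in type \(A_2\) (a sketch; the normalization \((\alpha,\alpha) = 2\) is my choice): \(\rho\) computed as the half-sum of the positive roots pairs to 1 against each simple coroot, so it equals the sum of the fundamental weights, and \(s_\alpha(\rho) = \rho - \alpha\).

```python
import numpy as np

# A2 in coordinates over the simple roots; Gram matrix (alpha_i, alpha_j) from the Cartan matrix,
# normalized so that (alpha, alpha) = 2.
G = np.array([[2.0, -1.0], [-1.0, 2.0]])
alpha, beta = np.eye(2)
pos_roots = [alpha, beta, alpha + beta]

def inner(u, v):
    return u @ G @ v

def pairing(lam, a):      # (lambda, a^vee) = 2 (lambda, a) / (a, a)
    return 2 * inner(lam, a) / inner(a, a)

def reflect(lam, a):      # s_a(lambda) = lambda - (lambda, a^vee) a
    return lam - pairing(lam, a) * a

rho = 0.5 * sum(pos_roots)                            # half-sum of the positive roots
print(rho)                                            # [1. 1.], i.e. rho = alpha + beta
print([pairing(rho, a) for a in (alpha, beta)])       # [1.0, 1.0]: rho is the sum of the fundamental weights
print(np.allclose(reflect(rho, alpha), rho - alpha))  # True: s_alpha(rho) = rho - alpha
```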

Let \(\lambda \in \Lambda^+\); a few facts:

  1. The size of \(\left\{{\mu\in \Lambda^+ {~\mathrel{\Big|}~}\mu \leq \lambda}\right\}\) (with the partial order from last time) is finite.
  2. \(w\lambda \leq \lambda\) for all \(w\in W\).

The dominant Weyl chamber in the Euclidean space \(E\), relative to a fixed simple system \(\Delta\), is \[\begin{align*} C = \left\{{\lambda \in E {~\mathrel{\Big|}~}(\lambda, \alpha) > 0 ~ \forall \alpha\in\Delta}\right\} \end{align*}\]

Note that the reflecting hyperplanes split \(E\) into finitely many connected components (the Weyl chambers); \(C\) is the component we mark as distinguished.

We also let \(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) denote its closure, which is a fundamental domain for the action of \(W\) on \(E\).

Weight Representations

For \(\lambda \in {\mathfrak{h}}^\vee\), we let \[\begin{align*} M_\lambda \mathrel{\vcenter{:}}= \left\{{v\in M {~\mathrel{\Big|}~}h\cdot v = \lambda(h) v ~\forall h\in{\mathfrak{h}}}\right\} .\end{align*}\] denote a weight space of \(M\) if \(M_\lambda \neq 0\). In this case, \(\lambda\) is a weight of \(M\). The dimension of \(M_\lambda\) is the multiplicity of \(\lambda\) in \(M\), and we define the set of weights as \[\begin{align*} \Pi(M) \mathrel{\vcenter{:}}= \left\{{\lambda \in {\mathfrak{h}}^\vee{~\mathrel{\Big|}~}M_\lambda \neq 0}\right\} .\end{align*}\]

If \(M = {\mathfrak{g}}\) under the adjoint action, then \(\Pi(M) = \Phi \cup\left\{{0}\right\}\).

The weight vectors for distinct weights are linearly independent. Thus there is a \({\mathfrak{g}}{\hbox{-}}\)submodule given by \(\sum_\lambda M_\lambda\), which is in fact a direct sum. It may not be the case that \(\sum_\lambda M_\lambda = M\), and can in fact be zero, although this is an odd situation. In our case, we’ll have a weight module \(M = \bigoplus_\lambda M_\lambda\), where \({\mathfrak{h}}\curvearrowright M\) semisimply.

Finite Dimensional Modules

Recall Weyl’s complete reducibility theorem, which implies that any finite dimensional \({\mathfrak{g}}{\hbox{-}}\)module is a weight module. In fact, \({\mathfrak{n}}, {\mathfrak{n}}^- \curvearrowright M\) nilpotently.

Simple Finite Dimensional \({\mathfrak{sl}}(2, {\mathbb{C}}){\hbox{-}}\)modules

Fix the standard basis \(\left\{{x, h, y}\right\}\) of \({\mathfrak{sl}}(2, {\mathbb{C}})\) with

\[\begin{align*} [h x] &= 2x \\ [h y] &= -2y \\ [x y] &= h .\end{align*}\]

Since \(\dim {\mathfrak{h}}= 1\), there is a bijection \[\begin{align*} {\mathfrak{h}}^\vee&\leftrightarrow {\mathbb{C}}\\ \Lambda &\leftrightarrow {\mathbb{Z}}\\ \Lambda_r &\leftrightarrow 2{\mathbb{Z}}\\ \alpha &\to 2\\ \rho &\to 1 .\end{align*}\]

There is a correspondence between weights and simple modules:

\[\begin{align*} \left\{{\substack{\text{Isomorphism classes of simple modules}}}\right\} &\iff \Lambda^+ = \left\{{0,1,2,3,\cdots}\right\} \\ L(\lambda) &\iff \lambda .\end{align*}\]

Moreover, \(L(\lambda)\) has 1-dimensional weight spaces with weights \(\lambda, \lambda - 2, \cdots, -\lambda\), and thus \(\dim L(\lambda) = \lambda + 1\).

Choose a basis \(\left\{{v_0, v_1, \cdots, v_\lambda}\right\}\) for \(L(\lambda)\) so that \({\mathbb{C}}v_0 = M_{\lambda}\), \({\mathbb{C}}v_1 = M_{\lambda - 2}\), \(\cdots\), \({\mathbb{C}}v_{\lambda} = M_{-\lambda}\). Then the action is given by the formulas above.

We then say \(L(\lambda)\) is a highest weight module, since it is generated by a highest weight vector of weight \(\lambda\). Then \(W = \left\{{1, s_\alpha}\right\}\), where \(s_\alpha\) is reflection through 0 under the identification \(\alpha = 2\).

Weyl group reflection in {\mathfrak{sl}}_2({\mathbb{C}})

Chapter 1: Category \(\mathcal O\) Basics

The category of \(U({\mathfrak{g}}){\hbox{-}}\)modules is too big. Category \(\mathcal O\) was motivated by work of Verma in the 1960s and introduced by Bernstein-Gelfand-Gelfand in the 1970s. It was used to solve the Kazhdan-Lusztig conjecture.

Axioms and Consequences

\(\mathcal O\) is the full subcategory of \(U({\mathfrak{g}})\) modules consisting of \(M\) such that

  1. \(M\) is finitely generated as a \(U({\mathfrak{g}}){\hbox{-}}\)module.
  2. \(M\) is \({\mathfrak{h}}{\hbox{-}}\)semisimple, i.e. \(M\) is a weight module \[\begin{align*} M = \bigoplus_{\lambda \in {\mathfrak{h}}^\vee} M_\lambda \end{align*}\]
  3. \(M\) is locally \({\mathfrak{n}}{\hbox{-}}\)finite, i.e.  \[\begin{align*} \dim_{\mathbb{C}}U({\mathfrak{n}}) v < \infty \qquad \forall v\in M .\end{align*}\]

If \(\dim M < \infty\), then \(M\) is \({\mathfrak{h}}{\hbox{-}}\)semisimple, and axioms 1, 3 are obvious.

Let \(M \in {\mathcal{O}}\), then

  1. \(\dim M_\mu < \infty\) for all \(\mu \in {\mathfrak{h}}^\vee\).
  2. There exist \(\lambda_1, \cdots \lambda_r \in {\mathfrak{h}}^\vee\) such that \[\begin{align*} \Pi(M) \subset \bigcup_{i=1}^r (\lambda_i - {\mathbb{Z}}^+ \Phi^+) \end{align*}\]
Forest structure of weights

By axiom 2, every \(v\in M\) is a finite sum of weight vectors in \(M\). We can thus assume that our finite generating set consists of weight vectors. We can then reduce to the case where \(M\) is generated by a single weight vector \(v\). So consider \(U({\mathfrak{g}}) v\). By the PBW theorem, there is a triangular decomposition \[\begin{align*} U({\mathfrak{g}}) = U({\mathfrak{n}}^-) U({\mathfrak{h}}) U({\mathfrak{n}}) \end{align*}\]

By axiom 3, \(U({\mathfrak{n}}) \cdot v\) is finite dimensional, so there are finitely many weights of finite multiplicity in the image. Then \(U({\mathfrak{h}})\) acts by scalar multiplication, and \(U({\mathfrak{n}}^-)\) produces the “cones” that result in the tree structure:

Cones under tree structure of weights

A weight of the form \(\mu = \lambda_i - \sum n_j \alpha_j\) arises from applying a PBW monomial \(y_{1}^{n_1} y_{2}^{n_2} \cdots\) in \(U({\mathfrak{n}}^-)\) to a generating weight vector.

Friday January 17th

Recall the setup: let \(M\in\mathcal O\), so \(M\) must:

  1. Be finitely generated,
  2. Semisimple \(M = \oplus_{\lambda \in {\mathfrak{h}}^\vee} M_\lambda\),
  3. Locally finite
  4. \(\dim M_\mu < \infty\) for all \(\mu \in {\mathfrak{h}}^\vee\),
  5. Satisfy the forest condition for weights.

Then \(\mathcal O\) has the following properties:

  1. \(\mathcal O\) is Noetherian

  2. \(\mathcal O\) is closed under quotients, submodules, finite direct sums

  3. \(\mathcal O\) is abelian (similar to a category of modules)

  4. If \(M\in \mathcal O\), \(\dim L < \infty\), then \(L \otimes M \in \mathcal O\) and the endofunctor \(M \mapsto L\otimes M\) is exact

  5. If \(M\in \mathcal O\), then \(M\) is locally \(Z({\mathfrak{g}}){\hbox{-}}\)finite

  6. \(M\in \mathcal O\) is a finitely generated \(U({\mathfrak{n}}^-){\hbox{-}}\)module.

See BA II, page 103.

Implied by (b), BA II Page 330.

Can check that \(L\otimes M\) satisfies 2 and 3 above. Need to check first condition. Take a basis \(\left\{{v_i}\right\}\) for \(L\) and \(\left\{{w_j}\right\}\) a finite set of generators for \(M\). The claim is that \(B = \left\{{v_i \otimes w_j}\right\}\) generates \(L\otimes M\). Let \(N\) be the submodule generated by \(B\).

For any \(v\in L\), \(v\otimes w_j \in N\). For arbitrary \(x\in {\mathfrak{g}}\), compute \[\begin{align*}x\cdot(v\otimes w_j) = (x\cdot v) \otimes w_j + v\otimes(x\cdot w_j).\end{align*}\] Since the LHS is in \(N\) and the first term on the RHS is in \(N\), the second term \(v\otimes(x\cdot w_j)\) is in \(N\) as well. By iterating, we find that \(v\otimes(u\cdot w_j) \in N\) for all PBW monomials \(u\), so \(N = L\otimes M\) and thus \(L\otimes M \in \mathcal O\).

Since \(v\in M\) is a sum of weight vectors, wlog we can assume \(v \in M_\lambda\) is a weight vector (where \(\lambda \in {\mathfrak{h}}^\vee\)). For any central element \(z\in Z({\mathfrak{g}})\), we can compute \[\begin{align*}h\cdot(z\cdot v) = z \cdot (h\cdot v) = z \cdot \lambda(h) v = \lambda(h)z \cdot v.\end{align*}\] Thus \(z\cdot v\in M_\lambda\). By (4), we know that \(\dim M_\lambda < \infty\), so \(\dim {\operatorname{span}}~Z({\mathfrak{g}}) v < \infty\) as well.

By 5, \(M\) is generated by a finite dimensional \(U(\mathfrak b)\) submodule \(N\). Since we have a triangular decomposition \(U({\mathfrak{g}}) = U({\mathfrak{n}}^-) U(\mathfrak b)\), there is a basis of weight vectors for \(N\) that generates \(M\) as a \(U({\mathfrak{n}}^-)\) module.

Highest Weight Modules

A maximal vector \(v^+ \in M \in \mathcal O\) is a nonzero weight vector such that \({\mathfrak{n}}\cdot v^+ = 0\).

By properties 2 and 3, every nonzero \(M\in \mathcal O\) has a maximal vector.

A highest weight module \(M\) of highest weight \(\lambda\) is a module generated by a maximal vector of weight \(\lambda\), i.e.  \[\begin{align*}M = U({\mathfrak{g}}) v^+ = U({\mathfrak{n}}^-) U({\mathfrak{h}}) U({\mathfrak{n}}) v^+ = U({\mathfrak{n}}^-) v^+\end{align*}\]

Let \(M = U({\mathfrak{n}}^-)v^+\) be a highest weight module, where \(v^+ \in M_\lambda\). Fix \(\Phi^+ = \left\{{\beta_1, \cdots, \beta_m}\right\}\) with root vectors \(y_i \in {\mathfrak{g}}_{-\beta_i}\).

  1. \(M\) is the \({\mathbb{C}}{\hbox{-}}\)span of the PBW monomial vectors \(y_1^{t_1} \cdots y_m^{t_m} v^+\), each of weight \(\lambda - \sum t_i \beta_i\). Thus \(M\) is a weight module.

  2. All weights \(\mu\) of \(M\) satisfy \(\mu \leq \lambda\)

  3. \(\dim M_\mu < \infty\) for all \(\mu \in \Pi(M)\), and \(\dim M_\lambda = 1\). In particular, property (3) holds and \(M \in \mathcal O\).

  4. Every nonzero quotient of \(M\) is a highest-weight module of highest weight \(\lambda\).

  5. Every submodule of \(M\) is a weight module, and any submodule generated by a maximal vector of weight \(\mu < \lambda\) is proper. If \(M\) is simple, then the set of maximal vectors equals \({\mathbb{C}}^{\times}v^+\).

  6. \(M\) has a unique maximal submodule \(N\) and a unique simple quotient \(L\), thus \(M\) is indecomposable.

  7. All simple highest weight modules of highest weight \(\lambda\) are isomorphic.

For such \(M\), \(\dim \operatorname{End}(M) = 1\). (Category \(\mathcal O\) version of Schur’s Lemma, generalizes to infinite dimensional case)

Either obvious or follows from previous results. First few imply \(M\) is in \(\mathcal O\), and we know the latter hold for such modules.

Let \(N = \sum M_i\) be the sum of all proper submodules of \(M\). Each proper submodule is a weight module which misses \(M_\lambda\) (otherwise it would contain \(v^+\) and be all of \(M\)), so \(N\) also misses \(M_\lambda\) and is proper; it is therefore the unique maximal submodule. Take \(L = M/N\), the unique simple quotient. For indecomposability, there is a better proof in section 1.3.

Let \(M_1 = U({\mathfrak{n}}^-)v_1^+\) and \(M_2 = U({\mathfrak{n}}^-)v_2^+\) be simple highest weight modules, where the \(v_i^+ \in (M_i)_\lambda\) have the same weight. In \(M_0 \mathrel{\vcenter{:}}= M_1 \oplus M_2\), the vector \(v^+ \mathrel{\vcenter{:}}=(v_1^+, v_2^+)\) is a maximal vector of weight \(\lambda\), so \(N \mathrel{\vcenter{:}}= U({\mathfrak{n}}^-) v^+\) is a highest weight module of highest weight \(\lambda\).

We have the following diagram:

and since e.g. \(N \to M_1\) is not the zero map, it is a surjection.

By (f), \(N\) has a unique simple quotient, which forces \(M_1 \cong M_2\). For the claim about endomorphisms: since \(M\) is simple, any nonzero \({\mathfrak{g}}{\hbox{-}}\)endomorphism \(\phi\) must be an isomorphism. Since \(\phi\) is in particular an \({\mathfrak{h}}{\hbox{-}}\)morphism (so preserves weights) and \(\dim M_\lambda = 1\), we have \(\phi(v^+) = cv^+\) for some \(c\neq 0\).

Since \(v^+\) generates \(M\) and \[\begin{align*} \phi(u\cdot v^+) = u \phi(v^+) = cu\cdot v^+ ,\end{align*}\] \(\phi\) is multiplication by a constant.

Wednesday January 22nd

Try problems 1.1 and 1.3* in Humphreys.

Recall: in category \(\mathcal O\), we work with a finite dimensional semisimple Lie algebra over \({\mathbb{C}}\) with a triangular decomposition.

If \(M\) is any \(U({\mathfrak{g}})\) module, then a weight vector \(v^+ \in M_\lambda\) (so \(\lambda \in {\mathfrak{h}}^\vee\)) is primitive iff \({\mathfrak{n}}\cdot v^+ = 0\). Note: it doesn’t have to be of maximal weight. \(M\) is a highest weight module of highest weight \(\lambda\) iff it is generated as a \(U({\mathfrak{g}}){\hbox{-}}\)module by a maximal vector \(v^+\) of weight \(\lambda\). Then \(M = U({\mathfrak{g}}) \cdot v^+\).

See structure of highest weight modules, and irreducibility.

If \(0 \neq M\in\mathcal O\), then \(M\) has a finite filtration with quotients highest weight modules, i.e. \(0 = M_0 \subset M_1 \subset \cdots \subset M_n = M\) with each \(M_i/M_{i-1}\) a highest weight module.

Note that the quotients are not necessarily simple, so this isn’t a composition series, although we’ll show such a series exists later.

Let \(V\) be the \({\mathfrak{n}}\) submodule of \(M\) generated by a finite set of weight vectors which generate \(M\) as a \(U({\mathfrak{g}})\) module, i.e. take the finite set of weight vectors and act on them by \(U({\mathfrak{n}})\). Then \(\dim_{\mathbb{C}}V < \infty\) since \(M\) is locally \({\mathfrak{n}}{\hbox{-}}\)finite.

Note that \({\mathfrak{n}}\) increases weights.

Induction on \(n = \dim V\). If \(n=1\), \(M\) itself is a highest weight module. For \(n > 1\), choose a weight vector \(v_1 \in V\) of weight \(\lambda\) which is maximal among all weights of \(V\). Set \(M_1 \mathrel{\vcenter{:}}= U({\mathfrak{g}}) v_1\); this is a highest weight submodule of \(M\) of highest weight \(\lambda\). (\({\mathfrak{n}}\) has to kill \(v_1\): otherwise it would increase the weight, contradicting maximality.)

Let \(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu = M/M_1 \in \mathcal O\); this is generated by the image \(\mkern 1.5mu\overline{\mkern-1.5muV\mkern-1.5mu}\mkern 1.5mu\) of \(V\), and thus \(\dim \mkern 1.5mu\overline{\mkern-1.5muV\mkern-1.5mu}\mkern 1.5mu < \dim V\). By the IH, \(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu\) has the desired filtration, say \[\begin{align*}0 \subset \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_2 \subset \cdots \subset \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_{n-1} \subset \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_n = \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu.\end{align*}\] Let \(\pi: M \to M/M_1\), then just take the preimages \(\pi^{-1}(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_i)\) to be the filtration on \(M\).

By isomorphism theorems, the quotients in the series for \(M\) are isomorphic to the quotients for \(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu\).

Verma and Simple Modules

Constructing universal highest weight modules using “algebraic induction”. Start with a nice subalgebra of \({\mathfrak{g}}\) then “induce” via \(\otimes\) to a module for \({\mathfrak{g}}\).

Recall \({\mathfrak{g}}= {\mathfrak{n}}^- \oplus {\mathfrak{h}}\oplus {\mathfrak{n}}\), where \({\mathfrak{h}}\oplus {\mathfrak{n}}\) is the Borel subalgebra \({\mathfrak{b}}\), and \({\mathfrak{n}}\) corresponds to a fixed choice of positive roots \(\Phi^+\) with basis \(\Delta\). Then \(U({\mathfrak{g}}) = U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}U({\mathfrak{b}})\). Given any \(\lambda \in {\mathfrak{h}}^\vee\), let \({\mathbb{C}}_\lambda\) be the 1-dimensional \({\mathfrak{h}}{\hbox{-}}\)module (i.e. 1-dimensional \({\mathbb{C}}{\hbox{-}}\)vector space) on which \({\mathfrak{h}}\) acts by \(\lambda\).

Let \(\left\{{1}\right\}\) be the basis for \({\mathbb{C}}_\lambda\), so \(h \cdot 1 = \lambda(h)1\) for all \(h\in {\mathfrak{h}}\). There is a map \({\mathfrak{b}}\to {\mathfrak{b}}/ {\mathfrak{n}}\cong {\mathfrak{h}}\), so we make \({\mathbb{C}}_\lambda\) a \({\mathfrak{b}}{\hbox{-}}\)module via this map (with \({\mathfrak{n}}\) acting by zero). This “inflates” \({\mathbb{C}}_\lambda\) into a 1-dimensional \({\mathfrak{b}}{\hbox{-}}\)module.

Note that \({\mathfrak{b}}\) is solvable, and by Lie’s Theorem, every finite dimensional irreducible \({\mathfrak{b}}{\hbox{-}}\)module is of the form \({\mathbb{C}}_\lambda\) for some \(\lambda \in {\mathfrak{h}}^\vee\).

\[\begin{align*} M(\lambda) \mathrel{\vcenter{:}}= U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} {\mathbb{C}}_\lambda \mathrel{\vcenter{:}}=\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}{\mathbb{C}}_\lambda \end{align*}\] is the Verma module of highest weight \(\lambda\).

This process is called algebraic/tensor induction. This is a \(U({\mathfrak{g}})\) module via left multiplication, i.e. acting on the first tensor factor.

Since \(U({\mathfrak{g}}) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}U({\mathfrak{h}})\), we have \(M(\lambda) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\mathbb{C}}_\lambda\), but at what level of structure?

In particular, \(M(\lambda)\) is a free \(U({\mathfrak{n}}^-){\hbox{-}}\)module of rank 1. Note that this always happens when tensoring with a vector space.

Consider \(v^+ \mathrel{\vcenter{:}}= 1\otimes 1 \in M(\lambda)\). Note that \(U({\mathfrak{n}}^-)\) is not homogeneous, so not graded, but does have a filtration. Then \(v^+\) is nonzero, and freely generates \(M(\lambda)\) as a \(U({\mathfrak{n}}^-){\hbox{-}}\)module. Moreover \({\mathfrak{n}}\cdot v^+ = 0\) since for \(x\in {\mathfrak{g}}_\beta\) for \(\beta \in \Phi^+\), we have

\[\begin{align*} x(1\otimes 1) &= x\otimes 1 \\ &= 1\otimes x\cdot 1 \quad\text{since } x\in {\mathfrak{b}}\\ &= 1 \otimes 0 \quad\text{since } x\in {\mathfrak{n}}\\ &= 0 ,\end{align*}\]

and for \(h\in {\mathfrak{h}}\),

\[\begin{align*} h(1\otimes 1) &= h1\otimes 1 \\ &= 1 \otimes h1\\ &=1 \otimes\lambda(h) 1 \\ &=\lambda(h) v^+ .\end{align*}\]

So \(M(\lambda)\) is a highest weight module of highest weight \(\lambda\), and thus \(M(\lambda) \in \mathcal O\).

Any weight \(\lambda \in {\mathfrak{h}}^\vee\) is the highest weight of some \(M\in \mathcal O\). Let \(\Pi(M)\) denote the set of weights of a module, then \(\Pi(M(\lambda)) = \lambda - {\mathbb{Z}}^+ \Phi^+\).

By PBW, we can obtain a basis for \(M(\lambda)\) as \(\left\{{ y_1^{t_1} \cdots y_m^{t_m}v^+ {~\mathrel{\Big|}~}t_i \in {\mathbb{Z}}^+}\right\}\), where we fix an ordering \(\left\{{\beta_1, \cdots, \beta_m}\right\} = \Phi^+\) and take \(0\neq y_i \in {\mathfrak{g}}_{-\beta_i}\). Every \(\mu\) of the form \(\lambda - \sum t_i \beta_i\) is a weight of \(M(\lambda)\), and every weight of \(M(\lambda)\) is of this form.
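
As a sketch of how the PBW basis controls weight multiplicities (not in the notes; it uses the \(A_2\) positive roots \(\alpha, \beta, \alpha+\beta\) from the SL3 section above), the multiplicity of \(\mu = \lambda - a\alpha - b\beta\) in \(M(\lambda)\) is the number of monomials \(y_1^{t_1}y_2^{t_2}y_3^{t_3}\) with \(t_1\alpha + t_2\beta + t_3(\alpha+\beta) = a\alpha + b\beta\):

```python
# Weight multiplicities of a Verma module M(lambda) in type A2: by the PBW basis,
# dim M(lambda)_{lambda - a*alpha - b*beta} counts monomials y_alpha^t1 y_beta^t2 y_{alpha+beta}^t3
# with t1*alpha + t2*beta + t3*(alpha + beta) = a*alpha + b*beta  (a Kostant partition count).
def verma_mult(a, b):
    return sum(1 for t3 in range(min(a, b) + 1))  # t3 determines t1 = a - t3 >= 0 and t2 = b - t3 >= 0

for a in range(4):
    print([verma_mult(a, b) for b in range(4)])
# Rows a = 0..3: the multiplicity is min(a, b) + 1; e.g. dim M(lambda)_{lambda - alpha - beta} = 2,
# coming from the two monomials y_alpha y_beta and y_{alpha + beta}.
```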

The functor \(\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}({\,\cdot\,}) = U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} {\,\cdot\,}\) from the category of finite-dimensional \({\mathfrak{h}}{\hbox{-}}\)semisimple \({\mathfrak{b}}{\hbox{-}}\)modules to \(\mathcal O\) is an exact functor, since it is naturally isomorphic to \(U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\,\cdot\,}\) (which is clearly exact since we are tensoring a vector space over its ground field).

Let \(I\) be the left ideal of \(U({\mathfrak{g}})\) which annihilates \(v^+\), namely \[\begin{align*} I = \left\langle{{\mathfrak{n}}, h-\lambda(h)\cdot 1 {~\mathrel{\Big|}~}h\in{\mathfrak{h}}}\right\rangle .\end{align*}\] Since \(v^+\) generates \(M(\lambda)\) as a \(U({\mathfrak{g}}){\hbox{-}}\)module and \(I\) is the annihilator of \(v^+\), a standard ring theory result gives \(M(\lambda) \cong U({\mathfrak{g}})/I\).

Let \(M\) be any highest weight module of highest weight \(\lambda\) generated by \(v\). Then \(I\cdot v = 0\), so \(I\) is the annihilator of \(v\) and thus \(M\) is a quotient of \(M(\lambda)\). Thus \(M(\lambda)\) is universal in the sense that every other highest weight module arises as a quotient of \(M(\lambda)\).

By theorem 1.2, \(M(\lambda)\) has a unique maximal submodule \(N(\lambda)\) (nonstandard notation) and a unique simple quotient \(L(\lambda)\) (standard notation).

Every simple module in \(\mathcal O\) is isomorphic to \(L(\lambda)\) for some \(\lambda \in {\mathfrak{h}}^\vee\) and is determined uniquely up to isomorphism by its highest weight. Moreover, there is an analog of Schur’s lemma: \[\begin{align*}\dim \hom_{\mathcal O}(L(\mu), L(\lambda)) = \delta_{\mu\lambda}\end{align*}\], i.e. it’s 1 iff \(\lambda=\mu\) and 0 otherwise.

Up to isomorphism, we’ve found all of the simple modules in \(\mathcal O\), and most are infinite-dimensional: \(L(\lambda)\) is finite-dimensional exactly when \(\lambda \in \Lambda^+\).

Friday January 24th

A standard theorem about classifying simple modules in category \({\mathcal{O}}\):

Theorem (Classification of Simple Modules)
Every simple module in \({\mathcal{O}}\) is isomorphic to \(L(\lambda)\) for some \(\lambda \in {\mathfrak{h}}^\vee\), and is determined uniquely up to isomorphism by its highest weight. Moreover, \(\dim \hom_{\mathcal{O}}(L(\mu), L(\lambda)) = \delta_{\lambda \mu}\).
Proof

Let \(L \in {\mathcal{O}}\) be irreducible. As observed in 1.2, \(L\) has a maximal vector \(v^+\) of some weight \(\lambda\).

Recall: can increase weights and reach a maximal in a finite number of steps.

Since \(L\) is irreducible, \(L\) is generated by that weight vector, i.e. \(L = U({\mathfrak{g}}) \cdot v^+\), so \(L\) must be a highest weight module.

Standard argument: use triangular decomposition.

By the universal property, \(L\) is a quotient of \(M(\lambda)\). But this means \(L \cong L(\lambda)\), the unique irreducible quotient of \(M(\lambda)\).

By Theorem 1.2 part g (see last Friday), \(\dim \operatorname{End}_{\mathcal{O}}(L(\lambda)) = 1\), and \(\hom_{\mathcal{O}}(L(\mu), L(\lambda)) = 0\) for \(\mu \neq \lambda\) since both entries are irreducible and non-isomorphic.

Theorem (1.2 f, Highest Weight Modules are Indecomposable)
A highest weight module \(M\) is indecomposable, i.e. can’t be written as a direct sum of two nontrivial proper submodules.
Proof (of Theorem 1.2 f)
Suppose \(M = M_1 \oplus M_2\) where \(M\) is a highest weight module of highest weight \(\lambda\). Category \({\mathcal{O}}\) is closed under submodules, so the \(M_i\) are weight modules and have weight-space decompositions. But \(M_\lambda\) is 1-dimensional, so \(M_\lambda \subset M_1\) (say). Since \(M_\lambda\) contains the maximal vector that generates \(M\), we get \(M \subset M_1\), so \(M = M_1\) and this forces \(M_2 = 0\).

1.4: Maximal Vectors in Verma Modules

1.5: Examples in the case \({\mathfrak{sl}}(2)\), over \({\mathbb{C}}\) as usual.

First, some review from Lie algebras.

Let \({\mathfrak{g}}\) be any lie algebra, and take \(u, v \in U({\mathfrak{g}})\). Recall that we have the formula \[\begin{align*}uv = [uv] + vu,\end{align*}\] where we use the definition \([uv] = uv - vu\).

Let \(x, y_1, y_2\) be in \({\mathfrak{g}}\); what is \([x, y_1 y_2]\) in \(U({\mathfrak{g}})\)? Use the fact that \(\operatorname{ad}x\) acts as a derivation on products, so \([x, y_1 y_2] = [x y_1]y_2 + y_1[x y_2]\), where each bracket on the right is computed in the Lie algebra. This extends \(\operatorname{ad}x\) to an action on \(U({\mathfrak{g}})\) by the product rule.

Recall that \({\mathfrak{sl}}(2)\) is spanned by \(y =[0,0; 1,0], h = [1,0; 0, -1], x = [0,1; 0,0]\), where each basis vector spans \({\mathfrak{n}}^-, {\mathfrak{h}}, {\mathfrak{n}}\) respectively. Using \(E_{ij} E_{kl} = \delta_{jk} E_{il}\) (should be able to compute easily!), one gets \([x y] = h, [h x] = 2x, [h y] = -2y\).

Then \({\mathfrak{h}}\cong {\mathbb{C}}\), so \({\mathfrak{h}}^\vee\cong {\mathbb{C}}\) via \(\lambda \mapsto \lambda(h)\). So we identify \(\lambda\) with a complex number; this is kind of like a bundle of Verma modules over \({\mathbb{C}}\).

Consider \(M(1)\), then \(\lambda = 1\) will denote \(\lambda(h) = 1\). As in any Verma module, \(M(\lambda) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\mathbb{C}}_{\lambda}\). We can think of \(v^+\) as \(1\otimes 1\), with the action \(yv^+ = y1\otimes 1\). Note that \(y\) has weight \(-2\).

| Weight | Basis vector |
|--------|--------------|
| \(1\)  | \(v^+\)      |
| \(-1\) | \(yv^+\)     |
| \(-3\) | \(y^2 v^+\)  |
| \(-5\) | \(y^3 v^+\)  |

Consider how \(x\curvearrowright y^2 v^+\). Note that \(x\) has weight \(+2\). We have

\[\begin{align*} x \cdot y^2 v^+ &= x y^2 \otimes 1_\lambda \\ &= ([x y^2] + y^2 x) \otimes 1 \\ &= [x y^2] \otimes 1 + y^2 \otimes x\cdot 1 \quad\text{moving $x$ across the tensor since $x \in {\mathfrak{b}}$ and the tensor is over $U({\mathfrak{b}})$}\\ &= [x y^2] \otimes 1 + 0 \quad\text{since ${\mathfrak{n}}$ kills $1_\lambda$} \\ &= ([x y] y + y [x y]) \otimes 1 \\ &= (hy + yh) \otimes 1 \\ &= ([h y] + yh) \otimes 1 + y \otimes h\cdot 1 \\ &= -2(y \otimes 1) + \lambda(h)(y\otimes 1) + \lambda(h)(y \otimes 1) \\ &= (-2 + 1 + 1)(y\otimes 1) \quad\text{since } \lambda(h) = 1 \\ &= 0 .\end{align*}\]

So \(y\) moves us downward through the table, and \(x\) moves upward, except when going from \(-3\to -1\) in which case the result is zero.

Thus there exists a morphism \(\phi: M(-3) \to M(1)\), with image \(U({\mathfrak{g}}) y^2 v^+ = U({\mathfrak{n}}^-) y^2 v^+\). So the image of \(\phi\) is everything spanned by the bases in the rows \(-3, -5, \cdots\), which is exactly \(M(-3)\). So \(M(-3) \hookrightarrow M(1)\) as a submodule.
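
A tiny sketch (mine) of the computation above: by induction one checks \(x\cdot(y^i v^+) = i(\lambda - i + 1)\, y^{i-1}v^+\) in \(M(\lambda)\); for \(\lambda = 1\) the coefficient vanishes exactly at \(i = 2\), recovering the maximal vector \(y^2 v^+\), while for \(\lambda = -3\) it never vanishes again.

```python
# In M(lambda) for sl(2):  x . (y^i v+) = i * (lambda - i + 1) * y^(i-1) v+,
# proved by induction on i using that ad(x) is a derivation and that x kills v+.
def x_coefficient(lam, i):
    return i * (lam - i + 1)

# lambda = 1: the coefficient vanishes at i = 2, so y^2 v+ is a maximal vector (of weight -3);
# y v+ is not maximal.  This is the copy of M(-3) inside M(1) found above.
print([x_coefficient(1, i) for i in range(5)])   # [0, 1, 0, -3, -8]

# lambda = -3: no coefficient vanishes for i >= 1, so M(-3) has no new maximal vectors.
print([x_coefficient(-3, i) for i in range(5)])  # [0, -3, -8, -15, -24]
```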

Motivation for next section: we want to find Verma modules which are themselves submodules of Verma modules.

It turns out that \(\operatorname{im}({\phi })\cong N(1)\). We should have \(M(1) / N(1) \cong L(1)\). What is the simple module of highest weight 1 for \({\mathfrak{sl}}(2)\)? The weights of \(L(n)\) are \(n, n-2, n-4, \cdots, -n\), and these finite-dimensional simple modules are parameterized by \(n\in {\mathbb{Z}}^{+}\). In the Verma module \(M(n)\), acting by \(y\) on the weight \(-n\) vector lands in weight \(-n-2\) and gives a maximal vector, so the calculation above goes through roughly the same way. So we’ll have a similar picture with \(L(n)\) at the top.

Back to 1.4

Question 1: What are the submodules of \(M(\lambda)\)?

Question 2: What are the Verma submodules \(M(\mu) \subset M(\lambda)\)? Equivalently, when do maximal vectors of weight \(\mu < \lambda\) (the interesting case) lie in \(M(\lambda)\)?

Question 3: As a special case, when do maximal vectors of weight \(\lambda - k\alpha\) for \(\alpha \in \Delta\) lie in \(M(\lambda)\) for \(k\in {\mathbb{Z}}^+\)?

Fix a Chevalley basis for \({\mathfrak{g}}\) (see section 0.1) \(h_1, \cdots, h_\ell \in {\mathfrak{h}}\) and \(x_\alpha \in {\mathfrak{g}}_\alpha\) and \(y_\alpha \in {\mathfrak{g}}_{-\alpha}\) for \(\alpha \in \Phi^+\). Let \(\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}\) and let \(x_i = x_{\alpha_i}, y_i = y_{\alpha_i}\) be chosen such that \([x_i y_i] = h_i\).

Lemma

For \(k\geq 0\) and \(1\leq i, j \leq \ell\), then

  1. \([x_j, y_i^{k+1}] = 0\) if \(j\neq i\)

  2. \([h_j, y_i^{k+1}] = -(k+1) \alpha_i(h_j) y_i^{k+1}\).

  3. \([x_i, y_i^{k+1}] = -(k+1) y_i^k(k\cdot 1 - h_i)\).

Proof (sketch)

Parts (a) and (b) are straightforward inductions; for (a), note that \([x_j, y_i] \in {\mathfrak{g}}_{\alpha_j - \alpha_i} = 0\), since a difference of distinct simple roots is not a root.

For \(k=0\), all identities are easy. For \(k> 0\), an inductive formula that uses the derivation property, which we’ll do next class.

Monday January 27th

Section 1.4

Fix \(\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}\), \(x_i \in g_{\alpha_i}\) and \(y_i \in g_{-\alpha_i}\) with \(h_i = [x_i y_i]\).

Lemma

For \(k\geq 0\) and \(1 \leq i, j \leq \ell\),

  1. \([x_j y_i^{k+1}] = 0\) if \(j\neq i\)
  2. \([h_j y_i^{k+1}] = -(k+1) \alpha_i(h_j) y_i^{k+1}\)
  3. \([x_i y_i^{k+1}] = -(k+1) y_i^{k} (k\cdot 1 - h_i)\).
Proof (Sketch for (c))

By induction, where \(k=0\) is clear.

\[\begin{align*} [x_i y_i^{k+1}] &= [x_i y_i] y_i^k + y_i [x_i y_i^k] \\ &=h_i y_i^k + y_i(-k y_i^{k-1} ((k-1)1 - h_i)) \quad\text{by I.H.} \\ &= y_i^k h_i - 2k y_i^k - k(k-1)y_i^k + k y_i^k h_i \quad\text{using } h_i y_i^k = y_i^k h_i - 2k y_i^k \\ &= (k+1)y_i^k h_i - (k^2 -k + 2k)y_i^k \\ &= -(k+1) y_i^k ( k\cdot 1 - h_i ) .\end{align*}\]
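
Since the sign in (c) is easy to get wrong, here is a numerical sanity check (a sketch, not from the notes): the identity holds in \(U({\mathfrak{g}})\), hence as matrices in any representation, e.g. in the finite-dimensional simple \({\mathfrak{sl}}(2)\)-module \(L(n)\) built from the formulas of section 0.9.

```python
import numpy as np

n = 8           # check inside the simple sl(2)-module L(8)
d = n + 1
H = np.diag([n - 2 * i for i in range(d)]).astype(float)
X = np.zeros((d, d)); Y = np.zeros((d, d))
for i in range(1, d):
    X[i - 1, i] = n - i + 1
for i in range(d - 1):
    Y[i + 1, i] = i + 1

I = np.eye(d)
mp = np.linalg.matrix_power
for k in range(5):
    lhs = X @ mp(Y, k + 1) - mp(Y, k + 1) @ X     # [x, y^{k+1}]
    rhs = -(k + 1) * mp(Y, k) @ (k * I - H)       # -(k+1) y^k (k*1 - h)
    assert np.allclose(lhs, rhs)
print("[x, y^(k+1)] = -(k+1) y^k (k*1 - h) holds on L(8) for k = 0, ..., 4")
```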

Proposition (Existence of Morphisms of Verma Modules)

Suppose \(\lambda \in {\mathfrak{h}}^\vee, \alpha \in \Delta\), and \(n\mathrel{\vcenter{:}}=(\lambda, \alpha^\vee) \in {\mathbb{Z}}^+\). Then in \(M(\lambda)\), \(y_\alpha^{n+1} v^+\) is a maximal weight vector of weight \(\mu \mathrel{\vcenter{:}}=\lambda - (n+1)\alpha < \lambda\).

Note this is free as an \(U({\mathfrak{n}}^-){\hbox{-}}\)module, so \(v^+ \neq 0\). Note that \(n = \lambda(h_\alpha)\).

By the universal property, there is a nonzero homomorphism \(M(\mu) \to M(\lambda)\) with image contained in \(N(\lambda)\), the unique maximal proper submodules of \(M(\lambda)\).

Proof

Say \(\alpha = \alpha_i\). Fix \(j\neq i\).

\[\begin{align*} x_j y_i^{n+1} \otimes 1 &= [x_j y_i^{n+1}] \otimes 1 + y_i^{n+1} \otimes x_j \cdot 1 \\ &= 0 \otimes 1 + y_i^{n+1} \otimes 0 \quad\text{by (a), and since } x_j \in {\mathfrak{n}} \\ &= 0 .\end{align*}\]

\[\begin{align*} x_i y_i^{n+1} \otimes 1 &= [x_i y_i^{n+1}] \otimes 1 + y_i^{n+1} \otimes x_i \cdot 1 \\ &= -(n+1) y_i^n (n\cdot 1 - h_i) \otimes 1 \quad\text{by (c), since } x_i \cdot 1 = 0 \\ &= -(n+1) (n - \lambda(h_i))\, y_i^n \otimes 1 \\ &= -(n+1) (\lambda(h_i) - \lambda(h_i))\, y_i^n \otimes 1 \quad\text{since } n = \lambda(h_i) \\ &= 0 .\end{align*}\]

The subalgebras \({\mathfrak{g}}_{\alpha_j}\) generate \({\mathfrak{n}}\) as a Lie algebra, since \([{\mathfrak{g}}_\alpha, {\mathfrak{g}}_\beta] = {\mathfrak{g}}_{\alpha + \beta}\). This shows that \({\mathfrak{n}}\cdot y_i^{n+1} v^+ = 0\), and the weight of \(y_i^{n+1} v^+\) is \(\lambda - (n+1)\alpha_i\). So \(y_i^{n+1} v^+\) is a maximal vector of weight \(\mu\). The universal property implies there is a nonzero map \(M(\mu) \to M(\lambda)\) sending highest weight vectors to highest weight vectors and preserving weights. The image is proper since all weights of \(M(\mu)\) are \(\leq \mu < \lambda\).

Consider \({\mathfrak{sl}}(2)\), then \(M(1) \supset M(-3)\). Note that reflecting through 0 doesn’t send 1 to -3, but shifting the origin to \(-1\) and reflecting about that with \(s_\alpha \cdot\) fixes this problem. Note that \(L(1)\) is the quotient.

For \(\lambda \in {\mathfrak{h}}^\vee\) and \(\alpha \in \Delta\), we can compute the dot action \(s_\alpha \cdot \lambda \mathrel{\vcenter{:}}= s_\alpha(\lambda + \rho) - \rho\), where \(\rho = \sum_{j=1}^\ell \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_j\) is the Weyl vector. Then \((\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_j, \alpha_i^\vee) = \delta_{ij}\) and \((\rho, \alpha_i^\vee) = 1\).

\[\begin{align*} s_\alpha \cdot \lambda &= s_\alpha(\lambda + \rho) - \rho \\ &= (\lambda + \rho) - (\lambda + \rho, \alpha^\vee)\alpha -\rho \\ &= \lambda + \rho - ((\lambda, \alpha^\vee) +1)\alpha - \rho \\ &= \lambda - (n+1)\alpha \\ &= \mu .\end{align*}\]

So this gives a well-defined, nonzero map \(M(s_\alpha \cdot \lambda) \to M(\lambda)\) for \(s_\alpha \cdot \lambda < \lambda\).


Corollary
Let \(\lambda, \alpha, n\) be as in the above proposition. Let \(\mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu^+\) now be a maximal vector of weight \(\lambda\) in \(L(\lambda)\). Then \(y_\alpha^{n+1} \mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu^+ = 0\).
Proof
If not, then \(y_\alpha^{n+1} \mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu^+\) would be a maximal vector of weight \(\mu < \lambda\), since it’s the image of the vector \(y_\alpha^{n+1}v^+ \in M(\lambda)\) under the map \(M(\lambda) \to L(\lambda)\). Then it would generate a proper nonzero submodule of \(L(\lambda)\), a contradiction since \(L(\lambda)\) is irreducible.

Section 1.5

Example: \({\mathfrak{sl}}(2)\). What do Verma modules \(M(\lambda)\) and their simple quotients \(L(\lambda)\) look like?

Fix a Chevalley basis \(\left\{{y,h,x}\right\}\) and let \(\lambda \in {\mathfrak{h}}^\vee\cong {\mathbb{C}}\).

Fact 1

For \(v^+ = 1\otimes 1_\lambda\), we have \[\begin{align*}M(\lambda) = U({\mathfrak{n}}^-) v^+ = {\operatorname{span}}_{\mathbb{C}}\left\{{y^i v^+ {~\mathrel{\Big|}~}i\in {\mathbb{Z}}^+}\right\},\end{align*}\] and \(\left\{{y^i v^+}\right\}\) is a basis for \(M(\lambda)\), with \(y^i v^+\) of weight \(\lambda - 2i\) (where \(\alpha\) corresponds to 2). So the weights of \(M(\lambda)\) are \(\lambda, \lambda-2, \lambda-4, \cdots\), each with multiplicity 1.

Let \(v_i = \frac 1 {i!} y^i v^+\) for \(i\in {\mathbb{Z}}^+\); these also form a basis for \(M(\lambda)\). Using the lemma, we have

\[\begin{align*} h\cdot v_i &= (\lambda - 2i) v_i \\ y \cdot v_i &= (i+1) v_{i+1} \\ x\cdot v_i &= (\lambda - i + 1)v_{i-1} .\end{align*}\]

Note that these are the same for finite-dimensional \({\mathfrak{sl}}(2){\hbox{-}}\)modules, see section 0.9.
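
As a sketch (using sympy, with an arbitrary truncation \(N\) of the infinite-dimensional module; both choices are mine), one can check that these formulas define an \({\mathfrak{sl}}(2)\)-action for a generic symbolic \(\lambda\); the relations hold except in the last column, which only fails because of the truncation.

```python
import sympy as sp

lam = sp.symbols("lambda")
N = 6     # truncate M(lambda) to the span of v_0, ..., v_N

H = sp.zeros(N + 1); X = sp.zeros(N + 1); Y = sp.zeros(N + 1)
for i in range(N + 1):
    H[i, i] = lam - 2 * i            # h . v_i = (lambda - 2i) v_i
for i in range(1, N + 1):
    X[i - 1, i] = lam - i + 1        # x . v_i = (lambda - i + 1) v_{i-1}
for i in range(N):
    Y[i + 1, i] = i + 1              # y . v_i = (i + 1) v_{i+1}

def bracket(A, B):
    return A * B - B * A

# Check the relations on columns 0..N-1 only: the last column is distorted by the truncation,
# since y . v_N has been cut off.
for lhs, rhs in [(bracket(H, X), 2 * X), (bracket(H, Y), -2 * Y), (bracket(X, Y), H)]:
    assert (lhs[:, :N] - rhs[:, :N]).expand() == sp.zeros(N + 1, N)
print("the formulas define an sl(2)-action on M(lambda) for symbolic lambda (away from the truncation)")
```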

Fact (2)
We know from the proposition that if \(\lambda \in {\mathbb{Z}}^+\), i.e. \((\lambda, \alpha^\vee) \in {\mathbb{Z}}^+\), then \(M(\lambda)\) has a maximal vector of weight \[\begin{align*}\lambda - (n+1)\alpha = \lambda - (\lambda+1)2 = -\lambda-2 = s_\alpha \cdot \lambda.\end{align*}\]
Exercise

Check that this maximal vector generates the maximal proper submodule \[\begin{align*}N(\lambda) = M(-\lambda - 2).\end{align*}\]

So the quotient \(L(\lambda) = M(\lambda) / N(\lambda) = M(\lambda) / M(-\lambda - 2)\) has weights \(\lambda, \lambda-2, \cdots, -\lambda+2, -\lambda\). So when \(\lambda \in {\mathbb{Z}}^+\), \(L(\lambda)\) is the familiar simple \({\mathfrak{sl}}(2){\hbox{-}}\)module of highest weight \(\lambda\).

Fact (3)

When \(\lambda \not\in{\mathbb{Z}}^+\), \(M(\lambda)\) has no proper nonzero submodule; that is, \(M(\lambda) = L(\lambda)\) is already simple.

Proof
Argue by contradiction: suppose \(M(\lambda) \supset M \neq 0\) is a proper submodule. Then \(M\in {\mathcal{O}}\), and thus \(M\) has a maximal vector \(w^+\); by the restriction on weights for modules in \({\mathcal{O}}\), \(w^+\) has weight \(\lambda - 2m\) for some \(m\in {\mathbb{Z}}^+\). Then \(w^+ = c v_m\) for some \(0\neq c \in {\mathbb{C}}\) with \(m \geq 1\) (else \(M = M(\lambda)\)), and since \(x\cdot v_m = (\lambda - m + 1)v_{m-1}\), maximality forces \(\lambda - m + 1 = 0\), i.e. \(\lambda = m-1 \in {\mathbb{Z}}^+\), a contradiction.

Friday January 31st

Theorem (Duals of Simple Quotients of Vermas)
A useful formula: \(L(\lambda)^\vee\cong L(-w_0\lambda)\) for \(\lambda \in \Lambda^+\).
Proof

\(L(\lambda)^\vee\) is a finite dimensional simple module, with action \((x\cdot f)(v) = -f(x\cdot v)\), so \(L(\lambda)^\vee\cong L(\nu)\) for some \(\nu \in \Lambda^+\). The weights of \(L(\lambda)^\vee\) are the negatives of the weights of \(L(\lambda)\). The lowest weight of \(L(\lambda)\) is \(w_0\lambda\), since \(w_0\) reverses the partial order on \({\mathfrak{h}}^\vee\), i.e. \(w_0 \Phi^+ = \Phi^-\).

Then \[\begin{align*} \mu \in \Pi(L(\lambda)) \implies w_0 \mu \in \Pi(L(\lambda)) \implies w_0\mu \leq \lambda .\end{align*}\] This shows that the lowest weight of \(L(\lambda)\) is \(w_0 \lambda\), and thus the highest weight of \(L(\lambda)^\vee\) is \(-w_0 \lambda\), by negating and reversing this inequality.

The inner product is \(W\)-invariant and \(w_0\) is its own inverse, so we can move \(w_0\) to the other side of the pairing.

1.7: Action of \(Z({\mathfrak{g}})\)

Next big goal: Every module in \({\mathcal{O}}\) has a finite composition series (a Jordan-Hölder series, i.e. the successive quotients are simple). This leads to the Kazhdan-Lusztig conjectures from 1979/1980, which were solved, but whose characteristic \(p\) analogues are still open.

The technique we’ll use is the Harish-Chandra homomorphism, which identifies \({\mathcal{Z}}({\mathfrak{g}})\) explicitly.

It’s commutative, a subalgebra of a Noetherian algebra, and has no zero divisors; it could conceivably be a proper quotient of a polynomial algebra, but then one would expect zero divisors, so this suggests that it should be a polynomial algebra in some unknowns.

Also note that \({\mathcal{Z}}({\mathfrak{g}}) \mathrel{\vcenter{:}}= Z(U({\mathfrak{g}}))\).

Recall: \({\mathcal{Z}}({\mathfrak{g}})\) acts locally finitely on any \(M\in {\mathcal{O}}\) – this is by theorem 1.1e, i.e. \(v\in M_\mu\) and \(z\in {\mathcal{Z}}({\mathfrak{g}})\) implies that \(zv\in M_\mu\). (The calculation just follows by computing the weight and commuting things through.)

Let \(\lambda \in {\mathfrak{h}}^\vee\) and \(M = U({\mathfrak{g}})v^+\) a highest weight module of highest weight \(\lambda\). Then for \(z\in {\mathcal{Z}}({\mathfrak{g}})\), \(z\cdot v^+ \in M_\lambda\), which is 1-dimensional. Thus \(z\) acts by scalar multiplication here, and \(z\cdot v^+ = \chi_\lambda(z) v^+\). Now if \(u\in U({\mathfrak{n}}^-)\), we have \[\begin{align*}z\cdot(u\cdot v^+) = u\cdot(z\cdot v^+) = u(\chi_\lambda(z)v^+) = \chi_\lambda(z) u\cdot v^+.\end{align*}\] Thus \(z\) acts on all of \(M\) by the scalar \(\chi_\lambda(z)\).

Exercise
Show that \(\chi_\lambda\) is a nonzero additive and multiplicative function, so \(\chi_\lambda: {\mathcal{Z}}({\mathfrak{g}}) \to {\mathbb{C}}\) is linear and thus a morphism of algebras. Conclude that \(\ker \chi_\lambda\) is a maximal ideal of \({\mathcal{Z}}({\mathfrak{g}})\).

Note: this is called the infinitesimal character.

Note that \(\chi_\lambda\) doesn't depend on which highest weight module \(M\) of highest weight \(\lambda\) was chosen, since they're all quotients of \(M(\lambda)\). In fact, every submodule and subquotient of \(M(\lambda)\) has the same infinitesimal character.

Definition (Central/Infinitesimal Character)

\(\chi_\lambda\) is called the central (or infinitesimal) character, and \(\widehat{\mathcal{Z}}({\mathfrak{g}})\) denotes the set of all central characters. More generally, any algebra morphism \(\chi: {\mathcal{Z}}({\mathfrak{g}}) \to {\mathbb{C}}\) is referred to as a central character. Central characters are in one-to-one correspondence with maximal ideals of \({\mathcal{Z}}({\mathfrak{g}})\), where

\[\begin{align*} \chi & \iff \ker \chi \\ \text{evaluation at } (a_1, \cdots, a_n) &\iff \left\langle{x_1 - a_1, \cdots, x_n - a_n}\right\rangle \subset {\mathbb{C}}[x_1, \cdots, x_n] ,\end{align*}\]

where \((a_1, \cdots, a_n) \in {\mathbb{C}}^n\) (the analogous picture for a polynomial algebra).

Next goal: Describe \(\chi_\lambda(z)\) more explicitly.

Using PBW, we can write \(z\in {\mathcal{Z}}({\mathfrak{g}}) \subset U({\mathfrak{g}}) = U({\mathfrak{n}}^-) U({\mathfrak{h}}) U({\mathfrak{n}})\). Some observations:

  1. Any PBW monomial in \(z\) ending with a factor in \({\mathfrak{n}}\) will kill \(v^+\), and hence can not contribute to \(\chi_\lambda(z)\).
  2. Any PBW monomial in \(z\) beginning with a factor in \({\mathfrak{n}}^-\) will send \(v^+\) to a lower weight space, so it also can’t contribute.

So we only need to see what happens in the \({\mathfrak{h}}\) part. A relevant decomposition here is \[\begin{align*} U({\mathfrak{g}}) = U({\mathfrak{h}}) \oplus \qty{ {\mathfrak{n}}^- U({\mathfrak{g}}) + U({\mathfrak{g}}){\mathfrak{n}}^+ } .\end{align*}\]

Exercise
Why is this sum direct?

Let \(\mathrm{pr}: U({\mathfrak{g}}) \to U({\mathfrak{h}})\) be the projection onto the first factor. Then \(\chi_\lambda(z) = \lambda(\mathrm{pr}(z))\) for all \(z\in {\mathcal{Z}}({\mathfrak{g}})\). Here we extend the action of \(\lambda\) on \({\mathfrak{h}}\) multiplicatively to all polynomials in elements of \({\mathfrak{h}}\) (which amounts to evaluation on monomials): if \(\mathrm{pr}(z)\) is a monomial \(h_1^{m_1} \cdots h_\ell^{m_\ell}\), then \(\chi_\lambda(z) = \lambda(h_1)^{m_1} \cdots \lambda(h_\ell)^{m_\ell}\).

Note that for \(\lambda \in {\mathfrak{h}}^\vee\), we've extended \(\lambda\) to an "evaluation map" on \(U({\mathfrak{h}}) \cong S({\mathfrak{h}})\), the symmetric algebra on \({\mathfrak{h}}\).

Why is this the correct identification? By definition \(U({\mathfrak{h}}) = T({\mathfrak{h}}) / \left\langle{x\otimes y - y\otimes x - [xy]}\right\rangle\), but the bracket vanishes since \({\mathfrak{h}}\) is abelian, and what remains is the exact definition of the symmetric algebra.

Thus \(\chi_\lambda = \lambda \circ \mathrm{pr}\).

Observation:

\[\begin{align*} \lambda(\mathrm{pr}(z_1 z_2)) &= \chi_\lambda(z_1 z_2)\\ &= \chi_\lambda(z_1) \chi_\lambda(z_2) \\ &= \lambda(\mathrm{pr}(z_1))\, \lambda(\mathrm{pr}(z_2)) \\ &= \lambda( \mathrm{pr}(z_1) \mathrm{pr}(z_2) ) .\end{align*}\]

Exercise
Show \(\cap_{\lambda \in {\mathfrak{h}}^\vee} \ker \lambda = \left\{{0}\right\}\).
Definition (Harish-Chandra Morphism)
Let \(\xi = {\left.{\mathrm{pr}}\right|_{{\mathcal{Z}}({\mathfrak{g}})}}: {\mathcal{Z}}({\mathfrak{g}}) \to U({\mathfrak{h}})\). Then \(\xi\) is an algebra morphism, and is referred to as the Harish-Chandra homomorphism. (Its \(\rho{\hbox{-}}\)shifted version \(\psi\), the twisted Harish-Chandra homomorphism, is defined below.)

See page 23 for interpretation of \(\xi\) without reference to representations.

Questions:

  1. Is \(\xi\) injective?
  2. What is \(\operatorname{im}({\xi })\subset U({\mathfrak{h}})\)?

When does \(\chi_\lambda = \chi_\mu\)? Proved last time: we introduced the \(\cdot\) action and proved that \(M(s_\alpha \cdot \lambda) \subset M(\lambda)\) where \(\alpha \in \Delta\). It'll turn out that \(\chi_\lambda = \chi_{w\cdot\lambda}\) for all \(w \in W\).

Wednesday: Section 1.8.

Wednesday February 5th

Recall the Harish-Chandra morphism \(\xi\):

If \(M\) is a highest weight module of highest weight \(\lambda\), then \(z\in {\mathcal{Z}}({\mathfrak{g}})\) acts on \(M\) by scalar multiplication: \(z\cdot v = \chi_\lambda(z) v\) for all \(v\in M\), where \(\chi_\lambda(z) = \lambda(\mathrm{pr}(z)) = \lambda(\xi(z))\).

Central Characters and Linkage

The \(\chi_\lambda\) are not all distinct – for example, if \(M(\mu) \subset M(\lambda)\), then \(\chi_\mu = \chi_\lambda\). More generally, if \(L(\mu)\) is a subquotient of \(M(\lambda)\) then \(\chi_\mu = \chi_\lambda\). So when do we have equality \(\chi_\mu = \chi_\lambda\)?

Given \({\mathfrak{g}}\supset {\mathfrak{h}}\) with \(\Phi \supset \Phi^+ \supset \Delta\), define \[\begin{align*}\rho = \frac 1 2 \sum_{\beta \in \Phi^+} \beta \in {\mathfrak{h}}^\vee.\end{align*}\] Note that \(\alpha \in \Delta \implies s_\alpha \rho = \rho - \alpha\).

Definition (Dot Action)
The dot action of \(W\) on \({\mathfrak{h}}^\vee\) is given by \[\begin{align*}w\cdot \lambda = w(\lambda + \rho) - \rho.\end{align*}\] Note that \(s_\alpha \rho = \rho - \alpha\) implies \((\rho, \alpha^\vee) = 1\) for all \(\alpha \in \Delta\), so \(\rho = \sum_{i=1}^\ell \varpi_i\), the sum of the fundamental weights.
Exercise
Check that this gives a well-defined group action.
Definition (Linkage Class)
\(\mu\) is linked to \(\lambda\) iff \(\mu = w\cdot \lambda\) for some \(w\in W\). Note that this is an equivalence relation; its equivalence classes are the orbits of the dot action, and the orbit \(W\cdot\lambda = \left\{{w\cdot \lambda {~\mathrel{\Big|}~}w\in W}\right\}\) is called the linkage class of \(\lambda\).

Note that this is a finite subset, since \(W\) is finite. Orbit-stabilizer applies here, so bigger stabilizers yield smaller orbits and vice-versa.

Example
\(w\cdot (-\rho) = w(-\rho + \rho) - \rho = -\rho\), so \(-\rho\) is in its own linkage class.
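As a complementary example (for \({\mathfrak{g}} = {\mathfrak{sl}}(2)\), where \(\rho = 1\) and \(\alpha\) corresponds to \(2\); this computation is not in the notes, but follows directly from the definition):

\[\begin{align*} s_\alpha \cdot \lambda = s_\alpha(\lambda + 1) - 1 = -(\lambda + 1) - 1 = -\lambda - 2 ,\end{align*}\]

so the linkage class of \(\lambda \neq -1\) is \(\left\{{\lambda, -\lambda-2}\right\}\), while \(\lambda = -\rho = -1\) is alone in its class.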
Definition (Dot-Regular)
\(\lambda \in {\mathfrak{h}}^\vee\) is dot-regular iff \({\left\lvert {W\cdot \lambda } \right\rvert} = {\left\lvert {W} \right\rvert}\), or equivalently if \((\lambda + \rho, \beta^\vee) \neq 0\) for all \(\beta \in \Phi\).

To think about: does this hold if \(\Phi\) is replaced by \(\Delta\)?

We also say \(\lambda\) is dot-singular if \(\lambda\) is not dot-regular, or equivalently \({\operatorname{Stab}}_{W\cdot}\lambda \neq \left\{{1}\right\}\).

I.e., \(\lambda + \rho\) lies on at least one root hyperplane.

Exercise
Show that \(0\in {\mathfrak{h}}^\vee\) is dot-regular, while \(-\rho\) is dot-singular.

Image

Proposition (Weights in Weyl Orbit Yield Equal Characters)
If \(\lambda \in \Lambda\) and \(\mu \in W\cdot \lambda\), then \(\chi_\mu = \chi_\lambda\).
Proof

Start with \(\alpha \in \Delta\) and consider \(\mu = s_\alpha \cdot \lambda\). Since \(\lambda \in \Lambda\), we have \(n\mathrel{\vcenter{:}}=(\lambda ,\alpha^\vee) \in {\mathbb{Z}}\) by definition. There are three cases:

  1. \(n\in {\mathbb{Z}}^+\), then \(M(s_\alpha \cdot \lambda) \subset M(\lambda)\). By Proposition 1.4, we have \(\chi_\mu =\chi_\lambda\).

  2. For \(n=-1\): \(\mu = s_\alpha \cdot \lambda = \lambda + \rho -(\lambda + \rho, \alpha^\vee)\alpha - \rho = \lambda - (n+1)\alpha = \lambda\). So \(\mu = \lambda\) and thus \(M(\mu) = M(\lambda)\).

  3. For \(n\leq -2\),

\[\begin{align*} (\mu, \alpha^\vee) &= (s_\alpha \cdot \lambda , \alpha^\vee) \\ &= (\lambda - (n+1)\alpha, \alpha^\vee) \\ &= n - 2(n+1) \\ &= -n-2 \\ &\geq 0 ,\end{align*}\]

so \(\chi_\mu = \chi_{s_\alpha \cdot \mu} = \chi_{s_\alpha \cdot (s_\alpha \cdot \lambda)} = \chi_\lambda\), using case (1) or (2) applied to \(\mu\). Since \(W\) is generated by simple reflections and linkage is transitive, the result follows by induction on \(\ell(w)\).

Exercise (1.8)
See book, show that certain properties of the dot action hold (namely nonlinearity).

1.9: Extending the Harish-Chandra Morphism

We want to extend the previous proposition from \(\lambda \in \Lambda\) to \(\lambda \in {\mathfrak{h}}^\vee\). We'll use a density argument from affine algebraic geometry, switching to the Zariski topology on \({\mathfrak{h}}^\vee\cong {\mathbb{C}}^\ell\).

Fix a basis \(\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}\) and use the Killing form to identify these with a basis \(\left\{{h_1, \cdots, h_\ell}\right\}\) for \({\mathfrak{h}}\). Similarly, take \(\left\{{w_1, \cdots, w_\ell}\right\}\) as a basis for \({\mathfrak{h}}^\vee\), and we'll use the identification

\[\begin{align*} {\mathfrak{h}}^\vee&\iff {\mathbb{A}}^\ell \\ \lambda &\iff (\lambda(h_1), \cdots, \lambda(h_\ell)) .\end{align*}\]

We identify \(U({\mathfrak{h}}) = S({\mathfrak{h}}) = {\mathbb{C}}[h_1, \cdots, h_\ell]\) with \(P({\mathfrak{h}}^\vee)\), the algebra of polynomial functions on \({\mathfrak{h}}^\vee\). Fix \(\lambda \in {\mathfrak{h}}^\vee\) and extend \(\lambda\) to a multiplicative function on polynomials, so that \(\lambda(f)\) is defined for every \(f\in {\mathbb{C}}[h_1, \cdots, h_\ell]\). Under the identification, \(f\) corresponds to the polynomial function \(\tilde f\), where \(\tilde f(\lambda) = \lambda(f)\).

Note: we’ll identify \(f\) and \(\tilde f\) notationally going forward and drop the tilde everywhere.

Then \(W\) acts on \(P({\mathfrak{h}}^\vee)\) by the dot action: \((w\cdot \tilde f)(\lambda) = \tilde f(w^{-1}\cdot \lambda)\).

Exercise
Check that this is a well-defined action.
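To illustrate the difference between the dot action and the natural action on polynomial functions (an example added here, using the \({\mathfrak{sl}}(2)\) identifications \(\rho = 1\) and \(s_\alpha\lambda = -\lambda\)): take \(\tilde f(\lambda) = \lambda(h) = \lambda\), i.e. \(f = h\). Then

\[\begin{align*} (s_\alpha \cdot \tilde f)(\lambda) = \tilde f(s_\alpha^{-1}\cdot\lambda) = \tilde f(-\lambda - 2) = -\lambda - 2 ,\end{align*}\]

so \(s_\alpha \cdot h = -h - 2\) under the dot action, whereas the natural action gives \(s_\alpha h = -h\).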

Under this identification, we have

\[\begin{align*} {\mathfrak{h}}^\vee&\iff {\mathbb{A}}^\ell \\ \Lambda &\iff {\mathbb{Z}}^\ell .\end{align*}\]

Note that \(\Lambda\) is discrete in the analytic topology, but is dense in the Zariski topology.

Proposition (Polynomials Vanishing on a Lattice Are Zero)
A polynomial \(f\) on \({\mathbb{A}}^\ell\) vanishing on \({\mathbb{Z}}^\ell\) must be identically zero.
Proof

For \(\ell = 1\): A nonzero polynomial in one variable has only finitely many zeros, but if \(f\) vanishes on \({\mathbb{Z}}\) it has infinitely many zeros.

For \(\ell > 1\): View \(f\in {\mathbb{C}}[h_1, \cdots, h_{\ell-1}][h_\ell]\). Substituting any fixed integers for the \(h_i\) with \(i\leq \ell - 1\) yields a polynomial in one variable which vanishes on \({\mathbb{Z}}\), hence is identically zero by the first case. Thus the coefficient polynomials in \({\mathbb{C}}[h_1, \cdots ,h_{\ell-1}]\) vanish on \({\mathbb{Z}}^{\ell-1}\), so by induction they are identically zero, and \(f \equiv 0\).

Corollary (Lattices Are Zariski-Dense in Affine Space)
The only Zariski-closed subset of \({\mathbb{A}}^\ell\) containing \({\mathbb{Z}}^\ell\) is \({\mathbb{A}}^\ell\) itself, so the Zariski closure \(\mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{Z}}^\ell\mkern-1.5mu}\mkern 1.5mu = {\mathbb{A}}^\ell\) and \({\mathbb{Z}}^\ell\) is dense in \({\mathbb{A}}^\ell\).

Friday February 7th

So far, we have \(\chi_\lambda = \chi_{w\cdot \lambda}\) if \(\lambda \in \Lambda\) and \(w\in W\). We have \({\mathfrak{h}}^\vee\supset \Lambda\), which corresponds to \({\mathbb{A}}^\ell \supset {\mathbb{Z}}^\ell\), where \({\mathbb{Z}}^\ell\) is dense in the Zariski topology.

For \(z\in {\mathcal{Z}}({\mathfrak{g}})\), we have \(\chi_\lambda(z) = \chi_{w\cdot \lambda} (z)\), and so \(\lambda(\xi(z)) = (w\cdot \lambda )(\xi(z))\), where \(\xi: {\mathcal{Z}}({\mathfrak{g}}) \to U({\mathfrak{h}}) = S({\mathfrak{h}}) \cong P({\mathfrak{h}}^\vee)\), the last identification sending \(f\) to \(\tilde f\) with \(\tilde f(\lambda) = \lambda(f)\).

Then \(\xi(z)(\lambda) = \xi(z)(w\cdot \lambda)\) for all \(\lambda \in \Lambda\), and so \(\xi(z) = w^{-1}\cdot\xi(z)\) on \(\Lambda\). But both sides here are polynomials and thus continuous, and \(\Lambda \subset {\mathfrak{h}}^\vee\) is Zariski-dense, so \(\xi(z) = w^{-1}\cdot\xi(z)\) on all of \({\mathfrak{h}}^\vee\). I.e., \(\chi_\lambda = \chi_{w\cdot \lambda}\) for all \(\lambda \in {\mathfrak{h}}^\vee\).

This in fact shows that the image of \({\mathcal{Z}}({\mathfrak{g}})\) under \(\xi\) consists of polynomials invariant under the dot action of \(W\).

It's customary to state this in terms of the natural action of \(W\) on polynomials, without the \(\rho{\hbox{-}}\)shift. We do this by letting \(\tau_\rho: S({\mathfrak{h}}) \xrightarrow{\cong} S({\mathfrak{h}})\) be the algebra automorphism induced by \(f(\lambda) \mapsto f(\lambda - \rho)\), which is clearly invertible via \(f(\lambda) \mapsto f(\lambda + \rho)\). We then define \[\begin{align*}\psi: {\mathcal{Z}}({\mathfrak{g}}) \xrightarrow{\xi} S({\mathfrak{h}}) \xrightarrow{\tau_\rho} S({\mathfrak{h}})\end{align*}\] as this composition; this is referred to as the (twisted) Harish-Chandra (HC) homomorphism.

Exercise
Show \(\chi_\lambda(z) = (\lambda + \rho) (\psi(z))\) and \(\chi_{w\cdot \lambda}(z) = (w(\lambda+\rho))(\psi(z))\), where \(w({\,\cdot\,})\) is the usual \(w{\hbox{-}}\)action.

Replacing \(\lambda\) by \(\lambda + \rho\) and \(w\) by \(w^{-1}\), we get \[\begin{align*} w\psi(z) = \psi(z) \end{align*}\] for all \(z\in {\mathcal{Z}}({\mathfrak{g}})\) and all \(w\in W\) where \((w\psi(z))(\lambda) = \psi(z)(w^{-1}\lambda)\).

We’ve proved that

Theorem (Character Linkage and Image of the HC Morphism)
  1. If \(\lambda, \mu \in {\mathfrak{h}}^\vee\) that are linked, then \(\chi_\lambda = \chi_\mu\).

  2. The image of the twisted HC homomorphism \(\psi: {\mathcal{Z}}({\mathfrak{g}}) \to U({\mathfrak{h}}) = S({\mathfrak{h}})\) lies in the subalgebra \(S({\mathfrak{h}})^W\).

Example

Let \({\mathfrak{g}}= {\mathfrak{sl}}_2\). Recall from finite-dimensional representations there is a canonical element \(c\in {\mathcal{Z}}({\mathfrak{g}})\) called the Casimir element. For \({\mathcal{O}}\), we need information about the full center \({\mathcal{Z}}({\mathfrak{g}})\) (hence introducing infinitesimal characters).

Expressing \(c\) in the PBW basis yields \(c = h^2 + 2h + 4yx\), where \(h^2 + 2h \in U({\mathfrak{h}})\) and \(4yx \in {\mathfrak{n}}^- U({\mathfrak{g}}) + U({\mathfrak{g}}) {\mathfrak{n}}\).

Enveloping algebra convention: \(x\)s, \(h\)s, \(y\)s

Then \(\xi(c) = {\operatorname{pr}}(c) = h^2 + 2h\), and under the identification \({\mathfrak{h}}^\vee\iff {\mathbb{C}}\) where \(\lambda \iff \lambda(h)\), we can identify \(\rho \iff \rho(h) = 1\). The \(\rho{\hbox{-}}\)shift gives \(\psi(c) = (h-1)^2 + 2(h-1) = h^2 - 1\). This is \(W{\hbox{-}}\)invariant, since the natural action gives \(s_\alpha(h) = -h\). And \(W = \left\{{1, s_\alpha}\right\}\), so \(s_\alpha\) generates \(W\).

We also have \(\chi_\lambda(c) = (\lambda + \rho) (\psi(c)) = (\lambda+1)^2 - 1\). Then \[\begin{align*} \chi_\lambda(c) = \chi_\mu(c) \iff (\lambda+1)^2 - 1 = (\mu + 1)^2 - 1 \iff \mu = \lambda \text{ or } \mu = -\lambda - 2 .\end{align*}\]

But \(\lambda = 1 \cdot \lambda\) and \(-\lambda - 2 = s_\alpha \cdot \lambda\). Since \({\mathcal{Z}}({\mathfrak{g}}) = \left\langle{c}\right\rangle \mathrel{\vcenter{:}}={\mathbb{C}}[c]\) as an algebra, these characters are equal iff \(\mu = w\cdot \lambda\) for some \(w\in W\).
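As a consistency check (not spelled out in the notes), one can instead compute with the untwisted map \(\xi\):

\[\begin{align*} \chi_\lambda(c) = \lambda(\xi(c)) = \lambda(h^2 + 2h) = \lambda^2 + 2\lambda = (\lambda+1)^2 - 1 ,\end{align*}\]

agreeing with the \(\rho{\hbox{-}}\)shifted formula above.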

Section 1.10: Harish-Chandra’s Theorem

Goal: prove the converse of the previous theorem.

Theorem (Harish-Chandra)

Let \(\psi: {\mathcal{Z}}({\mathfrak{g}}) \to S({\mathfrak{h}})\) be the twisted HC homomorphism. Then

  1. \(\psi\) is an isomorphism of \({\mathcal{Z}}({\mathfrak{g}})\) onto \(S({\mathfrak{h}})^W\).

  2. For all \(\lambda, \mu \in {\mathfrak{h}}^\vee\), \(\chi_\lambda = \chi_\mu\) iff \(\mu = w\cdot \lambda\) for some \(w\in W\).

  3. Every central character \(\chi: {\mathcal{Z}}({\mathfrak{g}}) \to {\mathbb{C}}\) is a \(\chi_\lambda\).

Proof (of (a))

Relies heavily on the Chevalley Restriction Theorem (which we won’t prove here).

Initially we have a restriction map on polynomial functions \(\theta: P({\mathfrak{g}}) \to P({\mathfrak{h}})\). We identified \(P({\mathfrak{g}}) = S({\mathfrak{g}}^\vee)\), the formal polynomials on \({\mathfrak{g}}^\vee\). However, for \({\mathfrak{g}}\) semisimple, we can identify \(S({\mathfrak{g}}^\vee) \cong S({\mathfrak{g}})\) via the Killing form.

By the Chevalley Restriction Theorem, \(\theta: S({\mathfrak{g}})^G \to S({\mathfrak{h}})^W\) is an isomorphism, where the subgroup \(G \leq \operatorname{Aut}({\mathfrak{g}})\) is the adjoint group, generated by \(\left\{{\exp(\operatorname{ad}_x) {~\mathrel{\Big|}~}x\in{\mathfrak{g}} \text{ nilpotent}}\right\}\).

It turns out that \(S({\mathfrak{g}})^G\) is very close to \({\mathcal{Z}}({\mathfrak{g}})\) – it is the associated graded of a natural filtration on \({\mathcal{Z}}({\mathfrak{g}})\). This is enough to show that \(\psi\) is a bijection.

Proof (of (b))

We’ll prove the contrapositive of the converse.

Suppose \(W\cdot \lambda \cap W\cdot \mu = \emptyset\) with \(\lambda, \mu \in {\mathfrak{h}}^\vee\). Since these orbits are finite sets, Lagrange interpolation yields a polynomial \(f\) that is 1 on \(W\cdot \lambda\) and 0 on \(W\cdot \mu\). Let \(g = \frac{1}{{\left\lvert {W} \right\rvert}} \sum_{w\in W} w\cdot f\).

Note: definitely the dot action here, may be a typo in the book.

Then \(g\) is a \(W\cdot{\hbox{-}}\)invariant polynomial with the same properties. By part (a), we can remove the \(\rho{\hbox{-}}\)shift to obtain an isomorphism \(\xi: {\mathcal{Z}}({\mathfrak{g}}) \to S({\mathfrak{h}})^{W\cdot}\), the \(W\cdot{\hbox{-}}\)invariant polynomials. Choose \(z\in {\mathcal{Z}}({\mathfrak{g}})\) such that \(\xi(z) = g\); then \(\chi_\lambda(z) = \lambda(\xi(z)) = \lambda(g) = g(\lambda) = 1\), while similarly \(\chi_\mu(z) = 0\). Thus \(\chi_\lambda \neq \chi_\mu\).

Proof (of (c))
This follows from some commutative algebra; we won't say much here. Maximal ideals in \({\mathbb{C}}[x_1, \cdots, x_\ell]\) correspond to evaluation at points of \({\mathbb{C}}^\ell\).


Remark
Chevalley actually proved that \(S({\mathfrak{h}})^W \cong {\mathbb{C}}[p_1, \cdots, p_\ell]\), where the \(p_i\) are homogeneous polynomials of degrees \(d_1 \leq \cdots \leq d_\ell\). These numbers satisfy some remarkable properties, e.g. \(\prod d_i = {\left\lvert {W} \right\rvert}\) and \(d_1 = 2\) (they are called the degrees of \(W\)).

Section 1.11

Theorem (Category O is Artinian)
Category \({\mathcal{O}}\) is artinian, i.e. every \(M \in {\mathcal{O}}\) satisfies the descending chain condition on submodules; moreover \(\dim \hom_{\mathfrak{g}}(M, N) < \infty\) for every \(M, N \in {\mathcal{O}}\).

Recall that \({\mathcal{O}}\) is known to be Noetherian from an earlier theorem. This will imply that every \(M\) has a composition/Jordan-Holder series, so we can take composition factors and multiplicities.

Most interesting question: what are the factors/multiplicities of the simple modules and Verma modules?

Wednesday February 12th

Infinitesimal Blocks

We’ll break up category \({\mathcal{O}}\) into smaller subcategories (blocks).

Recall theorem 1.1 (e): \({\mathcal{Z}}({\mathfrak{g}})\) acts locally finitely on \(M\in {\mathcal{O}}\), and \(M\) has a finite filtration with highest weight sections, so \(M\) should involve only a finite number of central characters \(\chi_\lambda\) (where \(\lambda \in{\mathfrak{h}}^\vee\)).

Note: an analog of Jordan decomposition works here because of this finiteness condition. This discussion will parallel the Jordan canonical form of a single operator on a finite dimensional \({\mathbb{C}}{\hbox{-}}\)vector space; however, here the entire center acts rather than a single operator, so the analogy is decomposing with respect to a commuting family of operators simultaneously.

Let \(\chi \in \widehat{\mathcal{Z}}({\mathfrak{g}})\) and \(M\in {\mathcal{O}}\), and \[\begin{align*} M^\chi \mathrel{\vcenter{:}}=\left\{{v\in M {~\mathrel{\Big|}~}~\forall z\in {\mathcal{Z}}({\mathfrak{g}}),~\exists n>0 ~{\text{s.t.}}~(z- \chi(z))^n \cdot v = 0}\right\} \end{align*}\]

Idea: write \[\begin{align*} z = \chi(z) \cdot 1 + (z-\chi(z)\cdot 1), \end{align*}\] where the first is a scalar operator and the second is (locally) nilpotent on \(M^\chi\). Thus we can always arrange for \(z\) to act by a sum of “Jordan blocks”:

Image

Some observations:

Let \({\mathcal{O}}_\chi\) be the full subcategory of modules \(M\) such that \(M = M^\chi\); we refer to this as an infinitesimal block.

Note: full subcategory means keep all of the hom sets.

Proposition (O Factors into Blocks, Indecomposables/Highest Weight Modules Lie in a Single Block)
\({\mathcal{O}}= \bigoplus_{\chi} {\mathcal{O}}_{\chi}\), where \(\chi\) runs over the distinct central characters \(\chi_\lambda\) for \(\lambda \in {\mathfrak{h}}^\vee\). Each indecomposable module in \({\mathcal{O}}\) lies in a unique \({\mathcal{O}}_\chi\). In particular, any highest weight module of highest weight \(\lambda\) lies in \({\mathcal{O}}_{\chi_\lambda}\).

Thus we can reduce to studying \({\mathcal{O}}_{\chi_\lambda}\).

Remark: \({\mathcal{O}}_{\chi_\lambda}\) has a finite number of simple modules \(\left\{{L(w\cdot \lambda) {~\mathrel{\Big|}~}w\in W}\right\}\) and a finite number of Verma modules \(\left\{{M(w\cdot \lambda) {~\mathrel{\Big|}~}w\in W}\right\}\).

Blocks

Let \({\mathcal{C}}\) be a category which is artinian and noetherian, and let \(L_1, L_2\) be simple modules. We say \(L_1 \sim L_2\) if there exists a non-split extension \[\begin{align*}0 \to L_1 \to M \to L_2 \to 0,\end{align*}\] i.e. \(\operatorname{Ext}^1_{\mathcal{C}}(L_2, L_1) \neq 0\); equivalently, such an \(M\) is indecomposable. We then extend \(\sim\) to be reflexive/symmetric/transitive to obtain an equivalence relation.

\(L_1\) ends up being the socle here.

This partitions the simple modules in \({\mathcal{C}}\) into blocks \({\mathcal{B}}\). More generally, we say \(M\in {\mathcal{C}}\) belongs to \({\mathcal{B}}\) iff all of the composition factors of \(M\) belong to \({\mathcal{B}}\). Although not obvious, there are no nontrivial extensions between modules in different blocks. Thus each module (more generally, each object) \(M\in {\mathcal{C}}\) decomposes as a direct sum of submodules (subobjects), each belonging to a single block.

Question: Is \({\mathcal{O}}_\chi\) a block of \({\mathcal{O}}\)? Not always. Since each indecomposable module in \({\mathcal{O}}\) lives in a single \({\mathcal{O}}_\chi\), it's clear from the definition that each block is contained in a single infinitesimal block \({\mathcal{O}}_\chi\).

Indeed, if \(L_1 \sim L_2\), then \(L_1\) and \(L_2\) lie in the same infinitesimal block (the extension \(M\) is indecomposable), and running through composition series places all composition factors of a block in a single \({\mathcal{O}}_\chi\).

Proposition (Integral Weights Yield Simple Blocks)
If \(\lambda\) is an integral weight, so \(\lambda \in \Lambda\), then \({\mathcal{O}}_{\chi_\lambda}\) is a single block of \({\mathcal{O}}\).
Proof

It suffices to show that all \(L(w\cdot \lambda)\) for \(w\in W\) lie in a single block. We'll induct on the length of \(w\). Start with \(w = s_\alpha\) for some \(\alpha\in \Delta\), and let \(\mu = s_\alpha \cdot \lambda\). If \(\mu = \lambda\), i.e. \(s_\alpha\) stabilizes \(\lambda\) under the dot action, then we're done.

Otherwise, assume WLOG \(\mu < \lambda\) in the partial order, using the fact that \(\lambda \in \Lambda\). (The difference between these is just an integer multiple of \(\alpha\).)

By proposition 1.4, we have a nonzero map \(\phi: M(\mu) \to M(\lambda)\); let \(N \mathrel{\vcenter{:}}= \phi(N(\mu))\) be the image of the maximal proper submodule of \(M(\mu)\).

Then \(\phi\) induces a map \(L(\mu) \xrightarrow{\mkern 1.5mu\overline{\mkern-1.5mu\phi\mkern-1.5mu}\mkern 1.5mu} M(\lambda)/N\), where the codomain is a highest weight module with quotient \(L(\lambda)\). Since highest weight modules are indecomposable and thus lie in a single block, \(L(\mu)\) and \(L(\lambda)\) are in the same block.

Note that if \(v^+\) generates \(M(\lambda)\), then \(v^+ + N\) generates the quotient.

Now inducting on \(\ell(w)\), iterating this argument yields all \(L(w\cdot \lambda)\) (as \(w\) varies) in the same block.

Example

This isn’t true for non-integral weights. Let \({\mathfrak{g}}= {\mathfrak{sl}}(2, {\mathbb{C}})\) with \(\lambda \in {\mathbb{R}}\setminus {\mathbb{Z}}\) and \(\lambda > -1\). Then

\[\begin{align*} \mu &= s_\alpha \cdot \lambda \\ &= -\lambda - 2 \\ &<_{{\mathbb{R}}} -1 \end{align*}\]

with the usual ordering on \({\mathbb{R}}\), but \(\mu\) and \(\lambda\) are not comparable in the ordering on \({\mathfrak{h}}^\vee\): we have \(\lambda - \mu = 2\lambda + 2\), but \(\alpha\) corresponds to \(2\), and \(2\lambda + 2 \notin 2{\mathbb{Z}}\) since \(\lambda \notin {\mathbb{Z}}\), so they don't differ by an integer multiple of \(\alpha\).

Thus \(\mu, \lambda\) are in different cosets of \({\mathbb{Z}}\Phi = \Lambda_r\) in \({\mathfrak{h}}^\vee\). However, \(M(\lambda), M(\mu)\) are simple since \(\lambda, \mu\) are not non-negative integers.

By exercise 1.13, there can be no nontrivial extension, so they’re in different homological blocks but in the same \({\mathcal{O}}_{\chi_\lambda}\) since \(\mu = s_\alpha \cdot \lambda\). So this infinitesimal block splits into multiple homological blocks.

Friday: 1.14 and 1.15.

Friday February 14th

Recall that we have a decomposition \[\begin{align*}{\mathcal{O}}= \bigoplus_{\chi \in \widehat{\mathcal{Z}}({\mathfrak{g}})} {\mathcal{O}}_\chi\end{align*}\] into infinitesimal blocks, where \({\mathcal{O}}_0 \mathrel{\vcenter{:}}={\mathcal{O}}_{\chi_0}\) is the principal block. To \(0\in{\mathfrak{h}}^\vee\) we associate \(\chi_0\), \(M(0)\), and \(L(0) = {\mathbb{C}}\), the trivial module for \({\mathfrak{g}}\).

1.14 – 1.15: Formal Characters

Some background from the finite dimensional representation theory of a finite group \(G\) over \({\mathbb{C}}\). The hope is to record matrices for each element of \(G\), but this isn't basis invariant. Instead, we take traces of these matrices, which is less data and is basis-independent. This is referred to as the character of the representation, and in nice situations the characters determine the irreducible representations.

For a semisimple Lie algebra \({\mathfrak{g}}\) and a finite dimensional representation \(M\), it's enough to keep track of weight multiplicities, at least when \({\mathfrak{g}}\) is the Lie algebra associated to a compact Lie group \(G\): from this data the characters can be recovered. So the data of all pairs \((\lambda, \dim M_\lambda)\) for \(\lambda \in {\mathfrak{h}}^\vee\) suffices. To track this information, we introduce a formal character.

Remark: If \(G\) is a group and \(k\) is a commutative ring, \(kG\) denotes the group ring of \(G\): the free \(k{\hbox{-}}\)module with basis \(G\), with multiplication extending that of \(G\).

Let \({\mathbb{Z}}\Lambda\) be the integral group ring of the weight lattice. Since \(\Lambda\) is an abelian group written additively, which would make the group-ring notation awkward, we write \(\Lambda\) multiplicatively: introduce symbols \(e(\lambda)\) for \(\lambda \in \Lambda\), where \(e(\lambda) e(\mu) = e(\lambda + \mu)\). For \(M\) a finite dimensional \({\mathfrak{g}}{\hbox{-}}\)module, the formal character of \(M\) is given by

\[\begin{align*} \operatorname{ch}M = \sum_{\lambda\in \Lambda} \qty{ \dim M_\lambda } e(\lambda) \quad\in {\mathbb{Z}}\Lambda .\end{align*}\]

This satisfies \(\operatorname{ch}(M \oplus N) = \operatorname{ch}M + \operatorname{ch}N\) and \(\operatorname{ch}(M \otimes N) = \operatorname{ch}M \cdot \operatorname{ch}N\).

By Weyl's complete reducibility theorem, any finite dimensional module is semisimple, i.e. decomposes into a direct sum of simple modules. Thus it suffices to determine the characters of the simple modules \(L(\lambda)\) for \(\lambda \in \Lambda^+\), the dominant integral weights. More generally, for \(M\in {\mathcal{O}}\) we will want to reconstruct \(\operatorname{ch}(M)\) from the \(\operatorname{ch}L(\lambda)\).
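For instance (an example added here, using the \({\mathfrak{sl}}(2){\hbox{-}}\)module \(L(n)\), whose weights are \(n, n-2, \cdots, -n\), each of multiplicity 1):

\[\begin{align*} \operatorname{ch}L(n) = \sum_{i=0}^{n} e(n - 2i) = e(n) + e(n-2) + \cdots + e(-n) .\end{align*}\]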

Specifying the weight space dimensions is equivalent to giving a function \(\operatorname{ch}_M: {\mathfrak{h}}^\vee\to {\mathbb{Z}}^+\) where \(\operatorname{ch}_M(\lambda) = \dim M_\lambda\). The analogue of \(e(\lambda)\) in this setting is the characteristic function \(e_\lambda\), where \(e_\lambda(\mu) = \delta_{\lambda \mu}\) for \(\mu \in {\mathfrak{h}}^\vee\). We can thus write the function

\[\begin{align*} \operatorname{ch}_M = \sum_{\lambda \in {\mathfrak{h}}^\vee} \qty{ \dim M_\lambda } e_\lambda .\end{align*}\]

When \(\dim M < \infty\), \(\operatorname{ch}_M\) has finite support, although we generally don’t have this in \({\mathcal{O}}\). In this setting, multiplication of formal characters corresponds to convolution of functions, i.e. \[\begin{align*}(f\ast g)(\lambda) = \sum_{\mu + \nu = \lambda} f(\mu) g(\nu).\end{align*}\] Define \[\begin{align*} {\mathcal{X}}= \left\{{f: {\mathfrak{h}}^\vee\to {\mathbb{Z}}{~\mathrel{\Big|}~}{\operatorname{supp}}(f) \subset \cup_{i\leq n} \qty{ \lambda_i - {\mathbb{Z}}^+ \Phi^+ } ~~\text{ for some } \lambda_1, \cdots, \lambda_n \in {\mathfrak{h}}^\vee}\right\} \end{align*}\]

Idea: this is a “cone” below some weights.

This makes \({\mathcal{X}}\) into a \({\mathbb{Z}}{\hbox{-}}\)module with a well-defined convolution, and thus \({\mathcal{X}}\) is a commutative ring, with multiplication given by \(\ast\) and identity \(e_0\).
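A small check (added here) that the \(e_\lambda\) multiply the same way the \(e(\lambda)\) do:

\[\begin{align*} (e_\lambda \ast e_\mu)(\nu) = \sum_{\nu' + \nu'' = \nu} e_\lambda(\nu') e_\mu(\nu'') = \delta_{\nu, \lambda + \mu} ,\quad\text{i.e.}\quad e_\lambda \ast e_\mu = e_{\lambda + \mu} ,\end{align*}\]

and in particular \(e_0\) is indeed the identity.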

If \(M\in {\mathcal{O}}\), then \(\operatorname{ch}_M \in {\mathcal{X}}\) by axiom O5 (local finiteness).

Example: \(\operatorname{ch}L(\lambda) = e(\lambda) + \sum_{\mu < \lambda} m_{\lambda \mu} e(\mu)\), where \(m_{\lambda \mu} = \dim L(\lambda)_{\mu} \in {\mathbb{Z}}^+\).

Definition (The Subgroup \({\mathcal{X}}_0\))
Let \({\mathcal{X}}_0\) be the additive subgroup of \({\mathcal{X}}\) generated by all \(\operatorname{ch}M\) for \(M \in {\mathcal{O}}\).
Proposition (Additivity of Characters, Correspondence with K(O) )
  1. If \(0 \to M' \to M \to M'' \to 0\) is a SES in \({\mathcal{O}}\), then \(\operatorname{ch}M = \operatorname{ch}M' + \operatorname{ch}M''\).

  2. There is a 1-to-1 correspondence

\[\begin{align*} {\mathcal{X}}_0 &\iff K({\mathcal{O}}) \\ \operatorname{ch}M &\iff [M] ,\end{align*}\]

where \(K\) is the Grothendieck group.

  3. If \(M\in {\mathcal{O}}\) and \(\dim L < \infty\), then \(\operatorname{ch}(L\otimes M) = \operatorname{ch}L \ast \operatorname{ch}M\).

Remark: (a) implies that \(\operatorname{ch}M\) is the sum of the formal characters of its composition factors with multiplicities. Thus \[\begin{align*} \operatorname{ch}M = \sum_{L \text{ simple }} [M:L] ~\operatorname{ch}L .\end{align*}\]

Proof (of a)
Use the fact that \(\dim M_\lambda = \dim M_\lambda' + \dim M_\lambda''\)
Proof (of b)
Check that the obvious maps are well-defined and mutually inverse.
Proof (of c)
Because of the module structure we’ve put on the tensor product \((L \otimes M)_\lambda = \sum_{\mu + \nu = \lambda} L_\mu \otimes M_\nu\).

Remark: The natural action of \(W\) on \(\Lambda\) or on \({\mathfrak{h}}^\vee\) extends to \({\mathbb{Z}}\Lambda\) and \({\mathcal{X}}\) if we define

\[\begin{align*} w\cdot e(\lambda) \coloneqq e(w\lambda) \quad w\in W,~~\lambda \in \Lambda \text{ or } {\mathfrak{h}}^\vee .\end{align*}\]

If \(\lambda \in \Lambda^+\), then \(w( \operatorname{ch}L(\lambda) ) = \operatorname{ch}L(\lambda)\) since \(\dim L(\lambda)_\mu = \dim L(\lambda)_{w\mu}\). Thus the characters of simple finite-dimensional modules are \(W{\hbox{-}}\)invariant.

1.16: Formal Characters of Verma Modules

1: We have a similar formula

\[\begin{align*} \operatorname{ch}M(\lambda) = \operatorname{ch}L(\lambda) + \sum_{\mu < \lambda} a_{\lambda \mu} \operatorname{ch}L(\mu) \\ \quad \text{ with } a_{\lambda \mu} \in {\mathbb{Z}}^+ \text{ and } a_{\lambda \mu} = [M(\lambda): L(\mu)] .\end{align*}\]

This all happens in a single block of \({\mathcal{O}}\), which has finitely many simple and Verma modules.
In fact, the sum will be over \(\left\{{ \mu \in W\cdot \lambda {~\mathrel{\Big|}~}\mu < \lambda}\right\}\). But computing \(\operatorname{ch}L(\mu)\) is difficult in general.

Since the set of weights \(W\cdot \lambda\) is finite, we can totally order it in a way that's compatible with the partial order on \({\mathfrak{h}}^\vee\) (so \(\leq\) in the partial order implies \(\leq\) in the total order). So if we order the weights \(\mu_i\), indexing the Verma modules by columns and the simple modules by rows, the matrix \((a_{\lambda\mu})\) is upper triangular with 1s on the diagonal. It can be inverted since it's unipotent, and the inverse has the same upper triangular form.

2: We can write \[\begin{align*} \operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda) + \sum_{\mu < \lambda,~\mu \in W\cdot \lambda} b_{\lambda \mu} \operatorname{ch}M(\mu) \quad b_{\lambda \mu} \in {\mathbb{Z}} \end{align*}\] This expresses the character in terms of Verma modules, which are easier to compute.

Next time: formulas for the characters

Monday February 17th

Character Formulas

Last time: The second character formula (equation (2)),

\[\begin{align*} \operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda) + \sum_{\mu < \lambda, ~~ \mu \in W\cdot \lambda} b_{\lambda, \mu} ~\operatorname{ch}M(\mu) .\end{align*}\]

Note that \(b_{\lambda, \mu} \in {\mathbb{Z}}\), and this formula comes from inverting the previous one.

Holy grail: characters of simple modules!

We can write \(M(\lambda) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\mathbb{C}}_\lambda\) as an \({\mathfrak{h}}{\hbox{-}}\)module. Define \(p: {\mathfrak{h}}^\vee\to {\mathbb{Z}}\) where \(p(\gamma)\) is the number of tuples \((t_\beta)_{\beta\in\Phi^+}\) where \(t_\beta \in {\mathbb{Z}}^+\) and \(\gamma = - \sum_{\beta \in \Phi^+} t_\beta \beta\). We have \({\operatorname{supp}}(p) = - {\mathbb{Z}}^+ \Phi^+\), which gives us something like a negative quadrant of the lattice.

The function \(p\) is essentially the Kostant partition function. The advantage here is that \(p \in {\mathcal{X}}\) (defined last time: functions whose support lies below finitely many weights).

Observation: \(p = \operatorname{ch}_{M(0)}\) since \(U({\mathfrak{n}}^-) \otimes{\mathbb{C}}_{\lambda = 0}\) has PBW basis \[\begin{align*} \left\{{ \prod_{\beta\in\Phi^+} y_\beta^{t_\beta} \otimes 1_{\lambda = 0} {~\mathrel{\Big|}~}t_\beta \in {\mathbb{Z}}^+ }\right\}. \end{align*}\]

Example: Let \({\mathfrak{g}}= {\mathfrak{sl}}(3)\), then \(\Phi^+ = \left\{{\alpha_1, \alpha_2, \alpha_1 + \alpha_2}\right\}\). Then \(\gamma = -\qty{\alpha_1 + 2\alpha_2}\) corresponds to \((1,2,0), (0,1,1)\) so \(p(\gamma) = 2\). If \(\gamma = -\qty{2\alpha_1 + 2\alpha_2}\), this corresponds to \((2,2,0), (1,1,1), (0,0,2)\) so \(p(\gamma) = 3\).

Note: \(p(\gamma)\) is just the number of ways of writing \(-\gamma\) as a \({\mathbb{Z}}^+{\hbox{-}}\)linear combination of positive roots.

In general, \(\dim M(0)_\gamma = p(\gamma)\).
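For instance, matching the \({\mathfrak{sl}}(3)\) count above against the PBW basis from the Observation (with \(y_\beta\) the root vector of weight \(-\beta\); this check is not in the notes):

\[\begin{align*} M(0)_{-(\alpha_1 + 2\alpha_2)} = {\mathbb{C}}\left\langle{\, y_{\alpha_1} y_{\alpha_2}^{2} \otimes 1,\;\; y_{\alpha_2}\, y_{\alpha_1 + \alpha_2} \otimes 1 \,}\right\rangle ,\end{align*}\]

so \(\dim M(0)_{-(\alpha_1+2\alpha_2)} = 2 = p(-(\alpha_1 + 2\alpha_2))\).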

Proposition (Characters as Convolution Products)
For any \(\lambda \in {\mathfrak{h}}^\vee\), we have \(\operatorname{ch}_{M(\lambda)} = p\ast e_\lambda\), taking the convolution product.

In particular, \(\operatorname{ch}_{M(0)} = p\).

Proof (of Proposition)
We have the following computation: \[\begin{align*} (p\ast e_\lambda)(\lambda+\gamma) &= p(\gamma) e_\lambda(\lambda) \\ &= p(\gamma) 1 \\ &= p(\gamma) \\ &= \dim M(\lambda)_{\lambda + \gamma} \quad\text{ as a weight space } .\end{align*}\]

Note that we can also write equation (2) as

\[\begin{align*} \operatorname{ch}L(\lambda) = \sum_{w\cdot \lambda \leq \lambda} b_{\lambda, w} \operatorname{ch}M(w\cdot \lambda) .\end{align*}\]

Here \(b_{\lambda, w} \in {\mathbb{Z}}\) and in fact \(b_{\lambda, 1} = 1\).

Example: Let \({\mathfrak{g}}= {\mathfrak{sl}}(2)\). We know

\[\begin{align*} \operatorname{ch}M(\lambda) &= \operatorname{ch}L(\lambda) + \operatorname{ch}L(s_\alpha \cdot \lambda) \\ \operatorname{ch}M(s_\alpha \cdot \lambda) &= \operatorname{ch}L(s_\alpha \cdot \lambda) .\end{align*}\]

We can think of this pictorially as the ‘head’ on top of the socle:

\[\begin{align*} M(\lambda) = \frac{L(\lambda)}{L(s_\alpha \cdot \lambda)} .\end{align*}\]

The formula above corresponds to the matrix \[\begin{align*} \left[\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right] \end{align*}\]

We can invert the formula to get equation (2), which corresponds to inverting this matrix:

\[\begin{align*} \operatorname{ch}L(\lambda) &= \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_\alpha \cdot \lambda) \\ \operatorname{ch}L(s_\alpha \cdot \lambda) &= \operatorname{ch}M(s_\alpha \cdot \lambda) .\end{align*}\]
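Spelling out the inversion explicitly:

\[\begin{align*} \left[\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right]^{-1} = \left[\begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array}\right] ,\end{align*}\]

whose entries are the coefficients \(b_{\lambda, w}\) appearing in the formulas just given.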

Note that the coefficients \(b_{\lambda, w} \in \left\{{0, \pm 1}\right\}\) in this equation are independent of \(\lambda \in \Lambda^+\).

If \(\lambda \not\in\Lambda^+\), then \(\operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda)\) and \(b_{\lambda, 1} = 1, b_{\lambda, s_\alpha} = 0\) are again independent of \(\lambda \in {\mathfrak{h}}^\vee\setminus \Lambda^+\).

Question: To what extent do the \(b_{\lambda, w}\) depend on \(\lambda\)? The answer is seemingly "not much".

Category \({\mathcal{O}}\) Methods

Note: skipping chapter 2 since we’re focusing on infinite dimensional representations.

Hom and Ext

Recall that \(\hom({\,\cdot\,}, {\,\cdot\,})\) is left exact but not exact, and is either covariant or contravariant depending on which variable is fixed. So taking \(\hom\) of a SES yields a LES involving the derived functors \(\operatorname{Ext}^n\). Convention: \(\operatorname{Ext}^0 \mathrel{\vcenter{:}}=\hom\) and \(\operatorname{Ext}^1 \mathrel{\vcenter{:}}=\operatorname{Ext}\).

Let \(A, C\) be \(U({\mathfrak{g}}){\hbox{-}}\)modules. Consider two short exact sequences

\[\begin{align*} 0 &\to A \to B \to C \to 0 \\ 0 &\to A \to B' \to C \to 0 .\end{align*}\]

where \(B, B'\) are extensions of \(C\) by \(A\).

We say two such sequences are equivalent iff there is an isomorphism \(B \xrightarrow{\sim} B'\) commuting with the identity maps on \(A\) and \(C\).

The set \(\operatorname{Ext}_{U({\mathfrak{g}})}(C, A)\) of equivalence classes of extensions is a group under an operation called “Baer sum” (see Wikipedia) in which the identity is the class of the split SES \[\begin{align*} 0 \to A \to A\oplus C \to C \to 0. \end{align*}\]

It turns out that the first right-derived functor of \(\hom\) defined using projective resolutions, namely \(\operatorname{Ext}^1\), is isomorphic to \(\operatorname{Ext}\). In particular, each SES leads to a pair of LESs given by applying \(\hom({\,\cdot\,}, D)\) and \(\hom(E, {\,\cdot\,})\) for \(D, E \in U({\mathfrak{g}}){\hbox{-}}\)mod.

Warning: Even if \(A, C\in {\mathcal{O}}\), there’s no guarantee \(B\in {\mathcal{O}}\) for \(B\) an extension. In this case, we define \(\operatorname{Ext}_{\mathcal{O}}(C, A)\) to be only those extensions lying in \({\mathcal{O}}\).
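For a concrete example inside \({\mathcal{O}}\) (added here, for \({\mathfrak{g}} = {\mathfrak{sl}}(2)\) and \(\lambda\in{\mathbb{Z}}^+\), using the head/socle picture of \(M(\lambda)\) from the character discussion): the SES

\[\begin{align*} 0 \to L(s_\alpha\cdot\lambda) \to M(\lambda) \to L(\lambda) \to 0 \end{align*}\]

is non-split, since \(M(\lambda)\) is a highest weight module and hence indecomposable; thus \(\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(s_\alpha\cdot\lambda)) \neq 0\).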

Proposition (Homs and Exts for Vermas and Quotients)

Let \(\lambda, \mu \in {\mathfrak{h}}^\vee\).

  1. If \(M\) is a highest weight module of highest weight \(\mu\) and \(\lambda \not< \mu\), then \(\operatorname{Ext}_{\mathcal{O}}(M(\lambda), M) = 0\). Contrapositive: a nontrivial extension forces the strict inequality \(\lambda < \mu\). In particular, \(\operatorname{Ext}_{\mathcal{O}}(M(\lambda), X) = 0\) for \(X = L(\lambda), M(\lambda)\).

  2. If \(\mu \leq \lambda\), then \(\operatorname{Ext}_{\mathcal{O}}(M(\lambda), L(\mu)) = 0\).

  3. If \(\mu < \lambda\), then \(\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(\mu)) \cong \hom_{\mathcal{O}}(N(\lambda), L(\mu))\).

    Note that (3) is useful since homs can be easier to compute: one only needs the radical structure of \(N(\lambda)\), i.e. just its head.
  4. \(\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(\lambda)) = 0\).

Proof (of (a))

Given an extension \(0 \to M \xrightarrow{f} E \xrightarrow{g} M(\lambda) \to 0\), where \(M\) is a highest weight module of highest weight \(\mu\) with \(\lambda \not< \mu\), we want to show it splits.

Claim: Let \(v^+\) be a maximal vector of \(M(\lambda)\) and let \(v \in E_\lambda\) be a preimage of it under \(g\); then \(v\) is a maximal vector of weight \(\lambda\) in \(E\). For \(x\in {\mathfrak{n}}\), we can think of the RHS as the quotient \(E/M\) and identify

\[\begin{align*} x\cdot v + M &= x\cdot (v+M) \\ &= x\cdot v^+ \\ &= 0 \\ &= 0 + M ,\end{align*}\]

and for these to be equal we need \(x\cdot v \in M\). But \(x\cdot v\) has weight \(> \lambda\), and since \(\mu \not> \lambda\) (i.e. \(\lambda \not< \mu\)), \(M\) has no weights \(> \lambda\). So we must have \(x\cdot v = 0\in E\), and \(v\) is a maximal vector.

It’s also the case that \(U({\mathfrak{n}}^-)\) acts freely on \(v\), since it acts freely on its image in the quotient \(M(\lambda)\). So \(v\) generates a submodule \(\left\langle{v}\right\rangle \leq E\) isomorphic to \(M(\lambda)\). This defines a splitting (because of the freeness of this action) given by \(h(v^+) = v\).

Proof (of (b))
Follows from (a).
Proof (of (c))

Look at the SES \(0\to N(\lambda) \to M(\lambda) \to L(\lambda) \to 0\). Apply \(\hom_{\mathcal{O}}({\,\cdot\,}, L(\mu))\) to get the LES

\[\begin{align*} \cdots &\to \hom_{\mathcal{O}}(M(\lambda), L(\mu)) \to \hom_{\mathcal{O}}( N(\lambda), L(\mu) ) \\ &\to \operatorname{Ext}_{\mathcal{O}}( L(\lambda), L(\mu) ) \to \operatorname{Ext}_{\mathcal{O}}( M(\lambda), L(\mu) ) \to \cdots \end{align*}\]

and since \(L(\lambda)\) is the only simple quotient of \(M(\lambda)\) while \(\mu \neq \lambda\), the first \(\hom\) is zero. Similarly, the last \(\operatorname{Ext}_{\mathcal{O}}\) is zero by (b), so the middle map is an isomorphism.

Proof (of (d))
Replace \(\mu\) by \(\lambda\) in the LES; now term 2 above, \(\hom_{\mathcal{O}}(N(\lambda), L(\lambda))\), is zero since every weight of \(N(\lambda)\) is \(< \lambda\). Term 4 is zero by (b), and thus term 3, \(\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(\lambda))\), is zero.

Next section: duality in category \({\mathcal{O}}\).

Monday February 24th

Antidominant Weights

Recall that for \(\lambda \in {\mathfrak{h}}^\vee\), we can associate \(\Phi_{[\lambda]}\) and \(W_{[\lambda]}\) and consider \(W_{[\lambda]} \cdot \lambda\). When \(\lambda \in \Lambda\) is integral and \(\mu \in W\lambda \cap\Lambda^+\), we have \(M(\mu) \to L(\mu)\) its simple quotient, which is finite-dimensional.

Definition (Antidominant)
\(\lambda \in {\mathfrak{h}}^\vee\) is antidominant if \((\lambda + \rho, \alpha^\vee) \not\in {\mathbb{Z}}^{> 0}\) for all \(\alpha \in \Phi^+\). Dually, \(\lambda\) is dominant if \((\lambda + \rho, \alpha^\vee)\not\in{\mathbb{Z}}^{<0}\) for all \(\alpha\in\Phi^+\).

Note that most weights are both dominant and antidominant. Example: take \(\lambda = -\rho\). We won’t use the dominant condition often.
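For example, in \({\mathfrak{sl}}(2)\) (a check added here, with \(\rho = 1\) and the single positive root \(\alpha\)):

\[\begin{align*} \lambda \text{ antidominant} \iff (\lambda + \rho, \alpha^\vee) = \lambda + 1 \not\in {\mathbb{Z}}^{>0} \iff \lambda \not\in \left\{{0, 1, 2, \cdots}\right\} ,\end{align*}\]

which matches the earlier fact that \(M(\lambda)\) is simple exactly when \(\lambda \notin {\mathbb{Z}}^+\).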

Remark
For \(\lambda \in {\mathfrak{h}}^\vee\), \(W\cdot \lambda\) and \(W_{[\lambda]}\cdot \lambda\) contain at least one antidominant weight. Let \(\mu\) be minimal in either set with respect to the usual ordering on \({\mathfrak{h}}^\vee\). If \((\mu + \rho, \alpha^\vee) \in {\mathbb{Z}}^{>0}\) for some \(\alpha > 0\), then \(s_\alpha \cdot \mu < \mu\), which is a contradiction. So any minimal weight will be antidominant.
Proposition (Equivalent Definitions of Antidominant)

Fix \(\lambda\in{\mathfrak{h}}^\vee\), as well as \(W_{[\lambda]}\) and \(\Phi_{[\lambda]}\), and define \(\Phi^+_{[\lambda]} \mathrel{\vcenter{:}}=\Phi_{[\lambda]} \cap\Phi^+ \supset \Delta_{[\lambda]}\). TFAE:

  1. \(\lambda\) is antidominant.
  2. \((\lambda + \rho, \alpha^\vee) \leq 0\) for all \(\alpha\in \Delta_{[\lambda]}\).
  3. \(\lambda \leq s_\alpha \cdot \lambda\) for all \(\alpha \in \Delta_{[\lambda]}\).
  4. \(\lambda \leq w\cdot \lambda\) for all \(w\in W_{[\lambda]}\).

In particular, there is a unique antidominant weight in \(W_{[\lambda]} \cdot \lambda\).

Proof (a implies b)
For \(\alpha \in \Delta_{[\lambda]} \subset \Phi^+_{[\lambda]}\), we have \((\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}\) by the definition of \(\Phi_{[\lambda]}\); by (a) this integer is not positive, so it is \(\leq 0\).
Proof (b implies a)
Suppose (b), and let \(\alpha\in\Phi^+\) with \((\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}\); we must show this integer is not positive. Then \(\alpha \in \Phi^+ \cap\Phi_{[\lambda]}\), which equals \(\Phi^+_{[\lambda]}\) by the homework problem, so \(\alpha \in {\mathbb{Z}}^+ \Delta_{[\lambda]}\), and thus (claim) \((\lambda + \rho, \alpha^\vee) \leq 0\) by (b). Why? Write \(\alpha^\vee\) as a non-negative combination of the \(\alpha_i^\vee\) for \(\alpha_i \in \Delta_{[\lambda]}\), each of which satisfies \((\lambda + \rho, \alpha_i^\vee) \leq 0\), and sum.
Proof (b iff c)
\(s_\alpha \cdot \lambda = \lambda - (\lambda + \rho, \alpha^\vee)\alpha\).
Proof (d implies c)
Trivial due to definitions.
Proof (c implies d)

Use induction on \(\ell(w)\) in \(W_{[\lambda]}\). Assume (c), and hence (b), and consider \(\ell(w) = 0 \implies w = 1\). For the inductive step, if \(\ell(w) > 0\), write \(w = w' s_\alpha\) in \(W_{[\lambda]}\) with \(\alpha \in \Delta_{[\lambda]}\). Then \(\ell(w') = \ell(w) - 1\), and by Proposition 0.3.4, \(w(\alpha) < 0\).

We can then write \[\begin{align*} \lambda - w\cdot \lambda = (\lambda - w'\cdot \lambda) + (w' \cdot \lambda - w\cdot \lambda) .\end{align*}\]

The first term is \(\leq 0\) by the inductive hypothesis. For the second term, note that the dot action is not linear, but differences still transform by the usual linear action, so we have

\[\begin{align*} w' \cdot \lambda - w\cdot \lambda &= w\cdot (s_\alpha \cdot \lambda) - w\cdot \lambda \\ &= w(s_\alpha\cdot \lambda - \lambda) \quad\text{by 1.8b} \\ &= w(-(\lambda+\rho, \alpha^\vee)\alpha) \\ &= -(\lambda + \rho, \alpha^\vee)\, w(\alpha) \\ &\leq 0 ,\end{align*}\]

since \(-(\lambda + \rho, \alpha^\vee) \geq 0\) by (b) and \(w(\alpha) < 0\).

A remark from page 56: Even when \(\lambda \not \in \Lambda\), we can decompose \({\mathcal{O}}_\chi\) into subcategories \({\mathcal{O}}_\lambda\). We then recover \({\mathcal{O}}_\chi\) as the sum over \({\mathcal{O}}_\lambda\) for antidominant \(\lambda\)’s in the intersection of the linkage class with cosets of \(\Lambda_r\). These are the homological blocks.

Tensoring Verma and Finite Dimensional Modules

First step toward understanding translation functors, which help with calculations.

By Corollary 1.2, we know that every \(N\in {\mathcal{O}}\) has a filtration with every section being a highest weight module. We will improve this result to show that if \(M\) is finite-dimensional and \(V\) is a Verma module, then \(V\otimes M\) has a filtration whose sections are all Verma modules. This is important for studying projectives in a couple of sections.

Theorem (Sections of Finite-Dimensional Tensor Verma are Verma)
Let \(M\) be a finite dimensional \(U({\mathfrak{g}}){\hbox{-}}\)module. Then \(T \mathrel{\vcenter{:}}= M(\lambda) \otimes M\) has a finite filtration with sections \(M(\lambda + \mu)\) for \(\mu \in \Pi(M)\), each occurring with multiplicity \(\dim M_\mu\).
Proof

Use the tensor identity

\[\begin{align*} \qty{ U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} L} \otimes_{\mathbb{C}}M \cong U({\mathfrak{g}})\otimes_{U({\mathfrak{b}})} \qty{ L \otimes_{\mathbb{C}}M } ,\end{align*}\]

where \(L\) is a \(U({\mathfrak{b}}){\hbox{-}}\)module and \(M\) is a \(U({\mathfrak{g}}){\hbox{-}}\)module.

The LHS is a \(U({\mathfrak{g}}){\hbox{-}}\)module via the tensor action, and the \(RHS\) has an induced \(U({\mathfrak{g}}){\hbox{-}}\)action.

See the proof in Knapp's "Lie Groups, Lie Algebras, and Cohomology". This is true more generally if \({\mathfrak{g}}\) is any Lie algebra and \({\mathfrak{b}}\leq {\mathfrak{g}}\) is any Lie subalgebra.

Recall from page 18 that the functor \(\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}\) is exact on finite-dimensional \({\mathfrak{b}}{\hbox{-}}\)modules. Assume \(L, M\) are finite-dimensional, and set \(N \mathrel{\vcenter{:}}= L \otimes_{\mathbb{C}}M\). Take a basis \(v_1, \cdots, v_n\) of weight vectors for \(N\), of weights \(\nu_1, \cdots, \nu_n\), ordered so that \(\nu_i < \nu_j \implies i < j\).

Set \(N_k \mathrel{\vcenter{:}}= U({\mathfrak{b}})\left\langle{v_k, \cdots, v_n}\right\rangle\) for \(1\leq k \leq n\); this is a decreasing filtration by \({\mathfrak{b}}{\hbox{-}}\)submodules, since acting by \(U({\mathfrak{b}})\) only moves weights up, i.e. to the right along this list of vectors/weights. By induction on \(n\), this filtration induces a filtration on \(U({\mathfrak{g}})\otimes_{U({\mathfrak{b}})} N\) whose sections are Verma modules.

This yields \[\begin{align*} \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_k / \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_{k+1} \cong M(\nu_k) .\end{align*}\]

The intermediate quotients are 1-dimensional \({\mathfrak{b}}{\hbox{-}}\)modules, which induce up to Verma modules. We'll finish the proof next time.

Wednesday February 26th

We want to show the following identity:

\[\begin{align*} \qty{ U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} L } \otimes_{\mathbb{C}}M \cong U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} \qty{ L \otimes_{\mathbb{C}}M } .\end{align*}\]

Assume \(L\) and \(M\) are finite dimensional. Then for \(N = L \otimes M\), there is a basis of weight vectors \(v_1, \cdots, v_n\) of weights \(\nu_1, \cdots, \nu_n\), ordered so that \(\nu_i < \nu_j \implies i < j\). Moreover \[\begin{align*}N_k = {\mathbb{C}}\left\langle{v_k, \cdots, v_n}\right\rangle = U({\mathfrak{b}})\left\langle{v_k, \cdots, v_n}\right\rangle,\end{align*}\] and we have a natural filtration

\[\begin{align*} 0 \subset N_n \subset \cdots \subset N_1 = N \end{align*}\] with \(N_i / N_{i+1} \cong {\mathbb{C}}_{v_i}\) as \({\mathfrak{b}}{\hbox{-}}\)modules.

We thus obtain \[\begin{align*}\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_i / \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_{i+1} \cong \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}{\mathbb{C}}_{v_i} = M(\nu_i)\end{align*}\] by exactness of the \(\operatorname{Ind}\) functor. Applying this to \(L = {\mathbb{C}}_\lambda\): the LHS of the tensor identity is \(M(\lambda) \otimes_{\mathbb{C}}M\) with \(M\) finite dimensional, while on the RHS, \(N = {\mathbb{C}}_\lambda \otimes M\) has the same dimension as \(M\), with weights \(\lambda + \mu\) for \(\mu \in \Pi(M)\). Thus \(M(\lambda) \otimes M\) has a filtration with quotients \(M(\lambda + \mu)\) over \(\mu \in \Pi(M)\), which is the theorem from last time.

Remark
The proof shows that \(M(\lambda) \otimes M\) has a submodule \(M(\lambda + \mu)\) for any maximal weight \(\mu\) of \(M\), and a quotient \(M(\lambda + \nu)\) where \(\nu\) is any minimal weight of \(M\). We knew that every \(M\in {\mathcal{O}}\) has a finite filtration by highest weight modules, but here the quotients are Verma modules. This will help us study projectives later, which we need for higher Exts.
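A minimal example (added here, for \({\mathfrak{g}} = {\mathfrak{sl}}(2)\)): take \(M = L(1)\), the 2-dimensional module with \(\Pi(L(1)) = \left\{{1, -1}\right\}\). The theorem gives a filtration

\[\begin{align*} 0 \subset M(\lambda + 1) \subset M(\lambda) \otimes L(1), \qquad \qty{M(\lambda)\otimes L(1)} / M(\lambda+1) \cong M(\lambda - 1) ,\end{align*}\]

with the submodule attached to the maximal weight \(1\) and the quotient to the minimal weight \(-1\), as in the remark above.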

Standard Filtrations

There are several main players in the theory of highest-weight categories (of which \({\mathcal{O}}_{\chi_\lambda}\) is an example); one key notion is the following.

Definition (Standard Filtration/Verma flag)
A standard filtration of \(M\in {\mathcal{O}}\) is a filtration with subquotients isomorphic to Verma modules.

Note that when \(M\) has a standard filtration, the submodules are not unique, but the length, subquotients, and multiplicities are unique. We can thus use \(K({\mathcal{O}})\) or formal characters as an invariant, since the multiplicities \((M: M(\lambda))\) are well-defined.

If \(M, N\) have standard filtration, then so does \(M \oplus N\) by concatenation. In this case, \((M\oplus N: M(\lambda)) = (M:M(\lambda)) + (N: M(\lambda))\).

Proposition (Submodules and Direct Summands Also Have Standard Filtrations)

Let \(M\in {\mathcal{O}}\) have a standard filtration. Then

  1. If \(\lambda\) is maximal in \(\Pi(M)\), then \(M\) has a submodule isomorphic to \(M(\lambda)\), and \(M/M(\lambda)\) has a standard filtration. (Throughout, write the given standard filtration of \(M\) as \(0 = M_0 \subset \cdots \subset M_n = M\).)
  2. If \(M = M' \oplus M''\), then \(M'\) and \(M''\) have standard filtrations.
  3. \(M\) is free as a \(U({\mathfrak{n}}^-){\hbox{-}}\)module.
Proof (of (a))

By assumption on \(\lambda\), \(M\) has a maximal vector of weight \(\lambda\), and thus the universal property yields a nonzero morphism \(\phi: M(\lambda) \to M\).

The claim is that \(\phi\) is injective, from which the proof follows. Proof of claim: choose a minimal index \(i\) such that \(\phi(M(\lambda)) \subset M_i\) in the filtration. Follow this with the subquotient map to yield \[\begin{align*}\psi: M(\lambda) \to M^i \mathrel{\vcenter{:}}= M_i/M_{i-1} \cong M(\mu),\end{align*}\] which is nonzero by minimality of \(i\).

Thus \(\lambda \leq \mu\), and by the maximality assumption this implies \(\lambda = \mu\). But then \(\psi\) sends the maximal vector of \(M(\lambda)\) to a nonzero multiple of the maximal vector of \(M(\mu) = M(\lambda)\), and since \(U({\mathfrak{n}}^-)\) acts freely on both, \(\psi\) is an isomorphism. Thus \(\phi\) is injective and \(M(\lambda) \subset M\).

We can now write \(M(\lambda) \cap M_{i-1} = \ker \psi = 0\), so we obtain a direct sum decomposition \(M_i \cong M_{i-1} \oplus M(\lambda)\). We thus obtain a SES \[\begin{align*}0 \to M_{i-1} \to M/M(\lambda) \to M/M_i \to 0.\end{align*}\]

We can easily construct standard filtrations for \(M_{i-1}\) and \(M/M_i\), so the middle term also has a standard filtration. Thus \(M/M(\lambda)\) has a standard filtration of length one less than that of \(M\).

Proof (of(b))

By induction on the filtration length \(n\) of \(M\). If \(n = 0\) then \(M = 0\), and if \(n = 1\) then \(M\) is a Verma module and thus indecomposable; in either case there's nothing to show.

For \(n\geq 1\), let \(\lambda \in \Pi(M)\) be maximal (which we can always find for \(M\in {\mathcal{O}}\)), and assume WLOG that \(M'_\lambda \neq 0\).

By the universal property, we have a nonzero composition \(M(\lambda) \to M' \hookrightarrow M\).

Applying (a) to this composite map,

  1. It must be injective, so \(M(\lambda) \hookrightarrow M'\)
  2. \(M/M(\lambda) \cong M'/M(\lambda) \oplus M''\) has a standard filtration of length \(n-1\).

By induction, \(M'/M(\lambda)\) and \(M''\) have standard filtrations, and thus so does \(M'\).

Proof (of (c))
By induction on \(n\): if \(n=1\), then \(M \cong M(\lambda)\) is \(U({\mathfrak{n}}^-){\hbox{-}}\)free. Otherwise, if \(n > 1\), by (a) \(M(\lambda) \subset M\) and \(M/M(\lambda)\) has a standard filtration of length \(n-1\). By induction, \(M/M(\lambda)\) is \(U({\mathfrak{n}}^-){\hbox{-}}\)free, and hence so is \(M\).
Theorem (Multiplicities of Vermas)
If \(M\) has a standard filtration, then \((M: M(\lambda)) = \dim \hom_{\mathcal{O}}(M, M(\lambda)^\vee)\).
Proof
By induction on the filtration length \(n\). If \(n=1\), \(M\) is a Verma module, and \((M(\mu) : M(\lambda)) = \delta_{\mu \lambda} = \dim \hom_{\mathcal{O}}(M(\mu), M(\lambda)^\vee)\) by Theorem 3.3c.

For \(n>1\), consider \[\begin{align*}0 \to M_{n-1} \to M \to M(\mu) \to 0.\end{align*}\] Apply the left-exact contravariant functor \(\hom_{\mathcal{O}}({\,\cdot\,}, M(\lambda)^\vee)\) to obtain \[\begin{align*} 0 \to \hom_{\mathcal{O}}(M(\mu), M(\lambda)^\vee) \to \hom_{\mathcal{O}}(M, M(\lambda)^\vee) \to \hom_{\mathcal{O}}(M_{n-1}, M(\lambda)^\vee) \to \operatorname{Ext}_{\mathcal{O}}(M(\mu), M(\lambda)^\vee) = 0 ,\end{align*}\] where the last term vanishes (part of Theorem 3.3). Taking dimensions, using the \(n=1\) case for the first term and the inductive hypothesis for the third, gives \(\dim \hom_{\mathcal{O}}(M, M(\lambda)^\vee) = \delta_{\mu\lambda} + (M_{n-1}: M(\lambda)) = (M: M(\lambda))\).

Projectives in \({\mathcal{O}}\)

We want to show that \({\mathcal{O}}\) has enough projectives, i.e. every \(M\in {\mathcal{O}}\) is a quotient of a projective object. We'll also want to show \({\mathcal{O}}\) has enough injectives, i.e. every module embeds into an injective object.

Definition (Projective Objects)

If \({\mathcal{A}}\) is an abelian category, an object \(P\in{\mathcal{A}}\) is projective iff the left-exact functor \(\hom_{\mathcal{A}}(P, {\,\cdot\,})\) is exact; equivalently, every map \(f: P \to N\) lifts along any epimorphism \(M \twoheadrightarrow N\).

In other words, for every surjection \(M \twoheadrightarrow N\) the sequence \[\begin{align*}\hom(P, M) \to \hom(P, N) \to 0\end{align*}\] is exact, which says precisely that every \(f\) in the latter has a lift \(\tilde f\) in the former.

Definition (Injective Objects)

An object \(Q\in{\mathcal{A}}\) is injective iff \(\hom_{\mathcal{A}}({\,\cdot\,}, Q)\) is exact; equivalently, every map \(N \to Q\) extends along any monomorphism \(N \hookrightarrow M\).

I.e., for every injection \(N \hookrightarrow M\) the sequence \[\begin{align*}\hom_{\mathcal{A}}(M, Q) \to \hom_{\mathcal{A}}(N, Q) \to 0\end{align*}\] is exact, so every \(g\) in the latter extends to a \(\tilde g\) in the former.

In \({\mathcal{O}}\), having enough projectives is equivalent to having enough injectives because \(({\,\cdot\,})^\vee\) is an exact contravariant endofunctor, which sends projectives to injectives and vice-versa. Thus we’ll focus on projectives.

Friday February 28th

Recall that \(\lambda \in {\mathfrak{h}}^\vee\) is dominant iff for all \(\alpha \in \Phi^+\), we have \((\lambda + \rho, \alpha^\vee) \not\in{\mathbb{Z}}^{<0}\). Equivalently, as in Proposition 3.5c, \(\lambda\) is maximal in its \(W_{[\lambda]}\cdot\) orbit.

Constructing Projectives

Proposition (Dominant Weights Yield Projective Vermas, Projective Tensor Finite-Dimensional is Projective)
  1. If \(\lambda \in {\mathfrak{h}}^\vee\) is dominant, then \(M(\lambda)\) is projective in \({\mathcal{O}}\).

  2. If \(P\in {\mathcal{O}}\) is projective and \(\dim L < \infty\), then \(P \otimes_{\mathbb{C}}L\) is projective.

Proof
  1. We want to find a \(\psi: M(\lambda) \to M\) making the diagram commute: here \(\phi: M(\lambda) \to N\) sends \(v^+ \mapsto p(v^+)\), \(\pi: M \twoheadrightarrow N \to 0\) is surjective, and we require \(\pi \circ \psi = \phi\).

Assume \(\phi \neq 0\). Since \(M(\lambda) \in {\mathcal{O}}_{\chi_\lambda}\), we have \(\phi(M(\lambda)) \subset N^{\chi_\lambda}\). WLOG, we can assume \(M, N \in {\mathcal{O}}_{\chi_\lambda}\), and if \(v^+\) is maximal, \(p(v^+)\) is maximal. By surjectivity of \(\pi\), there exists a \(v\in M\) such that \(v \mapsto p(v^+)\). Then \(M \supset U({\mathfrak{n}}) v\) is finite dimensional, so it contains a maximal vector whose weight is linked to \(\lambda\) since \(M\in {\mathcal{O}}_{\chi_\lambda}\).

But since \(\lambda\) is dominant, there is no such weight greater than \(\lambda\), so \(v\) itself must be this maximal vector. Then by the universal property of \(M(\lambda)\), there is a map \(\psi: M(\lambda) \to M\) where \(v^+ \mapsto v\) making the diagram commute.

Note the nice property: a Verma module \(M(\lambda)\) is projective iff \(\lambda\) is maximal in its dot orbit, i.e. dominant.

  2. We want to show \(F = \hom_{\mathcal{O}}(P\otimes L, {\,\cdot\,})\) is exact. But this is isomorphic to \[\begin{align*}\hom_{\mathcal{O}}(P, \hom_{\mathbb{C}}(L, {\,\cdot\,})) \cong \hom_{\mathcal{O}}(P, L^\vee\otimes_{\mathbb{C}}{\,\cdot\,}).\end{align*}\]

Thus \(F\) is the composition of two exact functors: first do \(L^\vee\otimes_{\mathbb{C}}{\,\cdot\,}\), which is exact since \({\mathbb{C}}\) is a field, and \(\hom_{\mathcal{O}}(P,{\,\cdot\,})\) is exact since \(P\) is projective.

Example
Let \(M(-\rho)\) be the Verma module of highest weight \(-\rho\). This is irreducible because \(-\rho\) is antidominant, and projective because \(-\rho\) is also dominant. In fact \(W\cdot(-\rho) = \left\{{-\rho}\right\}\) by a direct calculation. Thus \[\begin{align*}L(-\rho) = M(-\rho) = P(-\rho) = M(-\rho)^\vee,\end{align*}\] so all four of the standard objects attached to \(-\rho\) coincide.

By convention, there is notation \(M(-\rho) = \Delta(-\rho)\) and \(M(-\rho)^\vee= \nabla(-\rho)\).
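As a quick sanity check (my own example, not from the lecture): for \({\mathfrak{sl}}(2, {\mathbb{C}})\) we have \(\rho = 1\), and the weight \(-1 = -\rho\) is simultaneously dominant and antidominant, since \[\begin{align*} (-\rho + \rho, \alpha^\vee) = 0 \not\in {\mathbb{Z}}^{<0} \cup {\mathbb{Z}}^{>0} .\end{align*}\] So \(M(-1)\) is simple (antidominance) and projective (dominance), and \(L(-1) = M(-1) = P(-1) = M(-1)^\vee\) all coincide.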

Note that we always have \(\operatorname{Ext}_{\mathcal{O}}(L(-\rho), L(-\rho)) = 0\), and every \(M \in {\mathcal{O}}_{\chi_{-\rho}}\) is isomorphic to a direct sum \(L(-\rho)^{\oplus n}\).

So this is referred to as a semisimple category.

Theorem (O has Enough Projectives and Injectives)
\({\mathcal{O}}\) has enough projectives and injectives.
Proof

Step 1

For all \(\lambda \in {\mathfrak{h}}^\vee\), there exists a projective mapping onto \(L(\lambda)\). Clearly \(\mu \mathrel{\vcenter{:}}=\lambda + n\rho\) is dominant for \(n\gg 0\), i.e. for \(n\) large enough there are no negative integers resulting from inner products with coroots. Thus \(M(\mu)\) is projective, and since \(n\rho \in \Lambda^+\), we have \(\dim L(n\rho) < \infty\). This implies \(P \mathrel{\vcenter{:}}= M(\mu) \otimes L(n\rho)\) is projective by the previous proposition.

Applying \(w_0\) reverses the weights, so \(w_0(n\rho) = -n\rho\). Note that \(w_0 \lambda = -\lambda\) doesn’t happen for all weights \(\lambda\), so this property is somewhat special to multiples of \(\rho\). In particular, since \(n\rho\) was the highest weight of \(L(n\rho)\), \(-n\rho\) is its lowest weight.

By remark 3.6, \(P\) has a quotient isomorphic to \(M(\mu - n\rho) = M(\lambda)\). Thus \(P \twoheadrightarrow M(\lambda) \to L(\lambda)\), and \(L(\lambda)\) is a quotient of a projective. This establishes the result for simple modules.

Remark: By theorem 3.6, \(P\) has a standard filtration with sections \(M(\mu + \nu)\) for \(\nu \in \Pi(L(n\rho))\). In particular \(M(\lambda)\) occurs just once, since \[\begin{align*}\dim L(n\rho)_{-n\rho} = \dim L(n\rho)_{w_0(n\rho)} = \dim L(n\rho)_{n\rho} = 1,\end{align*}\] and all other sections have \(\mu + \nu > \lambda\).
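To make Step 1 concrete, here is a rank-one illustration (my own, not from the lecture): for \({\mathfrak{sl}}(2, {\mathbb{C}})\) and an arbitrary \(\lambda\), choose \(n \gg 0\) with \(\mu = \lambda + n\) dominant. Then \(P = M(\lambda + n) \otimes L(n)\) is projective, and its standard filtration has sections \[\begin{align*} M(\lambda + n + \nu), \qquad \nu \in \Pi(L(n)) = \left\{{n, n-2, \cdots, -n}\right\} ,\end{align*}\] i.e. \(M(\lambda + 2n), M(\lambda + 2n - 2), \cdots, M(\lambda)\). The section \(M(\lambda)\), coming from the lowest weight \(-n\) of \(L(n)\), occurs exactly once and as a quotient of \(P\), exactly as in the remark above.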

Step 2

Use induction on Jordan–Hölder length to prove that any \(0\neq M\in {\mathcal{O}}\) is a quotient of a projective. For \(\ell = 1\), \(M\) is simple, and this case is handled by Step 1.

Assume \(\ell > 1\); then \(M\) has a simple submodule \(L(\lambda)\), obtained by taking the bottom term of a Jordan–Hölder series, so there is a SES \[\begin{align*}0 \to L(\lambda) \xrightarrow{\alpha} M \xrightarrow{\beta} N \to 0.\end{align*}\]

By induction, since \(\ell(N) = \ell(M) - 1\), there exists a projective module \(Q\) with a surjection \(\phi: Q \to N\), and projectivity lifts \(\phi\) through \(\beta\) to a map \(\psi: Q \to M\).

If \(\psi\) is surjective, we are done. Otherwise, comparing composition lengths forces \(\beta\) to restrict to an isomorphism \(\psi(Q) \cong N\), and the inverse gives a section \(\gamma: N \to \psi(Q) \subset M\) splitting the SES. Thus \(M \cong L(\lambda) \oplus N\); by Step 1 and the inductive hypothesis there are projectives \(P\) and \(Q\) mapping onto the two factors, so \(P \oplus Q\) maps onto \(M\) and \(M\) is a quotient of a projective.

3.9 Indecomposable Projectives

Definition (A Projective Cover)
A projective cover of \(M\in {\mathcal{O}}\) is a map \(\pi: P_M \to M\) where \(P_M\) is projective and \(\pi\) is an essential epimorphism, i.e. no proper submodule of \(P_M\) is mapped surjectively onto \(M\) by \(\pi_M\).

It is an algebraic fact that in an Artinian (abelian) category with enough projectives, every module has a projective cover that is unique up to isomorphism.

See Curtis and Reiner, Section 6c.

Definition (The Projective Cover for a Weight)
For \(\lambda \in {\mathfrak{h}}^\vee\), denote \(\pi_\lambda: P(\lambda) \twoheadrightarrow L(\lambda)\) to be a fixed projective cover of \(L(\lambda)\).

? March 2nd

\[\begin{align*} \frac{L(\lambda) }{L(\mu)} = P(\lambda) .\end{align*}\]

? March 3rd

Monday March 16th

Proposition (Chains of Containments of Vermas for Dominant Integral)

Suppose \(\lambda + \rho\) is dominant integral; then \(M(w\cdot\lambda) \subseteq M(\lambda)\) for all \(w\in W\).

More precisely, if \(w = s_n \cdots s_1\) is reduced, with \(s_i = s_{\alpha_i}\) for \(\alpha_i \in \Delta\), and \(\lambda_k = s_k \cdots s_1 \cdot \lambda\), then \[\begin{align*} M(w\cdot \lambda) = M(\lambda_n) \subset M(\lambda_{n-1}) \subset \cdots \subset M(\lambda_0) = M(\lambda) \end{align*}\]

Moreover \((\lambda_k + \rho, \alpha_{k+1}^\vee) \in {\mathbb{Z}}^+\) for \(0\leq k \leq n-1\) and so \[\begin{align*} \lambda_n \leq \lambda_{n-1} \leq \cdots \leq \lambda_0 .\end{align*}\]

Proof
By induction on \(n = \ell(w)\). The \(n=0\) case is obvious. For \(\ell(w) = k+1\), write \(w'= s_k \cdots s_1\), so \(w = s_{k+1}w'\). From section 0.3, \((w')^{-1}\alpha_{k+1} > 0\). We can compute \[\begin{align*} (\lambda_k + \rho, \alpha_{k+1}^\vee) &= (w' \cdot \lambda + \rho, \alpha_{k+1}^\vee) \\ &= (w'(\lambda + \rho), \alpha_{k+1}^\vee) \\ &= (\lambda + \rho, (w')^{-1}\alpha_{k+1}^\vee) \\ &= (\lambda + \rho, ((w')^{-1}\alpha_{k+1})^\vee) \\ &\in {\mathbb{Z}}^{+} \end{align*}\] since \(\lambda + \rho \in \Lambda^+\) and \((w')^{-1}\alpha_{k+1} \in \Phi^+\).

This means that \(\lambda_{k+1} = s_{k+1} \cdot \lambda_k \leq \lambda_k\). By proposition 1.4, reformulated in terms of the dot action, we have a map \(M(\lambda_{k+1}) \hookrightarrow M(\lambda_k)\), and nonzero morphisms between Vermas are injective by 4.2a.
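For a concrete instance of the proposition (my own example, not from the lecture), take \({\mathfrak{g}} = {\mathfrak{sl}}(3, {\mathbb{C}})\), \(\lambda + \rho\) dominant integral, and \(w = s_\beta s_\alpha\), so \(s_1 = s_\alpha\) and \(s_2 = s_\beta\). The relevant pairings are \[\begin{align*} (\lambda_0 + \rho, \alpha^\vee) = (\lambda + \rho, \alpha^\vee) \geq 0, \qquad (\lambda_1 + \rho, \beta^\vee) = (s_\alpha(\lambda + \rho), \beta^\vee) = (\lambda + \rho, (\alpha+\beta)^\vee) \geq 0 ,\end{align*}\] giving the chain \(M(s_\beta s_\alpha \cdot \lambda) \subset M(s_\alpha \cdot \lambda) \subset M(\lambda)\).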

Exercise (4.3)
If \(\lambda + \rho \in \Lambda^+\), \(\operatorname{Soc}\,M(\lambda) = M(w_0 \cdot \lambda)\), and moreover if \(\lambda \in \Lambda_0^+\) then the inclusions in the proposition are all proper.
Remark
For general \(\mu \in \Lambda\), it is not so easy to decide when \(M(w\cdot \mu) \subset M(\mu)\). The basic problem is that Proposition 1.4 only works for simple roots, whereas we can have \(s_\gamma \cdot \mu < \mu\) for \(\gamma \in \Phi^+\setminus \Delta\) with no obvious way to construct an embedding \(M(s_\gamma \cdot \mu) \subset M(\mu)\). See the following example.
Example

Let \({\mathfrak{g}}= {\mathfrak{sl}}(3, {\mathbb{C}})\).

We don’t know if there’s a diagonal map indicated by the question mark in the following diagram:

Next few sections: any root reflection that moves downward through the ordering induces a containment of Verma modules.

(4.4) Simplicity Criterion: The Integral Case

Theorem (Vermas Equal Quotients iff Antidominant Weight)
Let \(\lambda \in {\mathfrak{h}}^\vee\) be any weight. Then \(M(\lambda) = L(\lambda) \iff \lambda\) is antidominant.

The proof for \(\lambda\) integral is fairly easy, because antidominance reduces to a condition involving simple roots, where we can use our Verma module embedding criterion from Proposition 1.4.

Proof (Integral Case)

Assume \(\lambda \in \Lambda\).

\(\implies\): Assume \(M(\lambda)\) is simple but \(\lambda\) is not antidominant. Then since \(\lambda \in \Lambda\), \((\lambda + \rho, \alpha^\vee)\) is a positive integer for some \(\alpha \in \Delta\). But then \(s_\alpha \cdot \lambda < \lambda\), so \(M(s_\alpha \cdot \lambda) \subset M(\lambda)\) by 1.4 and 4.2. But then \(N(\lambda) \neq 0\), which contradicts irreducibility.

\(\impliedby\): Assume \(\lambda\) is antidominant. By proposition 3.5, \(\lambda \leq w\cdot \lambda\) for all \(w\in W\). On the other hand, all composition factors of \(M(\lambda)\) are of the form \(L(w\cdot \lambda)\) with \(w\cdot \lambda \leq \lambda\). This can only happen if \(w\cdot \lambda = \lambda\), and so the only possible composition factor is \(L(\lambda)\). Since \([M(\lambda) : L(\lambda)]\) is always equal to one, \(M(\lambda)\) is simple.
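For example (a small check of my own, not from the lecture): in \({\mathfrak{sl}}(3, {\mathbb{C}})\), the weight \(\lambda = -2\rho\) is antidominant, since \[\begin{align*} (-2\rho + \rho, \gamma^\vee) = -(\rho, \gamma^\vee) \in \left\{{-1, -1, -2}\right\} \quad \text{for } \gamma \in \left\{{\alpha, \beta, \alpha+\beta}\right\} ,\end{align*}\] so the theorem gives \(M(-2\rho) = L(-2\rho)\).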

Remark
The reverse implication still works in general if \(W\) is replaced by \(W_{[\lambda]}\). To extend the forward implication, we need to understand embeddings \(M(s_\beta \cdot \lambda) \hookrightarrow M(\lambda)\) when \(\beta\) is not simple.

Existence of Embeddings (Preliminaries)

Lemma (Commuting Nilpotents)
Let \({\mathfrak{a}}\) be a nilpotent Lie algebra (e.g. \({\mathfrak{n}}^-\)) and \(x\in {\mathfrak{a}}, u\in U({\mathfrak{a}})\), then for every \(n\in {\mathbb{Z}}^+\) there exists a \(t\in {\mathbb{Z}}^+\) such that \(x^t u \in U({\mathfrak{a}}) x^n\).

See Engel’s theorem

Proof

Use the fact that \(\operatorname{ad}x\) acts nilpotently on \(U({\mathfrak{a}})\), so there exists a \(q\geq 0\) such that \(\qty{\operatorname{ad}x}^{q+1}u = 0\).

Let \(\ell_x, r_x\) be left and right multiplication by \(x\) on \(U({\mathfrak{a}})\). Then \(\operatorname{ad}x = \ell_x - r_x\), and \(\ell_x\), \(r_x\), \(\operatorname{ad}x\) all commute with one another.

Choosing \(t \geq q + n\), we have

\[\begin{align*} x^t u &= \ell_x^t u \\ &= (r_x + \operatorname{ad}x)^t u \\ &= \sum_{i=0}^t {t \choose i} r_x^{t-i} \qty{\operatorname{ad}x}^i u \\ &= \sum_{i=0}^q {t \choose i} \qty{\qty{\operatorname{ad}x}^iu }x^{t-i} \\ &\in U({\mathfrak{a}}) x^{t-q} \\ &\subset U({\mathfrak{a}}) x^n \end{align*}\]

This will be useful when moving things around by positive roots that are not simple.
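A minimal illustration of the lemma (my own, not from the lecture): take \({\mathfrak{a}} = {\mathfrak{n}}^-\) for \({\mathfrak{sl}}(3, {\mathbb{C}})\), \(x = y_\alpha\), and \(u = y_\beta\). Then \(\operatorname{ad}y_\alpha\) sends \(y_\beta\) to a multiple of \(y_{\alpha+\beta}\) and \(\qty{\operatorname{ad}y_\alpha}^2 y_\beta = 0\), so \(q = 1\), and for \(t \geq n + 1\) \[\begin{align*} y_\alpha^t y_\beta = y_\beta y_\alpha^t + t\,[y_\alpha y_\beta]\, y_\alpha^{t-1} \in U({\mathfrak{n}}^-)\, y_\alpha^{t-1} \subset U({\mathfrak{n}}^-)\, y_\alpha^{n} ,\end{align*}\] since \([y_\alpha y_\beta]\) is a multiple of \(y_{\alpha+\beta}\), which commutes with \(y_\alpha\).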

Monday March 30th

Reminder of what we did already: we started on chapter 4, going into more detail on the structure of Verma modules and morphisms between them. We showed that the socle of a Verma module is a simple Verma module, that any nonzero morphism between Vermas is injective, and that the dimension of the hom space is at most 1. We ended with a lemma about how to commute elements past powers in the enveloping algebra.

Proposition (Key Result: Containments of Vermas When Applying Weyl Elements)

Let \(\lambda, \mu \in {\mathfrak{h}}^\vee\) and \(\alpha\in\Delta\) be simple. Assume that \(n\mathrel{\vcenter{:}}=(\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}\) and \(M(s_\alpha \cdot \mu) \subset M(\mu) \subset M(\lambda)\). Then either

  1. \(n\leq 0\) and \(M(\lambda) \subset M(s_\alpha \cdot \lambda)\), or
  2. \(n>0\) and \(M(s_\alpha \cdot \mu) \subset M(s_\alpha \cdot \lambda) \subset M(\lambda)\).

In either case, \(M(s_\alpha \cdot \mu) \subset M(s_\alpha \cdot \lambda)\).

Proof (of (a))
Use proposition 1.4 (exchanging \(\lambda\) and \(s_\alpha \cdot \lambda\)).

Proof (of (b))

Assume \(n>0\). Then \(M(s_\alpha \cdot \lambda) \subset M(\lambda)\) by proposition 1.4. Set \(s = (\mu + \rho, \alpha^\vee) \in {\mathbb{Z}}^+\). Denote maximal vectors as follows:

Image

Apply the lemma about nilpotent Lie algebras to \({\mathfrak{n}}^-\), \(y_\alpha\), \(u\), and \(n\), where \(u \in U({\mathfrak{n}}^-)\) is chosen with \(v_\mu^+ = u \cdot v_\lambda^+\); then there exists a \(t>0\) such that \(y_\alpha^t u \in U({\mathfrak{n}}^-) y_\alpha^n\). Then \[\begin{align*}\label{star} y_\alpha^t \cdot v_\mu^+ = y_\alpha^t u \cdot v_\lambda^+ \in U({\mathfrak{n}}^-) y_\alpha^n \cdot v_\lambda^+ \subseteq M(s_\alpha \cdot \lambda) .\end{align*}\]

Now there are two cases.

Case 1:

If \(t\leq s\), we can apply \(y_\alpha^{s-t}\) to equation star to obtain \(y_\alpha^s \cdot v_\mu^+ \in M(s_\alpha \cdot \lambda)\). Since \(y_\alpha^s \cdot v_\mu^+\) is the maximal vector generating \(M(s_\alpha \cdot \mu)\), this is the containment we wanted to prove.

Case 2:

Suppose \(t > s\). We can’t divide in the enveloping algebra, but recall the identity in lemma 1.4(c): \[\begin{align*} [x_\alpha y_\alpha^t] = t y_\alpha^{t-1} \qty{ h_\alpha - t + 1} .\end{align*}\]

Thus \[\begin{align*} [x_\alpha y_\alpha^t] \cdot v_\mu^+ = t(s-t) y_\alpha^{t-1} \cdot v_\mu^+ .\end{align*}\]

Calculating the bracket another way, the LHS is equal to \(x_\alpha y_\alpha^t \cdot v_\mu^+ - y_\alpha^t x_\alpha \cdot v_\mu^+\); the second term is zero, and the first lies in the submodule \(M(s_\alpha \cdot \lambda)\) by equation star. Since \(t(s-t) \neq 0\), this shows \(y_\alpha^{t-1}\cdot v_\mu^+ \in M(s_\alpha \cdot \lambda)\). We can then iterate if \(t-1 > s\), reducing the power of \(y_\alpha\) until we get down to \(y_\alpha^s \cdot v_\mu^+ \in M(s_\alpha \cdot \lambda)\), in which case we are done as in case 1.

\(\hfill\blacksquare\)

4.6: Existence of Embeddings

Theorem (Verma’s Thesis: Existence of Embeddings)
Let \(\lambda \in {\mathfrak{h}}^\vee\) and \(\alpha\in\Phi^+\) and assume \(\mu \mathrel{\vcenter{:}}= s_\alpha \cdot \lambda \leq \lambda\). Then \(M(\mu) \subset M(\lambda)\).

Proof

Assume \(\lambda \in \Lambda\) is integral; since \(\mu\) is linked to \(\lambda\), all weights involved are integral. Without loss of generality \(\mu < \lambda\) (if \(\mu = \lambda\) there is nothing to prove); in step 1 below we apply a Weyl group element to move into the dominant chamber.

  1. Since \(\mu\) is integral, choose \(w\in W\) such that \(\mu' \mathrel{\vcenter{:}}= w^{-1}\cdot \mu \in \Lambda^+ - \rho\). Following the notation in proposition 4.3, write \(w = s_n \cdots s_1, \mu_k = s_k \cdots s_1 \cdot \mu'\). Then \(\mu' = \mu_0 \geq \mu_1 \geq \cdots \geq \mu_n = \mu\), which yields a chain of inclusions of Verma modules \(M(\mu_0) \supset M(\mu_1) \supset \cdots\).

  2. Set \(\lambda' = w^{-1}\cdot\lambda\) and \(\lambda_k = s_k \cdots s_1 \cdot \lambda'\), so \(\lambda_0 = \lambda'\) and \(\lambda_n = \lambda\). Note that since \(\mu \neq \lambda\), we have \(\mu_k \neq \lambda_k\) for all \(k\).

  3. How are \(\mu_k\) and \(\lambda_k\) related? Set \(w_k = s_n \cdots s_{k+1}\). It can be checked that \(\mu_k = (w_k^{-1}s_\alpha w_k) \cdot \lambda_k = s_{\beta_k} \cdot \lambda_k\), where \(\beta_k = \pm w_k^{-1}\alpha \in \Phi^+\) (whichever sign is positive) by Humphreys 1, Lemma 9.2. In particular, \(\mu_k - \lambda_k \in {\mathbb{Z}}\beta_k\).

  4. We have \(\mu' = \mu_0 \geq \cdots \geq \mu_k \geq \mu_{k+1} \geq \cdots \geq \mu_n = \mu\). Since \(\lambda' < \mu'\) (because \(\mu'\) is the unique maximal weight in its dot orbit and \(\lambda' \neq \mu'\)) but \(\mu = \mu_n < \lambda_n = \lambda\), the inequalities must switch at some \(k\): say \(\lambda_k < \mu_k\) but \(\lambda_{k+1} > \mu_{k+1}\), where \(k\) is chosen to be the smallest index for which this happens. Note that all of these weights are linked to \(\lambda\).

  5. We want to show that \(M(\mu_{k+1}) \subset M(\lambda_{k+1}), \cdots, M(\mu) = M(\mu_n) \subset M(\lambda_n) = M(\lambda)\).

  6. First, \(\mu_{k+1} - \lambda_{k+1} = s_{k+1} \cdot \mu_k - s_{k+1} \cdot \lambda_k = s_{k+1}\qty{ \mu_k - \lambda_k }\). The LHS is a negative multiple of \(\beta_{k+1}\), while the RHS is a positive multiple of \(s_{k+1}\beta_k\) (exercise 1.8). Since \(s_{k+1}\) permutes the positive roots other than \(\alpha_{k+1}\), this forces \(\beta_k = \beta_{k+1} = \alpha_{k+1}\). So we have \(\mu_{k+1} = s_{\alpha_{k+1}} \cdot \lambda_{k+1}\), which by proposition 1.4 implies that \(M(\mu_{k+1}) \subset M(\lambda_{k+1})\).

  7. Combining 1 and 6, we have \(M(\mu_{k+2}) = M(s_{k+2} \cdot \mu_{k+1}) \subset M(\mu_{k+1}) \subset M(\lambda_{k+1})\). This is the setup of proposition 4.5, and either alternative leads to \(M(\mu_{k+2}) \subset M(\lambda_{k+2}) = M(s_{\alpha_{k+2}} \cdot \lambda_{k+1})\).

  8. Since this increases the index, we can iterate step 7 to complete step 5 and get the desired containment.

Wednesday April 1st

Exercise
Work through the steps for \({\mathfrak{sl}}(3)\), due next Thursday.

Preview of next sections:

4.11: Application to \({\mathfrak{sl}}(3, {\mathbb{C}})\)

The simplest nontrivial case, what can we say about the Verma modules?

Image

Image

We have \(\Delta = \left\{{\alpha, \beta}\right\}\) and \(\Phi^+ = \left\{{\alpha, \beta, \gamma\mathrel{\vcenter{:}}=\alpha+\beta}\right\}\). The Weyl group is \[\begin{align*} W = \left\{{1, s_\alpha,s_\beta, s_\alpha s_\beta, s_\beta s_\alpha, w_0\mathrel{\vcenter{:}}= s_\alpha s_\beta s_\alpha = s_\beta s_\alpha s_\beta }\right\} .\end{align*}\]

We first consider an integral regular linkage class \(W\cdot \lambda\); we may choose \(\lambda\) antidominant and assume \[\begin{align*} (\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}^{< 0} \quad \forall \alpha \in \Phi^+, \quad \text{e.g. } \lambda = - 2\rho .\end{align*}\]

Then \(W_\lambda = \left\{{1}\right\}\), i.e. the isotropy (stabilizer) subgroup of \(\lambda\) under the dot action is trivial, and \({\left\lvert {W\cdot \lambda} \right\rvert} = 6\). So there are 6 Verma modules to understand, and we have the following diamond:

Image

The middle two edges are \(s_\gamma\), and each edge corresponds to an inclusion of Verma modules (with the inclusions going upward). By Verma’s theorem, the Bruhat order corresponds to these inclusions.

  1. \(M(\lambda) = L(\lambda)\) since \(\lambda\) is antidominant by Theorem 4.4

  2. By the same theorem, no other \(M(w\cdot \lambda)\) is simple. Then by Proposition 4.18, Theorem 4.2c, we have \(\operatorname{Soc}\,M(w\cdot \lambda) = L(\lambda)\) for all \(w\in W\)

  3. Consider \(M(s_\alpha \cdot \lambda)\) and set \(\mu \mathrel{\vcenter{:}}= s_\alpha \cdot \lambda\). The only possible composition factors are \(L(\mu)\) and \(L(\lambda)\), and \([M(\mu): L(\mu) ] = 1\). If we use Theorem 4.10, the multiplicity of \(L(\lambda)\) is 1 (it is the socle) and we’re done. If we don’t, could it be larger than 1? Since \(\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(\lambda)) = 0\), we cannot have the following situation:

    Image

    The first extension doesn’t exist, since the higher \(L(\lambda)\) would drop down to give the bottom diagram, which contradicts \(\operatorname{Soc}\,M(\mu) = L(\lambda)\).

    So the only possibility is multiplicity 1, and \(M(s_\alpha \cdot \lambda)\) has head \(L(s_\alpha \cdot \lambda)\) lying over socle \(L(\lambda)\), so \(\operatorname{ch}L(s_\alpha \cdot \lambda) = \operatorname{ch}M(s_\alpha \cdot \lambda) - \operatorname{ch}M(\lambda)\).

    Similarly for \(M(s_\beta \cdot \lambda)\).

  4. For the higher weights in the orbit, we need more theory. We know there are inclusions \(x\leq w \implies M(x\cdot \lambda) \subset M(w\cdot \lambda)\) according to the Bruhat order (every edge in the weight poset is a root reflection, so Verma’s theorem applies), but we don’t yet know the multiplicities: \[\begin{align*} [M(w\cdot \lambda): L(x \cdot \lambda)] \begin{cases} \geq 1 & x \leq w \\ = 0 & \text{otherwise?} \end{cases} .\end{align*}\]

We’ll skip 4.12,4.13, 4.14.

Chapter 5: Highest Weight Modules II

Developed by BGG in the early 1970s, building on a partly incorrect proof in Verma’s thesis.

5.1: The BGG Theorem

Which simple modules occur as composition factors of \(M(\lambda)\)?

Definition (Strongly Linked Weights)
For \(\mu, \lambda \in {\mathfrak{h}}^\vee\), write \(\mu \uparrow \lambda\) if \(\mu = \lambda\) or there exists an \(\alpha \in \Phi^+\) such that \(\mu = s_\alpha \cdot \lambda < \lambda\), i.e. \((\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}^{> 0}\). Extend this relation transitively: if there exist \(\alpha_1, \cdots, \alpha_r \in \Phi^+\) such that \(\mu = (s_{\alpha_1} \cdots s_{\alpha_r}) \cdot \lambda \uparrow (s_{\alpha_2} \cdots s_{\alpha_r})\cdot\lambda \uparrow \cdots \uparrow s_{\alpha_r} \cdot \lambda \uparrow \lambda\), we again write \(\mu \uparrow\lambda\) and say \(\mu\) is strongly linked to \(\lambda\), yielding a partial order on \({\mathfrak{h}}^\vee\).
Theorem (Strong Linkage implies Verma Embedding)

Let \(\mu, \lambda \in {\mathfrak{h}}^\vee\). If \(\mu \uparrow \lambda\), then \(M(\mu) \hookrightarrow M(\lambda)\), so in particular \([M(\lambda) : L(\mu)] \neq 0\). Conversely (BGG), if \([M(\lambda) : L(\mu)] \neq 0\), then \(\mu \uparrow \lambda\).

Corollary
\([M(\lambda): L(\mu)] \neq 0 \iff M(\mu) \hookrightarrow M(\lambda)\).

The situation is not as straightforward as it might appear (and as Verma believed). Namely, if \(0 = M_0 \subset \cdots \subset M_n = M(\lambda)\) is a composition series and \(M_i / M_{i-1} \cong L(\mu) \ni \mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu_{\mu}^+\), there need not be any preimage of \(\mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu_{\mu}^+\) which is a maximal vector in \(M(\lambda)\), which is what would lead to a map \(M(\mu) \hookrightarrow M(\lambda)\).

However, when this happens, there will always be some other \(M_j/M_{j-1} \cong L(\mu)\) where a preimage of a maximal vector is maximal in \(M(\lambda)\), leading to the required embedding.

5.2 Bruhat Ordering

In the case of “\(\rho{\hbox{-}}\)regular” integral weights, the BGG theorem has a nice reformulation in terms of \(W\) and the Bruhat ordering. Fix \(\lambda \in \Lambda\) antidominant and \(\rho{\hbox{-}}\)regular, so \((\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}^{< 0}\) for all \(\alpha\in \Phi^+\).

As in the discussion of \({\mathfrak{sl}}(3)\), this means that \({\left\lvert {W\cdot \lambda} \right\rvert} = {\left\lvert {W} \right\rvert}\), and \([M(w\cdot \lambda) : L(\mu)] \neq 0\) implies that \(\mu = x\cdot \lambda\) for some \(x\in W\). What can we say about the relative positions of \(w\) and \(x\)?

Suppose that \(w\in W, \alpha\in\Phi^+\) and \(s_\alpha \cdot (w\cdot \lambda) < w\cdot \lambda\) so that \(M(s_\alpha w \cdot \lambda) \hookrightarrow M(w\cdot \lambda)\). Our assumption is equivalent to \[\begin{align*} {\mathbb{Z}}^{>0} \ni (w\cdot \lambda + \rho, \alpha^\vee) &= (w(\lambda+\rho), \alpha^\vee)\\ &= (\lambda + \rho, w^{-1}\alpha^\vee) \\ &= (\lambda + \rho, (w^{-1}\alpha)^\vee) \\ &\iff w^{-1}\alpha \in \Phi^- \iff (w^{-1}s_\alpha) \alpha \in \Phi^+ \\ &\iff \ell(w) > \ell(w') \quad \text{where } w' = s_\alpha w \\ &\iff w' \xrightarrow{s_\alpha} w .\end{align*}\]

Friday April 3rd

Recall from last time that we defined a new partial order generated by “reflecting down” through arbitrary positive roots, namely strong linkage. We had a theorem: \(\mu \uparrow \lambda \implies M(\mu) \hookrightarrow M(\lambda)\), so \(L(\mu)\) occurs as a composition factor of \(M(\lambda)\). We also have a side-arrow notation: \(w' \xrightarrow{s_\alpha} w\) indicates that \(w' = s_\alpha w\) and \(w'\) is shorter than \(w\). We conclude that \(x\cdot \lambda \uparrow w\cdot \lambda \iff x\leq w\) for \(x, w\in W\), where the RHS is the usual Bruhat order and is notably independent of \(\lambda\).

Corollary
Let \(\lambda \in \Lambda\) be antidominant and \(\rho{\hbox{-}}\)regular and \(x, w\in W\). Then \[\begin{align*} [M(w\cdot \lambda) : L(x\cdot \lambda)] \neq 0 \iff M(x\cdot \lambda) \hookrightarrow M(w\cdot \lambda) \iff x \leq w \end{align*}\] Note that this statement is why we use antidominant instead of dominant, since this equation now goes in the right direction.
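For instance (my own example, in type \(A_2\)): with \(\lambda\) integral, antidominant, and \(\rho\)-regular, the corollary gives \[\begin{align*} [M(w_0\cdot \lambda) : L(x\cdot \lambda)] \neq 0 \text{ for all } x \in W, \qquad [M(s_\alpha\cdot \lambda) : L(x\cdot \lambda)] \neq 0 \iff x \in \left\{{1, s_\alpha}\right\} ,\end{align*}\] recovering the picture from the \({\mathfrak{sl}}(3, {\mathbb{C}})\) hexagon in 4.11.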

Jantzen Filtration

Theorem (The Jantzen Filtration and Sum Formula)

Given \(\lambda \in {\mathfrak{h}}^\vee\), \(M(\lambda)\) has a terminating descending filtration \(M(\lambda) = M(\lambda)^0 \supset M(\lambda)^1 \supset M(\lambda)^2 \supset \cdots\) satisfying

  1. Each nonzero quotient has a certain nondegenerate contravariant form (3.14)

  2. \(M(\lambda)^1 = N(\lambda)\)

  3. \[\begin{align*}\sum_{i > 0} \operatorname{ch}M(\lambda)^i = \sum_{\alpha > 0,\ s_\alpha \cdot \lambda < \lambda} \operatorname{ch}M(s_\alpha \cdot \lambda)\end{align*}\] (the Jantzen sum formula, very important)

Note that the sum on the RHS is over \(\left\{{ \alpha\in\Phi^+_{[\lambda]} {~\mathrel{\Big|}~}s_\alpha \cdot \lambda < \lambda }\right\} \mathrel{\vcenter{:}}=\Phi^+_\lambda\).

Fact
\(\operatorname{Soc}\,M(\lambda) = L(\mu)\) for the unique antidominant \(\mu\) in \(W_{[\lambda]}\cdot \lambda\). Moreover, \([M(\lambda) : L(\mu)] = 1\).

Now suppose \(M(\lambda)^n \neq 0\) but \(M(\lambda)^{n+1} = 0\). Each \(M(\lambda)^i\) with \(i \leq n\) contains \(\operatorname{Soc}\,M(\lambda) = L(\mu)\), since they’re nonzero submodules, and each \(M(s_\alpha \cdot \lambda) \supset L(\mu)\), using the uniqueness of \(\mu\). By comparing the coefficients of \(\operatorname{ch}L(\mu)\) on the two sides of the sum formula, we obtain \(n = {\left\lvert {\Phi^+_\lambda} \right\rvert}\).

Exercise (5.3)
When \(\lambda\) is antidominant, integral, and \(\rho{\hbox{-}}\)regular, the length of the Jantzen filtration of \(M(w\cdot\lambda)\) is \(n = \ell(w)\). More generally, for nonintegral weights, \(n = \ell_\lambda(w)\) where \(\ell_\lambda\) is the length function of the system \((W_{[\lambda]}, \Delta_{[\lambda]})\).

Some natural questions:

  1. Is the Jantzen filtration unique for properties (a)-(c)?
  2. What are the “layer multiplicities” \([M(\lambda)^i / M(\lambda)^{i+1} : L(\mu)]\)?
  3. Are the layers \(M(\lambda)^i / M(\lambda)^{i+1}\) semisimple? If so, is the Jantzen filtration the same as the canonical filtrations with semisimple quotients (the radical or socle filtrations)?
  4. When \(M(\mu) \subset M(\lambda)\), how do the respective Jantzen filtrations compare?

A guess for (4): Assume \(\mu \uparrow\lambda\), set \(r = {\left\lvert {\Phi^+_\lambda} \right\rvert} - {\left\lvert {\Phi_\mu^+} \right\rvert}\), which is the difference in lengths of the two Jantzen filtrations.

Is it true that:

Image

with \(M(\mu) \cap M(\lambda)^i = M(\mu)^{i-r}\) for \(i\geq r\)?

This is called the Jantzen conjecture and turns out to be true.

It was thought to be equivalent to the KL conjecture, but turned out to be deeper; see the decomposition theorem and sheaves on flag varieties. There was no simple algebraic proof until recently. See chapter 8.

Recall that we obtained a hexagon:

Image

We have \[\begin{align*} \Phi_{w\cdot \lambda}^{+}=\left\{\gamma \in \Phi^{+} | s_{\gamma} \cdot(w\cdot \lambda)<w \cdot \lambda\right\}=\{\alpha, \alpha+\beta\} \end{align*}\]

with corresponding weights \(s_\gamma \cdot (w\cdot \lambda)\) given by \(s_\beta \cdot \lambda\) and \(s_\alpha \cdot \lambda\). Thus we have a two-step filtration, and we’ve worked out the characters of the pieces previously.

Thus \[\begin{align*} \sum_{i=1}^n \operatorname{ch}M(w\cdot \lambda)^i &= \operatorname{ch}M(s_\alpha \cdot \lambda) + \operatorname{ch}M(s_\beta \cdot \lambda) \\ &= \operatorname{ch}L(s_\alpha \cdot \lambda) + \operatorname{ch}L(s_\beta \cdot \lambda) + 2\operatorname{ch}L(\lambda) \end{align*}\]

where we know the last expression explicitly. Since \(n\) has to be the number of \(L(\lambda)\)’s occurring on the RHS, we must have \(n=2\).

We can reason that \(M(w\cdot \lambda)^2 = L(\lambda)\), since any composition factor in \(M(w\cdot \lambda)^2\) recurs in \(M(w\cdot \lambda)^1\), and so \[\begin{align*} \operatorname{ch}M(w\cdot \lambda)^1 &= \operatorname{ch}N(w\cdot \lambda) \\ &= \operatorname{ch}L(s_\alpha \cdot \lambda) + \operatorname{ch}L(s_\beta \cdot \lambda) + \operatorname{ch}L(\lambda) .\end{align*}\]

We then obtain the following structure on the sections/subquotients of the Jantzen filtration

Image

where the subquotients move upward through the diagram, e.g. the middle is \(M(w\cdot \lambda)^1 / M(w\cdot \lambda)^2\).

Exercise (Last Assignment)
  1. Try to work on the Jantzen filtration sections for \(M(w_0 \cdot \lambda)\). List completely any additional assumptions or facts needed to deduce \(M(w_0 \cdot \lambda)^i\) uniquely.
  2. Continue 4.11 in the case where \(\lambda\) is singular. Does this allow you to deduce the structure of all \(M(w\cdot \lambda)\) using the Jantzen sum formula?
  3. Work out the non-integral case for \({\mathfrak{sl}}(3, {\mathbb{C}})\). (There are four different cases to consider here.)

Showing Jantzen Implies BGG

We’ll prove that \([M(\lambda) : L(\mu)] \neq 0 \implies \mu \uparrow\lambda\).

Proof

By induction on the number of linked weights \(\mu \leq \lambda\)

If \(\lambda\) is minimal in its linkage class, then \(M(\lambda) = L(\lambda)\) so \(\mu = \lambda\) and \(\lambda \uparrow\lambda\) trivially.

Otherwise, inductively suppose \([M(\lambda): L(\mu)] > 0\) with \(\mu < \lambda\). Then \([M(\lambda)^1: L(\mu)] > 0\) since \(M(\lambda)^1 = N(\lambda)\). By the sum formula, \([M(s_\alpha \cdot \lambda) : L(\mu)] > 0\) for some \(\alpha \in \Phi_\lambda^+\). Then \(s_\alpha \cdot \lambda < \lambda\) so the number of linked weights \(\nu \leq s_\alpha \cdot \lambda\) is smaller than for \(\lambda\).

So by induction,

Image

and \(\mu \uparrow\lambda\) as required.

Example: \({\mathfrak{sl}}(4, {\mathbb{C}})\), with Dynkin diagram \(\cdot - \cdot - \cdot\) of type \(A_3\).

Take \(\lambda = (0, -1, 0) \in \Lambda^+ - \rho\), with coordinates with respect to the fundamental weight basis for \(\Lambda\) (or \({\mathfrak{h}}^\vee\)). Then take \(w = s_2 s_3\) and \(x= s_3 s_2 s_3 s_1 s_2\), so that \(\mu = w\cdot \lambda = (1, -2, -1)\) and \(\mu - x\cdot \mu = \alpha_1 + \alpha_3\), hence \(x\cdot \mu < \mu\).

However, Verma’s direct calculations in \(U({\mathfrak{sl}}(4, {\mathbb{C}}))\) showed that \(M(x\cdot \mu) \not\subset M(\mu)\), so \(x\cdot \mu \not\uparrow\mu\).

The explanation (due to Verma) is that \(x\cdot \mu = xw\cdot \lambda\), using the fact that \(s_1, s_3\) commute, \[\begin{align*} xw &= (s_3 s_2 s_3 s_1 s_2) s_2 s_3 \\ &= s_3 s_2 s_3 s_3 s_1 \\ &= s_3 s_2 s_1 .\end{align*}\]

and \(s_2 s_3, s_3 s_2 s_1\) are not related in the Bruhat order.

This is because there is no root reflection relating the two. Note that this can also be seen via the subword characterization of the Bruhat order: \(a \leq b\) iff \(a\) can be obtained by deleting letters from some reduced expression of \(b\).

So it’s possible to have one weight less than another with no inclusion of the Verma modules.

Monday April 6th

Note that most of the theory thus far has not relied on special properties of \({\mathbb{C}}\), so Jantzen’s strategy was to extend the base field to \(K = {\mathbb{C}}(T)\), the field of rational functions in \(T\), setting \({\mathfrak{g}}_K \mathrel{\vcenter{:}}= K \otimes_{\mathbb{C}}{\mathfrak{g}}\). The theory over \(K\) adapts to \(A = {\mathbb{C}}[T]\), the PID of polynomials in one variable \(T\) with \(K\) as its fraction field, and the “Lie algebra” \({\mathfrak{g}}_A = A \otimes_{\mathbb{C}}{\mathfrak{g}}\).

Setup: Let \(A\) be any PID, for example \({\mathbb{Z}}\) or \({\mathbb{C}}[T]\), and \(M\) a free \(A{\hbox{-}}\)module of finite rank \(r\). Suppose \(M\) has a nondegenerate \(A{\hbox{-}}\)valued symmetric bilinear form denoted \(({\,\cdot\,}, {\,\cdot\,})\). Since \(M\) is of finite rank, we can choose a basis \(\left\{{e_i}\right\}^r\), so the matrix \(F\) of the form relative to this basis has nonzero determinant \(D\), depending on the choice of basis. A change of basis is realized by some \(P \in \operatorname{GL}(r, A)\), giving \(F' = P^t F P\) (note that forms change by a transpose instead of an inverse) and \(\det P \in A^{\times}\). Thus \(D\) only changes by a unit \(u = \qty{\det P}^2\).

We can define the dual module \(M^* = \hom_A(M, A)\) which is also free of rank \(r\), and contains a submodule \(M^\vee\) consisting of functions \(e^\vee: M \to A\) given by \(e^\vee(f) = (e, f)\) for any fixed \(e\in M\). Surprisingly, this doesn’t give every hom: e.g. if the form only has even outputs. Since \(({\,\cdot\,}, {\,\cdot\,})\) is nondegenerate, the map \(\phi: M\to M^\vee\) sending \(e\) to \(e^\vee\) is an isomorphism.

We’ll now invoke the structure theory of modules over a PID: there exists a basis \(\left\{{e_i^*}\right\}^r\) of \(M^*\) such that \(M^\vee\) has a basis \(\left\{{d_i e_i^*}\right\}^r\) for some scalars \(0\neq d_i \in A\). We can define a dual basis of \(M\) given by \(\left\{{e_i}\right\}^r\) where \(e_i^*(e_j) = \delta_{ij}\). We can similarly get a separate basis \(\left\{{f_i}\right\}\) of \(M\) where \(f_i^\vee= d_i e_i^*\).

We can compare these two bases: \[\begin{align*} (e_i, f_j) = f_j^\vee(e_i) = d_j e_j^* (e_i) = d_j \delta_{ij} \end{align*}\] (Formula 1)

Thus up to units, \(D = \prod_{i=1}^r d_i\), so this hybrid matrix is one way to compute this determinant.

Fix a prime element in \(A\), then there is an associated valuation \(v_p: A \to {\mathbb{Z}}^+\) where \(v_p(a) = n\) if \(p^n {~\Bigm|~}a\) but \(p^{n+1}\nmid a\). Since \(p\) is prime, \(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu \mathrel{\vcenter{:}}= M/pM\) makes sense and is a finitely generated module over the field \(\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu = A/pA\); thus \(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu\) is a vector space.

We’ll now define a filtration: for \(n\in {\mathbb{Z}}^+\), define \(M(n) = \left\{{e\in M{~\mathrel{\Big|}~}(e, M) \subset p^n A}\right\}\). Then \[\begin{align*} M = M(0) \supset M(1) \supset \cdots \end{align*}\] is a decreasing filtration, with corresponding images \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\) that are vector spaces.

Lemma

For \(A\) a PID, \(p\in A\) prime, \(\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu = A/pA\) with valuation \(v_p\), and \(M\) a free \(A{\hbox{-}}\)module with a nondegenerate symmetric bilinear form whose matrix with respect to some basis of \(M\) has nonzero determinant \(D\). Then

  1. \(v_p(D) = \sum_{n > 0} \dim_{\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu} \mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\).
  2. For each \(n\), the modified bilinear form \((e, f)_n \mathrel{\vcenter{:}}= p^{-n} (e, f)\) on \(M(n)\) induces a nondegenerate form on \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu / \mkern 1.5mu\overline{\mkern-1.5muM(n+1)\mkern-1.5mu}\mkern 1.5mu\).
Proof (of (a))
  1. For \(f\in M\), write \(f = \sum_j a_j f_j\) in terms of the basis \(\left\{{f_j}\right\}\), so that \((e_i, f) = a_i d_i\) by Formula 1. For \(n> 0\), we have {=tex} \begin{center} \begin{align*} f \in M(n) & \iff v_p((e_i, f)) \geq n \quad \forall i \\ & \iff v_p(a_i d_i) \geq n \quad \forall i \\ & \iff v_p(a_i) + v_p(d_i) \geq n \quad \forall i \\ & \iff v_p(a_i) \geq n - n_i, \quad n_i \coloneqq v_p(d_i) \end{align*} \end{center}

Thus \(a_i\) must be divisible by \(p\) at least \(n-n_i\) times, so \(M(n)\) is spanned by \(\left\{{f_i {~\mathrel{\Big|}~}n_i \geq n}\right\} \cup\left\{{p^{n-n_i}f_i {~\mathrel{\Big|}~}n_i < n}\right\}\).

  2. So \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\) has basis \(\left\{{\mkern 1.5mu\overline{\mkern-1.5muf\mkern-1.5mu}\mkern 1.5mu_i {~\mathrel{\Big|}~}n_i \geq n}\right\}\) and \(\dim \mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu = \# \left\{{i {~\mathrel{\Big|}~}n_i \geq n}\right\}\). In particular, \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu = 0\) for \(n\gg 0\) since there are only finitely many \(n_i\). Thus \[\begin{align*} \sum_{n > 0} \dim_{\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu} \mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu = \sum_{n>0} \#\left\{{i {~\mathrel{\Big|}~}n_i \geq n}\right\} = \sum_{i=1}^r n_i = v_p\qty{\prod_{i=1}^r d_i} = v_p(D) .\end{align*}\]
Proof ( of (b) )
  1. Note that \((e, f) \in p^n A \implies (e, f)_n \in A\), so the modified form is well-defined and \(A\)-valued on \(M(n)\). To see that it descends to \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\), we must show that \((e, M(n))_n \subset pA\) whenever \(e \in pM \cap M(n)\): \[\begin{align*} (e, M(n))_n = p^{-n}(e, M(n)) \subset p^{-n}(pM, M(n)) \subset p^{-n+1}(M, M(n)) \subset p^{-n+1}p^n A = pA .\end{align*}\] So there is an induced form \((\mkern 1.5mu\overline{\mkern-1.5mue\mkern-1.5mu}\mkern 1.5mu, \mkern 1.5mu\overline{\mkern-1.5muf\mkern-1.5mu}\mkern 1.5mu)_n\) on \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\).

To show it’s nondegenerate, need to compute the radical.
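A small sanity check of part (a) of the lemma (my own example, not from the lecture): take \(A = {\mathbb{Z}}\), \(p\) prime, and \(M = A e_1 \oplus A e_2\) with Gram matrix \(\operatorname{diag}(1, p^2)\), so \(D = p^2\). Then \[\begin{align*} M(1) = A\, p e_1 \oplus A e_2, \quad M(2) = A\, p^2 e_1 \oplus A e_2, \quad M(3) = A\, p^3 e_1 \oplus A\, p e_2, \quad \cdots \end{align*}\] so \(\mkern 1.5mu\overline{\mkern-1.5muM(1)\mkern-1.5mu}\mkern 1.5mu\) and \(\mkern 1.5mu\overline{\mkern-1.5muM(2)\mkern-1.5mu}\mkern 1.5mu\) are \(1\)-dimensional (spanned by \(\mkern 1.5mu\overline{\mkern-1.5mue\mkern-1.5mu}\mkern 1.5mu_2\)) while \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu = 0\) for \(n \geq 3\), and indeed \(v_p(D) = 2 = 1 + 1\).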

Wednesday April 8th

Recall that we are setting up Jantzen’s filtration. Let \(A\) be a PID, \(p\in A\) prime, \(\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu = A/pA\), and \(M\) a free \(A{\hbox{-}}\)module of rank \(r\) with a nondegenerate symmetric bilinear form \(({\,\cdot\,},{\,\cdot\,})\) having nonzero determinant wrt some basis of \(M\). Define \(M(n), \mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\) as before.

Lemma
  1. \(v_p(D) = \sum_{n > 0} \dim_{\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu} \mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu\)
  2. For each \(n\), the modified bilinear form induces a nondegenerate form on \(\mkern 1.5mu\overline{\mkern-1.5muM(n)\mkern-1.5mu}\mkern 1.5mu / \mkern 1.5mu\overline{\mkern-1.5muM(n+1)\mkern-1.5mu}\mkern 1.5mu\).

Proof of Jantzen’s Theorem

Let \(A = {\mathbb{C}}[T]\) and \(K = {\mathbb{C}}(T)\) its fraction field, and let \({\mathfrak{g}}_A = A \otimes_{\mathbb{C}}{\mathfrak{g}}\) and \({\mathfrak{g}}_K = K \otimes_{\mathbb{C}}{\mathfrak{g}}\), which is a Lie algebra that is split over \(K\), i.e. every \(h\in {\mathfrak{h}}_K = K \otimes_{\mathbb{C}}{\mathfrak{h}}\) has all eigenvalues of \(\operatorname{ad}h\) in \(K\).

The theory we need carries over to the extended setting. The plan is the following:

We’ll use Lemma 5.6 to construct filtrations on the weight spaces of the extended Verma module, then reduce mod \(T\) (using evaluation morphisms) to assemble the Jantzen filtration for the original \({\mathbb{C}}{\hbox{-}}\)module.

Given \(\lambda \in {\mathfrak{h}}^\vee\), set \(\lambda_T = \lambda + T\rho \in {\mathfrak{h}}_K^\vee\). For all \(\alpha\in \Phi\), we have \[\begin{align*} (\lambda_T + \rho, \alpha^\vee) = (\lambda + \rho , \alpha^\vee) + T(\rho, \alpha^\vee) \not\in {\mathbb{Z}} ,\end{align*}\] since this is a nonconstant linear polynomial in \(T\). So \(\lambda_T\) is antidominant.

Therefore \(M(\lambda_T)\) is simple; equivalently, its contravariant form (which is nonzero and unique up to scalar) is nondegenerate.

The module \(U({\mathfrak{g}}_A) \cong A \otimes_{\mathbb{C}}U({\mathfrak{g}})\) is a natural “\(A{\hbox{-}}\)form” of \(U({\mathfrak{g}}_K) \cong K \otimes_{\mathbb{C}}U({\mathfrak{g}})\). This yields \(M(\lambda_T)_A\), an \(A{\hbox{-}}\)form of \(M(\lambda_T)\), where each weight space is an \(A{\hbox{-}}\)module of finite rank.

Some remarks about contravariant forms on highest weight modules: given \(M\) and such a form \(({\,\cdot\,}, {\,\cdot\,}): M\times M \to {\mathbb{C}}\), the transpose serves as an adjoint and we have \((uv, v') = (v, \tau(u) v')\).

Distinct weight spaces are orthogonal, i.e. \((M_\mu, M_\nu) = 0\) since \[\begin{align*} (hv, v') = \mu(h) (v, v') \\ = (v, hv') = \nu(h) (v, v') \\ \end{align*}\] where \(\mu(h) \neq \nu(h)\), implying \((v, v') = 0\).

We can compute, for \(u v^+, u' v^+ \in M_\mu\), \[\begin{align*} (u v^+ , u' v^+ ) = (v^+, \tau(u) u' v^+) = a (v^+, v^+) \end{align*}\]

for some \(a\in A\), since \(\lambda_T = \lambda + T\rho\) maps \({\mathfrak{h}}_A \to A\). We can use the decomposition \(U({\mathfrak{g}}) = U({\mathfrak{h}}) \oplus ({\mathfrak{n}}^- U({\mathfrak{g}}) + U({\mathfrak{g}}) {\mathfrak{n}})\), where \({\mathfrak{n}}^+\) kills \(v^+\) and \({\mathfrak{n}}^-\) lowers into an orthogonal weight space, and so this pairing only depends on \((v^+, v^+)\).

Note that the radical of this bilinear form is a maximal submodule.

The weights of \(M(\lambda_T)\) are of the form \(\lambda_T - \nu\) for \(\nu \in {\mathbb{Z}}^{\geq 0}\Phi^+ = \Lambda_r^+\). Apply lemma 5.6 to the \(A{\hbox{-}}\)form \(M_{\lambda_T - \nu}\) of \(M(\lambda_T)_{\lambda_T - \nu}\) to get a decreasing finite filtration of \(A{\hbox{-}}\)submodules

\[\begin{align*} M_{\lambda_T-\nu} = M_{\lambda_T-\nu}(0) \supset M_{\lambda_{T}-\nu}(1) \supset \cdots \end{align*}\]

where \(M_{\lambda_T - \nu}(i) = \left\{{e\in M_{\lambda_T - \nu} {~\mathrel{\Big|}~}(e, M_{\lambda_T - \nu}) \subset T^i A}\right\}\).

For each \(i \geq 0\), set \(M(\lambda_T)_A^i = \sum_{\nu \in \Lambda_r^+} M_{\lambda_T - \nu}(i)\). This yields a decreasing filtration of \(A{\hbox{-}}\)submodules.

Next we want to “set \(T=0\)”: formally, pass to the quotient \(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu = M/TM\) over the field \(\mkern 1.5mu\overline{\mkern-1.5muA\mkern-1.5mu}\mkern 1.5mu = A/TA \cong {\mathbb{C}}\). Since \(\lambda_T = \lambda + T\rho \xrightarrow{T = 0} \lambda\), we have \(\mkern 1.5mu\overline{\mkern-1.5muM(\lambda_T)_A\mkern-1.5mu}\mkern 1.5mu \cong M(\lambda)\), and the filtration becomes a decreasing filtration of \(M(\lambda)\) by submodules \(M(\lambda)^i\).

By the lemma, the sections of this filtration inherit nondegenerate contravariant forms, proving (a). By the proof of that lemma, the filtration on each individual weight space terminates at 0.

Claim: Some \(M(\lambda)^{n+1} = 0\).

Proof
If not, since the filtration is decreasing and \(M(\lambda)\) has finite length, we must have \(0 \neq M(\lambda)^n = M(\lambda)^{n+1} = \cdots\) for some \(n\). Choose some \(\mu\in {\mathfrak{h}}^\vee\) such that \(M(\lambda)_\mu^n \neq 0\); but the filtration on the weight space \(M(\lambda)_\mu\) terminates at zero, while \(0 \neq M(\lambda)_\mu^n = M(\lambda)_\mu^{n+1} = \cdots\), a contradiction.

For a proof of (c), we want to show \(\sum_{i > 0} \operatorname{ch}M(\lambda)^i = \sum_{\alpha \in \Phi^+_\lambda} \operatorname{ch}M(s_\alpha \cdot \lambda)\). We can compute the LHS: \[\begin{align*} \sum_{i>0}\operatorname{ch}M(\lambda)^i &= \sum_{i > 0} \sum_{\nu \in \Lambda_r^+} \dim M(\lambda)_{\lambda-\nu}^i \, e(\lambda - \nu) \\ &= \sum_{i > 0} \sum_\nu \dim\qty{\mkern 1.5mu\overline{\mkern-1.5muM(\lambda_T)_A^i\mkern-1.5mu}\mkern 1.5mu}_{\lambda - \nu} e(\lambda - \nu) \\ &= \sum_i \sum_\nu \dim \mkern 1.5mu\overline{\mkern-1.5muM_{\lambda_T -\nu}(i)\mkern-1.5mu}\mkern 1.5mu\, e(\lambda - \nu) \\ &= \sum_\nu \sum_i \dim \mkern 1.5mu\overline{\mkern-1.5muM_{\lambda_T -\nu}(i)\mkern-1.5mu}\mkern 1.5mu\, e(\lambda - \nu) \\ &= \sum_\nu v_T(D_\nu(\lambda_T))\, e(\lambda - \nu) \quad\text{by Lemma 5.6a} .\end{align*}\]

where \(D_\nu(\lambda_T)\) is the determinant of the matrix of the contravariant form on the \(\lambda_T - \nu\) weight space of \(M(\lambda_T)\).

Fact (Jantzen, Shapovalov): Up to a nonzero scalar multiple depending on a choice of basis of \(U({\mathfrak{n}}^-)\), \[\begin{align*} D_\nu(\lambda_T) = \prod_{\alpha > 0} \prod_{r \in {\mathbb{Z}}^{>0}} \qty{ (\lambda_T + \rho, \alpha^\vee) - r }^{P(\nu - r\alpha)} \end{align*}\] where \(P\) is the Kostant partition function.

We can calculate \(v_T\) of this, which doesn’t depend on the scalar: \[\begin{align*} (\lambda_T + \rho, \alpha^\vee) - r = (\lambda+\rho, \alpha^\vee) -r + T(\rho, \alpha^\vee)\\ \implies v_T( \cdots ) = \begin{cases} 1 & (\lambda + \rho, \alpha^\vee) = r > 0 \iff \alpha \in \Phi_\lambda^+ \\ 0 & \text{else} \end{cases} .\end{align*}\]

We then have \[\begin{align*} v_T(D_\nu(\lambda_T)) = \sum_{\alpha \in \Phi^+_\lambda} P(\nu - (\lambda + \rho, \alpha^\vee)\alpha) .\end{align*}\]

Thus the LHS is given by \[\begin{align*} \cdots &= \sum _{\nu \in \Lambda_r^+} \sum_{\alpha\in \Phi_\lambda^+} P(\nu - (\lambda + \rho, \alpha^\vee) \alpha) e(\lambda - \nu) \\ &= \sum _{\alpha \in \Phi_\lambda^+} \sum_{\sigma \in \Lambda_r^+} P(\sigma) e(\lambda - (\lambda + \rho, \alpha^\vee)\alpha - \sigma) \\ &= \sum_{\alpha \in \Phi_\lambda^+} \operatorname{ch}M(s_\alpha \cdot \lambda) ,\end{align*}\]

where we’ve used what we know about characters of Verma modules.

Note that the proof of the determinant formula is technical.
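As a rank-one sanity check of this computation (my own, not from the lecture): for \({\mathfrak{sl}}(2, {\mathbb{C}})\) the partition function satisfies \(P(k\alpha) = 1\) for all \(k \geq 0\), so up to units \[\begin{align*} D_{k\alpha}(\lambda_T) = \prod_{r=1}^{k} \qty{(\lambda_T + \rho, \alpha^\vee) - r}, \qquad v_T(D_{k\alpha}(\lambda_T)) = \begin{cases} 1 & 1 \leq (\lambda + \rho, \alpha^\vee) \leq k \\ 0 & \text{else} \end{cases} .\end{align*}\] Summing \(v_T(D_{k\alpha}(\lambda_T))\, e(\lambda - k\alpha)\) over \(k\) then gives \(\operatorname{ch}M(\lambda - (\lambda+\rho, \alpha^\vee)\alpha) = \operatorname{ch}M(s_\alpha \cdot \lambda)\) when \((\lambda+\rho, \alpha^\vee) \in {\mathbb{Z}}^{>0}\), and \(0\) otherwise, matching the sum formula.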

We’ll skip chapter 6 on KL theory.

Friday April 10th

Translation Functors

These are extremely important: they allow us to map functorially between blocks (recalling \({\mathcal{O}}= \bigoplus_\lambda {\mathcal{O}}_{\chi_\lambda}\)), and in good situations they give equivalences of categories.

Definition (Projection Functors)
The projection functor \(\mathrm{pr}_\lambda: {\mathcal{O}}\to {\mathcal{O}}_{\chi_\lambda}\) sends \(M = \bigoplus_\mu M^{\chi_\mu}\) to \(M^{\chi_\lambda}\).

Convention: From now on, all weights will be integral

Proposition (Properties of Projection Functors)
  1. \(\mathrm{pr}_\lambda\) is an exact functor.
  2. \(\hom(M, N) \cong \bigoplus_\lambda \hom(\mathrm{pr}_\lambda M, \mathrm{pr}_\lambda N)\)
  3. \(\mathrm{pr}_\lambda (M^\vee) = (\mathrm{pr}_\lambda M)^\vee\)
  4. \(\mathrm{pr}_\lambda\) maps projectives to projectives
  5. \(\mathrm{pr}_\lambda\) is self-adjoint
Proof
  1. Given a SES \[\begin{align*}0 \to M \xrightarrow{f} N \xrightarrow{g} P \to 0,\end{align*}\] we can decompose it as \[\begin{align*}0 \to \bigoplus_\lambda M^{\chi_\lambda} \xrightarrow{\oplus f_\lambda} \bigoplus_\lambda N^{\chi_\lambda} \xrightarrow{\oplus g_\lambda} \bigoplus_\lambda P^{\chi_\lambda} \to 0,\end{align*}\] which gives exactness on each factor.
  2. We can move direct sums out of homs.
  3. Write \(\mathrm{pr}_\lambda \qty{ \qty{\bigoplus M^{\chi_\lambda} }^\vee}\) and use theorem 3.2b to write as \((M^{\chi_\lambda})^\vee\).
  4. \(\mathrm{pr}_\lambda(P)\) is a direct summand of a projective and thus projective.
  5. We have \(\hom(\mathrm{pr}_\lambda M, N) = \hom(\mathrm{pr}_\lambda M, \mathrm{pr}_\lambda N) = \hom(M, \mathrm{pr}_\lambda N)\).
Definition (Translation Functors)
Let \(\lambda, \mu \in \Lambda\) and set \(\nu = \mu - \lambda\) (integral by our convention). Then there exists \(w\in W\) such that \(\tilde \nu \mathrel{\vcenter{:}}= w\nu \in \Lambda^+\) lies in the dominant chamber. Define the translation functor \[\begin{align*}T_\lambda^\mu M = \mathrm{pr}_\mu(L(\tilde \nu) \otimes_{\mathbb{C}}\mathrm{pr}_\lambda(M)),\end{align*}\] where we use the fact that \(\tilde \nu\) dominant integral makes \(L(\tilde \nu)\) finite-dimensional.

This is a functor \({\mathcal{O}}_{\chi_\lambda} \to {\mathcal{O}}_{\chi_\mu}\).

Proposition (Properties of Translation Functors)
  1. The translation functor is exact.
  2. \(T_\lambda^\mu (M^\vee) = \qty{T_\lambda^\mu M}^\vee\)
  3. It maps projectives to projectives.
Proof
  1. It is a composition of exact functors, noting that tensoring over a field is always exact.
  2. Use proposition 12, \(L(\tilde \nu)\) is self-dual, and \(A^\vee\otimes B^\vee\cong (A\otimes B)^\vee\).
  3. Use proposition 1 and previous results, e.g. \(L \otimes_{\mathbb{C}}({\,\cdot\,})\) preserves projectives if \(\dim L < \infty\) (Prop 3.8b).
Proposition (Adjoint Property of Translation Functor)
\(\hom(T_\lambda^\mu M, N) \cong \hom(M, T_\mu^\lambda N)\), which also holds for every \(\operatorname{Ext}^n\).
Proof

We have


But \(L(\tilde \nu)^\vee\cong L(-w_0 \tilde \nu)\) and \(-w_0 \tilde \nu = w_0 w(\lambda - \mu)\) is the dominant weight in the orbit of \(\lambda - \mu\) used to define \(T_\mu^\lambda\).

For the second part, use a long exact sequence – if two functors are isomorphic, then their right-derived functors are isomorphic.

Does this functor take Vermas to Vermas? I.e., do we have \(M(w\cdot \lambda) \mapsto M(w\cdot \mu)\) under \(T_\lambda^\mu: {\mathcal{O}}_{\chi_\lambda} \to {\mathcal{O}}_{\chi_\mu}\)?

Picture for \({\mathfrak{sl}}(3, {\mathbb{C}})\):


This doesn’t always happen, and depends on the geometry.

Weyl Group Geometry – Facets

Definition (Facets)

Given a partition \(\Phi^+ = \Phi^0_F \cup\Phi^+_F \cup\Phi^-_F\), the facet \(F\) associated to this partition is the (assumed nonempty) set of \(\lambda \in E\) satisfying the following conditions:

  1. \((\lambda + \rho, \alpha^\vee) = 0\) for all \(\alpha \in \Phi_F^0\),
  2. \((\lambda + \rho, \alpha^\vee) > 0\) for all \(\alpha \in \Phi_F^+\),
  3. \((\lambda + \rho, \alpha^\vee) < 0\) for all \(\alpha \in \Phi_F^-\).


Example: \(A_2\), where \(\Phi^+ = \left\{{\alpha, \beta, \alpha + \beta}\right\}\).

  1. Take \(\Phi_F^0 = \Phi^+\); by the orthogonality conditions, \(F = \left\{{-\rho}\right\}\) since \(\lambda + \rho\) must be orthogonal to all 3 roots. So \(\{-\rho\}\) (the origin for the dot action) is a facet.
  2. Take \(\Phi_F^0 = \left\{{\alpha, \beta}\right\}\) and \(\Phi_F^+ = \left\{{\alpha + \beta}\right\}\); then \(F = \emptyset\), so this partition does not define a facet.
  3. See notes
  4. see notes

Note that \(\mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu \supset \widehat{F} \cup\underbar F\) in general.

Monday April 13th

Reviewing the definition of facets. We partitioned \(\Phi^+\) into 3 sets \(\Phi_F^{0}, \Phi_F^{\pm}\), some of which could be empty. We also have notions of upper and lower closure, given by replacing the strict inequality with a weak inequality in condition (3) (respectively (2)).


Definition (Chambers)
If \(F\) is a facet with \(\Phi_F^0=\emptyset\), then \(F\) is called a chamber.

If a facet \(F\) has exactly one root in \(\Phi_F^0\), then \(F\) is called a wall.

Observations:

  1. Taking \(\Phi_F^+ = \Phi^+\) (so \(\Phi_F^0 = \Phi_F^- = \emptyset\)) always defines a chamber, called the fundamental chamber and denoted \(C_0\).
  2. If \(F\) is any chamber, then \(F = w\cdot C_0\) for some \(w\in W\).
Proposition (Relation Between Facets and Chambers)
  1. Every facet \(F\) is contained in the upper closure \(\widehat{C}\) of some unique chamber \(C\).
  2. If \(F \subset \widehat{C}\) then \(\widehat{F} \subset \widehat{C}\).
Proof
  1. If \(F\) is given by the partition \(\Phi_F^0 \cup\Phi_F^+ \cup\Phi_F^-\), pair it with the chamber \(C\) defined by \(\Phi_C^+ = \Phi_F^+\) and \(\Phi_C^- = \Phi_F^- \cup\Phi_F^0\); then \(F \subset \widehat{C}\). To see that \(C\neq \emptyset\), use remark (1) on page 132.
  2. Obvious from above description of \(C\).

Key Lemma from 7.5

We’re focusing only on integral weights, and we want to calculate the translation functor of a Verma module, \(T_\lambda^\mu M(\lambda)\). First step: project onto the \(\lambda\) block, but \(M(\lambda)\) is in that block already. Then tensor with \(L(\tilde \nu)\); the product has a standard filtration with Verma sections \(M(\lambda + \nu')\) for \(\nu' \in \Pi(L(\tilde\nu))\), and the sections \(M(\lambda + w\tilde \nu)\) each occur with multiplicity one. Since \(\tilde \nu\) is the unique dominant weight in the orbit of \(\mu - \lambda\), one of these Verma sections is \(M(\mu)\). We plan to show that \(T_\lambda^\mu M(\lambda) = M(\mu)\) in “good” situations.

Lemma
Let \(\lambda, \mu \in \Lambda\) be integral weights and \(\nu = \mu - \lambda\) and \(\tilde \nu \in \Lambda^+ \cap W \nu\) (which is unique). Assume there is a facet \(F\) with \(\lambda \in F, \mu \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\). Then for all weights \(\nu' \neq \nu\) of \(L(\tilde\nu)\), the weight \(\lambda + \nu'\) is not linked to \(\lambda + \nu = \mu\) under \(W\).
Proof

Toward a contradiction, suppose there exists \(\nu' \neq \nu\) in \(\Pi(L(\tilde \nu))\) with \(\lambda + \nu' \in W\cdot (\lambda + \nu)\). Define the distance between two chambers \(C, C'\) as the number of root hyperplanes separating them. Under the correspondence between chambers and \(W\) given by picking a fundamental chamber, the distance corresponds to the difference in lengths between the corresponding Weyl group elements.


So choose chambers \(C, C'\) with \(F \subset \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\), \(\lambda + \nu' \in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu'\), and \(d(C, C')\) is minimal. We now go through 14 easy steps.


  1. The case \(d(C, C') = 0\) is impossible, since this would force \(C = C'\). But \(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) is a fundamental domain for the dot action, while \(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu' \ni \lambda + \nu' \neq \lambda + \nu = \mu \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu \subset \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) would then give two distinct linked weights in \(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\), contradicting that each weight is \(W\cdot\)-conjugate to a unique element of \(\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\).
  2. The case \(d(C, C') > 0\) implies there’s a wall \(H_\alpha \cap\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu '\) of \(C'\) separating \(C'\) from \(C\). WLOG assume \(\alpha > 0\), with \(C'\) on the positive side of \(H_\alpha\) and \(C\) on the negative side. Since \(\mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu \subset \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\), we have \((\xi + \rho, \alpha^\vee) \leq 0\) for all \(\xi \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\).
  3. Reflect and set \(C'' \mathrel{\vcenter{:}}= s_\alpha C'\), then \(d(C, C'') < d(C, C')\) and we will be able to apply the induction hypothesis.
  4. By (2), \((\lambda + \nu' + \rho, \alpha^\vee) \geq 0\) since \(\lambda + \nu'\) was on the positive side.
  5. By (2), \((\lambda + \rho, \alpha^\vee) \leq 0\), since \(\lambda \in F \subset \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) lies on the negative side.
  6. By (4), \(\lambda + \nu' \geq s_\alpha \cdot (\lambda + \nu') = s_\alpha \cdot \lambda + s_\alpha \nu' = \lambda + \qty{ s_\alpha \nu' - (\lambda + \rho, \alpha^\vee)\alpha } \mathrel{\vcenter{:}}= \lambda + \nu''\), by just applying the formula for the dot action.
  7. By (5) and (6), \(s_\alpha \nu' \leq \nu'' = s_\alpha \nu' - (\lambda + \rho, \alpha^\vee) \alpha \leq \nu'\), where the first and last terms are weights of \(L(\tilde \nu)\). In fact, these inequalities are obtained by cancelling \(\lambda\) and adding/subtracting multiples of \(\alpha\), so these weights come from an \(\alpha\) root string.
  8. Rewriting (6), we have \(s_\alpha (\lambda + \nu') = \lambda + \nu'' \in s_\alpha \mkern 1.5mu\overline{\mkern-1.5muC'\mkern-1.5mu}\mkern 1.5mu = \mkern 1.5mu\overline{\mkern-1.5muC''\mkern-1.5mu}\mkern 1.5mu\).
  9. By 1.6 bullet (2) in Humphreys, \(\nu'' \in \Pi (L(\tilde \nu))\).
  10. By the minimality of \(d(C, C')\) over all such choices, along with (3), (8), (9), we have \(\nu'' = \nu\).
  11. Rewriting (7) and using the hypothesis \(\nu \neq \nu'\), we can write \(s_\alpha \nu ' \leq \nu < \nu'\), where the second inequality is strict because the two weights are not equal. This is still an \(\alpha\) root string of weights in the simple module \(L(\tilde \nu)\), with \(\nu \in W\tilde \nu\) an extreme weight. The first inequality cannot be strict, since otherwise \(\nu\pm \alpha\) would both be weights of \(L(\tilde \nu)\), contradicting Humphreys 1.6 bullet 1. So \(s_\alpha \nu' = \nu\).
  12. By (10), (11), and (6), \(s_\alpha \nu' = \nu = \nu'' = s_\alpha \nu ' - (\lambda + \rho, \alpha^\vee) \alpha\), so \((\lambda + \rho, \alpha^\vee) = 0\).
  13. Since \(\lambda \in F\) and \((\lambda + \rho, \alpha^\vee) = 0\) by (12), we have \(\alpha \in \Phi_F^0\), hence \((\xi + \rho, \alpha^\vee) = 0\) for all \(\xi \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\). In particular this holds for \(\xi = \mu = \lambda + \nu\), and combined with (12), this says \((\nu, \alpha^\vee) = 0\), since the pairing is linear in the first slot.
  14. We’re now done: combining (11), (13) yields \(\nu ' = s_\alpha \nu = \nu - (\nu, \alpha^\vee)\alpha = \nu\), which contradicts \(\nu \neq \nu'\).

7.6: Translation Functors and Verma Modules

Theorem (Translation Functors on Vermas for Antidominant Weights)
Let \(\lambda, \mu \in \Lambda\) be antidominant. Assume there is a facet \(F\) relative to the dot action of \(W\) with \(\lambda \in F\) and \(\mu \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\). Then for all \(w\in W\), we have \[\begin{align*} T_\lambda^\mu M(w\cdot \lambda) &= M(w\cdot \mu) \\ T_\lambda^\mu M(w\cdot \lambda)^\vee&\cong M(w\cdot \mu)^\vee .\end{align*}\]
Proof

Apply the previous lemma to \(w\cdot \lambda, w\cdot \mu\) and the facet \(w\cdot F\) using \(\nu = w\cdot \mu - w\cdot \lambda\). To compute \(T_\lambda^\mu\), first consider \(L(\tilde \nu) \otimes M(w\cdot \lambda)\). By Theorem 3.6, this has a standard filtration with quotients \(M(w\cdot \lambda + \nu')\) for \(\nu' \in \Pi(L(\tilde \nu))\), potentially with multiplicity.


In particular, \(M(w\cdot \mu) = M(w\cdot \lambda + \nu)\) appears exactly once. By the lemma, none of the other highest weights \(w\cdot \lambda + \nu'\) are linked to \(\mu\). Thus decomposing the tensor product into direct summands in infinitesimal blocks, the only summand in \({\mathcal{O}}_{\chi_\mu}\) is \(M(w\cdot \mu)\). Therefore \(T_\lambda^\mu M(w\cdot \lambda) = {\operatorname{pr}}_\mu (L(\tilde \nu) \otimes M(w\cdot \lambda)) = M(w\cdot \mu)\). The statement about duals follows from translation functors commuting with taking duals.
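A rank-one illustration of the theorem (my own example, not from the lecture): for \({\mathfrak{sl}}(2, {\mathbb{C}})\) take \(\lambda = -2\) (antidominant and regular) and \(\mu = -1 = -\rho\) (antidominant, on the wall), so \(\nu = \tilde\nu = 1\). Then \(L(1) \otimes M(-2)\) has standard sections \(M(-1)\) and \(M(-3)\), and only \(M(-1)\) lies in \({\mathcal{O}}_{\chi_{-1}}\), so \[\begin{align*} T_{-2}^{-1} M(-2) = \mathrm{pr}_{-1}\qty{L(1) \otimes M(-2)} = M(-1) = M(w\cdot \mu), \qquad w = 1 ,\end{align*}\] and similarly \(T_{-2}^{-1} M(0) = M(-1)\), since \(s_\alpha \cdot (-1) = -1\).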

Wednesday April 15th

Last time, in section 7.6, we proved a theorem about translation functors applied to Verma modules. We fixed an antidominant weight, since we can apply elements of \(W\) to obtain the rest of the linkage class. We proved that translation functors take Verma modules to Verma modules.

Corollary (Translations Have Standard Filtrations)
If \(M\in {\mathcal{O}}_{\chi_\lambda}\) has a standard filtration, then so does \(T_\lambda^\mu M \in {\mathcal{O}}_{\chi_\mu}\).
Proof
By induction on the length of the filtration, where the length 1 case is handled by the theorem. In general we have \(0 \to N \to M \to M(w\cdot \lambda) \to 0\) with \(N\) admitting a shorter standard filtration; since \(T_\lambda^\mu\) is exact, applying it gives \(0 \to T_\lambda^\mu N \to T_\lambda^\mu M \to M(w\cdot \mu) \to 0\), and by induction \(T_\lambda^\mu N\) has a standard filtration, hence so does \(T_\lambda^\mu M\).

Translation Functors and Simple Modules

Proposition (Translation Functors Applied to L for Antidominant Weights)
Let \(\lambda, \mu \in \Lambda\) be antidominant with a facet \(F\) such that \(\lambda \in F\) and \(\mu \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\). Then for all \(w\in W\), \(T_\lambda^\mu L(w\cdot \lambda)\) is either \(L(w\cdot \mu)\) or 0.

Idea: we’re pushing \(\lambda\) to something more singular.

Proof
Apply the exact functor \(T_\lambda^\mu\) to the surjection \(M(w\cdot \lambda) \twoheadrightarrow L(w\cdot \lambda)\) to obtain a surjection \(M(w\cdot \mu) \twoheadrightarrow M\), where \(M \mathrel{\vcenter{:}}= T_\lambda^\mu L(w\cdot\lambda)\). Since \(M\) is a quotient of a Verma module, it is (if nonzero) a highest weight module of highest weight \(w\cdot \mu\). Suppose \(M\neq 0\); applying \(T_\lambda^\mu\) to \(L(w\cdot \lambda) \hookrightarrow M(w\cdot \lambda) ^\vee\) gives \(M \hookrightarrow M(w\cdot \mu) ^\vee\), and composing yields a nonzero map \(M(w\cdot \mu) \twoheadrightarrow M \hookrightarrow M(w\cdot \mu) ^\vee\). By Theorem 3.3c, the image of such a map is the simple socle, so we obtain \(M \cong L(w\cdot \mu)\).

It turns out that \(T_\lambda^\mu L(w\cdot\lambda) \cong L(w\cdot \mu)\) precisely when \(w\cdot \mu \in \widehat{w\cdot F}\) (the upper closure; see Theorem 7.9, and example 7.7 for \({\mathfrak{sl}}(2, {\mathbb{C}})\), where the relevant wall is \(-\rho = -1\)).

So if we can figure out \(L(w\cdot \lambda)\) for \(\lambda\) \(\rho{\hbox{-}}\)regular, we can determine the composition factors of all Verma modules by “pushing to walls”. There’s also a need to “cross walls”, and there’s a nice rule for what happens for Verma modules in this case. Going “off the wall” on the other side is more complicated.

7.8: Category Equivalences

We saw in the cases of \({\mathfrak{sl}}(2, {\mathbb{C}})\) and \({\mathfrak{sl}}(3, {\mathbb{C}})\) that the composition factor patterns depended more on the Weyl group than on the particular highest weight. We want to show that \(T_\lambda^\mu\) gives an equivalence of categories between \({\mathcal{O}}_\lambda\) and \({\mathcal{O}}_\mu\) when \(\lambda, \mu\) are antidominant and lie in the same facet. We’ll first show that it induces an isomorphism on the Grothendieck groups.

Proposition (Isomorphism of Grothendieck Groups for Weights in a Common Facet)
Suppose there is a single facet \(F\) containing \(\lambda, \mu\). Then the obvious map is an isomorphism: \[\begin{align*} T_\lambda^\mu: K({\mathcal{O}}_\lambda) &\xrightarrow{\cong }K({\mathcal{O}}_\mu)\\ [M(w\cdot \lambda)] &\mapsto [M(w\cdot \mu)] \\ [L(w\cdot \lambda)] &\mapsto [L(w\cdot \mu)] .\end{align*}\]
Proof
Recall that \(\left\{{[M(w\cdot \lambda)] {~\mathrel{\Big|}~}w\in W}\right\}\) (and likewise with \(L\) in place of \(M\)) forms a \({\mathbb{Z}}{\hbox{-}}\)basis for \(K({\mathcal{O}}_\lambda)\), and similarly for \(\mu\). So when \([M]\in K({\mathcal{O}}_\lambda)\) is written as a \({\mathbb{Z}}{\hbox{-}}\)linear combination of the \([M(w\cdot \lambda)]\), it’s clear that \(T_\mu^\lambda T_\lambda^\mu [M] = [M]\). So these functors induce mutually inverse maps on Grothendieck groups.
By the previous proposition, applying \(T_\lambda^\mu\) to \(L(w\cdot\lambda)\) yields either \(L(w\cdot \mu)\) or zero. But zero is impossible by what we just proved, so we must have \(T_\lambda^\mu [ L(w\cdot \lambda)] = [L(w\cdot \mu)]\), forcing \(T_\lambda^\mu L(w\cdot \lambda) \cong L(w\cdot \mu)\).

Since \(K({\mathcal{O}})\) is isomorphic to the group of formal characters of modules in \({\mathcal{O}}\), we in fact get \[\begin{align*} [M: L(w\cdot \lambda)] = [T_\lambda^\mu M: L(w\cdot \mu)] \quad \forall M\in {\mathcal{O}} .\end{align*}\]
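For instance (a quick check, not from the lecture): in \({\mathfrak{sl}}(2, {\mathbb{C}})\) take \(\lambda = -2\) and \(\mu = -4\), both regular antidominant and hence in the same facet, and \(M = M(s_\alpha\cdot \lambda) = M(0)\). Then \(T_\lambda^\mu M(0) = M(s_\alpha\cdot \mu) = M(2)\) and \[\begin{align*} [M(0) : L(0)] &= 1 = [M(2): L(2)], \\ [M(0) : L(-2)] &= 1 = [M(2): L(-4)] .\end{align*}\]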

Thus \(\lambda\) and \(\mu\) don’t matter as much (as long as they’re in the same facet); the \(w\) is what’s important. There is a theorem (2005) that for any artinian abelian category, an isomorphism of Grothendieck groups implies an equivalence of categories.

Wednesday April 22nd

For \(p\in {\mathbb{Q}}[q]\), we’ll denote by \(p[i]\) the \(i\)th coefficient, i.e. the coefficient of \(q^i\).

For \(M\in {\mathcal{O}}\) or any module of finite length, we define its radical series: \[\begin{align*}{\operatorname{rad}}^0 M = M,\quad {\operatorname{rad}}^{i+1} M = {\operatorname{rad}}({\operatorname{rad}}^i M).\end{align*}\] Note that the layers/subquotients \({\operatorname{rad}}_i M \mathrel{\vcenter{:}}={\operatorname{rad}}^i M / {\operatorname{rad}}^{i+1} M\) are semisimple.

Dually, \(\operatorname{Soc}\,M\) is the largest semisimple submodule of \(M\), and iterating this yields the socle series. Denote \[\begin{align*}\operatorname{Soc}\,_i M \mathrel{\vcenter{:}}=\operatorname{Soc}\,^{i+1} M / \operatorname{Soc}\,^i M\end{align*}\] the \(i\)th socle layer. The corresponding layers are the same as in the radical filtration, just with reversed indexing.
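For example (a quick check, not from the lecture): in \({\mathfrak{sl}}(2, {\mathbb{C}})\), for \(\lambda \in \Lambda^+\) the Verma module \(M(\lambda)\) has Loewy length 2, with \[\begin{align*} M(\lambda) = {\operatorname{rad}}^0 M(\lambda) \supset {\operatorname{rad}}^1 M(\lambda) \cong L(s_\alpha\cdot \lambda) \supset {\operatorname{rad}}^2 M(\lambda) = 0, \\ 0 = \operatorname{Soc}\,^0 M(\lambda) \subset \operatorname{Soc}\,^1 M(\lambda) \cong L(s_\alpha\cdot \lambda) \subset \operatorname{Soc}\,^2 M(\lambda) = M(\lambda) ,\end{align*}\] so the radical layers are \(L(\lambda), L(s_\alpha\cdot\lambda)\) while the socle layers \(\operatorname{Soc}\,_0, \operatorname{Soc}\,_1\) are \(L(s_\alpha\cdot\lambda), L(\lambda)\): the same layers with reversed indexing.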

For convenience, set \[\begin{align*}Q_{x, w} = P_{w_0 w, w_0 x}(q)\end{align*}\] the “inverse” KL polynomial. Recall that a consequence of the KL conjecture is that \([M_w] = \sum_{x\leq w} Q_{x, w}(1) [L_x]\).
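In the \({\mathfrak{sl}}(2, {\mathbb{C}})\) case (a quick check, not from the lecture, writing \(M_w = M(w\cdot\lambda)\) and \(L_w = L(w\cdot\lambda)\) for \(\lambda\) regular antidominant), every KL polynomial equals \(1\), so \[\begin{align*} Q_{1,1} = Q_{s,s} = Q_{1,s} &= 1, \\ [M_1] &= [L_1], \\ [M_s] &= [L_1] + [L_s] ,\end{align*}\] matching the fact that \(M(s\cdot\lambda)\) has composition factors \(L(s\cdot\lambda)\) and \(L(\lambda)\), each occurring once.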

The following theorem follows from the Jantzen conjecture:

Theorem (Coefficients of Inverse KL Polynomials)
  1. \[\begin{align*}Q_{x, w}[i] = [ {\operatorname{rad}}_{\ell(w) - \ell(x) - 2i} M_w : L_x] = [\operatorname{Soc}\,_{\ell(x) + 2i} M_w: L_x]\end{align*}\]

    Note that Humphreys adds \(+1\) here due to indexing.

  2. If \(x < w\), then \(\dim \operatorname{Ext}^1 (L_x, L_w) = \mu(x, w)\).
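A minimal sanity check of part 1 in \({\mathfrak{sl}}(2, {\mathbb{C}})\) (assuming the indexing conventions above), with \(x = 1\), \(w = s\), \(i = 0\): \[\begin{align*} Q_{1, s}[0] = 1 = [{\operatorname{rad}}_{1} M_s : L_1] = [\operatorname{Soc}\,_{0} M_s : L_1] ,\end{align*}\] since \({\operatorname{rad}}\, M_s \cong L_1 \cong \operatorname{Soc}\, M_s\), as in the radical/socle example above.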

That concludes the KL theory.

Tilting Modules (Ch. 11)

The Lusztig conjecture is an analog of the KL conjecture for representations of algebraic groups in characteristic \(p> 0\). It gives the characters of simple modules in terms of characters of known “standard” modules for \(p > h\), and the formulas are independent of \(p\).

Holy grail: characters of simple modules!

Andersen, Jantzen, and Soergel showed that the Lusztig character formula is correct for \(p \gg 0\), but the bounds are very large. In 2016, Geordie Williamson found counterexamples to this conjecture for fairly large \(p\).

More recent work suggests that more uniform formulas can be obtained using another family of indecomposable representations, the tilting modules.

Definition (Tilting Modules)
\(M\in {\mathcal{O}}\) is a tilting module if both \(M\) and \(M^\vee\) have standard filtrations (sections are Vermas). Equivalently, in a formulation that adapts to other settings with standard and costandard modules: \(M\) is a tilting module iff \(M\) has a standard filtration (in \({\mathcal{O}}\), sections are Verma modules) and a costandard filtration (in \({\mathcal{O}}\), sections are duals of Verma modules).

Note that a self-dual module with a standard filtration is automatically tilting.

Example
If \(\lambda\) is antidominant, \(M(\lambda) = L(\lambda) = L(\lambda)^\vee\) is self-dual and thus tilting.
Example
If \(\lambda + \rho \in \Lambda^+\) is dominant integral, so \(w_0 \cdot \lambda\) is antidominant and integral, then \(P(w_0 \cdot \lambda)\) is self-dual and hence tilting. Its standard filtration has each \(M(w\cdot \lambda)\) as a section exactly once, see Theorem 4.10.
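Concretely (a check, not from the lecture), for \({\mathfrak{sl}}(2, {\mathbb{C}})\) with \(\lambda = 0\): \(w_0\cdot 0 = -2\), and \(P(-2)\) is self-dual with standard filtration \[\begin{align*} 0 \to M(0) \to P(-2) \to M(-2) \to 0, \qquad P(-2)^\vee \cong P(-2) ,\end{align*}\] so each of \(M(0)\) and \(M(-2)\) occurs exactly once and \(P(-2)\) is tilting with highest weight \(0\).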
Proposition (Properties of Tilting Modules)

Let \(M\) be a tilting module.

  1. \(M^\vee\) is tilting.
  2. For \(N\) tilting, \(M \oplus N\) is tilting.
  3. Any direct summand of \(M\) is tilting.
  4. If \(\dim L < \infty\), then \(L \otimes M\) is tilting.
  5. \(T_\lambda^\mu M\) is tilting.
  6. If \(N\) is tilting, then \(\operatorname{Ext}_{\mathcal{O}}^n(M, N) = 0\) for all \(n>0\) (take a projective resolution and apply \(\hom\)).
Proof
  1. Obvious since \((M^\vee)^\vee\cong M\).
  2. \(M\oplus N\) has a standard filtration by Section 3.7, and so does \((M\oplus N)^\vee\cong M^\vee\oplus N^\vee\) (Theorem 3.2d).
  3. From Proposition 3.7b, direct summands also have standard filtrations, and the formula used in that proof applies to the dual module here.
  4. This follows from Theorem 3.6, since \(L \otimes M(\lambda)\) has a standard filtration, together with Exercise 3.2 on distributing duals through tensor products.
  5. This follows from (c) and (d), since \(T_\lambda^\mu M\) is a direct summand of \(L\otimes M\) for a suitable finite-dimensional module \(L\).
  6. In Theorem 3.3d we proved \(\operatorname{Ext}_{\mathcal{O}}^1 (M(\mu), M(\lambda)^\vee) = 0\) for all \(\mu, \lambda\). In Section 6.12 this is extended to \(\operatorname{Ext}^n\), and thus \(\operatorname{Ext}^n(M, N^\vee) = 0\) for any \(M, N\) with standard filtrations; since a tilting module \(N\) satisfies \(N \cong (N^\vee)^\vee\) with \(N^\vee\) having a standard filtration, this gives the claim.

From now on, for simplicity we work in the full subcategory \({\mathcal{O}}_{\text{int}}\) of modules whose weights lie in \(\Lambda\), but the results generalize to arbitrary weights. Set \({\mathcal{K}}= K({\mathcal{O}}_{\text{int}})\) which is a subgroup of \(K({\mathcal{O}})\).

In order to classify all tilting modules, it suffices to classify the indecomposable ones by Proposition 11.1c. These turn out to be classified by highest weight.

To prove existence, we’ll use translation functors to move to and from walls.

Theorem (Translation Off the Wall for Antidominant Regular Weights)

Let \(\lambda, \mu \in \Lambda\) be antidominant with \(\lambda\) regular (so \(\lambda\) lies in the antidominant chamber \(C\)) and \(\mu\) lying on a single simple root wall \(H_\alpha \cap\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) (i.e. the stabilizer \(W_\mu^0\) of \(\mu\) under the dot action is \(\left\{{1, s}\right\}\) with \(s = s_\alpha\) for some \(\alpha \in \Delta\)).
Assume \(w\in W\) with \(ws > w\). Then:

  1. There is a SES (singular to regular, translation off the wall): \[\begin{align*} 0 \to M(ws \cdot \lambda ) \to T_{\mu}^\lambda M(w\cdot \mu) \to M(w\cdot \lambda) \to 0.\end{align*}\]
  2. The head is given by \(\operatorname{Head}\, T_{\mu}^\lambda M(w\cdot \mu) = L(w\cdot \lambda)\); in particular \(T_{\mu}^\lambda M(w\cdot \mu)\) is indecomposable and the sequence in (a) is non-split.
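In \({\mathfrak{sl}}(2, {\mathbb{C}})\), for example (a sketch, not from the lecture): take \(\lambda = -2\), \(\mu = -1\), \(w = 1\), \(s = s_\alpha\), so \(ws = s_\alpha > 1\). The sequence reads \[\begin{align*} 0 \to M(s_\alpha\cdot \lambda) \to T_\mu^\lambda M(1\cdot \mu) \to M(1\cdot \lambda) \to 0, \quad\text{i.e.}\quad 0 \to M(0) \to T_\mu^\lambda M(-1) \to M(-2) \to 0 ,\end{align*}\] and \(T_\mu^\lambda M(-1)\) is indecomposable with head \(L(-2)\); in fact it is the self-dual projective \(P(-2)\).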

Also recall Proposition 3.7a: If \(M\in {\mathcal{O}}\) has a standard filtration and \(\lambda \in \Pi(M)\) is maximal, then \(M(\lambda) \hookrightarrow M\) and \(M/M(\lambda)\) has a standard filtration.

Proposition (Existence of “Highest Weight” Tilting Modules)

Let \(\lambda \in \Lambda\):

  1. There exists an indecomposable tilting module \(T(\lambda) \in {\mathcal{O}}_{\text{int}}\) such that \(\dim T(\lambda)_\lambda = 1\) and \(\mu \in \Pi(T(\lambda)) \implies \mu \leq \lambda\). In particular, \(( T(\lambda): M(\lambda) ) = 1\) and \(M(\lambda) \hookrightarrow T(\lambda)\).
  2. Every indecomposable tilting module in \({\mathcal{O}}_{\text{int}}\) is isomorphic to \(T(\lambda)\) for some \(\lambda \in \Lambda\).
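For \({\mathfrak{sl}}(2, {\mathbb{C}})\) the resulting catalog is small (a sketch, not from the lecture): \[\begin{align*} T(\lambda) &= M(\lambda) = L(\lambda) && \text{for } \lambda \leq -1 \text{ (antidominant)}, \\ T(\lambda) &\cong P(w_0\cdot \lambda) = P(-\lambda - 2) && \text{for } \lambda \geq 0 \text{ (dominant integral)} ,\end{align*}\] with \((T(\lambda) : M(\lambda)) = (T(\lambda) : M(-\lambda-2)) = 1\) in the second case.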

Friday April 24th

Chapter 11: Tilting Modules.

Recall that these are defined by modules with both a standard and a costandard filtration.

Theorem (7.14, Translation Off the Walls for Antidominant Regular Weights)

Let \(\lambda, \mu \in \Lambda\) be antidominant with \(\lambda\) regular (so \(\lambda\) lies in the antidominant chamber \(C\)) and \(\mu\) lying on a single simple root wall \(H_\alpha \cap\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) (i.e. the stabilizer \(W_\mu^0\) of \(\mu\) under the dot action is \(\left\{{1, s}\right\}\) with \(s = s_\alpha\) for some \(\alpha \in \Delta\)).
Assume \(w\in W\) with \(ws > w\). Then:

  1. There is a SES (singular to regular, translation off the wall): \[\begin{align*} 0 \to M(ws \cdot \lambda ) \to T_{\mu}^\lambda M(w\cdot \mu) \to M(w\cdot \lambda) \to 0.\end{align*}\]
  2. The head is given by \(\operatorname{Head}\, T_{\mu}^\lambda M(w\cdot \mu) = L(w\cdot \lambda)\); in particular \(T_{\mu}^\lambda M(w\cdot \mu)\) is indecomposable and the sequence in (a) is non-split.

The SES here represents starting with \(M(w\cdot \lambda)\) on the right, translating to the wall to get \(M(w\cdot \mu)\), then translating off the wall to get the middle term, which picks up an extra \(M(ws\cdot \lambda)\).

To see that standard and costandard filtrations exist, we can consider:


Also recall Proposition 3.7a: If \(M\in {\mathcal{O}}\) has a standard filtration and \(\lambda \in \Pi(M)\) is maximal, then \(M(\lambda) \hookrightarrow M\) and \(M/M(\lambda)\) has a standard filtration.

Theorem (Existence of “Highest Weight” Tilting Modules)

Let \(\lambda \in \Lambda\):

  1. There exists an indecomposable tilting module \(T(\lambda) \in {\mathcal{O}}_{\text{int}}\) such that \(\dim T(\lambda)_\lambda = 1\) and \(\mu \in \Pi(T(\lambda)) \implies \mu \leq \lambda\). In particular, \(( T(\lambda): M(\lambda) ) = 1\) and \(M(\lambda) \hookrightarrow T(\lambda)\).

Note that this implies that \(T(\lambda)\) must lie in the single block \({\mathcal{O}}_{\chi_\lambda}\), since it has a Verma \(M(\lambda)\) and is indecomposable.

  2. Every indecomposable tilting module in \({\mathcal{O}}_{\text{int}}\) is isomorphic to \(T(\lambda)\) for some \(\lambda \in \Lambda\); moreover, \(T(\lambda)\) is the only tilting module in \({\mathcal{O}}_{\text{int}}\), up to isomorphism, having the properties in (a).
  3. \(\left\{{[T(\lambda)]}\right\}_{\lambda \in \Lambda}\) is a basis for \({\mathcal{K}}= K({\mathcal{O}}_{\text{int}})\).
Proof (of (a))

Existence is by induction on length in \(W\), along with translation to and from walls. Fix \(\lambda \in \Lambda\) to be \(\rho{\hbox{-}}\)regular and antidominant, and consider the linked weights \(w\cdot \lambda\) and their translates to walls. To start, set \(T(\lambda) \mathrel{\vcenter{:}}= M(\lambda)\), which has the properties in (a). Likewise, for \(\mu \in \mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu\) with \(C\) the antidominant chamber, \(\mu\) is still antidominant and \(T_\lambda^\mu M(\lambda) = M(\mu)\) is again irreducible (hence self-dual and tilting), so set \(T(\mu) \mathrel{\vcenter{:}}= M(\mu)\).

For the inductive step, assume \(T(w\cdot \mu)\) has been constructed for all \(\mu\) with \(w\cdot \mu \in \mkern 1.5mu\overline{\mkern-1.5muw\cdot C\mkern-1.5mu}\mkern 1.5mu\), so that in particular \(T(w\cdot \lambda)\) is defined. If \(s\) is a simple reflection with \(ws > w\), choose an antidominant integral weight \(\mu\) with \(W_{\mu}^0 = \left\{{1, s}\right\}\), so that \(T(w\cdot \mu) = T(ws \cdot \mu)\) has already been defined (note \(w\cdot \mu = ws\cdot \mu\)). Apply the exact functor \(T_\mu^\lambda\) and use Theorem 7.14: the top Verma section \(M(w\cdot \mu)\) of \(T(w\cdot \mu)\) is sent to a two-step module \[\begin{align*} N = {M(w\cdot \lambda) \over M(ws\cdot \lambda) } ,\end{align*}\] which is a non-split extension, indecomposable with \(\operatorname{Head}\, N = L(w\cdot \lambda)\).

The other sections \(M(x\cdot \mu)\) of \(T(w\cdot \mu)\) have \(x\cdot \mu < w\cdot \mu\). Applying \(T_\mu^\lambda\) to these produces two-step modules like \(N\), but with highest weight \(xs \cdot \lambda\) or \(x\cdot \lambda\) (whichever is greater), and in either case this is less than \(ws\cdot \lambda\).


Thus \(ws \cdot \lambda\) is the unique largest weight (occurring with multiplicity one) in \(T_\mu^\lambda T(w\cdot \mu)\), which is a tilting module by Prop 11.1e. Set \(T(ws\cdot \lambda)\) to be the indecomposable summand of this module involving the weight \(ws\cdot \lambda\); this has the required properties in (a).


We can now translate \(T(ws\cdot \lambda)\) to the walls of \(\mkern 1.5mu\overline{\mkern-1.5muws\cdot C\mkern-1.5mu}\mkern 1.5mu\) which were not already walls of \(\mkern 1.5mu\overline{\mkern-1.5muw\cdot C\mkern-1.5mu}\mkern 1.5mu\). For weights \(\nu\) in such walls, the translated module will have a 1-dimensional \(\nu{\hbox{-}}\)weight space (Theorem 7.6), so we can take \(T(\nu)\) to be the indecomposable summand containing that weight space, completing the inductive step.

Proof (of (b))

Let \(T\) be any indecomposable tilting module having \(\lambda\) as one of its maximal weights. By the remark concerning Prop 3.7a, there is an embedding \(M(\lambda) \hookrightarrow T\) at the bottom of a standard filtration, and \(T/M(\lambda)\) has a standard filtration. Thus \(\operatorname{Ext}^1(T/M(\lambda), T(\lambda)) = 0\), using Prop 11.1f and Exercise 6.12.

Applying \(\hom({\,\cdot\,}, T(\lambda))\) to \[\begin{align*}0 \to M(\lambda) \xrightarrow{f} T \to T/M(\lambda) \to 0\end{align*}\] yields \[\begin{align*}\hom(T, T(\lambda)) \xrightarrow{f^*} \hom(M(\lambda), T(\lambda)) \to \operatorname{Ext}^1(T/M(\lambda), T(\lambda)) = 0 \to \cdots.\end{align*}\] Thus \(f^*\) is surjective, and the embedding \(M(\lambda) \hookrightarrow T(\lambda)\) lifts to a map \(\phi: T\to T(\lambda)\).

Similarly, reversing the roles of \(T\) and \(T(\lambda)\), we get a map \(\psi: T(\lambda) \to T\) which is the identity on the specified copies of \(M(\lambda)\) in each. Thus we get an endomorphism \(\phi \circ \psi\) of \(T(\lambda)\), which is an isomorphism on the \(\lambda{\hbox{-}}\)weight space \(T(\lambda)_\lambda\), and an endomorphism \(\psi\circ \phi\) of \(T\), which is an isomorphism at least on the \({\mathbb{C}}{\hbox{-}}\)span of \(v_\lambda^+ \in M(\lambda) \subset T\) (although maybe not on all of \(T_\lambda\), as claimed in Humphreys).

By Fitting’s lemma, an endomorphism of a finite-length indecomposable module is either nilpotent or invertible. The two compositions cannot be nilpotent, since they are isomorphisms on \({\mathbb{C}}v_\lambda^+\) viewed in \(T(\lambda)\) and in \(T\); hence both are invertible, which forces \(\phi\) to be an isomorphism, so \(T \cong T(\lambda)\).

Proof (of (c))
Take \(T\) to be a tilting module…

Monday April 27th

For \(x\in W\) with \(xs > x\), Theorem 7.14 gives a non-split SES \[\begin{align*} 0 \to M(xs\cdot \lambda) \to T_\mu^\lambda M(x\cdot \mu) \to M(x\cdot \lambda) \to 0 .\end{align*}\]

Since \(w\cdot \mu\) is the highest weight of \(T(w\cdot \mu)\) and occurs with multiplicity one, applying this to all Verma sections \(M(x\cdot \mu)\) (including \(x= w\)) in a standard filtration of \(T(w\cdot \mu)\) shows that \(T_\mu^\lambda T(w\cdot \mu)\) has highest weight \(ws\cdot \lambda\), occurring with multiplicity one.

By Prop 11.1e and Theorem 11.2, \(T_\mu^\lambda T(w\cdot \mu) \cong T(ws \cdot \lambda) \oplus T\) where \(T\) is a tilting module in \({\mathcal{O}}_{\chi_\lambda}\) having all weights less than \(ws\cdot \lambda\). It suffices to show \(T=0\), or equivalently \(T_\lambda^\mu T = 0\).

Lemma (Translating and Inverting Doubles the Character)
For \(\lambda, \mu\) as above and any \(M\in {\mathcal{O}}_{\chi_\mu}\), \(\operatorname{ch}T_\lambda^\mu T_\mu^\lambda M = 2 \operatorname{ch}M\).

Proof: Writing \(\operatorname{ch}M\) as a \({\mathbb{Z}}\)-linear combination of the \(\operatorname{ch}M(x\cdot \mu)\) with \(x\in W\) and \(xs > x\) (recall \(M(xs \cdot \mu) = M(x \cdot \mu)\)), it suffices to show this for \(M = M(x\cdot \mu)\). But \(T_\lambda^\mu T_\mu^\lambda M(x\cdot \mu)\) is computed by applying \(T_\lambda^\mu\) to the SES above, and we know \[\begin{align*} T_\lambda^\mu M(xs\cdot \lambda) = M(xs \cdot \mu) = M(x\cdot \mu) = T_\lambda^\mu M(x\cdot \lambda) .\end{align*}\] Thus \(\operatorname{ch}T_\lambda^\mu T_\mu^\lambda M(x\cdot \mu) = 2\operatorname{ch}M(x\cdot \mu)\).

\(\hfill\blacksquare\)
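As a check in \({\mathfrak{sl}}(2, {\mathbb{C}})\) (not from the lecture), take \(\lambda = -2\), \(\mu = -1\), \(x = 1\), so \(M(x\cdot \mu) = M(-1)\): \[\begin{align*} \operatorname{ch}T_\mu^\lambda M(-1) &= \operatorname{ch}M(0) + \operatorname{ch}M(-2), \\ \operatorname{ch}T_\lambda^\mu T_\mu^\lambda M(-1) &= \operatorname{ch}M(-1) + \operatorname{ch}M(-1) = 2\operatorname{ch}M(-1) .\end{align*}\]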

Now by the lemma, \(T_\lambda^\mu T(ws \cdot \lambda) \oplus T_\lambda^\mu T = T_\lambda^\mu T_\mu^\lambda T(w\cdot \mu)\) has formal character \(2 \operatorname{ch}T(w\cdot \mu)\). Since it is a tilting module, we must have \(T_\lambda^\mu T_\mu^\lambda T(w\cdot \mu) \cong T(w\cdot \mu) \oplus T(w\cdot \mu)\). In particular, it has highest weight \(w\cdot \mu\) with multiplicity 2.

If we can show that \(T_\lambda^\mu T(ws\cdot \lambda)\) already has \(w\cdot \mu\) as a weight with multiplicity 2, it will follow that the remaining term must be zero as desired.

Start with an embedding \(\phi: M(ws\cdot \lambda) \hookrightarrow T(ws\cdot \lambda)\). Using Theorem 6.13c, \(\operatorname{Ext}^1\) vanishes between Vermas and dual Vermas, and so we have \(\operatorname{Ext}^1 (T(ws\cdot \lambda), M(w\cdot\lambda)^\vee) = 0\).

Dualizing, \(\operatorname{Ext}^1(M(w\cdot \lambda), T(ws\cdot \lambda)) = 0\), so any such extension splits. Applying \(\hom({\,\cdot\,}, T(ws\cdot \lambda))\) to the SES above, we get a LES \[\begin{align*} \hom(T_\mu^\lambda M(w\cdot \mu), T(ws\cdot \lambda)) \to \hom(M(ws\cdot \lambda), T(ws\cdot \lambda)) \to \operatorname{Ext}^1(M(w\cdot \lambda), T(ws\cdot\lambda)) \to \cdots .\end{align*}\]

Since the last term vanishes, the map \(\phi\) in the middle term lifts to the first term, i.e. \(\phi\) extends to a map \(T_\mu^\lambda M(w\cdot \mu) \to T(ws\cdot \lambda)\), which we still denote \(\phi\).

Proposition (Injective Embedding of Vermas into Tilting Modules)
\((\ker\phi)_{w\cdot \lambda} = 0\).
Proof

If not, then since \(\phi\) restricted to \(M(ws\cdot \lambda)\) is injective, the original SES shows that some nonzero \(v\in (\ker \phi)_{w\cdot \lambda}\) maps onto (a nonzero multiple of) the highest weight vector of \(M(w\cdot \lambda)\).

But every \(z\in T_\mu^\lambda M(w\cdot \mu)\) can be written uniquely (since the SES splits as vector spaces) as \(z = uv + m\) where \(u\in U({\mathfrak{n}}^-)\) and \(m\in M(ws\cdot \lambda)\). Then \(\phi(z) = 0 + \phi(m) = m \in M(ws\cdot \lambda) \subset T(ws\cdot\lambda)\), since \(\phi\) is the identity on this submodule. But then \(\phi\) provides a splitting of the non-split SES, a contradiction.

Thus \(\phi\) induces a nonzero homomorphism \[\begin{align*} \mkern 1.5mu\overline{\mkern-1.5mu\phi \mkern-1.5mu}\mkern 1.5mu: M(w\cdot \lambda) \cong T_\mu^\lambda M(w\cdot \mu) / M(ws\cdot \lambda) \to T(ws\cdot \lambda) / M(ws\cdot \lambda) .\end{align*}\]

In particular, \(w\cdot \lambda\) is a weight of the quotient module, and is a maximal weight – keeping in mind that \(T(ws\cdot \lambda)\) has a standard filtration with sections \(M(x\cdot \lambda)\) for \(x \leq ws\) with \(M(ws\cdot \lambda)\) occurring just once. The quotient module also has a standard filtration, so \(M(w\cdot \lambda)\) must occur in the standard filtration of \(T(ws\cdot \lambda)\) along with \(M(ws\cdot \lambda)\).

Since \(w<ws\), Theorem 1.4 also provides an injection \(M(w\cdot \lambda) \hookrightarrow M(ws\cdot \lambda)\). Applying \(T_\lambda^\mu\), the two Verma sections \(M(ws\cdot \lambda)\) and \(M(w\cdot \lambda)\) are both sent to \(M(w\cdot \mu)\), so \(w\cdot \mu\) is a weight of \(T_\lambda^\mu T(ws\cdot \lambda)\) with multiplicity at least 2, as needed.

\(\hfill\blacksquare\)

Corollary (Standard Multiplicities of Vermas in Tilting Modules)
Under the hypotheses of the theorem, \((T(ws\cdot \lambda): M(xs \cdot \lambda)) = (T(ws\cdot \lambda) : M(x\cdot \lambda))\).
Proof
\(T(ws\cdot \lambda) = T_\mu^\lambda T(w\cdot \mu)\), and WLOG \(x < xs\), since the claimed formula is symmetric in \(x\) and \(xs\) and \(xs\cdot \mu = x\cdot \mu\). Now tilting modules (and thus their filtration multiplicities) are determined by their formal characters. Using the SES above, we see that each occurrence of \(M(x\cdot \mu)\) as a section of \(T(w\cdot \mu)\) leads to exactly one occurrence each of \(M(xs\cdot \lambda)\) and \(M(x\cdot \lambda)\) in the character of \(T(ws \cdot \lambda)\).
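In \({\mathfrak{sl}}(2, {\mathbb{C}})\), for instance (a check, not from the lecture), with \(\lambda = -2\), \(x = w = 1\), and \(xs = ws = s_\alpha\): \[\begin{align*} (T(ws\cdot\lambda) : M(xs \cdot \lambda)) = (T(0) : M(0)) = 1 = (T(0) : M(-2)) = (T(ws\cdot\lambda) : M(x\cdot\lambda)) ,\end{align*}\] matching the standard filtration of \(P(-2) \cong T(0)\) noted earlier.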

Remark: Both the theorem and corollary can fail when \(ws<w\).


  1. Note that the product of reduced expressions is not usually reduced, so the length isn’t additive.↩︎

  2. Note that this is because conjugating a simple reflection may not yield a simple reflection again.↩︎

  3. Note that this implies that \(1\) is not only a minimal element in this order, but an infimum.↩︎

  4. Note that in Jantzen’s book, \(X\) is used for \(\Lambda\) and \(X^+\) correspondingly.↩︎

  5. See Humphreys #1, #20.2, p. 110.↩︎

  6. Ascending chain condition on submodules, i.e. no infinite filtrations by submodules.↩︎

  7. Recall: this is the center of \(U({\mathfrak{g}})\), i.e. \(\dim{\operatorname{span}}~Z({\mathfrak{g}})v < \infty\) for all \(v\in M\).↩︎