# Definitions

• Indecomposable: doesn’t decompose as $$A \oplus B$$ with $$A, B$$ nonzero. Weaker than irreducible.
• Irreducible: simple, i.e. no nontrivial proper submodules. Implies indecomposable.
• Completely reducible: Direct sum of irreducibles.
• Solvable: Derived series terminates.
• Borel: maximal solvable subalgebra.
• Semisimple (module): Direct sum of simple modules.
• Semisimple (element/operator): Acts in a diagonalizable way.
• Antidominant weight: $${\left\langle {\lambda + \rho},~{\alpha^\vee} \right\rangle} \not\in{\mathbb{Z}}^{>0}$$, equivalently $$M(\lambda) = L(\lambda)$$.
• Dominant weight: $${\left\langle {\lambda + \rho},~{\alpha^\vee} \right\rangle} \not\in {\mathbb{Z}}^{< 0}$$.
• Regular weight: $$\lambda$$ is regular iff the isotropy/stabilizer group $${\operatorname{Stab}}_W(\lambda) \mathrel{\vcenter{:}}=\left\{{w\in W{~\mathrel{\Big|}~}w\lambda = \lambda}\right\}= 1$$, equivalently $${\left\lvert {W\lambda} \right\rvert} = {\left\lvert {W} \right\rvert}$$; for the dot action this says $${\left\langle {\lambda + \rho},~{\alpha^\vee} \right\rangle} \neq 0$$ for all $$\alpha\in \Phi$$.
• Singular weight: Not regular.
• Linked: $$\mu \sim \lambda \iff \mu \in W\cdot \lambda$$, the orbit of $$\lambda$$ under the dot action of $$W$$, a.k.a. the linkage class of $$\lambda$$.
• Socle: Direct sum of all simple submodules.
• Radical: Intersection of all maximal submodules, smallest submodule such that quotient is semisimple.
• Head: $$M / \mathrm{rad}(M)$$.

# List of Notation

• $$M(\lambda)$$: Verma Modules

• $$L(\lambda)$$: Unique simple quotient of $$M(\lambda)$$.

• $$N(\lambda)$$ the maximal submodule of $$M(\lambda)$$

• The root system \begin{align*}\Phi = \left\{{\alpha \in {\mathfrak{h}}^\vee\setminus\left\{{0}\right\} {~\mathrel{\Big|}~}{\mathfrak{g}}_\alpha \neq 0}\right\}, \qquad {\mathfrak{g}}_\alpha \mathrel{\vcenter{:}}=\left\{{x\in{\mathfrak{g}}{~\mathrel{\Big|}~}[hx] = \alpha(h)x ~\forall h\in {\mathfrak{h}}}\right\}\end{align*} containing roots $$\alpha$$

• Abstractly: $$\Phi$$ spans a Euclidean space, $$c \alpha \in \Phi \implies c = \pm 1$$, $$(\beta, \alpha^\vee) \in {\mathbb{Z}}$$ for all $$\alpha, \beta \in \Phi$$, and $$\Phi$$ is closed under the reflections $$s_\alpha$$ about the hyperplanes orthogonal to the roots.
• $$\Phi^+$$ the corresponding positive system (choose a hyperplane not containing any root), $$\Phi \mathrel{\vcenter{:}}=\Phi^+ {\coprod}\Phi^-$$.

• \begin{align*}s_\alpha({\,\cdot\,}) \mathrel{\vcenter{:}}=({\,\cdot\,}) - 2{\left\langle {{\,\cdot\,}},~{\alpha} \right\rangle} \frac{\alpha}{{\left\lVert {\alpha} \right\rVert}^2}\end{align*} the corresponding reflection about the hyperplane $$H_\alpha$$

• $${\mathfrak{g}}_\alpha \mathrel{\vcenter{:}}=\left\{{x\in {\mathfrak{g}}{~\mathrel{\Big|}~}[hx] = \alpha(h)x ~\forall h\in {\mathfrak{h}}}\right\}$$ the corresponding root space

• The triangular decomposition \begin{align*}{\mathfrak{g}}= \bigoplus_{\alpha\in \Phi^-} {\mathfrak{g}}_{\alpha} \oplus {\mathfrak{h}}\oplus \bigoplus_{\alpha \in \Phi^+} {\mathfrak{g}}_{\alpha} \mathrel{\vcenter{:}}={\mathfrak{n}}^{-} \oplus {\mathfrak{h}}\oplus {\mathfrak{n}}^{+}\end{align*}

• $$\Delta$$ the corresponding simple system of size $$\ell$$, i.e. every $$\alpha \in \Phi^+$$ can be written $$\alpha = \sum_{\delta \in\Delta} c_\delta \delta$$ with $$c_\delta \in {\mathbb{Z}}^{\geq 0}$$.

• $$\Lambda = \left\{{\lambda \in E {~\mathrel{\Big|}~}{\left\langle {\lambda},~{\alpha^\vee} \right\rangle} \in {\mathbb{Z}}~\forall \alpha\in\Phi }\right\}$$ the integral weight lattice

• $$\Lambda^+ = {\mathbb{Z}}^+\Omega$$ the dominant integral weights

• $$\Omega \mathrel{\vcenter{:}}=\left\{{\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1, \cdots, \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell}\right\}$$ the fundamental weights
• $$[A: B]$$ the composition factor multiplicity of $$B$$ in a composition series for $$A$$.

• $$(A: B)$$ the composition factor multiplicity of $$B$$ in a standard filtration for $$A$$.

• $$\Phi_{[\lambda]} = \left\{{\alpha\in \Phi {~\mathrel{\Big|}~}{\left\langle {\lambda},~{\alpha^\vee} \right\rangle} \in {\mathbb{Z}}}\right\}$$ the integral root system of $$\lambda$$

• $$\Delta_{[\lambda]}$$ the corresponding simple system

• $$W_{[\lambda]}$$ the integral Weyl group of $$\lambda$$

• $$\mu \uparrow \lambda$$: strong linkage of weights

• $${\mathcal{O}}_{\chi_\lambda}$$: the block corresponding to $$\lambda$$.

• $$\operatorname{ch}M \mathrel{\vcenter{:}}=\sum_{\lambda \in {\mathfrak{h}}^\vee} \qty{\dim M_\lambda} e^{\lambda}$$ the formal character.

# Useful Facts

• $$\lambda$$ dominant integral $$\implies w\lambda \leq \lambda$$ for all $$w \in W$$.
• $$M(\lambda)$$ is simple $$\iff \lambda$$ is antidominant.
• The dot action is given by $$w\cdot \lambda = w(\lambda + \rho) - \rho$$.
• For any filtration $$0 = M^{n+1} \hookrightarrow M^n \hookrightarrow M^{n-1} \hookrightarrow\cdots \hookrightarrow M^1 \hookrightarrow M^0 = M$$, we have \begin{align*}\operatorname{ch}M = \sum_{i=0}^n \operatorname{ch}\qty{M^i/M^{i+1}},\end{align*} i.e. the character of $$M$$ is the sum of the characters of its subquotients; for a composition series these are the composition factors (with multiplicity).
• $$\operatorname{Head}\,(M(\lambda)) = L(\lambda)$$
• $${\operatorname{rad}}(M(\lambda)) = N(\lambda)$$
• $$\operatorname{Soc}\,(M(\lambda))$$ is simple: for $$\lambda$$ integral, $$\operatorname{Soc}\,(M(\lambda)) = M(\mu) = L(\mu)$$ for $$\mu$$ the unique antidominant weight in the linkage class determined by $$\lambda$$.
• Equivalently, for $$\lambda$$ dominant: $$\operatorname{Soc}\,(M(w \cdot \lambda)) = L(w_0 \cdot \lambda)$$.
• \begin{align*}[M(\lambda) : L(\mu)] \geq 1 \iff \mu \uparrow \lambda {\quad \text{(strong linkage)} \quad}\end{align*}

# SL2 Theory

Definition The group and the algebra:

\begin{align*} \operatorname{SL}(n, {\mathbb{C}}) &= \left\{{M \in \operatorname{GL}(n, {\mathbb{C}}) {~\mathrel{\Big|}~}\det(M) = 1}\right\} \\ {\mathfrak{sl}}(n, {\mathbb{C}}) &= \left\{{M \in {\mathfrak{gl}}(n, {\mathbb{C}}) {~\mathrel{\Big|}~}\operatorname{Tr}(M) = 0}\right\} .\end{align*}

• The usual representation on $${\mathbb{C}}^2$$: $$h$$ has eigenvalues $$\pm 1$$, yields $$L(1)$$.
• The adjoint representation on $${\mathbb{C}}^3$$: $$\operatorname{ad}h = \mathrm{diag}(2, 0, -2)$$ with eigenvalues $$0, \pm 2$$, yields $$L(2)$$.

Generated by \begin{align*} x = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} ,\quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} ,\quad y = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \end{align*}

with relations

\begin{align*} [hx] &= 2x \\ [hy] &= -2y \\ [xy] &= h \\ .\end{align*}

Some identifications: \begin{align*} \Phi &= A_1 \\ \dim {\mathfrak{h}}&= 1\\ \Lambda &\cong {\mathbb{Z}}\\ \Lambda_r &= 2{\mathbb{Z}}, \quad \Lambda/\Lambda_r \cong {\mathbb{Z}}/2{\mathbb{Z}}\\ \Lambda^+ &= \left\{{0, 1, 2, 3, \cdots}\right\} \\ W &= \left\{{1, s_\alpha}\right\}, \quad s_\alpha(\lambda) = -\lambda \\ \chi_\lambda = \chi_\mu &\iff \mu = \lambda, -\lambda-2 {\quad \text{(linked)} \quad}\\ \Pi(M(\lambda)) &= \left\{{\lambda, \lambda-2, \cdots}\right\} \\ \rho &= 1 \\ \alpha &= 2 \\ s_\alpha \cdot \lambda &= - \lambda - 2 .\end{align*}

For $$\lambda$$ dominant integral \begin{align*} N(\lambda) &\cong L(-\lambda - 2) \\ \dim L(\lambda) &= \lambda + 1 \\ \Pi(L(\lambda)) &= \left\{{\lambda, \lambda-2, \cdots, -\lambda}\right\} \\ \dim \qty{L(\lambda)}_\mu &= 1 \quad\quad\forall \mu = \lambda-2i .\end{align*}

• Finite-dimensional simple modules are parameterized by dominant integral weights: \begin{align*}M(\lambda) \text{ is simple } \iff \lambda \not\in{\mathbb{Z}}^{\geq 0} = \Lambda^+ \iff \dim L(\lambda) = \infty\end{align*} Finite-dimensional irreducible representations (i.e. simple modules) of $${\mathfrak{sl}}(2, {\mathbb{C}})$$ are in bijection with dominant integral weights $$n\in \Lambda^+$$, i.e. $$n\in {\mathbb{Z}}^{\geq 0}$$, are denoted $$L(n)$$, and each admits a basis $$\left\{{v_i{~\mathrel{\Big|}~}0\leq i \leq n}\right\}$$ where \begin{align*} h \cdot v_{i} &= (n-2 i) v_{i}\\ x \cdot v_{i} &= (n-i+1) v_{i-1}\\ y \cdot v_{i} &= (i+1)v_{i+1} ,\end{align*} setting $$v_{-1} = v_{n + 1}=0$$ and letting $$v_0$$ be the unique (up to scalar) vector in $$L(n)$$ annihilated by $$x$$.

• $$\mathrm{rad}~M(\lambda) = N(\lambda)$$
• $$\mathrm{hd}~M(\lambda) = L(\lambda)$$.
• $$M(\lambda)$$ for $$\lambda > 0$$ not integral is simple; here $$W_{[\lambda]} = 1$$, so $$-\lambda-2\not\in W_{[\lambda]}\cdot \lambda$$ even though $$s_\alpha \cdot \lambda = -\lambda - 2$$ under the full dot action.
• $$\lambda \geq 0 \implies \operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_\alpha \cdot \lambda)$$ where $$s_\alpha \cdot \lambda = -\lambda - 2$$.
• For $$\lambda \geq 0$$, $$\dim L(\lambda) = \lambda + 1$$ and so \begin{align*}\operatorname{ch}L(\lambda) = e^\lambda + e^{\lambda-2} + \cdots + e^{-\lambda} = {e^{\lambda + 1} - e^{\lambda - 1} \over e^1 - e^{-1}}.\end{align*}
• For $$\lambda \in {\mathbb{Z}}^{\geq 0}$$, the composition factors of $$M(\lambda)$$ are $$L(\lambda)$$ and $$L(-\lambda - 2)$$.
• There is an exact sequence \begin{align*} 0 \to M(-\lambda - 2) \to M(\lambda) \to L(\lambda) \to 0 .\end{align*}
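The $$L(n)$$ action formulas above can be checked numerically. The following is a quick sketch (not from the notes) that builds the matrices of $$h, x, y$$ on $$L(n)$$ in the basis $$v_0, \cdots, v_n$$ and verifies the $${\mathfrak{sl}}(2)$$ relations:

```python
import numpy as np

def sl2_irrep(n):
    """Matrices of h, x, y on L(n) in the basis v_0, ..., v_n, following
    h.v_i = (n-2i) v_i,  x.v_i = (n-i+1) v_{i-1},  y.v_i = (i+1) v_{i+1}."""
    d = n + 1
    h = np.diag([float(n - 2 * i) for i in range(d)])
    x = np.zeros((d, d))
    y = np.zeros((d, d))
    for i in range(1, d):
        x[i - 1, i] = n - i + 1   # x raises the weight (lowers the index)
    for i in range(d - 1):
        y[i + 1, i] = i + 1       # y lowers the weight (raises the index)
    return h, x, y

def br(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

h, x, y = sl2_irrep(4)
assert np.allclose(br(h, x), 2 * x)   # [h x] = 2x
assert np.allclose(br(h, y), -2 * y)  # [h y] = -2y
assert np.allclose(br(x, y), h)       # [x y] = h
```

For $$n = 1$$ this recovers the usual representation on $${\mathbb{C}}^2$$.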

Characters: \begin{align*} \operatorname{ch}M(\lambda) &= \operatorname{ch}L(\lambda) + \operatorname{ch}L(s_\alpha \cdot \lambda) \\ \operatorname{ch}M(s_\alpha \cdot \lambda) &= \operatorname{ch}L(s_\alpha \cdot \lambda) .\end{align*}

We can think of this pictorially as the ‘head’ on top of the socle:

\begin{align*} M(\lambda) = \frac{L(\lambda)}{L(s_\alpha \cdot \lambda)} .\end{align*}

We can invert these formulas to recover $$\operatorname{ch}L$$, which corresponds to inverting this (unitriangular) matrix:

\begin{align*} \operatorname{ch}L(\lambda) &= \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_\alpha \cdot \lambda) \\ \operatorname{ch}L(s_\alpha \cdot \lambda) &= \operatorname{ch}M(s_\alpha \cdot \lambda) .\end{align*}

If $$\lambda \not\in\Lambda^+$$, then $$\operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda)$$ and $$b_{\lambda, 1} = 1, b_{\lambda, s_\alpha} = 0$$ are again independent of $$\lambda \in {\mathfrak{h}}^\vee\setminus \Lambda^+$$.
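The closed character formula for $$L(\lambda)$$ above can be sanity-checked by clearing the denominator: $$(e^{1} - e^{-1})\operatorname{ch}L(n) = e^{n+1} - e^{-(n+1)}$$. A small sketch (not from the notes), representing Laurent polynomials as exponent-to-coefficient dicts:

```python
from collections import defaultdict

def mul(p, q):
    """Multiply Laurent polynomials stored as {exponent: coefficient}."""
    r = defaultdict(int)
    for a, ca in p.items():
        for b, cb in q.items():
            r[a + b] += ca * cb
    return {e: c for e, c in r.items() if c != 0}

def ch_L(n):
    """ch L(n) = e^n + e^{n-2} + ... + e^{-n} for dominant integral n."""
    return {n - 2 * i: 1 for i in range(n + 1)}

for n in range(6):
    lhs = mul({1: 1, -1: -1}, ch_L(n))      # (e - e^{-1}) * ch L(n)
    assert lhs == {n + 1: 1, -(n + 1): -1}  # = e^{n+1} - e^{-(n+1)}
```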

# SL3

$${\mathfrak{sl}}(3, {\mathbb{C}})$$ has root system $$A_2$$: \begin{align*} \Phi &= \left\{{\pm \alpha, \pm \beta, \pm\gamma \coloneqq\alpha + \beta}\right\} \\ \Delta &= \left\{{\alpha, \beta}\right\} \\ \Phi^+ &= \left\{{\alpha, \beta, \gamma}\right\} \\ W &= \left\{{1, s_\alpha, s_\beta, s_\alpha s_\beta, s_\beta s_\alpha, w_0 = s_\alpha s_\beta s_\alpha = s_\beta s_\alpha s_\beta}\right\} .\end{align*}
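The description of $$W$$ above can be verified by brute force. A small sketch (not from the notes), modeling $$A_2$$ inside $${\mathbb{R}}^3$$ so that the simple reflections swap adjacent coordinates and $$W \cong S_3$$:

```python
from itertools import product

# Model A_2 in R^3: simple roots a = e1 - e2, b = e2 - e3, so s_a, s_b
# act by swapping adjacent coordinates and W is a copy of S_3.
def compose(p, q):
    """(p . q)(i) = p(q(i)) for permutations stored as tuples of images."""
    return tuple(p[i] for i in q)

e   = (0, 1, 2)
s_a = (1, 0, 2)   # swaps the first two coordinates
s_b = (0, 2, 1)   # swaps the last two coordinates

# Generate W by closing {1, s_a, s_b} under composition.
W = {e, s_a, s_b}
while True:
    new = {compose(p, q) for p, q in product(W, W)} - W
    if not new:
        break
    W |= new

assert len(W) == 6
# The longest element: w0 = s_a s_b s_a = s_b s_a s_b.
assert compose(s_a, compose(s_b, s_a)) == compose(s_b, compose(s_a, s_b))
```

Here $$w_0$$ comes out as the coordinate-reversing permutation.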

For $$\lambda$$ regular, integral, and antidominant:

• $$M(\lambda) = L(\lambda)$$
• No other $$M(w\cdot \lambda)$$ is simple
• $$\operatorname{Soc}\,(M(w\cdot \lambda)) = L(\lambda)$$.
• $$[M(w\cdot \lambda) : L(\lambda)] = [M(w\cdot \lambda) : L(w\cdot \lambda)] = 1$$ for all $$w$$.
• $$\operatorname{ch}L(s_\alpha \cdot \lambda) = \operatorname{ch}M(s_\alpha \cdot \lambda) - \operatorname{ch}M(\lambda)$$.
• $$\operatorname{ch}M(s_\alpha \cdot \lambda) = \operatorname{ch}L(s_\alpha \cdot \lambda) + \operatorname{ch}L(\lambda)$$.
• The Jantzen filtration when $$w \in \left\{{s_{\alpha\beta}, s_{\beta\alpha}, w_0}\right\}$$ is given by \begin{align*} M(w\cdot \lambda)^0 &= M(w\cdot \lambda) \\ M(w\cdot \lambda)^1 &= ? \\ M(w\cdot \lambda)^2 &= L(\lambda) \\ M(w\cdot \lambda)^{\geq 3} &= 0 .\end{align*}

# Wednesday January 8

Course Website: https://faculty.franklin.uga.edu/brian/math-8030-spring-2020

## Chapter Zero: Review

Material can be found in Humphreys 1972.

Practice writing lowercase mathfrak characters!

In this course, we’ll take $$k = {\mathbb{C}}$$.

Recall that a Lie Algebra is a vector space $${\mathfrak{g}}$$ with a bracket \begin{align*} [{\,\cdot\,}, {\,\cdot\,}]: {\mathfrak{g}}\otimes{\mathfrak{g}}\to {\mathfrak{g}} \end{align*} satisfying

• $$[x x] = 0$$ for all $$x\in {\mathfrak{g}}$$

• $$[x [y z]] = [[x y] z] + [y [x z]]$$ (The Jacobi identity)

Note that the last axiom says exactly that $$\operatorname{ad}x$$ acts as a derivation.

Show that $$[x y] = -[y x]$$.

Hint: Consider $$[x+y, x+y]$$. Note that the converse holds iff $$\operatorname{char} k \neq 2$$.

Show that a nonzero Lie algebra never has an identity for its bracket.

$${\mathfrak{g}}$$ is abelian iff $$[x y] = 0$$ for all $$x,y\in{\mathfrak{g}}$$.

There are also the usual notions (define for rings/algebras) of:

• Subalgebras,
• A vector subspace that is closed under brackets.
• Homomorphisms
• I.e. a linear transformation $$\phi$$ that commutes with the bracket, i.e. $$\phi([x y]) = [\phi(x) \phi(y)]$$.
• Ideals

Given a vector space (possibly infinite-dimensional) over $$k$$, then (exercise) $${\mathfrak{gl}}(V) \mathrel{\vcenter{:}}=\mathrm{End}_k(V)$$ is a Lie algebra when equipped with $$[f g] = f\circ g - g\circ f$$.

A representation of $${\mathfrak{g}}$$ is a homomorphism $$\phi: {\mathfrak{g}}\to {\mathfrak{gl}}(V)$$ for some $$V$$.

The adjoint representation is \begin{align*} \operatorname{ad}: {\mathfrak{g}}&\to {\mathfrak{gl}}({\mathfrak{g}}) \\ \operatorname{ad}(x)(y) &\mathrel{\vcenter{:}}=[x y] .\end{align*}
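As a sanity check (not in the notes), one can verify for $${\mathfrak{sl}}(2, {\mathbb{C}})$$ that $$\operatorname{ad}$$ is indeed a Lie algebra homomorphism into $${\mathfrak{gl}}({\mathfrak{g}})$$; the basis ordering $$(x, h, y)$$ is a choice made here:

```python
import numpy as np

# The standard basis of sl(2, C).
x = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
y = np.array([[0., 0.], [1., 0.]])
basis = [x, h, y]

def br(a, b):
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the ordered basis (x, h, y)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(z):
    """The matrix of ad(z) acting on g; column k is [z, basis_k] in coordinates."""
    return np.column_stack([coords(br(z, b)) for b in basis])

# ad is a homomorphism of Lie algebras: ad([u v]) = [ad u, ad v] in gl(g).
for u in basis:
    for v in basis:
        assert np.allclose(ad(br(u, v)), br(ad(u), ad(v)))
```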

Representations give $$V$$ the structure of a module over $${\mathfrak{g}}$$, where $$x\cdot v \mathrel{\vcenter{:}}=\phi(x)(v)$$. All of the usual module axioms hold, where now \begin{align*} [x y] \cdot v = x\cdot(y\cdot v) - y\cdot(x\cdot v) \end{align*}

The trivial representation $$V = k$$ where $$x\cdot a = 0$$.

$$V$$ is irreducible (or simple) iff $$V$$ has exactly two $${\mathfrak{g}}{\hbox{-}}$$invariant subspaces, namely $$0, V$$.

$$V$$ is completely reducible iff $$V$$ is a direct sum of simple modules, and indecomposable iff $$V$$ can not be written as $$V = M \oplus N$$, a direct sum of proper submodules.

There are several constructions for creating new modules from old ones:

• The dual module $$V^\vee\mathrel{\vcenter{:}}=\hom_k(V, k)$$ where \begin{align*} (x\cdot f)(v) = -f(x\cdot v) \end{align*} for $$f\in V^\vee, x\in {\mathfrak{g}}, v\in V$$.

• The direct sum $$V\oplus W$$ where \begin{align*} x\cdot(v, w) = (x\cdot v, x\cdot w) \end{align*}

• The tensor product where \begin{align*} x\cdot(v\otimes w) = x\cdot v \otimes w + v\otimes x\cdot w \end{align*}

• $$\hom_k(V, W)$$ where \begin{align*} (x\cdot f)(v) = x\cdot f(v) - f(x\cdot v) \end{align*}

• Note that if we take $$W=k$$ then the first term vanishes and this recovers the dual.

## Semisimple Lie Algebras

The derived ideal is given by $${\mathfrak{g}}^{(1)} \mathrel{\vcenter{:}}=[{\mathfrak{g}}{\mathfrak{g}}] \mathrel{\vcenter{:}}={\operatorname{span}}_k\qty{\left\{{[x y] {~\mathrel{\Big|}~}x,y\in{\mathfrak{g}}}\right\}}$$.

This is the analog of the commutator subgroup.

$${\mathfrak{g}}$$ is abelian iff $${\mathfrak{g}}^{(1)} = \left\{{0}\right\}$$, and 1-dimensional algebras are always abelian.

This follows because if $$[x y] \mathrel{\vcenter{:}}= xy - yx$$ then $$[x y] = 0 \iff xy = yx$$.

A Lie algebra $${\mathfrak{g}}$$ is simple iff the only ideals of $${\mathfrak{g}}$$ are $$0, {\mathfrak{g}}$$ and $${\mathfrak{g}}^{(1)} \neq \left\{{0}\right\}$$.

Note that this rules out the zero algebra, abelian Lie algebras, and in particular 1-dimensional Lie algebras.

The derived series is defined by $${\mathfrak{g}}^{(2)} = [{\mathfrak{g}}^{(1)} {\mathfrak{g}}^{(1)}]$$, continuing inductively. $${\mathfrak{g}}$$ is said to be solvable if $${\mathfrak{g}}^{(n)} = 0$$ for some $$n$$.

Abelian implies solvable.

The lower central series of $${\mathfrak{g}}$$ is defined by $${\mathfrak{g}}_0 \mathrel{\vcenter{:}}= {\mathfrak{g}}$$ and $${\mathfrak{g}}_{j+1} \mathrel{\vcenter{:}}=[{\mathfrak{g}}, {\mathfrak{g}}_j]$$. The Lie algebra $${\mathfrak{g}}$$ is nilpotent if this series terminates at zero.

Note that an element $$x$$ of a Lie algebra is ad-nilpotent iff $$\operatorname{ad}x$$ is nilpotent as a matrix, i.e. $$\operatorname{ad}(x)^n =0$$ for some $$n$$. There is a result, Engel’s theorem, which relates these two notions: a Lie algebra is nilpotent iff all of its elements are ad-nilpotent (with potentially different $$n$$s depending on $$x$$).

$${\mathfrak{g}}$$ is semisimple (s.s.) iff $${\mathfrak{g}}$$ has no nonzero solvable ideals.

Show that simple implies semisimple.

1. Semisimple algebras $${\mathfrak{g}}$$ will usually have solvable subalgebras.
2. $${\mathfrak{g}}$$ is semisimple iff $${\mathfrak{g}}$$ has no nonzero abelian ideals.

The Killing form is given by $$\kappa: {\mathfrak{g}}\otimes{\mathfrak{g}}\to k$$ where $$\kappa(x, y) = \operatorname{Tr}(\operatorname{ad}x ~\operatorname{ad}y)$$, which is a symmetric bilinear form.

\begin{align*} \kappa([x y], z) = \kappa(x, [y z]) \end{align*}

If $$\beta: V^{\otimes 2} \to k$$ is any symmetric bilinear form, then its radical is defined by

\begin{align*} {\operatorname{rad}}\beta = \left\{{v\in V {~\mathrel{\Big|}~}\beta(v, w) = 0 ~\forall w\in V}\right\} .\end{align*}

A bilinear form $$\beta$$ is nondegenerate iff $$\mathrm{rad}\beta = 0$$.

$$\mathrm{rad}\kappa {~\trianglelefteq~}{\mathfrak{g}}$$ is an ideal, which follows by the above associative property.

$${\mathfrak{g}}$$ is semisimple iff $$\kappa$$ is nondegenerate.

The standard example of a semisimple lie algebra is

\begin{align*} {\mathfrak{g}}= {\mathfrak{sl}}(n, {\mathbb{C}}) \mathrel{\vcenter{:}}=\left\{{x\in {\mathfrak{gl}}(n, {\mathbb{C}}) {~\mathrel{\Big|}~}\operatorname{Tr}(x) = 0 }\right\} \end{align*}
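For $${\mathfrak{sl}}(2, {\mathbb{C}})$$, $$\kappa$$ can be computed directly from the definition, and both associativity and nondegeneracy checked. A numerical sketch (not from the notes; the basis ordering $$(x, h, y)$$ is a choice made here):

```python
import numpy as np

# The standard basis of sl(2, C).
x = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
y = np.array([[0., 0.], [1., 0.]])
basis = [x, h, y]

def br(a, b):
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the ordered basis (x, h, y)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(z):
    """ad(z) as a 3x3 matrix; column k is [z, basis_k] in coordinates."""
    return np.column_stack([coords(br(z, b)) for b in basis])

def kappa(u, v):
    """Killing form: kappa(u, v) = Tr(ad(u) ad(v))."""
    return np.trace(ad(u) @ ad(v))

# Nondegeneracy: the Gram matrix of kappa in the basis (x, h, y) is invertible.
K = np.array([[kappa(u, v) for v in basis] for u in basis])
assert abs(np.linalg.det(K)) > 1e-9
assert np.isclose(kappa(h, h), 8) and np.isclose(kappa(x, y), 4)

# Associativity: kappa([u v], w) = kappa(u, [v w]).
for u in basis:
    for v in basis:
        for w in basis:
            assert np.isclose(kappa(br(u, v), w), kappa(u, br(v, w)))
```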

From now on, $${\mathfrak{g}}$$ will denote a semisimple Lie algebra over $${\mathbb{C}}$$.

Every finite dimensional representation of a semisimple $${\mathfrak{g}}$$ is completely reducible.

In other words, the category of finite-dimensional representations is relatively uninteresting – there are no extensions, so everything is a direct sum. Thus once you classify the simple algebras (which isn’t terribly difficult), you have complete information.

# Friday January 10th

## Root Space Decomposition

Let $${\mathfrak{g}}$$ be a finite dimensional semisimple Lie algebra over $${\mathbb{C}}$$. Recall that this means it has no nonzero solvable ideals. A more useful characterization is that the Killing form $$\kappa: {\mathfrak{g}}\otimes{\mathfrak{g}}\to {\mathbb{C}}$$ is a non-degenerate symmetric (associative) bilinear form. The running example we’ll use is $${\mathfrak{g}}= {\mathfrak{sl}}(n, {\mathbb{C}})$$, the trace zero $$n\times n$$ matrices. Let $${\mathfrak{h}}$$ be a maximal toral subalgebra, where $$x\in{\mathfrak{g}}$$ is toral if $$x$$ is semisimple, i.e. $$\operatorname{ad}x$$ is semisimple (i.e. diagonalizable).

$${\mathfrak{h}}$$ is the diagonal matrices in $${\mathfrak{sl}}(n, {\mathbb{C}})$$.

$${\mathfrak{h}}$$ is abelian, so $$\operatorname{ad}{\mathfrak{h}}$$ consists of commuting semisimple elements, which (theorem from linear algebra) can be simultaneously diagonalized.

This leads to the root space decomposition,

\begin{align*} {\mathfrak{g}}= {\mathfrak{h}}\oplus \bigoplus_{\alpha\in \Phi} {\mathfrak{g}}_\alpha .\end{align*}

where \begin{align*} {\mathfrak{g}}_\alpha = \left\{{x\in {\mathfrak{g}}{~\mathrel{\Big|}~}[h x] = \alpha(h) x ~\forall h\in {\mathfrak{h}}}\right\} \end{align*} where $$\alpha \in {\mathfrak{h}}^\vee$$ is a linear functional.

Here $${\mathfrak{h}}= C_{\mathfrak{g}}({\mathfrak{h}})$$, so $$[h x] = 0$$ corresponds to zero eigenvalues, and (fact) it turns out that $${\mathfrak{h}}$$ is its own centralizer.

We then obtain a set of roots of $${\mathfrak{h}}, {\mathfrak{g}}$$ given by \begin{align*} \Phi = \left\{{\alpha\in{\mathfrak{h}}^\vee{~\mathrel{\Big|}~}\alpha\neq 0, {\mathfrak{g}}_\alpha \neq \left\{{0}\right\}}\right\} \end{align*}

$${\mathfrak{g}}_\alpha = {\mathbb{C}}E_{ij}$$ for some $$i\neq j$$, the matrix with a 1 in the $$i,j$$ position and zero elsewhere.

The restriction $${\left.{\kappa}\right|_{{\mathfrak{h}}}}$$ is nondegenerate, so we can identify $${\mathfrak{h}}$$ and $${\mathfrak{h}}^\vee$$ via $$\kappa$$ (as one can always do for a vector space with a nondegenerate bilinear form), and $$\kappa$$ induces a bilinear form $$({\,\cdot\,}, {\,\cdot\,})$$ on $${\mathfrak{h}}^\vee$$. We thus get a correspondence \begin{align*} {\mathfrak{h}}^\vee\ni \lambda \iff t_\lambda \in {\mathfrak{h}}, \qquad \lambda(h) = \kappa(t_\lambda, h), \quad\text{where } (\lambda, \mu) = \kappa(t_\lambda, t_\mu) .\end{align*}

## Facts About $$\Phi$$ and Root Spaces

Let $$\alpha, \beta \in \Phi$$ be roots.

1. $$\Phi$$ spans $${\mathfrak{h}}^\vee$$ and does not contain zero.

2. If $$\alpha \in \Phi$$ then $$-\alpha \in \Phi$$, but no other scalar multiple of $$\alpha$$ is in $$\Phi$$.

3. $$(\beta, \alpha^\vee) \in {\mathbb{Z}}$$

4. $$s_\alpha(\beta) \mathrel{\vcenter{:}}=\beta - (\beta, \alpha^\vee)\alpha \in \Phi$$.


An aside:

• $$\dim {\mathfrak{g}}_\alpha = 1$$.

• If $$0 \neq x_\alpha \in {\mathfrak{g}}_\alpha$$ then there exists a unique $$y_\alpha \in {\mathfrak{g}}_{-\alpha}$$ such that $$x_\alpha, y_\alpha, h_\alpha \mathrel{\vcenter{:}}=[x_\alpha, y_\alpha]$$ span a 3-dimensional subalgebra isomorphic to $${\mathfrak{sl}}(2,{\mathbb{C}})$$, via \begin{align*} x_\alpha = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad y_\alpha = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad h_\alpha =\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} .\end{align*}

• Under the correspondence $${\mathfrak{h}}\iff {\mathfrak{h}}^\vee$$ induced by $$\kappa$$, \begin{align*} h_\alpha \iff \alpha^\vee\mathrel{\vcenter{:}}=\frac{2\alpha}{(\alpha, \alpha)} \end{align*} Thus for all $$\lambda \in{\mathfrak{h}}^\vee$$, \begin{align*} \lambda(h_\alpha) = (\lambda, \alpha^\vee) = \frac{2(\lambda, \alpha)}{(\alpha, \alpha)} .\end{align*}

• If $$\alpha + \beta \neq 0$$, then $$\kappa({\mathfrak{g}}_\alpha, {\mathfrak{g}}_\beta) = 0$$.

If $$\alpha + \beta \in \Phi$$, then $$[{\mathfrak{g}}_\alpha {\mathfrak{g}}_\beta] = {\mathfrak{g}}_{\alpha+\beta}$$.

Example: If $${\mathfrak{g}}_\alpha = {\mathbb{C}}E_{ij}, {\mathfrak{g}}_\beta = {\mathbb{C}}E_{jk}$$ where $$k\neq i$$, then $$[E_{ij}, E_{jk}]= E_{ik}$$.

• $${\mathfrak{g}}$$ is generated as an algebra by the root spaces $${\mathfrak{g}}_\alpha$$
• Root strings: If $$\beta \neq \pm\alpha$$, then the roots of the form $$\alpha + k\beta$$ for $$k\in {\mathbb{Z}}$$ form an unbroken string \begin{align*} \alpha - r\beta, \cdots, \alpha-\beta, \alpha,\alpha+\beta,\cdots,\alpha + q \beta \end{align*} consisting of at most 4 roots, where $$r-q = (\alpha, \beta^\vee)$$.
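The root-string statement can be verified exhaustively in type $$A_2$$. A sketch (not from the notes), with the convention that the string through $$\alpha$$ in the direction $$\beta$$ runs from $$\alpha - r\beta$$ to $$\alpha + q\beta$$ with $$r - q = (\alpha, \beta^\vee)$$:

```python
from fractions import Fraction

# The A_2 root system realized in R^3: simple roots a = e1 - e2, b = e2 - e3.
def add(u, v): return tuple(s + t for s, t in zip(u, v))
def scale(c, u): return tuple(c * s for s in u)
def dot(u, v): return sum(s * t for s, t in zip(u, v))
def covee(u, v):
    """(u, v^vee) = 2(u, v)/(v, v)."""
    return Fraction(2 * dot(u, v), dot(v, v))

a, b = (1, -1, 0), (0, 1, -1)
Phi = {r for base in (a, b, add(a, b)) for r in (base, scale(-1, base))}

# For each pair of non-proportional roots, {k : alpha + k*beta in Phi} is an
# unbroken interval [-r, q] with r - q = (alpha, beta^vee).
for al in Phi:
    for be in Phi:
        if be in (al, scale(-1, al)):
            continue
        ks = [k for k in range(-4, 5) if add(al, scale(k, be)) in Phi]
        assert ks == list(range(min(ks), max(ks) + 1))  # unbroken string
        r, q = -min(ks), max(ks)
        assert r - q == covee(al, be)
        assert len(ks) <= 4  # strings have at most 4 roots
```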

(Figure: the circled roots form the root string for $$\beta$$.)

In general, a subset $$\Phi$$ of a real Euclidean space $$E$$ satisfying conditions (1) through (4) is an (abstract) root system. Note that when $$\Phi$$ comes from a $${\mathfrak{g}}$$, we define $$E\mathrel{\vcenter{:}}={\mathbb{R}}\Phi$$.

### The Root System

There exists a subset $$\Delta \subseteq \Phi$$ such that

• $$\Delta$$ is a basis for $${\mathfrak{h}}^\vee$$
• $$\beta\in\Phi$$ implies that $$\beta = \sum_{\alpha \in \Delta} c_\alpha \alpha$$ with either
• All $$c_\alpha \in {\mathbb{Z}}_{\geq 0}$$, in which case $$\beta \in \Phi^+$$ ($$\beta > 0$$), or
• All $$c_\alpha \in {\mathbb{Z}}_{\leq 0}$$, in which case $$\beta \in \Phi^-$$ ($$\beta < 0$$).

$$\Delta$$ is called a simple system.

If $$\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}$$ then $$\Phi^+$$ are the positive roots, and if $$\Phi^+ \ni \beta = \sum_{\alpha \in \Delta} c_\alpha \alpha$$, then the height of $$\beta$$ is defined as

\begin{align*} \text{ht}(\beta) \mathrel{\vcenter{:}}= \sum c_\alpha \in {\mathbb{Z}}_{> 0} \end{align*}

Note that $${\mathbb{Z}}\Phi \mathrel{\vcenter{:}}=\Lambda_r$$ is a lattice, and is referred to as the root lattice, and $$\Lambda_r \subset E = {\mathbb{R}}\Phi$$.

We also have \begin{align*} \Phi^\vee = \left\{{\beta^\vee{~\mathrel{\Big|}~}\beta \in \Phi}\right\} ,\end{align*} the dual root system, which is a root system with simple system $$\Delta^\vee$$.

\begin{align*} {\mathfrak{n}}= {\mathfrak{n}}^+ &\mathrel{\vcenter{:}}=\sum_{\beta > 0} {\mathfrak{g}}_\beta && \text{Upper triangular with zero diagonal,} \\ {\mathfrak{n}}^- &\mathrel{\vcenter{:}}=\sum_{\beta > 0} {\mathfrak{g}}_{-\beta} && \text{Lower triangular with zero diagonal,} \\ {\mathfrak{b}}&\mathrel{\vcenter{:}}={\mathfrak{h}}+ {\mathfrak{n}} && \text{Upper triangular (the "Borel" subalgebra),} \\ {\mathfrak{b}}^- &\mathrel{\vcenter{:}}={\mathfrak{h}}+ {\mathfrak{n}}^- && \text{Lower triangular.} .\end{align*}

\begin{align*} {\mathfrak{g}}= {\mathfrak{n}}^- \oplus {\mathfrak{h}}\oplus {\mathfrak{n}} \end{align*}

If $$\beta \in \Phi^+\setminus \Delta$$, and if $$\alpha \in \Delta$$ such that $$(\beta, \alpha^\vee) > 0$$, then $$\beta - (\beta,\alpha^\vee)\alpha \in \Phi^+$$ has height strictly less than the height of $$\beta$$.

By root strings, $$\beta-\alpha\in\Phi^+$$ is a positive root of height one less than that of $$\beta$$, yielding a way to induct on heights (a useful technique).

### Weyl Groups

For $$\alpha \in \Phi$$, define

\begin{align*} s_\alpha : {\mathfrak{h}}^\vee&\to {\mathfrak{h}}^\vee\\ \lambda &\mapsto \lambda - (\lambda, \alpha^\vee)\alpha .\end{align*}

This is the reflection in the hyperplane in $$E$$ perpendicular to $$\alpha$$.

Note that $$s_\alpha^2 = \text{id}$$.

Define $$W$$ as the subgroup of $$\operatorname{GL}(E)$$ generated by all $$s_\alpha$$ for $$\alpha \in \Phi$$, this is the Weyl group of $${\mathfrak{g}}$$ or $$\Phi$$, which is finite and $$W = \left\langle{s_\alpha {~\mathrel{\Big|}~}\alpha\in\Delta}\right\rangle$$ is generated by simple reflections.

By (4), $$W$$ leaves $$\Phi$$ invariant. In fact $$W$$ is a finite Coxeter group with generators $$S = \left\{{s_\alpha {~\mathrel{\Big|}~}\alpha\in \Delta}\right\}$$ and defining relations $$(s_\alpha s_\beta)^{m(\alpha, \beta)} = 1$$ for $$\alpha,\beta \in \Delta$$ where $$m(\alpha, \beta) \in \left\{{2,3,4,6}\right\}$$ when $$\alpha \neq \beta$$ and $$m(\alpha, \alpha) = 1$$.

When these numerical conditions are met, $$W$$ is referred to as a crystallographic group.

# Monday January 13th

## Lengths

Recall that we have a root space decomposition $${\mathfrak{g}}= {\mathfrak{h}}\oplus \bigoplus_{\beta \in \Phi} {\mathfrak{g}}_\beta$$ for finite dimensional semisimple lie algebras over $${\mathbb{C}}$$. We have $$s_\beta(\lambda) = \lambda - (\lambda, \beta^\vee)\beta$$, for $$\lambda \in {\mathfrak{h}}^\vee$$ and the Weyl group \begin{align*} W = \left\langle{s_\beta {~\mathrel{\Big|}~}\beta\in\Phi}\right\rangle = \left\langle{s_\alpha {~\mathrel{\Big|}~}\alpha \in \Delta}\right\rangle \end{align*} where $$\Delta = \left\{{a_i}\right\}$$ are the simple roots.

For $$w\in W$$, we can take a reduced expression for $$w$$ by writing $$w = s_1 \cdots s_n$$ with $$s_i$$ simple and $$n$$ minimal. The length is uniquely determined, but not the expression. So we define $$\ell(w) \mathrel{\vcenter{:}}= n$$, with $$\ell(1) \mathrel{\vcenter{:}}= 0$$.

1. $$\ell(w)$$ is the size of the set $$\left\{{\beta\in\Phi^+ {~\mathrel{\Big|}~}w\beta < 0}\right\}$$

• The above set is equal to $$\Phi^+ \cap w^{-1}\Phi^-$$.
• In particular, for $$\beta \in \Phi^+$$, $$\beta$$ is simple (i.e. $$\beta \in \Delta$$) iff $$\ell(s_\beta) = 1$$.
• Note: $$\alpha$$ is the only root that $$s_\alpha$$ sends to a negative root, so $$s_\alpha(\beta) > 0$$ for all $$\beta\in\Phi^+\setminus\left\{{\alpha}\right\}$$.
2. $$\ell(w) = \ell(w^{-1})$$ for all $$w\in W$$, so $$\ell(w)$$ is also the size of $$\Phi^+ \cap w\Phi^-$$ (replacing $$w^{-1}$$ with $$w$$)

3. There exists a unique $$w_0 \in W$$ with $$\ell(w_0)$$ maximal such that $$\ell(w_0) = {\left\lvert {\Phi^+} \right\rvert}$$ and $$w_0(\Phi^+) = \Phi^-$$.

• Also $$\ell(w_0 w) = \ell(w_0) - \ell(w)$$
4. For $$\alpha \in \Phi^+$$, $$w\in W$$, we have either

\begin{align*} \ell(w s_\alpha) > \ell (w) \iff w(\alpha) > 0 \\ \ell(w s_\alpha) < \ell (w) \iff w(\alpha) < 0 \\ .\end{align*}

Taking inverses yields $$\ell(s_\alpha w) > \ell(w) \iff w^{-1}\alpha > 0$$.
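Fact (1) can be checked in type $$A_2$$, where $$W \cong S_3$$ and $$w\beta < 0$$ for $$\beta \in \Phi^+$$ corresponds to an inversion of the permutation $$w$$. A sketch (not from the notes):

```python
# Model W(A_2) = S_3 acting on coordinate positions {0, 1, 2}: the positive
# roots are e_i - e_j for i < j, and w(e_i - e_j) = e_{w(i)} - e_{w(j)} is
# negative exactly when w(i) > w(j), i.e. when (i, j) is an inversion of w.
def compose(p, q):
    """(p . q)(i) = p(q(i)) for permutations stored as tuples of images."""
    return tuple(p[i] for i in q)

def inversions(w):
    """#{beta in Phi^+ : w(beta) < 0}."""
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

# Minimal word length in the simple reflections s_1 = (0 1), s_2 = (1 2),
# computed by breadth-first search from the identity.
gens = [(1, 0, 2), (0, 2, 1)]
length = {(0, 1, 2): 0}
frontier = [(0, 1, 2)]
while frontier:
    nxt = []
    for w in frontier:
        for s in gens:
            ws = compose(w, s)
            if ws not in length:
                length[ws] = length[w] + 1
                nxt.append(ws)
    frontier = nxt

assert len(length) == 6
for w, l in length.items():
    assert l == inversions(w)        # l(w) = |{beta > 0 : w(beta) < 0}|
assert length[(2, 1, 0)] == 3        # the longest element w_0
```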

## Bruhat Order

Let $$S$$ be the set of simple reflections, i.e. $$S = \left\{{s_\alpha {~\mathrel{\Big|}~}\alpha \in \Delta}\right\}$$. Then define \begin{align*} T \mathrel{\vcenter{:}}=\bigcup_{w\in W} wSw^{-1}= \left\{{s_\beta {~\mathrel{\Big|}~}\beta\in\Phi^+}\right\} .\end{align*} This is the set of all reflections in $$W$$ through hyperplanes in $$E$$.

We’ll write $$w' \xrightarrow{t} w$$ to mean $$w=tw'$$ and $$\ell(w') < \ell(w)$$. Note that in the literature, it’s often also assumed that $$\ell(w') = \ell(w) - 1$$; in this case, we say $$w$$ covers $$w'$$, and refer to this as the covering relation. So $$w' \to w$$ means that $$w' \xrightarrow{t} w$$ for some $$t\in T$$. We extend this to a partial order: $$w' < w$$ means that there exists a chain \begin{align*} w' = w_0 \to w_1 \to \cdots \to w_n = w. \end{align*} This is called the Bruhat-Chevalley order on $$W$$.

$$w' < w \implies \ell(w') < \ell(w)$$, so $$1\in W$$ is the unique minimal element in $$W$$ under this order.

It turns out that if we set $$w = w' t$$ instead, this results in the same partial order. If we instead restrict $$T$$ to simple reflections, this yields the weak Bruhat order; in this case, the left and right versions differ, yielding the left/right weak Bruhat orders respectively.

Recall that semisimple Lie algebras yield finite crystallographic Coxeter groups.

For $$(W, S)$$ a Coxeter group,

1. $$w' \leq w$$ iff $$w'$$ occurs as a subexpression/subword of every reduced expression $$s_1 \cdots s_n$$ for $$w$$, where a subexpression is any subcollection of the $$s_i$$ in the same order.

2. Adjacent elements $$w', w$$ in the Bruhat order (i.e. $$w' < w$$ and there does not exist a $$w''$$ such that $$w' < w'' < w$$) differ in length by 1.

3. If $$w' < w$$ and $$s\in S$$, then $$w' s \leq w$$ or $$w's \leq ws$$ (or both). Consequently, if $$\ell(w_2) = \ell(w_1) + 2$$, then the size of $$\left\{{w\in W {~\mathrel{\Big|}~}w_1 < w < w_2}\right\}$$ is either 0 or 2.

## Properties of Universal Enveloping Algebras

Let $${\mathfrak{g}}$$ be any Lie algebra, and $$\phi: {\mathfrak{g}}\to A$$ be any Lie algebra map into an associative algebra $$A$$. Then there exists an associative algebra $$U({\mathfrak{g}})$$ and a map $$i: {\mathfrak{g}}\to U({\mathfrak{g}})$$ such that $$\phi = \tilde\phi \circ i$$, where $$\tilde \phi: U({\mathfrak{g}}) \to A$$ is a unique map in the category of associative algebras (the universal property; diagram omitted).

Moreover any Lie algebra homomorphism $${\mathfrak{g}}_1 \to {\mathfrak{g}}_2$$ induces a morphism of associative algebras $$U({\mathfrak{g}}_1) \to U({\mathfrak{g}}_2)$$, and $${\mathfrak{g}}$$ generates $$U({\mathfrak{g}})$$ as an algebra.

$$U({\mathfrak{g}})$$ can be constructed as \begin{align*} U({\mathfrak{g}}) = T({\mathfrak{g}})/ \left\langle{ x\otimes y - y\otimes x - [x,y] {~\mathrel{\Big|}~}x,y\in{\mathfrak{g}}}\right\rangle .\end{align*} Note that this ideal is not necessarily homogeneous.

• Usually noncommutative
• Left and right Noetherian
• No zero divisors
• $${\mathfrak{g}}\curvearrowright U({\mathfrak{g}})$$ by the extension of the adjoint action, $$(\operatorname{ad}x)(u) = xu - ux$$ for $$x\in {\mathfrak{g}}, u\in U({\mathfrak{g}})$$.

If $$\left\{{x_1, \cdots x_n}\right\}$$ is a basis for $${\mathfrak{g}}$$, then the ordered monomials $$\left\{{x_1^{t_1} x_2^{t_2} \cdots x_n^{t_n} {~\mathrel{\Big|}~}t_i \in {\mathbb{Z}}^+}\right\}$$ (noting that $$x^n = x\otimes x \otimes\cdots\otimes x$$ and $${\mathbb{Z}}^+$$ includes 0) form a basis for $$U({\mathfrak{g}})$$ (the PBW theorem).

$$i:{\mathfrak{g}}\to U({\mathfrak{g}})$$ is injective, so we can think of $${\mathfrak{g}}\subseteq U({\mathfrak{g}})$$. If $${\mathfrak{g}}$$ is semisimple, then it admits a triangular decomposition $${\mathfrak{g}}= {\mathfrak{n}}^- \oplus {\mathfrak{h}}\oplus {\mathfrak{n}}$$ and choosing a compatible basis for $${\mathfrak{g}}$$ yields $$U({\mathfrak{g}}) = U({\mathfrak{n}}^-)\otimes U({\mathfrak{h}}) \otimes U({\mathfrak{n}})$$.

If $$\phi: {\mathfrak{g}}\to {\mathfrak{gl}}(V)$$ is any Lie algebra representation, it induces an algebra representation of $$U({\mathfrak{g}})$$ on $$V$$ and vice-versa. It satisfies \begin{align*}x\cdot (y \cdot v) - y\cdot (x \cdot v) = [x y] \cdot v\end{align*} for all $$x,y \in {\mathfrak{g}}$$ and $$v\in V$$.

Note that this lets us go back and forth between Lie algebra representations and associative algebra representations, i.e. the theory of modules over rings.

A note on notation: $$\mathcal Z({\mathfrak{g}})$$ denotes the center of $$U({\mathfrak{g}})$$.

## Integral Weights

We have a Euclidean space $$E = {\mathbb{R}}\Phi$$, the $${\mathbb{R}}{\hbox{-}}$$span of the roots.

We also have the integral weight lattice \begin{align*} \Lambda = \left\{{\lambda \in E {~\mathrel{\Big|}~}(\lambda, \alpha^\vee) \in {\mathbb{Z}}~\forall \alpha\in\Phi (\text{or}~ \Phi^+ ~\text{or}~ \Delta)}\right\} .\end{align*}

There is a sublattice $$\Lambda_r \subseteq \Lambda$$, which is an additive subgroup of finite index. There is a partial order on $$\Lambda$$, extended to $$E$$ and $${\mathfrak{h}}^\vee$$. We write \begin{align*} \mu \leq \lambda \iff \lambda - \mu \in {\mathbb{Z}}^+ \Delta = {\mathbb{Z}}^+ \Phi^+ \end{align*}

For a basis $$\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}$$, define the fundamental weights $$\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1, \cdots, \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell$$ by $$(\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_i ,\alpha_j^\vee) = \delta_{ij}$$; these form a $${\mathbb{Z}}{\hbox{-}}$$basis for $$\Lambda$$. Then $$\Lambda$$ is a free abelian group of rank $$\ell$$, and \begin{align*} \Lambda^+ = {\mathbb{Z}}^+ \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1 + \cdots + {\mathbb{Z}}^+ \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell \end{align*} is the set of dominant integral weights.
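As a concrete example (type $$A_1$$, i.e. $${\mathfrak{sl}}(2,{\mathbb{C}})$$, with $$\ell = 1$$ and $$\Phi = \left\{{\pm\alpha}\right\}$$): the single fundamental weight satisfies

\begin{align*} (\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu, \alpha^\vee) = 1, \qquad \alpha = 2\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu, \qquad \Lambda = {\mathbb{Z}}\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu \supseteq \Lambda_r = {\mathbb{Z}}\alpha ,\end{align*}

so the index of the root lattice in the weight lattice is $$[\Lambda : \Lambda_r] = 2$$.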

# Wednesday January 15th

## Review

The Weyl vector is given by \begin{align*} \rho = \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_1 + \cdots + \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_\ell = \frac 1 2 \sum_{\beta \in \Phi^+} \beta \in \Lambda^+ .\end{align*}

Two useful facts:

• If $$\alpha \in \Delta$$ then $$(\rho, \alpha^\vee) = 1$$.
• $$s_\alpha(\rho) = \rho - \alpha$$ for $$\alpha \in \Delta$$.
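The second identity follows from the first in one line, using the reflection formula $$s_\alpha(\lambda) = \lambda - (\lambda, \alpha^\vee)\alpha$$:

\begin{align*} s_\alpha(\rho) = \rho - (\rho, \alpha^\vee)\alpha = \rho - 1\cdot \alpha = \rho - \alpha .\end{align*}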

Let $$\lambda \in \Lambda^+$$; a few facts:

1. The size of $$\left\{{\mu\in \Lambda^+ {~\mathrel{\Big|}~}\mu \leq \lambda}\right\}$$ (with the partial order from last time) is finite.
2. $$w\lambda \leq \lambda$$ for all $$w\in W$$.

The dominant Weyl chamber in the Euclidean space $$E$$, relative to the fixed base $$\Delta$$, is \begin{align*} C = \left\{{\lambda \in E {~\mathrel{\Big|}~}(\lambda, \alpha) > 0 ~ \forall \alpha\in\Delta}\right\} \end{align*}

Note that the union of the hyperplanes orthogonal to the roots splits $$E$$ into connected components, the (open) Weyl chambers; $$C$$ is the component distinguished by the choice of $$\Delta$$.

• The Weyl chambers are the connected components of the complement of the union of the hyperplanes orthogonal to roots.
• They’re in bijection with the bases (simple systems) of $$\Phi$$.
• They’re permuted simply transitively by $$W$$.

We also let $$\mkern 1.5mu\overline{\mkern-1.5muC\mkern-1.5mu}\mkern 1.5mu$$ denote the closure of $$C$$, which is a fundamental domain for the action of $$W$$ on $$E$$.

## Weight Representations

For $$\lambda \in {\mathfrak{h}}^\vee$$, we let \begin{align*} M_\lambda \mathrel{\vcenter{:}}= \left\{{v\in M {~\mathrel{\Big|}~}h\cdot v = \lambda(h) v ~\forall h\in{\mathfrak{h}}}\right\} .\end{align*} denote a weight space of $$M$$ if $$M_\lambda \neq 0$$. In this case, $$\lambda$$ is a weight of $$M$$. The dimension of $$M_\lambda$$ is the multiplicity of $$\lambda$$ in $$M$$, and we define the set of weights as \begin{align*} \Pi(M) \mathrel{\vcenter{:}}= \left\{{\lambda \in {\mathfrak{h}}^\vee{~\mathrel{\Big|}~}M_\lambda \neq 0}\right\} .\end{align*}

If $$M = {\mathfrak{g}}$$ under the adjoint action, then $$\Pi(M) = \Phi \cup\left\{{0}\right\}$$.

The weight vectors for distinct weights are linearly independent. Thus there is a $${\mathfrak{g}}{\hbox{-}}$$submodule given by $$\sum_\lambda M_\lambda$$, which is in fact a direct sum. It may not be the case that $$\sum_\lambda M_\lambda = M$$, and can in fact be zero, although this is an odd situation. In our case, we’ll have a weight module $$M = \bigoplus_\lambda M_\lambda$$, where $${\mathfrak{h}}\curvearrowright M$$ semisimply.

## Finite Dimensional Modules

Recall Weyl’s complete reducibility theorem, which implies that any finite dimensional $${\mathfrak{g}}{\hbox{-}}$$module is a weight module. In fact, $${\mathfrak{n}}, {\mathfrak{n}}^- \curvearrowright M$$ nilpotently.

• $$\Pi(M) \subset \Lambda$$ is a subset of the integral lattice.
• $$\Pi(M)$$ is $$W{\hbox{-}}$$invariant.
• $$\dim M_\lambda = \dim M_{w\lambda}$$ for any $$\lambda \in \Pi(M), w\in W$$.

## Simple Finite Dimensional $${\mathfrak{sl}}(2, {\mathbb{C}}){\hbox{-}}$$modules

Fix the standard basis $$\left\{{x, h, y}\right\}$$ of $${\mathfrak{sl}}(2, {\mathbb{C}})$$ with

\begin{align*} [h x] &= 2x \\ [h y] &= -2y \\ [x y] &= h .\end{align*}

Since $$\dim {\mathfrak{h}}= 1$$, there is a bijection \begin{align*} {\mathfrak{h}}^\vee&\leftrightarrow {\mathbb{C}}\\ \Lambda &\leftrightarrow {\mathbb{Z}}\\ \Lambda_r &\leftrightarrow 2{\mathbb{Z}}\\ \alpha &\to 2\\ \rho &\to 1 .\end{align*}

There is a correspondence between weights and simple modules:

\begin{align*} \left\{{\substack{\text{Isomorphism classes of simple modules}}}\right\} &\iff \Lambda^+ = \left\{{0,1,2,3,\cdots}\right\} \\ L(\lambda) &\iff \lambda .\end{align*}

Moreover, $$L(\lambda)$$ has 1-dimensional weight spaces with weights $$\lambda, \lambda - 2, \cdots, -\lambda$$, and thus $$\dim L(\lambda) = \lambda + 1$$.

• $$L(0) = {\mathbb{C}}$$, the trivial representation,
• $$L(1) = {\mathbb{C}}^2$$, the natural representation where $${\mathfrak{sl}}(2, {\mathbb{C}})$$ acts by matrix multiplication,
• $$L(2) = {\mathfrak{g}}$$, the adjoint representation.

Choose a basis $$\left\{{v_0, v_1, \cdots, v_\lambda}\right\}$$ for $$L(\lambda)$$ so that $${\mathbb{C}}v_0 = M_{\lambda}$$, $${\mathbb{C}}v_1 = M_{\lambda - 2}$$, $$\cdots$$, $${\mathbb{C}}v_{\lambda} = M_{-\lambda}$$. Then

• $$h\cdot v_i = (\lambda - 2i) v_i$$
• $$x \cdot v_i = (\lambda - i + 1) v_{i-1}$$, where $$v_{-1} \mathrel{\vcenter{:}}= 0$$
• $$y \cdot v_i = (i + 1)v_{i+1}$$ where $$v_{\lambda + 1} \mathrel{\vcenter{:}}= 0$$.
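As a sanity check that these formulas define an $${\mathfrak{sl}}(2, {\mathbb{C}}){\hbox{-}}$$action, one can verify the bracket relation $$[x y] = h$$ on each basis vector $$v_i$$:

\begin{align*} x\cdot(y\cdot v_i) - y\cdot(x\cdot v_i) &= (i+1)(\lambda - i) v_i - (\lambda - i + 1)\, i\, v_i \\ &= (\lambda - 2i) v_i \\ &= h\cdot v_i .\end{align*}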

We then say $$L(\lambda)$$ is a highest weight module, since it is generated by a highest weight vector of weight $$\lambda$$. Then $$W = \left\{{1, s_\alpha}\right\}$$, where $$s_\alpha$$ acts as reflection through 0 under the identification $$\alpha = 2$$. (Figure: the Weyl group reflection in $${\mathfrak{sl}}_2({\mathbb{C}})$$.)

# Chapter 1: Category $$\mathcal O$$ Basics

The category of all $$U({\mathfrak{g}}){\hbox{-}}$$modules is too big to work with. Category $$\mathcal O$$, motivated by work of Verma in the 1960s, was introduced by Bernstein-Gelfand-Gelfand in the 1970s, and was later used to solve the Kazhdan-Lusztig conjecture.

## Axioms and Consequences

$$\mathcal O$$ is the full subcategory of $$U({\mathfrak{g}})$$ modules consisting of $$M$$ such that

1. $$M$$ is finitely generated as a $$U({\mathfrak{g}}){\hbox{-}}$$module.
2. $$M$$ is $${\mathfrak{h}}{\hbox{-}}$$semisimple, i.e. $$M$$ is a weight module \begin{align*} M = \bigoplus_{\lambda \in {\mathfrak{h}}^\vee} M_\lambda \end{align*}
3. $$M$$ is locally $${\mathfrak{n}}{\hbox{-}}$$finite, i.e.  \begin{align*} \dim_{\mathbb{C}}U({\mathfrak{n}}) v < \infty \qquad \forall v\in M .\end{align*}

If $$\dim M < \infty$$, then $$M$$ is $${\mathfrak{h}}{\hbox{-}}$$semisimple, and axioms 1, 3 are obvious.

Let $$M \in {\mathcal{O}}$$, then

1. $$\dim M_\mu < \infty$$ for all $$\mu \in {\mathfrak{h}}^\vee$$.
2. There exist $$\lambda_1, \cdots, \lambda_r \in {\mathfrak{h}}^\vee$$ such that \begin{align*} \Pi(M) \subset \bigcup_{i=1}^r (\lambda_i - {\mathbb{Z}}^+ \Phi^+) \end{align*}

By axiom 2, every $$v\in M$$ is a finite sum of weight vectors in $$M$$. We can thus assume that our finite generating set consists of weight vectors. We can then reduce to the case where $$M$$ is generated by a single weight vector $$v$$. So consider $$U({\mathfrak{g}}) v$$. By the PBW theorem, there is a triangular decomposition \begin{align*} U({\mathfrak{g}}) = U({\mathfrak{n}}^-) U({\mathfrak{h}}) U({\mathfrak{n}}) \end{align*}

By axiom 3, $$U({\mathfrak{n}}) \cdot v$$ is finite dimensional, so there are finitely many weights of finite multiplicity in the image. Then $$U({\mathfrak{h}})$$ acts by scalar multiplication, and $$U({\mathfrak{n}}^-)$$ produces the “cones” that result in the tree structure:

A weight of the form $$\mu = \lambda_i - \sum_j n_j \alpha_j$$ can arise only from the finitely many PBW monomials $$y_1^{t_1} \cdots y_m^{t_m}$$ of the appropriate weight applied to the generating weight vectors.

# Friday January 17th

Recall from last time: any $$M \in \mathcal O$$

1. is finitely generated,
2. is $${\mathfrak{h}}{\hbox{-}}$$semisimple, $$M = \oplus_{\lambda \in {\mathfrak{h}}^\vee} M_\lambda$$,
3. is locally $${\mathfrak{n}}{\hbox{-}}$$finite,
4. has $$\dim M_\mu < \infty$$ for all $$\mu \in {\mathfrak{h}}^\vee$$,
5. satisfies the forest condition for weights.
1. $$\mathcal O$$ is Noetherian

2. $$\mathcal O$$ is closed under quotients, submodules, finite direct sums

3. $$\mathcal O$$ is abelian (similar to a category of modules)

4. If $$M\in \mathcal O$$, $$\dim L < \infty$$, then $$L \otimes M \in \mathcal O$$ and the endofunctor $$M \mapsto L\otimes M$$ is exact

5. If $$M\in \mathcal O$$, then $$M$$ is locally $$Z({\mathfrak{g}}){\hbox{-}}$$finite

6. $$M\in \mathcal O$$ is a finitely generated $$U({\mathfrak{n}}^-){\hbox{-}}$$module.

See BA II, page 103.

Implied by (b), BA II Page 330.

Can check that $$L\otimes M$$ satisfies conditions 2 and 3 above; it remains to check the first condition (finite generation). Take a basis $$\left\{{v_i}\right\}$$ for $$L$$ and $$\left\{{w_j}\right\}$$ a finite set of generators for $$M$$. The claim is that $$B = \left\{{v_i \otimes w_j}\right\}$$ generates $$L\otimes M$$. Let $$N$$ be the submodule generated by $$B$$.

For any basis vector $$v_i$$, $$v_i\otimes w_j \in N$$. For arbitrary $$x\in {\mathfrak{g}}$$, compute \begin{align*}x\cdot(v_i\otimes w_j) = (x\cdot v_i) \otimes w_j + v_i\otimes(x\cdot w_j).\end{align*} The LHS lies in $$N$$, and the first term on the RHS lies in $$N$$ since $$x\cdot v_i$$ is a linear combination of the $$v_k$$; hence $$v_i \otimes (x\cdot w_j) \in N$$. By iterating, we find that $$v_i\otimes(u\cdot w_j) \in N$$ for all PBW monomials $$u$$, and such elements span $$L\otimes M$$. So $$L\otimes M \in \mathcal O$$.

Since $$v\in M$$ is a sum of weight vectors, wlog we can assume $$v \in M_\lambda$$ is a weight vector (where $$\lambda \in {\mathfrak{h}}^\vee$$). For any central element $$z\in Z({\mathfrak{g}})$$, we can compute \begin{align*}h\cdot(z\cdot v) = z \cdot (h\cdot v) = z \cdot \lambda(h) v = \lambda(h)z \cdot v.\end{align*} Thus $$z\cdot v\in M_\lambda$$. By (4), we know that $$\dim M_\lambda < \infty$$, so $$\dim {\operatorname{span}}~Z({\mathfrak{g}}) v < \infty$$ as well.
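To make local $$Z({\mathfrak{g}}){\hbox{-}}$$finiteness concrete in the smallest case (a standard example, not taken from the argument above): for $${\mathfrak{sl}}(2,{\mathbb{C}})$$ the Casimir element $$\Omega = h^2 + 2h + 4yx$$ lies in $$Z({\mathfrak{g}})$$, and on a maximal vector $$v^+$$ of weight $$\lambda$$,

\begin{align*} \Omega v^+ = (h^2 + 2h + 4yx)v^+ = (\lambda^2 + 2\lambda)v^+ ,\end{align*}

since $$x\cdot v^+ = 0$$; so $$Z({\mathfrak{g}})v^+$$ is 1-dimensional.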

By the axioms (take a finite generating set of weight vectors and apply $$U({\mathfrak{b}})$$, which preserves finite-dimensionality by local $${\mathfrak{n}}{\hbox{-}}$$finiteness), $$M$$ is generated by a finite dimensional $$U(\mathfrak b){\hbox{-}}$$submodule $$N$$. Since we have a triangular decomposition $$U({\mathfrak{g}}) = U({\mathfrak{n}}^-) U(\mathfrak b)$$, there is a basis of weight vectors for $$N$$ that generates $$M$$ as a $$U({\mathfrak{n}}^-)$$ module.

## Highest Weight Modules

A maximal vector $$v^+ \in M \in \mathcal O$$ is a nonzero vector such that $${\mathfrak{n}}\cdot v^+ = 0$$.

By properties 2 and 3, every nonzero $$M\in \mathcal O$$ has a maximal vector.

A highest weight module $$M$$ of highest weight $$\lambda$$ is a module generated by a maximal vector of weight $$\lambda$$, i.e.  \begin{align*}M = U({\mathfrak{g}}) v^+ = U({\mathfrak{n}}^-) U({\mathfrak{h}}) U({\mathfrak{n}}) v^+ = U({\mathfrak{n}}^-) v^+\end{align*}

Let $$M = U({\mathfrak{n}}^-)v^+$$ be a highest weight module, where $$v^+ \in M_\lambda$$. Fix $$\Phi^+ = \left\{{\beta_1, \cdots, \beta_n}\right\}$$ with root vectors $$y_i \in {\mathfrak{g}}_{\beta_i}$$.

1. $$M$$ is the $${\mathbb{C}}{\hbox{-}}$$span of the PBW monomials $$y_1^{t_1} \cdots y_m^{t_m} v^+$$, each of weight $$\lambda - \sum t_i \beta_i$$. Thus $$M$$ is a weight module.

2. All weights $$\mu$$ of $$M$$ satisfy $$\mu \leq \lambda$$

3. $$\dim M_\mu < \infty$$ for all $$\mu \in \Pi(M)$$, and $$\dim M_\lambda = 1$$. In particular, property (3) holds and $$M \in \mathcal O$$.

4. Every nonzero quotient of $$M$$ is a highest-weight module of highest weight $$\lambda$$.

5. Every submodule of $$M$$ is a weight module, and any submodule generated by a maximal vector of weight $$\mu < \lambda$$ is proper. If $$M$$ is simple, then the set of maximal vectors in $$M$$ is exactly $${\mathbb{C}}^{\times}v^+$$.

6. $$M$$ has a unique maximal submodule $$N$$ and a unique simple quotient $$L$$, thus $$M$$ is indecomposable.

7. All simple highest weight modules of highest weight $$\lambda$$ are isomorphic.

For such $$M$$, $$\dim \operatorname{End}(M) = 1$$. (Category $$\mathcal O$$ version of Schur’s Lemma, generalizes to infinite dimensional case)

Either obvious or follows from previous results. First few imply $$M$$ is in $$\mathcal O$$, and we know the latter hold for such modules.

$$N$$ is the sum $$\sum M_i$$ of all proper submodules of $$M$$; each $$M_i$$ misses $$M_\lambda$$, so $$N$$ is still proper. Take $$L = M/N$$. For indecomposability, a cleaner proof appears in section 1.3.

Let $$M_1 = U({\mathfrak{n}}^-)v_1^+$$ and $$M_2 = U({\mathfrak{n}}^-)v_2^+$$ be simple highest weight modules, where the maximal vectors $$v_i^+ \in (M_i)_\lambda$$ have the same weight. Set $$M_0 \mathrel{\vcenter{:}}= M_1 \oplus M_2$$; then $$v^+ \mathrel{\vcenter{:}}=(v_1^+, v_2^+)$$ is a maximal vector in $$M_0$$, so $$N \mathrel{\vcenter{:}}= U({\mathfrak{n}}^-) v^+$$ is a highest weight module of highest weight $$\lambda$$.

The two projections give maps $$N \to M_1$$ and $$N \to M_2$$; since e.g. $$N \to M_1$$ is not the zero map (it hits $$v_1^+$$) and $$M_1$$ is simple, it is a surjection.

By (f), $$N$$ has a unique simple quotient, so this forces $$M_1 \cong M_2$$. For (g): since $$M$$ is simple, any nonzero $${\mathfrak{g}}{\hbox{-}}$$endomorphism $$\phi$$ must be an isomorphism. Since $$\phi$$ is also an $${\mathfrak{h}}{\hbox{-}}$$morphism and $$\dim M_\lambda = 1$$, we have $$\phi(v^+) = cv^+$$ for some $$c\neq 0$$.

Since $$v^+$$ generates $$M$$ and \begin{align*} \phi(u\cdot v^+) = u \phi(v^+) = cu\cdot v^+ ,\end{align*} $$\phi$$ is multiplication by a constant.

# Wednesday January 22nd

Try problems 1.1 and 1.3* in Humphreys.

Recall: In category $$\mathcal O$$, we have finite dimensional, semisimple modules over $${\mathbb{C}}$$ with triangular decompositions.

If $$M$$ is any $$U({\mathfrak{g}})$$ module, then a weight vector $$v^+ \in M_\lambda$$ (so $$\lambda \in {\mathfrak{h}}^\vee$$) is primitive (a maximal vector) iff $${\mathfrak{n}}\cdot v^+ = 0$$. Note: it doesn’t have to be of maximal weight. $$M$$ is a highest weight module of highest weight $$\lambda$$ iff it’s generated over $$U({\mathfrak{g}})$$ by a maximal vector $$v^+$$ of weight $$\lambda$$; then $$M = U({\mathfrak{g}}) \cdot v^+$$.

See structure of highest weight modules, and irreducibility.

If $$0 \neq M\in\mathcal O$$, then $$M$$ has a finite filtration with quotients highest weight modules, i.e. $$M_0 \subset M_1 \subset \cdots \subset M_n = M$$ with $$M_i/M_{i-1}$$ highest weight modules.

Note that the quotients are not necessarily simple, so this isn’t a composition series, although we’ll show such a series exists later.

Let $$V$$ be the $${\mathfrak{n}}$$ submodule of $$M$$ generated by a finite set of weight vectors which generate $$M$$ as a $$U({\mathfrak{g}})$$ module, i.e. take the finite set of weight vectors and act on them by $$U({\mathfrak{n}})$$. Then $$\dim_{\mathbb{C}}V < \infty$$ since $$M$$ is locally $${\mathfrak{n}}{\hbox{-}}$$finite.

Note that $${\mathfrak{n}}$$ increases weights.

Induction on $$n = \dim V$$. If $$n=1$$, $$M$$ itself is a highest weight module. For $$n > 1$$, choose a weight vector $$v_1 \in V$$ of weight $$\lambda$$ which is maximal among all weights of $$V$$. Set $$M_1 \mathrel{\vcenter{:}}= U({\mathfrak{g}}) v_1$$; this is a highest weight submodule of $$M$$ of highest weight $$\lambda$$. ($${\mathfrak{n}}$$ must kill $$v_1$$; otherwise it would increase the weight, contradicting the maximality of $$\lambda$$ among the weights of $$V$$.)

Let $$\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu = M/M_1 \in \mathcal O$$; this is generated by the image $$\mkern 1.5mu\overline{\mkern-1.5muV\mkern-1.5mu}\mkern 1.5mu$$ of $$V$$, and thus $$\dim \mkern 1.5mu\overline{\mkern-1.5muV\mkern-1.5mu}\mkern 1.5mu < \dim V$$. By the IH, $$\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu$$ has the desired filtration, say \begin{align*}0 = \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_1 \subset \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_2 \subset \cdots \subset \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_{n-1} \subset \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_n = \mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu.\end{align*} Let $$\pi: M \to M/M_1$$; then the preimages $$\pi^{-1}(\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu_i)$$ give the filtration on $$M$$.

By isomorphism theorems, the quotients in the series for $$M$$ are isomorphic to the quotients for $$\mkern 1.5mu\overline{\mkern-1.5muM\mkern-1.5mu}\mkern 1.5mu$$.

## Verma and Simple Modules

Constructing universal highest weight modules using “algebraic induction”. Start with a nice subalgebra of $${\mathfrak{g}}$$ then “induce” via $$\otimes$$ to a module for $${\mathfrak{g}}$$.

Recall $${\mathfrak{g}}= {\mathfrak{n}}^- \oplus {\mathfrak{h}}\oplus {\mathfrak{n}}$$, here $${\mathfrak{h}}\oplus {\mathfrak{n}}$$ is the Borel subalgebra $${\mathfrak{b}}$$, and $${\mathfrak{n}}$$ corresponds to a fixed choice of positive roots in $$\Phi^+$$ with basis $$\Delta$$. Then $$U({\mathfrak{g}}) = U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}U({\mathfrak{b}})$$. Given any $$\lambda \in {\mathfrak{h}}^\vee$$, let $${\mathbb{C}}_\lambda$$ be the 1-dimensional $${\mathfrak{h}}{\hbox{-}}$$module (i.e. a 1-dimensional $${\mathbb{C}}{\hbox{-}}$$vector space) on which $${\mathfrak{h}}$$ acts by $$\lambda$$.

Let $$\left\{{1}\right\}$$ be a basis for $${\mathbb{C}}_\lambda$$, so $$h \cdot 1 = \lambda(h)1$$ for all $$h\in {\mathfrak{h}}$$. There is a projection $${\mathfrak{b}}\to {\mathfrak{b}}/ {\mathfrak{n}}\cong {\mathfrak{h}}$$, and we make $${\mathbb{C}}_\lambda$$ a $${\mathfrak{b}}{\hbox{-}}$$module via this map; this “inflates” $${\mathbb{C}}_\lambda$$ to a 1-dimensional $${\mathfrak{b}}{\hbox{-}}$$module on which $${\mathfrak{n}}$$ acts by 0.

Note that $${\mathfrak{b}}$$ is solvable, and by Lie’s Theorem, every finite dimensional irreducible $${\mathfrak{b}}{\hbox{-}}$$module is of the form $${\mathbb{C}}_\lambda$$ for some $$\lambda \in {\mathfrak{h}}^\vee$$.

\begin{align*} M(\lambda) \mathrel{\vcenter{:}}= U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} {\mathbb{C}}_\lambda \mathrel{\vcenter{:}}=\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}{\mathbb{C}}_\lambda \end{align*} is the Verma module of highest weight $$\lambda$$.

This process is called algebraic/tensor induction. This is a $$U({\mathfrak{g}})$$ module via left multiplication, i.e. acting on the first tensor factor.

Since $$U({\mathfrak{g}}) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}U({\mathfrak{h}})$$, we have $$M(\lambda) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\mathbb{C}}_\lambda$$, but at what level of structure?

• As a vector space (clear)
• As a $${\mathfrak{n}}^-{\hbox{-}}$$module via left multiplication
• As an $${\mathfrak{h}}{\hbox{-}}$$module via the tensor product action.

In particular, $$M(\lambda)$$ is a free $$U({\mathfrak{n}}^-){\hbox{-}}$$module of rank 1. Note that this always happens when tensoring with a vector space.

Consider $$v^+ \mathrel{\vcenter{:}}= 1\otimes 1 \in M(\lambda)$$. Note that $$U({\mathfrak{n}}^-)$$ is not homogeneous, so not graded, but does have a filtration. Then $$v^+$$ is nonzero, and freely generates $$M(\lambda)$$ as a $$U({\mathfrak{n}}^-){\hbox{-}}$$module. Moreover $${\mathfrak{n}}\cdot v^+ = 0$$ since for $$x\in {\mathfrak{g}}_\beta$$ for $$\beta \in \Phi^+$$, we have

\begin{align*} x(1\otimes 1) &= x\otimes 1 \\ &= 1\otimes x\cdot 1 \quad\text{since } x\in {\mathfrak{b}}\\ &= 1 \otimes 0 \quad\text{since } x\in {\mathfrak{n}} \text{ acts by } 0 \text{ on } {\mathbb{C}}_\lambda \\ &= 0 ,\end{align*}

and for $$h\in {\mathfrak{h}}$$,

\begin{align*} h(1\otimes 1) &= h1\otimes 1 \\ &= 1 \otimes h1\\ &=1 \otimes\lambda(h) 1 \\ &=\lambda(h) v^+ .\end{align*}

So $$M(\lambda)$$ is a highest weight module of highest weight $$\lambda$$, and thus $$M(\lambda) \in \mathcal O$$.

Any weight $$\lambda \in {\mathfrak{h}}^\vee$$ is the highest weight of some $$M\in \mathcal O$$. Let $$\Pi(M)$$ denote the set of weights of a module, then $$\Pi(M(\lambda)) = \lambda - {\mathbb{Z}}^+ \Phi^+$$.

By PBW, we can obtain a basis for $$M(\lambda)$$: fixing an ordering $$\left\{{\beta_1, \cdots, \beta_m}\right\} = \Phi^+$$ and root vectors $$0\neq y_i \in {\mathfrak{g}}_{-\beta_i}$$, the set $$\left\{{ y_1^{t_1} \cdots y_m^{t_m}v^+ {~\mathrel{\Big|}~}t_i \in {\mathbb{Z}}^+}\right\}$$ is a basis, and every weight of $$M(\lambda)$$ is of the form $$\lambda - \sum t_i \beta_i$$.

The functor $$\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}({\,\cdot\,}) = U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} {\,\cdot\,}$$ from the category of finite-dimensional $${\mathfrak{h}}{\hbox{-}}$$semisimple $${\mathfrak{b}}{\hbox{-}}$$modules to $$\mathcal O$$ is an exact functor, since it is naturally isomorphic to $$U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\,\cdot\,}$$ (which is clearly exact since we are tensoring a vector space over its ground field).

Let $$I$$ be the left ideal of $$U({\mathfrak{g}})$$ which annihilates $$v^+$$, so \begin{align*} I = \left\langle{{\mathfrak{n}}, h-\lambda(h)\cdot 1 {~\mathrel{\Big|}~}h\in{\mathfrak{h}}}\right\rangle .\end{align*} Since $$v^+$$ generates $$M(\lambda)$$ as a $$U({\mathfrak{g}}){\hbox{-}}$$module, a standard ring theory result gives $$M(\lambda) \cong U({\mathfrak{g}})/I$$, since $$I$$ is the annihilator of $$v^+$$.

Let $$M$$ be any highest weight module of highest weight $$\lambda$$ generated by $$v$$. Then $$I\cdot v = 0$$, so $$I$$ is the annihilator of $$v$$ and thus $$M$$ is a quotient of $$M(\lambda)$$. Thus $$M(\lambda)$$ is universal in the sense that every other highest weight module arises as a quotient of $$M(\lambda)$$.

By theorem 1.2, $$M(\lambda)$$ has a unique maximal submodule $$N(\lambda)$$ (nonstandard notation) and a unique simple quotient $$L(\lambda)$$ (standard notation).

Every simple module in $$\mathcal O$$ is isomorphic to $$L(\lambda)$$ for some $$\lambda \in {\mathfrak{h}}^\vee$$ and is determined uniquely up to isomorphism by its highest weight. Moreover, there is an analog of Schur’s lemma: \begin{align*}\dim \hom_{\mathcal O}(L(\mu), L(\lambda)) = \delta_{\mu\lambda}\end{align*}, i.e. it’s 1 iff $$\lambda=\mu$$ and 0 otherwise.

Up to isomorphism, we’ve found all of the simple modules in $$\mathcal O$$, though most are infinite-dimensional.

# Friday January 24th

A standard theorem about classifying simple modules in category $${\mathcal{O}}$$:

Theorem (Classification of Simple Modules)
Every simple module in $${\mathcal{O}}$$ is isomorphic to $$L(\lambda)$$ for some $$\lambda \in {\mathfrak{h}}^\vee$$, and is determined uniquely up to isomorphism by its highest weight. Moreover, $$\dim \hom_{\mathcal{O}}(L(\mu), L(\lambda)) = \delta_{\lambda \mu}$$.
Proof

Let $$L \in {\mathcal{O}}$$ be irreducible. As observed in 1.2, $$L$$ has a maximal vector $$v^+$$ of some weight $$\lambda$$.

Recall: acting by $${\mathfrak{n}}$$ increases weights, so one reaches a maximal vector in a finite number of steps.

Since $$L$$ is irreducible, $$L$$ is generated by that weight vector, i.e. $$L = U({\mathfrak{g}}) \cdot v^+$$, so $$L$$ must be a highest weight module.

Standard argument: use triangular decomposition.

By the universal property, $$L$$ is a quotient of $$M(\lambda)$$. But this means $$L \cong L(\lambda)$$, the unique irreducible quotient of $$M(\lambda)$$.

By Theorem 1.2 part g (see last Friday), $$\dim \operatorname{End}_{\mathcal{O}}(L(\lambda)) = 1$$, and for $$\mu \neq \lambda$$, $$\hom_{\mathcal{O}}(L(\mu), L(\lambda)) = 0$$ since both entries are simple with distinct highest weights.

Theorem (1.2 f, Highest Weight Modules are Indecomposable)
A highest weight module $$M$$ is indecomposable, i.e. can’t be written as a direct sum of two nontrivial proper submodules.
Proof (of Theorem 1.2 f)
Suppose $$M = M_1 \oplus M_2$$ where $$M$$ is a highest weight module of highest weight $$\lambda$$. Category $${\mathcal{O}}$$ is closed under submodules, so the $$M_i$$ are weight modules and have weight-space decompositions. But $$M_\lambda$$ is 1-dimensional, so it lies entirely in one summand, say $$M_\lambda \subset M_1$$. Since the maximal vector spanning $$M_\lambda$$ generates the entire module, $$M \subset M_1$$, so $$M = M_1$$ and this forces $$M_2 = 0$$.

## 1.4: Maximal Vectors in Verma Modules

1.5: Examples in the case $${\mathfrak{sl}}(2)$$, over $${\mathbb{C}}$$ as usual.

First, some review from Lie algebras.

Let $${\mathfrak{g}}$$ be any Lie algebra, and take $$u, v \in U({\mathfrak{g}})$$. Recall that we have the formula \begin{align*}uv = [uv] + vu,\end{align*} where $$[uv] \mathrel{\vcenter{:}}= uv - vu$$.

Let $$x, y_1, y_2$$ be in $${\mathfrak{g}}$$; what is $$[x, y_1 y_2]$$? Use the fact that $$\operatorname{ad}x$$ acts as a derivation, so $$[x, y_1 y_2] = [x y_1]y_2 + y_1[x y_2]$$, where each bracket is computed entirely in the Lie algebra. This extends to an action on $$U({\mathfrak{g}})$$ by the product rule.

Recall that $${\mathfrak{sl}}(2)$$ is spanned by $$y =[0,0; 1,0], h = [1,0; 0, -1], x = [0,1; 0,0]$$, where each basis vector spans $${\mathfrak{n}}^-, {\mathfrak{h}}, {\mathfrak{n}}$$ respectively. Then $$[x y] = h, [h x] = 2x, [h y] = -2y$$, which follow from $$E_{ij} E_{kl} = \delta_{jk} E_{il}$$ (easy to compute!).

Then $${\mathfrak{h}}= {\mathbb{C}}h$$, so $${\mathfrak{h}}^\vee\cong {\mathbb{C}}$$ via $$\lambda \mapsto \lambda(h)$$. So we identify each $$\lambda$$ with a complex number; this is kind of like a bundle of Verma modules over $${\mathbb{C}}$$.

Consider $$M(1)$$, then $$\lambda = 1$$ will denote $$\lambda(h) = 1$$. As in any Verma module, $$M(\lambda) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\mathbb{C}}_{\lambda}$$. We can think of $$v^+$$ as $$1\otimes 1$$, with the action $$yv^+ = y1\otimes 1$$. Note that $$y$$ has weight $$-2$$.

| Weight | Basis |
| --- | --- |
| $$1$$ | $$v^+$$ |
| $$-1$$ | $$yv^+$$ |
| $$-3$$ | $$y^2 v^+$$ |
| $$-5$$ | $$y^3 v^+$$ |

Consider how $$x\curvearrowright y^2 v^+$$. Note that $$x$$ has weight $$+2$$. We have

\begin{align*} x \cdot y^2 v^+ &= x y^2 \otimes 1_\lambda \\ &= [x, y^2] \otimes 1 + y^2 \otimes x\cdot 1 \quad\text{moving } x \text{ across the tensor since } x\in{\mathfrak{b}}\\ &= [x, y^2] \otimes 1 \quad\text{since } x\cdot 1 = 0 \\ &= ([xy]y + y[xy]) \otimes 1 \\ &= (hy + yh) \otimes 1 \\ &= (([hy] + yh) + yh) \otimes 1 \\ &= (-2y + 2yh) \otimes 1 \\ &= -2y \otimes 1 + 2y \otimes h\cdot 1 \\ &= -2y \otimes 1 + 2\lambda(h)\, y\otimes 1 \\ &= -2y \otimes 1 + 2y\otimes 1 \quad\text{since } \lambda(h) = 1 \\ &= 0 .\end{align*}

So $$y$$ moves us downward through the table, and $$x$$ moves upward, except when going from $$-3\to -1$$ in which case the result is zero.

Thus there exists a morphism $$\phi: M(-3) \to M(1)$$, with image $$U({\mathfrak{g}}) y^2 v^+ = U({\mathfrak{n}}^-) y^2 v^+$$. So the image of $$\phi$$ is everything spanned by the bases in the rows $$-3, -5, \cdots$$, which is exactly $$M(-3)$$. So $$M(-3) \hookrightarrow M(1)$$ as a submodule.

Motivation for next section: we want to find Verma modules which are themselves submodules of Verma modules.

It turns out that $$\operatorname{im}({\phi })\cong N(1)$$, so $$M(1) / N(1) \cong L(1)$$. What is the simple module of highest weight 1 for $${\mathfrak{sl}}(2)$$? The weights of $$L(n)$$ are $$n, n-2, n-4, \cdots, -n$$, so these simple finite-dimensional representations are parameterized by $$n\in {\mathbb{Z}}^{+}$$. In general, acting by $$y$$ on the weight space of weight $$-n$$ in $$M(n)$$ produces a maximal vector of weight $$-n-2$$, and the calculation above goes through in roughly the same way, giving a similar picture with $$L(n)$$ at the top.

## Back to 1.4

Question 1: What are the submodules of $$M(\lambda)$$?

Question 2: What are the Verma submodules $$M(\mu) \subset M(\lambda)$$? Equivalently, when do maximal vectors of weight $$\mu < \lambda$$ (the interesting case) lie in $$M(\lambda)$$?

Question 3: As a special case, when do maximal vectors of weight $$\lambda - k\alpha$$ for $$\alpha \in \Delta$$ lie in $$M(\lambda)$$ for $$k\in {\mathbb{Z}}^+$$?

Fix a Chevalley basis for $${\mathfrak{g}}$$ (see section 0.1) $$h_1, \cdots, h_\ell \in {\mathfrak{h}}$$ and $$x_\alpha \in {\mathfrak{g}}_\alpha$$ and $$y_\alpha \in {\mathfrak{g}}_{-\alpha}$$ for $$\alpha \in \Phi^+$$. Let $$\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}$$ and let $$x_i = x_{\alpha_i}, y_i = y_{\alpha_i}$$ be chosen such that $$[x_i y_i] = h_i$$.

Lemma

For $$k\geq 0$$ and $$1\leq i, j \leq \ell$$, then

1. $$[x_j, y_i^{k+1}] = 0$$ if $$j\neq i$$

2. $$[h_j, y_i^{k+1}] = -(k+1) \alpha_i(h_j) y_i^{k+1}$$.

3. $$[x_i, y_i^{k+1}] = -(k+1) y_i^k(k\cdot 1 - h_i)$$.

Proof (sketch)

Parts (a) and (b) are easy to prove by induction, using that $$[x_j, y_i] = 0$$ for $$j \neq i$$ (since $$\alpha_j - \alpha_i \not\in \Phi$$ is a difference of distinct simple roots).

For $$k=0$$, all identities are easy. For $$k> 0$$, an inductive formula that uses the derivation property, which we’ll do next class.

# Monday January 27th

## Section 1.4

Fix $$\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}$$, $$x_i \in {\mathfrak{g}}_{\alpha_i}$$ and $$y_i \in {\mathfrak{g}}_{-\alpha_i}$$ with $$h_i = [x_i y_i]$$.

Lemma

For $$k\geq 0$$ and $$1 \leq i, j \leq \ell$$,

1. $$[x_j y_i^{k+1}] = 0$$ if $$j\neq i$$
2. $$[h_j y_i^{k+1}] = -(k+1) \alpha_i(h_j) y_i^{k+1}$$
3. $$[x_i y_i^{k+1}] = -(k+1) y_i^{k} (k\cdot 1 - h_i)$$.
Proof (Sketch for (c))

By induction, where $$k=0$$ is clear.

\begin{align*} [x_i y_i^{k+1}] &= [x_i y_i] y_i^k + y_i [x_i y_i^k] \\ &=h_i y_i^k + y_i(-k y_i^{k-1} ((k-1)\cdot 1 - h_i)) \quad\text{by I.H.} \\ &= y_i^k h_i - k\alpha_i(h_i) y_i^k - k y_i^k((k-1)\cdot 1 - h_i) \\ &= (k+1)y_i^k h_i - (k^2 + k)y_i^k \\ &= -(k+1) y_i^k ( k\cdot 1 - h_i ) .\end{align*}
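As a sanity check of (c) in $${\mathfrak{sl}}(2)$$ (dropping subscripts), take $$k=1$$:

\begin{align*} [x, y^2] &= [x y]y + y[x y] = hy + yh \\ &= (yh - 2y) + yh = 2yh - 2y \\ &= -2y(1\cdot 1 - h) ,\end{align*}

agreeing with $$-(k+1)y^k(k\cdot 1 - h)$$ for $$k = 1$$.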

Proposition (Existence of Morphisms of Verma Modules)

Suppose $$\lambda \in {\mathfrak{h}}^\vee, \alpha \in \Delta$$, and $$n\mathrel{\vcenter{:}}=(\lambda, \alpha^\vee) \in {\mathbb{Z}}^+$$. Then in $$M(\lambda)$$, $$y_\alpha^{n+1} v^+$$ is a maximal weight vector of weight $$\mu \mathrel{\vcenter{:}}=\lambda - (n+1)\alpha < \lambda$$.

Note that $$M(\lambda)$$ is free as a $$U({\mathfrak{n}}^-){\hbox{-}}$$module, so $$y_\alpha^{n+1} v^+ \neq 0$$. Note that $$n = \lambda(h_\alpha)$$.

By the universal property, there is a nonzero homomorphism $$M(\mu) \to M(\lambda)$$ with image contained in $$N(\lambda)$$, the unique maximal proper submodule of $$M(\lambda)$$.

Proof

Say $$\alpha = \alpha_i$$. Fix $$j\neq i$$.

\begin{align*} x_j y_i^{n+1} \otimes 1 &= [x_j y_i^{n+1}] \otimes 1 + y_i^{n+1} \otimes x_j \cdot 1 \\ &= 0 \otimes 1 + y_i^{n+1} \otimes 0 \quad\text{by (a), and since } x_j \in {\mathfrak{n}}\\ &= 0 .\end{align*}

\begin{align*} x_i y_i^{n+1} \otimes 1 &= [x_i y_i^{n+1}] \otimes 1 + y_i^{n+1} \otimes x_i \cdot 1 \\ &= -(n+1) y_i^n (n\cdot 1 - h_i) \otimes 1 \\ &= -(n+1) (n - \lambda(h_i))\, y_i^n \otimes 1 \\ &= -(n+1) (n - n)\, y_i^n \otimes 1 \quad\text{since } n = \lambda(h_i) \\ &= 0 .\end{align*}

The $${\mathfrak{g}}_{\alpha_j}$$ generate $${\mathfrak{n}}$$ as a Lie algebra, since $$[{\mathfrak{g}}_\alpha, {\mathfrak{g}}_\beta] = {\mathfrak{g}}_{\alpha + \beta}$$. This shows that $${\mathfrak{n}}\cdot y_i^{n+1} v^+ = 0$$, and the weight of $$y_i^{n+1} v^+$$ is $$\lambda - (n+1)\alpha_i$$. So $$y_i^{n+1}v^+$$ is a maximal vector of weight $$\mu$$. The universal property implies there is a nonzero map $$M(\mu) \to M(\lambda)$$ sending highest weight vectors to highest weight vectors and preserving weights. The image is proper since all weights of the image are less than or equal to $$\mu < \lambda$$.

Consider $${\mathfrak{sl}}(2)$$, then $$M(1) \supset M(-3)$$. Note that reflecting through 0 doesn’t send 1 to -3, but shifting the origin to $$-1$$ and reflecting about that with $$s_\alpha \cdot$$ fixes this problem. Note that $$L(1)$$ is the quotient.

For $$\lambda \in {\mathfrak{h}}^\vee$$ and $$\alpha \in \Delta$$, define the dot action $$s_\alpha \cdot \lambda \mathrel{\vcenter{:}}= s_\alpha(\lambda + \rho) - \rho$$, where $$\rho = \sum_{i=1}^\ell \mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_i$$. Recall $$(\mkern 1.5mu\overline{\mkern-1.5mu\omega\mkern-1.5mu}\mkern 1.5mu_j, \alpha_i^\vee) = \delta_{ij}$$ and $$(\rho, \alpha_i^\vee) = 1$$.

\begin{align*} s_\alpha \cdot \lambda &= s_\alpha(\lambda + \rho) - \rho \\ &= (\lambda + \rho) - (\lambda + \rho, \alpha^\vee)\alpha -\rho \\ &= \lambda - ((\lambda, \alpha^\vee) +1)\alpha \\ &= \lambda - (n+1)\alpha \\ &= \mu .\end{align*}
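For $${\mathfrak{sl}}(2)$$ with $$\lambda = 1$$ (so $$n = (\lambda, \alpha^\vee) = 1$$ and $$\alpha = 2$$), this recovers the earlier example:

\begin{align*} s_\alpha \cdot 1 = 1 - (1+1)\cdot 2 = -3 ,\end{align*}

matching the embedding $$M(-3) \hookrightarrow M(1)$$.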

So this gives a well-defined, nonzero map $$M(s_\alpha \cdot \lambda) \to M(\lambda)$$ when $$s_\alpha \cdot \lambda < \lambda$$.

Corollary
Let $$\lambda, \alpha, n$$ be as in the above proposition. Let $$\mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu^+$$ now be a maximal vector of weight $$\lambda$$ in $$L(\lambda)$$. Then $$y_\alpha^{n+1} \mkern 1.5mu\overline{\mkern-1.5muv\mkern-1.5mu}\mkern 1.5mu^+ = 0$$.
Proof
If not, its image would be a maximal vector of weight $$\mu < \lambda$$ in $$L(\lambda)$$, being the image of $$y_i^{n+1}v^+ \in M(\lambda)$$ under the map $$M(\lambda) \to L(\lambda)$$. It would then generate a nonzero proper submodule of $$L(\lambda)$$, a contradiction since $$L(\lambda)$$ is irreducible.

## Section 1.5

Example: $${\mathfrak{sl}}(2)$$. What do Verma modules $$M(\lambda)$$ and their simple quotients $$L(\lambda)$$ look like?

Fix a Chevalley basis $$\left\{{y,h,x}\right\}$$ and let $$\lambda \in {\mathfrak{h}}^\vee\cong {\mathbb{C}}$$.

Fact 1

For $$v^+ = 1\otimes 1_\lambda$$, we have \begin{align*}M(\lambda) = U({\mathfrak{n}}^-) v^+ = {\mathbb{C}}\left\langle{y^i v^+ {~\mathrel{\Big|}~}i\in {\mathbb{Z}}^+}\right\rangle,\end{align*} and $$\left\{{y^i v^+}\right\}$$ is a basis for $$M(\lambda)$$, with $$y^i v^+$$ of weight $$\lambda - 2i$$ (where $$\alpha$$ corresponds to 2). So the weights of $$M(\lambda)$$ are $$\lambda, \lambda-2, \lambda-4, \cdots$$, each with multiplicity 1.

Letting $$v_i = \frac 1 {i!} y^i v^+$$ for $$i\in {\mathbb{Z}}^+$$; this is a basis for $$M(\lambda)$$. Using the lemma, we have

\begin{align*} h\cdot v_i &= (\lambda - 2i) v_i \\ y \cdot v_i &= (i+1) v_{i+1} \\ x\cdot v_i &= (\lambda - i + 1)v_{i-1} .\end{align*}

Note that these are the same for finite-dimensional $${\mathfrak{sl}}(2){\hbox{-}}$$modules, see section 0.9.
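As a sanity check, the action formulas above can be encoded as matrices on a truncated basis $$v_0, \cdots, v_N$$ and the $${\mathfrak{sl}}(2)$$ relations verified symbolically. The cutoff $$N$$ and the use of sympy are choices of this sketch, not part of the lecture; the relation $$[x,y]=h$$ only fails at the truncation boundary.

```python
import sympy as sp

N = 8                      # hypothetical truncation of the basis v_0, ..., v_N
lam = sp.Symbol('lam')     # the highest weight, an arbitrary scalar

# matrices of h, y, x in the basis {v_i}, following the formulas above
H = sp.diag(*[lam - 2*i for i in range(N + 1)])
Y = sp.zeros(N + 1, N + 1)
X = sp.zeros(N + 1, N + 1)
for i in range(N):
    Y[i + 1, i] = i + 1      # y . v_i = (i+1) v_{i+1}
    X[i, i + 1] = lam - i    # x . v_{i+1} = (lam - (i+1) + 1) v_i

def comm(A, B):
    return (A*B - B*A).applyfunc(sp.expand)

# [h,x] = 2x and [h,y] = -2y hold on the whole truncation,
# and [x,y] = h holds away from the boundary column v_N:
assert comm(H, X) == (2*X).applyfunc(sp.expand)
assert comm(H, Y) == (-2*Y).applyfunc(sp.expand)
assert comm(X, Y)[:, :N] == H[:, :N]
```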

Fact (2)
We know from the proposition that if $$\lambda \in {\mathbb{Z}}^+$$, i.e. $$(\lambda, \alpha^\vee) \in {\mathbb{Z}}^+$$, then $$M(\lambda)$$ has a maximal vector of weight \begin{align*}\lambda - (n+1)\alpha = \lambda - (\lambda+1)2 = -\lambda-2 = s_\alpha \cdot \lambda.\end{align*}
Exercise

Check that this maximal vector generates the maximal proper submodule \begin{align*}N(\lambda) = M(-\lambda - 2).\end{align*}

So the quotient $$L(\lambda) = M(\lambda) / N(\lambda) = M(\lambda) / M(-\lambda - 2)$$ has weights $$\lambda, \lambda-2, \cdots, -\lambda+2, -\lambda$$. So when $$\lambda \in {\mathbb{Z}}^+$$, $$L(\lambda)$$ is the familiar simple $${\mathfrak{sl}}(2){\hbox{-}}$$module of highest weight $$\lambda$$.

Fact (3)

When $$\lambda \not\in{\mathbb{Z}}^+$$,

• $$N(\lambda) = \left\{{0}\right\}$$,
• $$M(\lambda) = L(\lambda)$$,
• $$M(\lambda)$$ is irreducible,
• $$L(\lambda)$$ is infinite dimensional.
Proof
Argue by contradiction. If not, $$M(\lambda) \supset M \neq 0$$ is a proper submodule. So $$M\in {\mathcal{O}}$$, and thus $$M$$ has a maximal vector $$w^+$$, and by the restriction on weights for modules in $${\mathcal{O}}$$, we know $$w^+$$ has weight $$\lambda - 2m$$ for some $$m\in {\mathbb{Z}}^+$$. Then $$w^+ = c v_m$$ where $$0\neq c \in {\mathbb{C}}$$. Taking $$v_{-1} \mathrel{\vcenter{:}}= 0$$, we have $$x\cdot v_m = (\lambda - m + 1)v_{m-1}$$, and maximality forces $$\lambda - m + 1 = 0$$ (note $$m \geq 1$$ since $$M$$ is proper), so $$\lambda = m-1 \implies \lambda \in {\mathbb{Z}}^+$$.

# Friday January 31st

Theorem (Duals of Simple Quotients of Vermas)
A useful formula: $$L(\lambda)^\vee\cong L(-w_0\lambda)$$ for $$\lambda \in \Lambda^+$$.
Proof

$$L(\lambda)^\vee$$ is a finite dimensional module, and $$(x\cdot f)(v) = -f(x\cdot v)$$, so $$L(\lambda)^\vee\cong L(\nu)$$ for some $$\nu \in \Lambda^+$$. The weights of $$L(\lambda)^\vee$$ are the negatives of the weights of $$L(\lambda)$$. The lowest weight of $$L(\lambda)$$ is $$w_0\lambda$$, since $$w_0$$ reverses the partial order on $${\mathfrak{h}}^\vee$$, i.e. $$w_0 \Phi^+ = \Phi^-$$.

Then \begin{align*} \mu \in \Pi(L(\lambda)) \implies w_0 \mu \in \Pi(L(\lambda)) \implies w_0\mu \leq \lambda .\end{align*} This shows that the lowest weight of $$L(\lambda)$$ is $$w_0 \lambda$$, and thus the highest weight of $$L(\lambda)^\vee$$ is $$-w_0 \lambda$$ by negating and reversing this inequality.

The inner product is $$W$$-invariant and $$w_0$$ is its own inverse, so we can move it to the other side.

## 1.7: Action of $$Z({\mathfrak{g}})$$

Next big goal: Every module in $${\mathcal{O}}$$ has a finite composition series (Jordan-Holder series, with simple quotients). This leads to the Kazhdan-Lusztig conjectures from 1979/1980, which were solved in characteristic 0, though analogous questions remain open in characteristic $$p$$.

The technique we’ll use is the Harish-Chandra homomorphism, which identifies $${\mathcal{Z}}({\mathfrak{g}})$$ explicitly.

It’s commutative, a subalgebra of a Noetherian algebra, with no zero divisors – it could a priori be a proper quotient of a polynomial algebra, but such a quotient tends to have zero divisors, so this suggests it’s a polynomial algebra on some unknowns.

Also note that $${\mathcal{Z}}({\mathfrak{g}}) \mathrel{\vcenter{:}}= Z(U({\mathfrak{g}}))$$.

Recall: $${\mathcal{Z}}({\mathfrak{g}})$$ acts locally finitely on any $$M\in {\mathcal{O}}$$ – this is by theorem 1.1e, i.e. $$v\in M_\mu$$ and $$z\in {\mathcal{Z}}({\mathfrak{g}})$$ implies that $$zv\in M_\mu$$. (The calculation just follows by computing the weight and commuting things through.)

Let $$\lambda \in {\mathfrak{h}}^\vee$$ and $$M = U({\mathfrak{g}})v^+$$ a highest weight module of highest weight $$\lambda$$. Then for $$z\in {\mathcal{Z}}({\mathfrak{g}})$$, $$z\cdot v^+ \in M_\lambda$$ which is 1-dimensional. Thus $$z$$ acts by scalar multiplication here, and $$z\cdot v^+ = \chi_\lambda(z) v^+$$. Now if $$u\in U({\mathfrak{n}}^-)$$, we have \begin{align*}z\cdot(u\cdot v^+) = u\cdot(z\cdot v^+) = u(\chi_\lambda(z)v^+) = \chi_\lambda(z) u\cdot v^+.\end{align*} Thus $$z$$ acts on all of $$M$$ by the scalar $$\chi_\lambda(z)$$.

Exercise
Show that $$\chi_\lambda$$ is a nonzero additive and multiplicative function, so $$\chi_\lambda: {\mathcal{Z}}({\mathfrak{g}}) \to {\mathbb{C}}$$ is linear and thus a morphism of algebras. Conclude that $$\ker \chi_\lambda$$ is a maximal ideal of $${\mathcal{Z}}({\mathfrak{g}})$$.

Note: this is called the infinitesimal character.

Note that $$\chi_\lambda$$ doesn’t depend on which highest weight module $$M$$ of highest weight $$\lambda$$ was chosen, since they’re all quotients of $$M(\lambda)$$. In fact, every submodule and subquotient of $$M(\lambda)$$ has the same infinitesimal character.

Definition (Central/Infinitesimal Character)

$$\chi_\lambda$$ is called the central (or infinitesimal) character, and $$\widehat{\mathcal{Z}}({\mathfrak{g}})$$ denotes the set of all central characters. More generally, any algebra morphism $$\chi: {\mathcal{Z}}({\mathfrak{g}}) \to {\mathbb{C}}$$ is referred to as a central character. Central characters are in one-to-one correspondence with maximal ideals of $${\mathcal{Z}}({\mathfrak{g}})$$, where

\begin{align*} \chi & \iff \ker \chi \\ {\mathbb{C}}[x_1, \cdots, x_n] &\iff \left\langle{x_1 - a_1, \cdots, x_n - a_n}\right\rangle \end{align*}

where $$(a_1, \cdots, a_n) \in {\mathbb{C}}^n$$.

Next goal: Describe $$\chi_\lambda(z)$$ more explicitly.

Using PBW, we can write $$z\in {\mathcal{Z}}({\mathfrak{g}}) \subset U({\mathfrak{g}}) = U({\mathfrak{n}}^-) U({\mathfrak{h}}) U({\mathfrak{n}})$$. Some observations:

1. Any PBW monomial in $$z$$ ending with a factor in $${\mathfrak{n}}$$ will kill $$v^+$$, and hence can not contribute to $$\chi_\lambda(z)$$.
2. Any PBW monomial in $$z$$ beginning with a factor in $${\mathfrak{n}}^-$$ will send $$v^+$$ to a lower weight space, so it also can’t contribute.

So we only need to see what happens in the $${\mathfrak{h}}$$ part. A relevant decomposition here is \begin{align*} U({\mathfrak{g}}) = U({\mathfrak{h}}) \oplus \qty{ {\mathfrak{n}}^- U({\mathfrak{g}}) + U({\mathfrak{g}}){\mathfrak{n}}^+ } .\end{align*}

Exercise
Why is this sum direct?

Let $$\mathrm{pr}: U({\mathfrak{g}}) \to U({\mathfrak{h}})$$ be the projection onto the first factor. Then $$\chi_\lambda(z) = \lambda(\mathrm{pr} z)$$ for all $$z\in {\mathcal{Z}}({\mathfrak{g}})$$. Then if $$\mathrm{pr}(z) = h_1^{m_1} \cdots h_\ell^{m_\ell}$$, we can extend the action on $${\mathfrak{h}}$$ to all polynomials in elements of $${\mathfrak{h}}$$ (which is in fact evaluation on these monomials), and thus $$\chi_\lambda(z) = \lambda(h_1)^{m_1} \cdots \lambda(h_\ell)^{m_\ell}$$.

Note that for $$\lambda \in {\mathfrak{h}}^\vee$$, we’ve extended $$\lambda$$ to an “evaluation map” on $$U({\mathfrak{h}}) \cong S({\mathfrak{h}})$$, the symmetric algebra on $${\mathfrak{h}}$$.

Why is this the correct identification? We have $$U({\mathfrak{h}}) = T({\mathfrak{h}}) / \left\langle{x\otimes y - y\otimes x - [xy]}\right\rangle$$, but the bracket vanishes since $${\mathfrak{h}}$$ is abelian, and what remains is the exact definition of the symmetric algebra.

Thus $$\chi_\lambda = \lambda \circ \mathrm{pr}$$.

Observation:

\begin{align*} \lambda(\mathrm{pr}(z_1 z_2)) &= \chi_\lambda(z_1 z_2)\\ &= \chi_\lambda(z_1) \chi_\lambda(z_2) \\ &= \lambda(\mathrm{pr}(z_1)) \lambda(\mathrm{pr}(z_2)) \\ &= \lambda( \mathrm{pr}(z_1) \mathrm{pr}(z_2) ) .\end{align*}

Exercise
Show $$\cap_{\lambda \in {\mathfrak{h}}^\vee} \ker \lambda = \left\{{0}\right\}$$.
Definition (Harish-Chandra Morphism)
Let $$\xi = {\left.{\mathrm{pr}}\right|_{{\mathcal{Z}}({\mathfrak{g}})}}: {\mathcal{Z}}({\mathfrak{g}}) \to U({\mathfrak{h}})$$.
$$\xi$$ is an algebra morphism, and is referred to as the Harish-Chandra homomorphism.

See page 23 for interpretation of $$\xi$$ without reference to representations.

Questions:

1. Is $$\xi$$ injective?
2. What is $$\operatorname{im}({\xi })\subset U({\mathfrak{h}})$$?

When does $$\chi_\lambda = \chi_\mu$$? Proved last time: we introduced the $$\cdot$$ action and proved that $$M(s_\alpha \cdot \lambda) \subset M(\lambda)$$ where $$\alpha \in \Delta$$. It’ll turn out that the corresponding statement holds for all $$w \in W$$.

Wednesday: Section 1.8.

# Wednesday February 5th

Recall the Harish-Chandra morphism $$\xi$$:

If $$M$$ is a highest weight module of highest weight $$\lambda$$, then $$z\in {\mathcal{Z}}({\mathfrak{g}})$$ acts on $$M$$ by scalar multiplication: $$z\cdot v = \chi_\lambda(z) v$$ for all $$v\in M$$, where $$\chi_\lambda(z) = \lambda(\mathrm{pr}(z)) = \lambda(\xi(z))$$.

The $$\chi_\lambda$$ are not all distinct – for example, if $$M(\mu) \subset M(\lambda)$$, then $$\chi_\mu = \chi_\lambda$$. More generally, if $$L(\mu)$$ is a subquotient of $$M(\lambda)$$ then $$\chi_\mu = \chi_\lambda$$. So when do we have equality $$\chi_\mu = \chi_\lambda$$?

Given $${\mathfrak{g}}\supset {\mathfrak{h}}$$ with $$\Phi \supset \Phi^+ \supset \Delta$$, then define \begin{align*}\rho = \frac 1 2 \sum_{\beta \in \Phi^+} \beta \in {\mathfrak{h}}^\vee.\end{align*} Note that $$\alpha \in \Delta \implies s_\alpha \rho = \rho - \alpha$$.

Definition (Dot Action)
The dot action of $$W$$ on $${\mathfrak{h}}^\vee$$ is given by \begin{align*}w\cdot \lambda = w(\lambda + \rho) - \rho.\end{align*} Recall that $$(\rho, \alpha^\vee) = 1$$ for all $$\alpha \in \Delta$$, since $$\rho = \sum_{i=1}^\ell \omega_i$$.
Exercise
Check that this gives a well-defined group action.
$$\mu$$ is linked to $$\lambda$$ iff $$\mu = w\cdot \lambda$$ for some $$w\in W$$. Note that this is an equivalence relation, whose equivalence classes are the orbits: the orbit $$W\cdot \lambda = \left\{{w\cdot \lambda {~\mathrel{\Big|}~}w\in W}\right\}$$ is called the linkage class of $$\lambda$$.

Note that this is a finite subset, since $$W$$ is finite. Orbit-stabilizer applies here, so bigger stabilizers yield smaller orbits and vice-versa.

Example
$$w\cdot (-\rho) = w(-\rho + \rho) - \rho = -\rho$$, so $$-\rho$$ is in its own linkage class.
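In rank one the dot action is easy to compute directly. The sketch below identifies $${\mathfrak{h}}^\vee$$ with $${\mathbb{C}}$$ via $$\lambda \leftrightarrow \lambda(h)$$ (so $$\rho = 1$$ and $$s_\alpha$$ acts by negation), an identification used elsewhere in these notes; the function name `s_dot` is an invention of this sketch.

```python
RHO = 1  # for sl(2), rho corresponds to 1 under lam <-> lam(h)

def s_dot(lam):
    """Dot action of the simple reflection: s . lam = s(lam + rho) - rho = -lam - 2."""
    return -(lam + RHO) - RHO

assert s_dot(1) == -3         # matches the earlier example M(1) ⊃ M(-3)
assert s_dot(-1) == -1        # -rho is fixed: its linkage class is the singleton {-rho}
assert s_dot(s_dot(7)) == 7   # s.(s.lam) = lam, so this is a genuine W-action
```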
Definition (Dot-Regular)
$$\lambda \in {\mathfrak{h}}^\vee$$ is dot-regular iff $${\left\lvert {W\cdot \lambda } \right\rvert} = {\left\lvert {W} \right\rvert}$$, or equivalently if $$(\lambda + \rho, \beta^\vee) \neq 0$$ for all $$\beta \in \Phi$$.

To think about: does this hold if $$\Phi$$ is replaced by $$\Delta$$?

We also say $$\lambda$$ is dot-singular if $$\lambda$$ is not dot-regular, or equivalently $${\operatorname{Stab}}_{W\cdot}\lambda \neq \left\{{1}\right\}$$.

I.e., $$\lambda + \rho$$ lies on a root hyperplane.

Exercise
Show that $$0\in {\mathfrak{h}}^\vee$$ is dot-regular, while $$-\rho$$ is dot-singular.

Proposition (Weights in Weyl Orbit Yield Equal Characters)
If $$\lambda \in \Lambda$$ and $$\mu \in W\cdot \lambda$$, then $$\chi_\mu = \chi_\lambda$$.
Proof

Start with $$\alpha \in \Delta$$ and consider $$\mu = s_\alpha \cdot \lambda$$. Since $$\lambda \in \Lambda$$, we have $$n\mathrel{\vcenter{:}}=(\lambda ,\alpha^\vee) \in {\mathbb{Z}}$$ by definition. There are three cases:

1. $$n\in {\mathbb{Z}}^+$$, then $$M(s_\alpha \cdot \lambda) \subset M(\lambda)$$. By Proposition 1.4, we have $$\chi_\mu =\chi_\lambda$$.

2. For $$n=-1$$, $$\mu = s_\alpha \cdot \lambda = \lambda + \rho -(\lambda + \rho, \alpha^\vee)\alpha - \rho = \lambda - (n+1)\alpha = \lambda - 0\cdot\alpha$$. So $$\mu = \lambda$$ and thus $$M(\mu) = M(\lambda)$$.

3. For $$n\leq -2$$,

\begin{align*} (\mu, \alpha^\vee) &= (s_\alpha \cdot \lambda , \alpha^\vee) \\ &= (\lambda - (n+1)\alpha, \alpha^\vee) \\ &= n - 2(n+1) \\ &= -n-2 \\ &\geq 0 ,\end{align*}

so by case 1 applied to $$\mu$$, we get $$\chi_\mu = \chi_{s_\alpha \cdot \mu} = \chi_{s_\alpha \cdot (s_\alpha \cdot \lambda)} = \chi_\lambda$$. Since $$W$$ is generated by simple reflections and linkage is transitive, the result follows by induction on $$\ell(w)$$.

Exercise (1.8)
See book, show that certain properties of the dot action hold (namely nonlinearity).

## 1.9: Extending the Harish-Chandra Morphism

We want to extend the previous proposition from $$\lambda \in \Lambda$$ to $$\lambda \in {\mathfrak{h}}^\vee$$. We’ll use a density argument from affine algebraic geometry, and switch to the Zariski topology on $${\mathfrak{h}}^\vee\subset {\mathbb{C}}^n$$.

Fix a basis $$\Delta = \left\{{\alpha_1, \cdots, \alpha_\ell}\right\}$$ and use the Killing form to identify these with a basis $$\left\{{h_1, \cdots, h_\ell}\right\}$$ for $${\mathfrak{h}}$$. Similarly, take $$\left\{{\omega_1, \cdots, \omega_\ell}\right\}$$ as a basis for $${\mathfrak{h}}^\vee$$, and we’ll use the identification

\begin{align*} {\mathfrak{h}}^\vee&\iff {\mathbb{A}}^\ell \\ \lambda &\iff (\lambda(h_1), \cdots, \lambda(h_\ell)) .\end{align*}

We identify $$U({\mathfrak{h}}) = S({\mathfrak{h}}) = {\mathbb{C}}[h_1, \cdots, h_\ell]$$ with $$P({\mathfrak{h}}^\vee)$$, the polynomial functions on $${\mathfrak{h}}^\vee$$. Fix $$\lambda \in {\mathfrak{h}}^\vee$$ and extend $$\lambda$$ to a multiplicative function on polynomials, so $$\lambda(f)$$ is defined for $$f\in {\mathbb{C}}[h_1, \cdots, h_\ell]$$. Under the identification, we send $$f$$ to $$\tilde f$$ where $$\tilde f(\lambda) = \lambda(f)$$.

Note: we’ll identify $$f$$ and $$\tilde f$$ notationally going forward and drop the tilde everywhere.

Then $$W$$ acts on $$P({\mathfrak{h}}^\vee)$$ by the dot action: $$(w\cdot \tilde f)(\lambda) = \tilde f(w^{-1}\cdot \lambda)$$.

Exercise
Check that this is a well-defined action.

Under this identification, we have

\begin{align*} {\mathfrak{h}}^\vee&\iff {\mathbb{A}}^\ell \\ \Lambda &\iff {\mathbb{Z}}^\ell .\end{align*}

Note that $$\Lambda$$ is discrete in the analytic topology, but is dense in the Zariski topology.

Proposition (Polynomials Vanishing on a Lattice Are Zero)
A polynomial $$f$$ on $${\mathbb{A}}^\ell$$ vanishing on $${\mathbb{Z}}^\ell$$ must be identically zero.
Proof

For $$\ell = 1$$: A nonzero polynomial in one variable has only finitely many zeros, but if $$f$$ vanishes on $${\mathbb{Z}}$$ it has infinitely many zeros.

For $$\ell > 1$$: View $$f\in {\mathbb{C}}[h_1, \cdots, h_{\ell-1}][h_\ell]$$, i.e. as a polynomial in $$h_\ell$$ with coefficients in $${\mathbb{C}}[h_1, \cdots, h_{\ell-1}]$$. Substituting any fixed integers for the $$h_i$$ for $$i\leq \ell - 1$$ yields a polynomial in one variable which vanishes on $${\mathbb{Z}}$$, hence is identically zero by the first case. So each coefficient polynomial in $${\mathbb{C}}[h_1, \cdots ,h_{\ell-1}]$$ vanishes on $${\mathbb{Z}}^{\ell-1}$$, and by induction these coefficient polynomials are identically zero, so $$f \equiv 0$$.
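The base case $$\ell = 1$$ can be illustrated concretely: a polynomial of degree $$\leq d$$ vanishing at $$d+1$$ integer points is forced to be zero, since the resulting linear system in the coefficients is a nonsingular Vandermonde system. The degree bound $$d = 4$$ below is an arbitrary choice for this sketch.

```python
import sympy as sp

x = sp.Symbol('x')
d = 4
coeffs = sp.symbols(f'c0:{d + 1}')              # unknown coefficients c0, ..., c4
f = sum(c * x**i for i, c in enumerate(coeffs))

# impose vanishing at the d+1 integer points 0, 1, ..., d
sols = sp.solve([f.subs(x, k) for k in range(d + 1)], coeffs)

# the only solution is the zero polynomial
assert all(v == 0 for v in sols.values())
```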

Corollary (Lattices Are Zariski-Dense in Affine Space)
The only Zariski-closed subset of $${\mathbb{A}}^\ell$$ containing $${\mathbb{Z}}^\ell$$ is $${\mathbb{A}}^\ell$$ itself, so the Zariski closure $$\mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{Z}}^\ell\mkern-1.5mu}\mkern 1.5mu = {\mathbb{A}}^\ell$$ and $${\mathbb{Z}}^\ell$$ is dense in $${\mathbb{A}}^\ell$$.

# Friday February 7th

So far, we have $$\chi_\lambda = \chi_{w\cdot \lambda}$$ if $$\lambda \in \Lambda$$ and $$w\in W$$. We have $${\mathfrak{h}}^\vee\supset \Lambda$$, which corresponds to $${\mathbb{A}}^\ell \supset {\mathbb{Z}}^\ell$$, where $${\mathbb{Z}}^\ell$$ is dense in the Zariski topology.

For $$z\in {\mathcal{Z}}({\mathfrak{g}})$$, we have $$\chi_\lambda(z) = \chi_{w\cdot \lambda} (z)$$ and so $$\lambda(\xi(z)) = (w\cdot \lambda )(\xi(z))$$, where $$\xi: {\mathcal{Z}}({\mathfrak{g}}) \to U({\mathfrak{h}}) = S({\mathfrak{h}}) \cong P({\mathfrak{h}}^\vee)$$ sends $$\lambda(f)$$ to $$f(\lambda)$$.

Then $$\xi(z)(\lambda) = \xi(z)(w\cdot \lambda)$$ for all $$\lambda \in \Lambda$$, and so $$\xi(z) = w^{-1}\xi(z)$$ on $$\Lambda$$. But both sides here are polynomials and thus continuous, and $$\Lambda \subset {\mathfrak{h}}^\vee$$ is Zariski-dense, so $$\xi(z) = w^{-1}\xi(z)$$ on all of $${\mathfrak{h}}^\vee$$. I.e., $$\chi_\lambda = \chi_{w\cdot \lambda}$$ for all $$\lambda \in {\mathfrak{h}}^\vee$$.

This in fact shows that the image of $${\mathcal{Z}}({\mathfrak{g}})$$ under $$\xi$$ consists of $$W{\hbox{-}}$$invariant polynomials.

It’s customary to state this in terms of the natural action of $$W$$ on polynomials, without the $$\rho$$-shift. We do this by letting $$\tau_\rho: S({\mathfrak{h}}) \xrightarrow{\cong} S({\mathfrak{h}})$$ be the algebra automorphism induced by $$f(\lambda) \mapsto f(\lambda - \rho)$$. This is clearly invertible via $$f(\lambda) \mapsto f(\lambda + \rho)$$. We then define \begin{align*}\psi: {\mathcal{Z}}({\mathfrak{g}}) \xrightarrow{\xi} S({\mathfrak{h}}) \xrightarrow{\tau_\rho} S({\mathfrak{h}})\end{align*} as this composition; this is referred to as the (twisted) Harish-Chandra (HC) homomorphism.

Exercise
Show $$\chi_\lambda(z) = (\lambda + \rho) (\psi(z))$$ and $$\chi_{w\cdot \lambda}(z) = (w(\lambda+\rho))(\psi(z))$$, where $$w({\,\cdot\,})$$ is the usual $$w{\hbox{-}}$$action.

Replacing $$\lambda$$ by $$\lambda + \rho$$ and $$w$$ by $$w^{-1}$$, we get \begin{align*} w\psi(z) = \psi(z) \end{align*} for all $$z\in {\mathcal{Z}}({\mathfrak{g}})$$ and all $$w\in W$$ where $$(w\psi(z))(\lambda) = \psi(z)(w^{-1}\lambda)$$.

We’ve proved that

Theorem (Character Linkage and Image of the HC Morphism)
1. If $$\lambda, \mu \in {\mathfrak{h}}^\vee$$ that are linked, then $$\chi_\lambda = \chi_\mu$$.

2. The image of the twisted HC homomorphism $$\psi: {\mathcal{Z}}({\mathfrak{g}}) \to U({\mathfrak{h}}) = S({\mathfrak{h}})$$ lies in the subalgebra $$S({\mathfrak{h}})^W$$.

Example

Let $${\mathfrak{g}}= {\mathfrak{sl}}_2$$. Recall from finite-dimensional representations there is a canonical element $$c\in {\mathcal{Z}}({\mathfrak{g}})$$ called the Casimir element. For $${\mathcal{O}}$$, we need information about the full center $${\mathcal{Z}}({\mathfrak{g}})$$ (hence introducing infinitesimal characters).

Expressing $$c$$ in the PBW basis yields $$c = h^2 + 2h + 4yx$$, where $$h^2 + 2h \in U({\mathfrak{h}})$$ and $$4yx \in {\mathfrak{n}}^- U({\mathfrak{g}}) + U({\mathfrak{g}}) {\mathfrak{n}}$$.

Enveloping algebra PBW ordering convention: $$y$$s, then $$h$$s, then $$x$$s.

Then $$\xi(c) = {\operatorname{pr}}(c) = h^2 + 2h$$, and under the identification $${\mathfrak{h}}^\vee\iff {\mathbb{C}}$$ where $$\lambda \iff \lambda(h)$$, we can identify $$\rho \iff \rho(h) = 1$$. The $$\rho$$-shift is given by $$\psi(c) = (h-1)^2 + 2(h-1) = h^2 - 1$$. This is $$W{\hbox{-}}$$invariant, since $$s_{\alpha}(h) = -h$$ and $$W = \left\{{1, s_\alpha}\right\}$$ is generated by $$s_\alpha$$.

We also have $$\chi_\lambda(c) = (\lambda + \rho) (\psi(c)) = (\lambda+1)^2 - 1$$. Then \begin{align*} \chi_\lambda(c) = \chi_\mu(c) \iff (\lambda+1)^2 - 1 = (\mu + 1)^2 - 1 \iff \mu = \lambda \text{ or } \mu = -\lambda - 2 .\end{align*}

But $$\lambda = 1 \cdot \lambda$$ and $$-\lambda - 2 = s_\alpha \cdot \lambda$$, and $${\mathcal{Z}}({\mathfrak{g}}) = \left\langle{c}\right\rangle \mathrel{\vcenter{:}}={\mathbb{C}}[c]$$ as an algebra. So these characters are equal iff $$\mu = w\cdot \lambda$$ for some $$w\in W$$.
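The $${\mathfrak{sl}}_2$$ computation above can be redone symbolically; this sketch just checks the $$\rho$$-shift, the $$W$$-invariance of $$\psi(c)$$, and the character equality with sympy.

```python
import sympy as sp

lam, mu, h = sp.symbols('lam mu h')

# psi(c) = tau_rho(xi(c)): shift h -> h - 1 in xi(c) = h^2 + 2h
psi_c = sp.expand((h - 1)**2 + 2*(h - 1))
assert psi_c == h**2 - 1
assert psi_c.subs(h, -h) == psi_c   # W-invariant, since s_alpha(h) = -h

# chi_lam(c) = (lam + 1)^2 - 1; solve chi_lam(c) = chi_mu(c) for mu
chi = lambda t: (t + 1)**2 - 1
assert set(sp.solve(sp.Eq(chi(lam), chi(mu)), mu)) == {lam, -lam - 2}
```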

# Section 1.10: Harish-Chandra’s Theorem

Goal: prove the converse of the previous theorem.

Theorem (Harish-Chandra)

Let $$\psi: {\mathcal{Z}}({\mathfrak{g}}) \to S({\mathfrak{h}})$$ be the twisted HC homomorphism. Then

1. $$\psi$$ is an isomorphism of $${\mathcal{Z}}({\mathfrak{g}})$$ onto $$S({\mathfrak{h}})^W$$.

2. For all $$\lambda, \mu \in {\mathfrak{h}}^\vee$$, $$\chi_\lambda = \chi_\mu$$ iff $$\mu = w\cdot \lambda$$ for some $$w\in W$$.

3. Every central character $$\chi: {\mathcal{Z}}({\mathfrak{g}}) \to {\mathbb{C}}$$ is a $$\chi_\lambda$$.

Proof (of (a))

Relies heavily on the Chevalley Restriction Theorem (which we won’t prove here).

Initially we have a restriction map on polynomial functions $$\theta: P({\mathfrak{g}}) \to P({\mathfrak{h}})$$. We identified $$P({\mathfrak{g}}) = S({\mathfrak{g}}^\vee)$$, the formal polynomials on $${\mathfrak{g}}^\vee$$. However, for $${\mathfrak{g}}$$ semisimple, we can identify $$S({\mathfrak{g}}^\vee) \cong S({\mathfrak{g}})$$ via the Killing form.

By the Chevalley Restriction Theorem, $$\theta: S({\mathfrak{g}})^G \to S({\mathfrak{h}})^W$$ is an isomorphism, where the subgroup $$G \leq \operatorname{Aut}({\mathfrak{g}})$$ is the adjoint group generated by $$\left\{{\exp(\operatorname{ad}_x) {~\mathrel{\Big|}~}x \text{ is nilpotent}}\right\}$$.

It turns out that $$S({\mathfrak{g}})^G$$ is very close to $${\mathcal{Z}}({\mathfrak{g}})$$ – it is the associated graded of a natural filtration on $${\mathcal{Z}}({\mathfrak{g}})$$. This is enough to show that $$\psi$$ is a bijection.

Proof (of (b))

We’ll prove the contrapositive of the converse.

Suppose $$W\cdot \lambda \cap W\cdot \mu = \emptyset$$, with both $$\lambda, \mu \in {\mathfrak{h}}^\vee$$. Since these orbits are finite sets, Lagrange interpolation yields a polynomial $$f$$ that is 1 on $$W\cdot \lambda$$ and 0 on $$W\cdot \mu$$. Let $$g = \frac{1}{{\left\lvert {W} \right\rvert}} \sum_{w\in W} w\cdot f$$.

Note: definitely the dot action here, may be a typo in the book.

Then $$g$$ is a $$W\cdot$$ invariant polynomial with the same properties. By part (a), we can remove the $$\rho$$-shift to obtain an isomorphism $$\xi: {\mathcal{Z}}({\mathfrak{g}}) \to S({\mathfrak{h}})^{W\cdot}$$, the $$W\cdot$$ invariant polynomials. Choose $$z\in {\mathcal{Z}}({\mathfrak{g}})$$ such that $$\xi(z) = g$$; then $$\chi_\lambda(z) = \lambda(\xi(z)) = \lambda(g) = g(\lambda) = 1$$, while similarly $$\chi_\mu(z) = 0$$. Thus $$\chi_\lambda \neq \chi_\mu$$.
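In rank one the separating polynomial can be constructed explicitly. This sketch uses sympy's `interpolate` for the Lagrange step and the dot-orbits of $$\lambda = 3$$ and $$\mu = 0$$ as an arbitrary example; it is an illustration of the averaging trick, not part of the proof.

```python
import sympy as sp

t = sp.Symbol('t')
s_dot = lambda u: -u - 2      # rank-one dot action on h^* = C

orbit_lam = [3, s_dot(3)]     # W . 3 = {3, -5}
orbit_mu = [0, s_dot(0)]      # W . 0 = {0, -2}, disjoint from W . 3

# Lagrange interpolation: f = 1 on W.lam, f = 0 on W.mu
pts = [(p, 1) for p in orbit_lam] + [(p, 0) for p in orbit_mu]
f = sp.interpolate(pts, t)

# average over W = {1, s} to make it invariant for the dot action
g = sp.expand((f + f.subs(t, s_dot(t))) / 2)

assert sp.expand(g.subs(t, s_dot(t))) == g        # W.-invariant
assert all(g.subs(t, p) == 1 for p in orbit_lam)  # g = 1 on W.lam
assert all(g.subs(t, p) == 0 for p in orbit_mu)   # g = 0 on W.mu
```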

Proof (of (c))
This follows from some commutative algebra; we won’t say much here. Look at maximal ideals in $${\mathbb{C}}[x, y,\cdots]$$, which correspond to evaluation at points of $${\mathbb{C}}^\ell$$.

$$\hfill\blacksquare$$

Remark
Chevalley actually proved that $$S({\mathfrak{h}})^W \cong {\mathbb{C}}[p_1, \cdots, p_\ell]$$ where the $$p_i$$ are homogeneous polynomials of degrees $$d_1 \leq \cdots \leq d_\ell$$. These numbers satisfy some remarkable properties: $$\prod d_i = {\left\lvert {W} \right\rvert}$$ and $$d_1 = 2$$ (these are called the degrees of $$W$$).

# Section 1.11

Theorem (Category O is Artinian)
Category $${\mathcal{O}}$$ is artinian, i.e. every $$M \in {\mathcal{O}}$$ is Artinian (DCC) and $$\dim \hom_{\mathfrak{g}}(M, N) < \infty$$ for every $$M, N$$.

Recall that $${\mathcal{O}}$$ is known to be Noetherian from an earlier theorem. This will imply that every $$M$$ has a composition/Jordan-Holder series, so we can take composition factors and multiplicities.

Most interesting question: what are the factors/multiplicities of the simple modules and Verma modules?

# Wednesday February 12th

## Infinitesimal Blocks

We’ll break up category $${\mathcal{O}}$$ into smaller subcategories (blocks).

Recall theorem 1.1 (e): $${\mathcal{Z}}({\mathfrak{g}})$$ acts locally finitely on $$M\in {\mathcal{O}}$$, and $$M$$ has a finite filtration with highest weight sections, so $$M$$ should involve only a finite number of central characters $$\chi_\lambda$$ (where $$\lambda \in{\mathfrak{h}}^\vee$$).

Note: an analog of Jordan decomposition works here because of this finiteness condition. This discussion will parallel the Jordan canonical form of a single operator on a finite dimensional $${\mathbb{C}}{\hbox{-}}$$vector space. However, this involves the entire center instead of just scalar matrices, so the analogy is diagonalizing a family of operators simultaneously.

Let $$\chi \in \widehat{\mathcal{Z}}({\mathfrak{g}})$$ and $$M\in {\mathcal{O}}$$, and \begin{align*} M^\chi \mathrel{\vcenter{:}}=\left\{{v\in M {~\mathrel{\Big|}~}~\forall z\in {\mathcal{Z}}({\mathfrak{g}}),~\exists n>0 ~{\text{s.t.}}~(z- \chi(z))^n \cdot v = 0}\right\} \end{align*}

Idea: write \begin{align*} z = \chi(z) \cdot 1 + (z-\chi(z)\cdot 1), \end{align*} where the first term is a scalar operator and the second is (locally) nilpotent on $$M^\chi$$. Thus we can always arrange for $$z$$ to act by a sum of “Jordan blocks”.

Some observations:

• $$M^\chi$$ are $$U({\mathfrak{g}}){\hbox{-}}$$submodules of $$M$$.
• The subspaces $$M^\chi$$ are linearly independent
• $${\mathcal{Z}}({\mathfrak{g}})$$ stabilizes each $$M_\mu$$ since $${\mathcal{Z}}({\mathfrak{g}})$$ and $$U({\mathfrak{h}})$$ are a commuting family of operators on $$M_\mu$$.
• We can write \begin{align*}M_\mu = \bigoplus_{\chi \in \widehat{\mathcal{Z}}({\mathfrak{g}})} (M_\mu \cap M^\chi),\end{align*} and since $$M$$ is generated by a finite sum of weight spaces, $$M = \bigoplus_{\chi \in \widehat{\mathcal{Z}}({\mathfrak{g}})} M^\chi$$.
• By Harish-Chandra’s theorem, every $$\chi$$ is $$\chi_\lambda$$ for some $$\lambda \in {\mathfrak{h}}^\vee$$.
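The linear-algebra analogy can be made concrete: for a single operator, the analogue of $$M^\chi$$ is the generalized eigenspace for the eigenvalue $$\chi$$. A minimal sketch (the matrix and its eigenvalues 2 and 5 are arbitrary choices):

```python
import sympy as sp

# a single operator "z" on C^4 with generalized eigenvalues 2 and 5
Z = sp.Matrix([[2, 1, 0, 0],
               [0, 2, 0, 0],
               [0, 0, 5, 0],
               [0, 0, 0, 5]])
n = Z.shape[0]

# analogue of M^chi: all vectors killed by some power of (z - chi(z))
spaces = {chi: ((Z - chi * sp.eye(n))**n).nullspace() for chi in (2, 5)}

# the generalized eigenspaces give a direct sum decomposition M = ⊕ M^chi
basis = sp.Matrix.hstack(*(spaces[2] + spaces[5]))
assert basis.rank() == n

# on M^chi, z = chi·id + (nilpotent): the "Jordan block" picture
for chi, vs in spaces.items():
    for v in vs:
        assert ((Z - chi * sp.eye(n))**n) * v == sp.zeros(n, 1)
```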

Let $${\mathcal{O}}_\chi$$ be the full subcategory of modules $$M$$ such that $$M = M^\chi$$; we refer to this as a block.

Note: full subcategory means keep all of the hom sets.

Proposition (O Factors into Blocks, Indecomposables/Highest Weight Modules Lie in a Single Block)
$${\mathcal{O}}= \bigoplus_{\lambda \in {\mathfrak{h}}^\vee} {\mathcal{O}}_{\chi_\lambda}$$. Each indecomposable module in $${\mathcal{O}}$$ lies in a unique $${\mathcal{O}}_\chi$$. In particular, any highest weight module of highest weight $$\lambda$$ lies in $${\mathcal{O}}_{\chi_\lambda}$$.

Thus we can reduce to studying $${\mathcal{O}}_{\chi_\lambda}$$.

Remark: $${\mathcal{O}}_{\chi_\lambda}$$ has a finite number of simple modules $$\left\{{L(w\cdot \lambda) {~\mathrel{\Big|}~}w\in W}\right\}$$ and a finite number of Verma modules $$\left\{{M(w\cdot \lambda) {~\mathrel{\Big|}~}w\in W}\right\}$$.

## Blocks

Let $${\mathcal{C}}$$ be a category which is artinian and noetherian, with $$L_1, L_2$$ simple modules. We say $$L_1 \sim L_2$$ if there exists a non-split extension \begin{align*}0 \to L_1 \to M \to L_2 \to 0,\end{align*} i.e. $$\operatorname{Ext}^1_{\mathcal{O}}(L_2, L_1) \neq 0$$; equivalently, such an $$M$$ is indecomposable. We then extend $$\sim$$ to be reflexive/symmetric/transitive to obtain an equivalence relation.

$$L_1$$ ends up being the socle here.

This partitions the simple modules in $${\mathcal{C}}$$ into blocks $${\mathcal{B}}$$. More generally, we say $$M\in {\mathcal{C}}$$ belongs to $${\mathcal{B}}$$ iff all of the composition factors of $$M$$ belong to $${\mathcal{B}}$$. Although not obvious, there are no nontrivial extensions between modules in different blocks. Thus each module (generally, each object) $$M\in {\mathcal{C}}$$ decomposes as a direct sum of submodules (subobjects), each belonging to a single block.

Question: Is $${\mathcal{O}}_\chi$$ a block of $${\mathcal{O}}$$? The answer is: not always. Since each indecomposable module in $${\mathcal{O}}$$ lives in a single $${\mathcal{O}}_\chi$$, it’s clear from the definition that each block is contained in a single infinitesimal block $${\mathcal{O}}_\chi$$.

The block containing $$L_1, L_2$$ will be contained in the same infinitesimal block, and continuing the composition series puts all composition factors in a single block.

Proposition (Integral Weights Yield Simple Blocks)
If $$\lambda$$ is an integral weight, so $$\lambda \in \Lambda$$, then $${\mathcal{O}}_{\chi_\lambda}$$ is a (single) block of $${\mathcal{O}}$$.
Proof

It suffices to show that all $$L(w\cdot \lambda)$$ for $$w\in W$$ lie in a single block. We’ll induct on the length of $$w$$. Start with $$w = s_\alpha$$ for some $$\alpha\in \Delta$$. Let $$\mu = s_\alpha \cdot \lambda$$. If $$\mu = \lambda$$, i.e. $$s_\alpha$$ stabilizes $$\lambda$$, then we’re done.

Otherwise, assume WLOG $$\mu < \lambda$$ in the partial order, using the fact that $$\lambda \in \Lambda$$. (The difference between these is just an integer multiple of $$\alpha$$.)

By proposition 1.4, we have a nonzero map $$\phi: M(\mu) \to M(\lambda)$$.

Then $$\phi$$ induces a map $$L(\mu) \xrightarrow{\mkern 1.5mu\overline{\mkern-1.5mu\phi\mkern-1.5mu}\mkern 1.5mu} M(\lambda)/N$$, where the codomain here is a highest weight module with quotient $$L(\lambda)$$. Since highest weight modules are indecomposable and thus lie in a single block, $$L(\mu)$$ and $$L(\lambda)$$ are in the same block.

Note that if $$v^+$$ generates $$M(\lambda)$$, then $$v^+ + N$$ generates the quotient.

Now inducting on $$\ell(w)$$, iterating this argument yields all $$L(w\cdot \lambda)$$ (as $$w$$ varies) in the same block.

Example

This isn’t true for non-integral weights. Let $${\mathfrak{g}}= {\mathfrak{sl}}(2, {\mathbb{C}})$$ with $$\lambda \in {\mathbb{R}}\setminus {\mathbb{Z}}$$ and $$\lambda > -1$$. Then

\begin{align*} \mu &= s_\alpha \cdot \lambda \\ &= -\lambda - 2 \\ &<_{{\mathbb{R}}} -1 \end{align*}

with the usual ordering on $${\mathbb{R}}$$, but $$\mu$$ and $$\lambda$$ are incomparable in the ordering on $${\mathfrak{h}}^\vee$$: we have $$\lambda - \mu = 2\lambda + 2$$, but $$\alpha \equiv 2$$, and thus $$\lambda$$ and $$\mu$$ don’t differ by an integer multiple of $$\alpha$$, i.e. an element of $$2{\mathbb{Z}}$$.

Thus $$\mu, \lambda$$ are in different cosets of $${\mathbb{Z}}\Phi = \Lambda_r$$ in $${\mathfrak{h}}^\vee$$. However, $$M(\lambda), M(\mu)$$ are simple since $$\lambda, \mu$$ are not non-negative integers.
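The coset computation is elementary; a quick numeric check with $$\lambda = 1/2$$ (any non-integral real $$\lambda > -1$$ works, and the choice here is arbitrary):

```python
from fractions import Fraction

lam = Fraction(1, 2)      # a non-integral real weight with lam > -1
mu = -lam - 2             # s_alpha . lam

diff = lam - mu
assert diff == 2*lam + 2
# mu and lam would be comparable only if lam - mu lay in Z*alpha = 2Z;
# here (lam - mu)/2 = lam + 1 is not even an integer:
assert (diff / 2).denominator != 1
```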

By exercise 1.13, there can be no nontrivial extension, so they’re in different homological blocks but in the same $${\mathcal{O}}_{\chi_\lambda}$$ since $$\mu = s_\alpha \cdot \lambda$$. So this infinitesimal block splits into multiple homological blocks.

Friday: 1.14 and 1.15.

# Friday February 14th

Recall that we have a decomposition \begin{align*}{\mathcal{O}}= \bigoplus_{\chi \in \widehat{\mathcal{Z}}({\mathfrak{g}})} {\mathcal{O}}_\chi\end{align*} into infinitesimal blocks, where $${\mathcal{O}}_0 \mathrel{\vcenter{:}}={\mathcal{O}}_{\chi_0}$$ is the principal block. To $$0\in{\mathfrak{h}}^\vee$$ we can associate $$\chi_0$$, $$M(0)$$, and $$L(0) = {\mathbb{C}}$$, the trivial module for $${\mathfrak{g}}$$.

## 1.14 – 1.15: Formal Characters

Some background from the finite dimensional representation theory of a finite group $$G$$ over $${\mathbb{C}}$$: the hope is to find matrices for each element of $$G$$, but this isn’t basis invariant. Instead, we take traces of these matrices, which is less data and is basis-independent. This is referred to as the character of the representation, and in nice situations, the characters determine the irreducible representations.

For a semisimple Lie algebra $${\mathfrak{g}}$$ and a finite dimensional representation $$M$$, it’s enough to keep track of weight multiplicities; when $${\mathfrak{g}}$$ is the Lie algebra associated to a compact Lie group $$G$$, the characters can be recovered from this data. So the data of all pairs $$(\lambda, \dim M_\lambda)$$ for $$\lambda \in {\mathfrak{h}}^\vee$$ suffices. To track this information, we introduce a formal character.

Remark: If $$G$$ is a group and $$k$$ is a commutative ring, $$kG$$ is the group ring of $$G$$. This has the following properties:

• $$\sum a_i g_i + \sum b_i g_i = \sum(a_i + b_i) g_i$$
• $$\qty{ \sum a_i g_i } \qty{ \sum b_i g_i } = \sum_{i, j} a_i b_j g_i g_j$$

Let $${\mathbb{Z}}\Lambda$$ be the integral group ring of the weight lattice. Since $$\Lambda$$ is an abelian group, additive notation inside the group ring would be confusing, so we write $$\Lambda$$ multiplicatively and introduce symbols $$e(\lambda)$$ for $$\lambda \in {\mathfrak{h}}^\vee$$, where $$e(\lambda) e(\mu) = e(\lambda + \mu)$$. For $$M$$ a finite dimensional $${\mathfrak{g}}{\hbox{-}}$$module, the formal character of $$M$$ is given by

\begin{align*} \operatorname{ch}M = \sum_{\lambda\in \Lambda} \qty{ \dim M_\lambda } e(\lambda) \quad\in {\mathbb{Z}}\Lambda .\end{align*}

This satisfies

• $$\operatorname{ch}(M\oplus N) = \operatorname{ch}(M) + \operatorname{ch}(N)$$
• $$\operatorname{ch}(M\otimes N) = \operatorname{ch}(M)\operatorname{ch}(N)$$
• For $$\operatorname{ch}(M) = \sum a_\mu e(\mu)$$ and $$\operatorname{ch}(N) = \sum b_\nu e(\nu)$$, we have \begin{align*}\operatorname{ch}(M) \operatorname{ch}(N) = \sum_{\lambda} \qty{ \sum_{\mu + \nu = \lambda} a_\mu b_\nu } e(\lambda)\end{align*}

By Weyl’s complete reducibility theorem, any finite dimensional module decomposes into a direct sum of simple modules. Thus it suffices to determine the characters of the simple modules $$L(\lambda)$$ for $$\lambda \in \Lambda^+$$, corresponding to dominant integral weights. Then we can reconstruct $$\operatorname{ch}(M)$$ from the $$\operatorname{ch}L(\lambda)$$ for $$M\in {\mathcal{O}}$$.

Specifying the weight-space dimensions is equivalent to specifying a function $$\operatorname{ch}_M: {\mathfrak{h}}^\vee\to {\mathbb{Z}}^+$$ where $$\operatorname{ch}_M(\lambda) = \dim M_\lambda$$. The analogue of $$e(\lambda)$$ in this setting is the characteristic function $$e_\lambda$$ where $$e_\lambda(\mu) = \delta_{\lambda \mu}$$ for $$\mu \in {\mathfrak{h}}^\vee$$. We can thus write the function

\begin{align*} \operatorname{ch}_M = \sum_{\lambda \in {\mathfrak{h}}^\vee} \qty{ \dim M_\lambda } e_\lambda .\end{align*}

When $$\dim M < \infty$$, $$\operatorname{ch}_M$$ has finite support, although we generally don’t have this in $${\mathcal{O}}$$. In this setting, multiplication of formal characters corresponds to convolution of functions, i.e. \begin{align*}(f\ast g)(\lambda) = \sum_{\mu + \nu = \lambda} f(\mu) g(\nu).\end{align*} Define \begin{align*} {\mathcal{X}}= \left\{{f: {\mathfrak{h}}^\vee\to {\mathbb{Z}}{~\mathrel{\Big|}~}{\operatorname{supp}}(f) \subset \cup_{i\leq n} \qty{ \lambda_i - {\mathbb{Z}}^+ \Phi^+ } ~~\text{ for some } \lambda_1, \cdots, \lambda_n \in {\mathfrak{h}}^\vee}\right\} \end{align*}

Idea: this is a “cone” below some weights.

This makes $${\mathcal{X}}$$ into a $${\mathbb{Z}}{\hbox{-}}$$module with a well-defined convolution, thus $${\mathcal{X}}$$ is a commutative ring where

• $$e_\lambda \in {\mathcal{X}}$$ for all $$\lambda$$
• $$e_0 = 1$$
• $$e_\lambda \ast e_\mu = e_{\lambda + \mu}$$.

If $$M\in {\mathcal{O}}$$, then $$\operatorname{ch}_M \in {\mathcal{X}}$$ by axiom O5 (local finiteness).

Example: $$\operatorname{ch}L(\lambda) = e(\lambda) + \sum_{\mu < \lambda} m_{\lambda \mu} e(\mu)$$, where $$m_{\lambda \mu} = \dim L(\lambda)_{\mu} \in {\mathbb{Z}}^+$$.

Definition (The Subgroup $${\mathcal{X}}_0$$)
Let $${\mathcal{X}}_0$$ be the additive subgroup of $${\mathcal{X}}$$ generated by all $$\operatorname{ch}M$$ for $$M \in {\mathcal{O}}$$.
Proposition (Additivity of Characters, Correspondence with K(O) )
1. If $$0 \to M' \to M \to M'' \to 0$$ is a SES in $${\mathcal{O}}$$, then $$\operatorname{ch}M = \operatorname{ch}M' + \operatorname{ch}M''$$.

2. There is a 1-to-1 correspondence

\begin{align*} {\mathcal{X}}_0 &\iff K({\mathcal{O}}) \\ \operatorname{ch}M &\iff [M] ,\end{align*}

where $$K$$ is the Grothendieck group.

3. If $$M\in {\mathcal{O}}$$ and $$\dim L < \infty$$, then $$\operatorname{ch}(L\otimes M) = \operatorname{ch}L \ast \operatorname{ch}M$$.

Remark: (a) implies that $$\operatorname{ch}M$$ is the sum of the formal characters of its composition factors with multiplicities. Thus \begin{align*} \operatorname{ch}M = \sum_{L \text{ simple }} [M:L] ~\operatorname{ch}L .\end{align*}

Proof (of a)
Use the fact that $$\dim M_\lambda = \dim M'_\lambda + \dim M''_\lambda$$ for each $$\lambda \in {\mathfrak{h}}^\vee$$.
Proof (of b)
Check that the obvious maps are well-defined and mutually inverse.
Proof (of c)
Because of the module structure we’ve put on the tensor product $$(L \otimes M)_\lambda = \sum_{\mu + \nu = \lambda} L_\mu \otimes M_\nu$$.

Remark: The natural action of $$W$$ on $$\Lambda$$ or on $${\mathfrak{h}}^\vee$$ extends to $${\mathbb{Z}}\Lambda$$ and $${\mathcal{X}}$$ if we define

\begin{align*} w\cdot e(\lambda) \coloneqq e(w\lambda) \quad w\in W,~~\lambda \in \Lambda \text{ or } {\mathfrak{h}}^\vee .\end{align*}

If $$\lambda \in \Lambda^+$$, then $$w( \operatorname{ch}L(\lambda) ) = \operatorname{ch}L(\lambda)$$ since $$\dim L(\lambda)_\mu = \dim L(\lambda)_{w\mu}$$. Thus the characters of simple finite-dimensional modules are $$W{\hbox{-}}$$invariant.
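For $${\mathfrak{sl}}(2)$$ this $$W{\hbox{-}}$$invariance is easy to verify by machine; a small sketch (the function names and integer-weight convention are ours, not the lecture's):

```python
# For sl(2), W = {1, s} acts on integral weights by s(mu) = -mu, and
# the simple module L(n) has weights n, n-2, ..., -n, each of
# multiplicity one.

def ch_L(n):
    """Formal character of the simple sl(2)-module L(n) as a dict."""
    return {mu: 1 for mu in range(n, -n - 1, -2)}

def s_action(ch):
    """Action of the nontrivial Weyl group element on a character."""
    return {-mu: m for mu, m in ch.items()}

print(ch_L(4))  # {4: 1, 2: 1, 0: 1, -2: 1, -4: 1}
print(s_action(ch_L(4)) == ch_L(4))  # True: the character is W-invariant
```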

## 1.16: Formal Characters of Verma Modules

1: We have a similar formula

\begin{align*} \operatorname{ch}M(\lambda) = \operatorname{ch}L(\lambda) + \sum_{\mu < \lambda} a_{\lambda \mu} \operatorname{ch}L(\mu) \\ \quad \text{ with } a_{\lambda \mu} \in {\mathbb{Z}}^+ \text{ and } a_{\lambda \mu} = [M(\lambda): L(\mu)] .\end{align*}

This all happens in a single block of $${\mathcal{O}}$$, which has finitely many simple and Verma modules.
In fact, the sum will be over $$\left\{{ \mu \in W\cdot \lambda {~\mathrel{\Big|}~}\mu < \lambda}\right\}$$. But computing $$L(\mu)$$ is difficult in general.

Since the set of weights $$W\cdot \lambda$$ is finite, we can totally order it in a way that’s compatible with the partial order on $${\mathfrak{h}}^\vee$$ (so $$\leq$$ in the partial order implies $$\leq$$ in the total order). If we index the Verma modules by the weights $$\mu_i$$ along the columns and the simple modules along the rows, the multiplicity matrix is upper triangular with 1s on the diagonal. Since it is unipotent, it can be inverted, and the inverse has the same upper triangular form.
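The inversion of a unipotent multiplicity matrix can be carried out by back substitution; a sketch (integer matrices as lists of rows, our convention):

```python
# Invert an upper triangular integer matrix with 1s on the diagonal.
# Such a matrix is unipotent, so its inverse exists over Z and is
# again unipotent upper triangular.

def invert_unipotent(A):
    n = len(A)
    B = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j - 1, -1, -1):
            # Choose B[i][j] so that (A @ B)[i][j] = 0; the entries
            # B[k][j] for k > i were already computed.
            B[i][j] = -sum(A[i][k] * B[k][j] for k in range(i + 1, j + 1))
    return B

# The 2x2 sl(2) example appearing below: [[1, 1], [0, 1]].
print(invert_unipotent([[1, 1], [0, 1]]))  # [[1, -1], [0, 1]]
```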

2: We can write \begin{align*} \operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda) + \sum_{\mu < \lambda,~\mu \in W\cdot \lambda} b_{\lambda \mu} \operatorname{ch}M(\mu) \quad b_{\lambda \mu} \in {\mathbb{Z}} \end{align*} This expresses the character in terms of Verma modules, which are easier to compute.

Next time: formulas for the characters

# Monday February 17th

## Character Formulas

Last time: The second character formula (equation (2)),

\begin{align*} \operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda) + \sum_{\mu < \lambda, ~~ \mu \in W\cdot \lambda} b_{\lambda, \mu} ~\operatorname{ch}M(\mu) .\end{align*}

Note that $$b_{\lambda, \mu} \in {\mathbb{Z}}$$, and this formula comes from inverting the previous one.

Holy grail: characters of simple modules!

We can write $$M(\lambda) \cong U({\mathfrak{n}}^-) \otimes_{\mathbb{C}}{\mathbb{C}}_\lambda$$ as an $${\mathfrak{h}}{\hbox{-}}$$module. Define $$p: {\mathfrak{h}}^\vee\to {\mathbb{Z}}$$ where $$p(\gamma)$$ is the number of tuples $$(t_\beta)_{\beta\in\Phi^+}$$ with $$t_\beta \in {\mathbb{Z}}^+$$ and $$\gamma = - \sum_{\beta \in \Phi^+} t_\beta \beta$$. We have $${\operatorname{supp}}(p) = - {\mathbb{Z}}^+ \Phi^+$$, which is something like a negative quadrant of the lattice.

The function $$p$$ is essentially the Kostant partition function. The advantage here is that $$p \in {\mathcal{X}}$$ (defined last time: functions whose support lies below finitely many weights).

Observation: $$p = \operatorname{ch}_{M(0)}$$ since $$U({\mathfrak{n}}^-) \otimes{\mathbb{C}}_{\lambda = 0}$$ has PBW basis \begin{align*} \left\{{ \prod_{\beta\in\Phi^+} y_\beta^{t_\beta} \otimes 1_{\lambda = 0} {~\mathrel{\Big|}~}t_\beta \in {\mathbb{Z}}^+ }\right\}. \end{align*}

Example: Let $${\mathfrak{g}}= {\mathfrak{sl}}(3)$$, then $$\Phi^+ = \left\{{\alpha_1, \alpha_2, \alpha_1 + \alpha_2}\right\}$$. Then $$\gamma = -\qty{\alpha_1 + 2\alpha_2}$$ corresponds to $$(1,2,0), (0,1,1)$$ so $$p(\gamma) = 2$$. If $$\gamma = -\qty{2\alpha_1 + 2\alpha_2}$$, this corresponds to $$(2,2,0), (1,1,1), (0,0,2)$$ so $$p(\gamma) = 3$$.
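Both counts are easy to verify by brute force; a sketch (writing roots as coefficient pairs in the simple-root basis is our convention):

```python
# Brute-force Kostant partition function for sl(3).
from itertools import product

pos_roots = [(1, 0), (0, 1), (1, 1)]  # alpha1, alpha2, alpha1 + alpha2

def kostant_p(c1, c2):
    """Number of ways to write c1*alpha1 + c2*alpha2 as a Z^+-combination
    of the positive roots, i.e. p(gamma) for gamma = -(c1*a1 + c2*a2)."""
    bound = max(c1, c2) + 1  # no coefficient can exceed max(c1, c2)
    count = 0
    for t in product(range(bound), repeat=len(pos_roots)):
        s1 = sum(ti * r[0] for ti, r in zip(t, pos_roots))
        s2 = sum(ti * r[1] for ti, r in zip(t, pos_roots))
        if (s1, s2) == (c1, c2):
            count += 1
    return count

print(kostant_p(1, 2))  # 2, from (1,2,0) and (0,1,1)
print(kostant_p(2, 2))  # 3, from (2,2,0), (1,1,1), (0,0,2)
```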

Note: $$p(\gamma)$$ is just the number of ways of writing $$-\gamma$$ as a $${\mathbb{Z}}^+{\hbox{-}}$$linear combination of positive roots.

In general, $$\dim M(0)_\gamma = p(\gamma)$$.

Proposition (Characters as Convolution Products)
For any $$\lambda \in {\mathfrak{h}}^\vee$$, we have $$\operatorname{ch}_{M(\lambda)} = p\ast e_\lambda$$, taking the convolution product.

In particular, $$\operatorname{ch}_{M(0)} = p$$.

Proof (of Proposition)
We have the following computation: \begin{align*} (p\ast e_\lambda)(\lambda+\gamma) &= p(\gamma) e_\lambda(\lambda) \\ &= p(\gamma) 1 \\ &= p(\gamma) \\ &= \dim M(\lambda)_{\lambda + \gamma} \quad\text{ as a weight space } .\end{align*}

Note that we can also write equation (2) as

\begin{align*} \operatorname{ch}L(\lambda) = \sum_{w\cdot \lambda \leq \lambda} b_{\lambda, w} \operatorname{ch}M(w\cdot \lambda) .\end{align*}

Here $$b_{\lambda, w} \in {\mathbb{Z}}$$ and in fact $$b_{\lambda, 1} = 1$$.

Example: Let $${\mathfrak{g}}= {\mathfrak{sl}}(2)$$. We know

\begin{align*} \operatorname{ch}M(\lambda) &= \operatorname{ch}L(\lambda) + \operatorname{ch}L(s_\alpha \cdot \lambda) \\ \operatorname{ch}M(s_\alpha \cdot \lambda) &= \operatorname{ch}L(s_\alpha \cdot \lambda) .\end{align*}

We can think of this pictorially as the ‘head’ on top of the socle:

\begin{align*} M(\lambda) = \frac{L(\lambda)}{L(s_\alpha \cdot \lambda)} .\end{align*}

The formula above corresponds to the matrix \begin{align*} \left[\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right] \end{align*}

We can invert the formula to get equation (2), which corresponds to inverting this matrix:

\begin{align*} \operatorname{ch}L(\lambda) &= \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_\alpha \cdot \lambda) \\ \operatorname{ch}L(s_\alpha \cdot \lambda) &= \operatorname{ch}M(s_\alpha \cdot \lambda) .\end{align*}
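Substituting the first pair of formulas confirms the inversion directly:

\begin{align*} \operatorname{ch}M(\lambda) - \operatorname{ch}M(s_\alpha \cdot \lambda) = \qty{ \operatorname{ch}L(\lambda) + \operatorname{ch}L(s_\alpha \cdot \lambda) } - \operatorname{ch}L(s_\alpha \cdot \lambda) = \operatorname{ch}L(\lambda) .\end{align*}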

Note that the coefficients $$b_{\lambda, w} \in \left\{{0, \pm 1}\right\}$$ in this equation are independent of $$\lambda \in \Lambda^+$$.

If $$\lambda \not\in\Lambda^+$$, then $$\operatorname{ch}L(\lambda) = \operatorname{ch}M(\lambda)$$ and $$b_{\lambda, 1} = 1, b_{\lambda, s_\alpha} = 0$$ are again independent of $$\lambda \in {\mathfrak{h}}^\vee\setminus \Lambda^+$$.

Question: To what extent do $$b_{\lambda, w}$$ depend on $$\lambda$$? The answer is seemingly “not much”.

## Category $${\mathcal{O}}$$ Methods

Note: skipping chapter 2 since we’re focusing on infinite dimensional representations.

### Hom and Ext

Recall that $$\hom({\,\cdot\,}, {\,\cdot\,})$$ is left exact but not exact, and is either covariant or contravariant depending on which variable is fixed. So taking $$\hom$$ of a SES yields a LES involving the derived functors $$\operatorname{Ext}^n$$. Convention: $$\operatorname{Ext}^0 \mathrel{\vcenter{:}}=\hom$$ and $$\operatorname{Ext}^1 \mathrel{\vcenter{:}}=\operatorname{Ext}$$.

Let $$A, C$$ be $$U({\mathfrak{g}}){\hbox{-}}$$modules. Consider two short exact sequences

\begin{align*} 0 &\to A \to B \to C \to 0 \\ 0 &\to A \to B' \to C \to 0 .\end{align*}

where $$B, B'$$ are extensions of $$C$$ by $$A$$.

We say two such sequences are equivalent iff there is an isomorphism making this diagram commute:

The set $$\operatorname{Ext}_{U({\mathfrak{g}})}(C, A)$$ of equivalence classes of extensions is a group under an operation called “Baer sum” (see Wikipedia) in which the identity is the class of the split SES \begin{align*} 0 \to A \to A\oplus C \to C \to 0. \end{align*}

It turns out that the first right-derived functor of $$\hom$$ defined using projective resolutions, namely $$\operatorname{Ext}^1$$, is isomorphic to $$\operatorname{Ext}$$. In particular, each SES leads to a pair of LESs given by applying $$\hom({\,\cdot\,}, D)$$ and $$\hom(E, {\,\cdot\,})$$ for $$D, E \in U({\mathfrak{g}}){\hbox{-}}$$mod.

Warning: Even if $$A, C\in {\mathcal{O}}$$, there’s no guarantee $$B\in {\mathcal{O}}$$ for $$B$$ an extension. In this case, we define $$\operatorname{Ext}_{\mathcal{O}}(C, A)$$ to be only those extensions lying in $${\mathcal{O}}$$.

Proposition (Homs and Exts for Vermas and Quotients)

Let $$\lambda, \mu \in {\mathfrak{h}}^\vee$$.

1. If $$M$$ is a highest weight module of highest weight $$\mu$$ and $$\lambda \not< \mu$$, then $$\operatorname{Ext}_{\mathcal{O}}(M(\lambda), M) = 0$$. Contrapositive: a nontrivial extension forces the strict inequality $$\lambda < \mu$$. In particular, $$\operatorname{Ext}_{\mathcal{O}}(M(\lambda), X) = 0$$ for $$X = L(\lambda), M(\lambda)$$.

2. If $$\mu \leq \lambda$$, then $$\operatorname{Ext}_{\mathcal{O}}(M(\lambda), L(\mu)) = 0$$.

3. If $$\mu < \lambda$$, then $$\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(\mu)) \cong \hom_{\mathcal{O}}(N(\lambda), L(\mu))$$. This is useful, since Homs can be easier to compute: one only needs the radical structure of $$N(\lambda)$$, i.e. its head.

4. $$\operatorname{Ext}_{\mathcal{O}}(L(\lambda), L(\lambda)) = 0$$.

Proof (of (a))

Given an extension $$0 \to M \xrightarrow{f} E \xrightarrow{g} M(\lambda) \to 0$$, where $$M$$ is a highest weight module of highest weight $$\mu$$ with $$\lambda \not< \mu$$, we want to show it splits.

Claim: Let $$v^+$$ be a maximal vector of $$M(\lambda)$$, let $$v$$ be its preimage under $$g$$, then $$v$$ is a maximal vector of weight $$\lambda$$ in $$E$$. For $$x\in {\mathfrak{n}}$$, we can think of the RHS as a quotient and identify

\begin{align*} x\cdot v + M &= x\cdot (v+M) \\ &= x\cdot v^+ \\ &= 0 \\ &= 0 + M ,\end{align*}

and for these to be equal, we need $$x\cdot v \in M$$. But $$x\cdot v$$ has weight $$> \lambda$$, while every weight of $$M$$ is $$\leq \mu$$; a nonzero $$x\cdot v$$ would thus force $$\lambda < \mu$$, contradicting $$\lambda \not< \mu$$. So we must have $$x\cdot v = 0\in E$$, and $$v$$ is a maximal vector.

It’s also the case that $$U({\mathfrak{n}}^-)$$ acts freely on $$v$$, since it acts freely on its image in the quotient $$M(\lambda)$$. So $$v$$ generates a submodule $$\left\langle{v}\right\rangle \leq E$$ isomorphic to $$M(\lambda)$$. This defines a splitting $$h: M(\lambda) \to E$$ (because of the freeness of this action) given by $$h(v^+) = v$$.

Proof (of (b))
Follows from (a).
Proof (of (c))

Look at the SES $$0\to N(\lambda) \to M(\lambda) \to L(\lambda) \to 0$$. Apply $$\hom_{\mathcal{O}}({\,\cdot\,}, L(\mu))$$ to get the LES

\begin{align*} \cdots &\to \hom_{\mathcal{O}}(M(\lambda), L(\mu)) \to \hom_{\mathcal{O}}( N(\lambda), L(\mu) ) \\ &\to \operatorname{Ext}_{\mathcal{O}}( L(\lambda), L(\mu) ) \to \operatorname{Ext}_{\mathcal{O}}( M(\lambda), L(\mu) ) \to \cdots \end{align*}

Since $$\mu < \lambda$$ and $$L(\lambda)$$ is the unique simple quotient of $$M(\lambda)$$, the first $$\hom$$ is zero. Similarly, the last $$\operatorname{Ext}_{\mathcal{O}}$$ is zero by (b), so the middle map is an isomorphism.

Proof (of (d))
Replace $$\mu$$ by $$\lambda$$ in the LES; now term 2 above, $$\hom_{\mathcal{O}}(N(\lambda), L(\lambda))$$, is zero since every weight of $$N(\lambda)$$ is $$< \lambda$$. Term 4 is zero by (b), and thus term 3 is zero.

Next section: duality in category $${\mathcal{O}}$$.

# Monday February 24th

## Antidominant Weights

Recall that for $$\lambda \in {\mathfrak{h}}^\vee$$, we can associate $$\Phi_{[\lambda]}$$ and $$W_{[\lambda]}$$ and consider $$W_{[\lambda]} \cdot \lambda$$. When $$\lambda \in \Lambda$$ is integral and $$\mu \in W\lambda \cap\Lambda^+$$, we have $$M(\mu) \to L(\mu)$$ its simple quotient, which is finite-dimensional.

Definition (Antidominant)
$$\lambda \in {\mathfrak{h}}^\vee$$ is antidominant if $$(\lambda + \rho, \alpha^\vee) \not\in {\mathbb{Z}}^{> 0}$$ for all $$\alpha \in \Phi^+$$. Dually, $$\lambda$$ is dominant if $$(\lambda + \rho, \alpha^\vee)\not\in{\mathbb{Z}}^{<0}$$ for all $$\alpha\in\Phi^+$$.

Note that most weights are both dominant and antidominant. Example: take $$\lambda = -\rho$$. We won’t use the dominant condition often.

Remark
For $$\lambda \in {\mathfrak{h}}^\vee$$, $$W\cdot \lambda$$ and $$W_{[\lambda]}\cdot \lambda$$ contain at least one antidominant weight. Let $$\mu$$ be minimal in either set with respect to the usual ordering on $${\mathfrak{h}}^\vee$$. If $$(\mu + \rho, \alpha^\vee) \in {\mathbb{Z}}^{>0}$$ for some $$\alpha > 0$$, then $$s_\alpha \cdot \mu < \mu$$, which is a contradiction. So any minimal weight will be antidominant.
Proposition (Equivalent Definitions of Antidominant)

Fix $$\lambda\in{\mathfrak{h}}^\vee$$ as well as $$W_{[\lambda]}, \Phi_{[\lambda]}$$, and define $$\Phi^+_{[\lambda]} \mathrel{\vcenter{:}}=\Phi_{[\lambda]} \cap\Phi^+ \supset \Delta_{[\lambda]}$$. TFAE:

1. $$\lambda$$ is antidominant.
2. $$(\lambda + \rho, \alpha^\vee) \leq 0$$ for all $$\alpha\in \Delta_{[\lambda]}$$.
3. $$\lambda \leq s_\alpha \cdot \lambda$$ for all $$\alpha \in \Delta_{[\lambda]}$$.
4. $$\lambda \leq w\cdot \lambda$$ for all $$w\in W_{[\lambda]}$$.

In particular, there is a unique antidominant weight in $$W_{[\lambda]} \cdot \lambda$$.

Proof (a implies b)
For $$\alpha \in \Delta_{[\lambda]} \subset \Phi^+_{[\lambda]}$$, we have $$(\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}$$ by definition of $$\Phi_{[\lambda]}$$, so antidominance forces $$(\lambda + \rho, \alpha^\vee) \leq 0$$.
Proof (b implies a)
Suppose (b) holds and $$(\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}$$ for some $$\alpha\in\Phi^+$$. Then $$\alpha \in \Phi^+ \cap\Phi_{[\lambda]}$$, which is equal to $$\Phi^+_{[\lambda]}$$ by the homework problem. So $$\alpha \in {\mathbb{Z}}^+ \Delta_{[\lambda]}$$, and thus (claim) $$(\lambda + \rho, \alpha^\vee) \leq 0$$ by (b). Why? Replace $$\alpha^\vee$$ with a nonnegative combination of the $$\alpha_i^\vee$$, for which $$(\lambda + \rho, \alpha_i^\vee) \leq 0$$, and sum.
Proof (b iff c)
$$s_\alpha \cdot \lambda = \lambda - (\lambda + \rho, \alpha^\vee)\alpha$$.
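Spelling this out via the definition of the dot action $$w\cdot\lambda = w(\lambda + \rho) - \rho$$:

\begin{align*} s_\alpha \cdot \lambda = s_\alpha(\lambda + \rho) - \rho = \lambda + \rho - (\lambda + \rho, \alpha^\vee)\alpha - \rho = \lambda - (\lambda + \rho, \alpha^\vee)\alpha ,\end{align*}

so for $$\alpha \in \Delta_{[\lambda]}$$ (a positive root), $$\lambda \leq s_\alpha \cdot \lambda$$ iff $$(\lambda + \rho, \alpha^\vee) \leq 0$$.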
Proof (d implies c)
Trivial due to definitions.
Proof (c implies d)

Use induction on $$\ell(w)$$ in $$W_{[\lambda]}$$. Assume (c), and hence (b), and consider $$\ell(w) = 0 \implies w = 1$$. For the inductive step, if $$\ell(w) > 0$$, write $$w = w' s_\alpha$$ in $$W_{[\lambda]}$$ with $$\alpha \in \Delta_{[\lambda]}$$. Then $$\ell(w') = \ell(w) - 1$$, and by Proposition 0.3.4, $$w(\alpha) < 0$$.

We can then write \begin{align*} \lambda - w\cdot \lambda = (\lambda - w'\cdot \lambda) + (w' \cdot \lambda - w\cdot \lambda) .\end{align*}

The first term is $$\leq 0$$ by the inductive hypothesis. For the second term, note that the dot action is affine rather than linear, but differences still transform linearly, so

\begin{align*} w' \cdot \lambda - w\cdot \lambda &= w\cdot s_\alpha \cdot \lambda - w\cdot \lambda \\ &= w(s_\alpha \cdot \lambda - \lambda) \quad\text{by 1.8b} \\ &= w(-(\lambda+\rho, \alpha^\vee)\alpha) \\ &= -(\lambda + \rho, \alpha^\vee)(w\alpha) ,\end{align*}

which is $$\leq 0$$: here $$-(\lambda + \rho, \alpha^\vee) \geq 0$$ by (b), and $$w\alpha < 0$$, so the product is a nonpositive combination of roots. Combining the two terms gives $$\lambda - w\cdot\lambda \leq 0$$, as desired.

A remark from page 56: Even when $$\lambda \not \in \Lambda$$, we can decompose $${\mathcal{O}}_\chi$$ into subcategories $${\mathcal{O}}_\lambda$$. We then recover $${\mathcal{O}}_\chi$$ as the sum over $${\mathcal{O}}_\lambda$$ for antidominant $$\lambda$$’s in the intersection of the linkage class with cosets of $$\Lambda_r$$. These are the homological blocks.

## Tensoring Verma and Finite Dimensional Modules

First step toward understanding translation functors, which help with calculations.

By Corollary 1.2, we know that every $$N\in {\mathcal{O}}$$ has a filtration with every section being a highest weight module. We will improve this result to show that if $$M$$ is finite-dimensional and $$V$$ is a Verma module, then $$V\otimes M$$ has a filtration whose sections are all Verma modules. This is important for studying projectives in a couple of sections.

Theorem (Sections of Finite-Dimensional Tensor Verma are Verma)
Let $$M$$ be a finite dimensional $$U({\mathfrak{g}}){\hbox{-}}$$module. Then $$T \mathrel{\vcenter{:}}= M(\lambda) \otimes M$$ has a finite filtration with sections $$M(\lambda + \mu)$$ for $$\mu \in \Pi(M)$$, each occurring with multiplicity $$\dim M_\mu$$.
Proof

Use the tensor identity

\begin{align*} \qty{ U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} L} \otimes_{\mathbb{C}}M \cong U({\mathfrak{g}})\otimes_{U({\mathfrak{b}})} \qty{ L \otimes_{\mathbb{C}}M } ,\end{align*}

where

• $$L \in U({\mathfrak{b}}){\hbox{-}}$$mod.
• $$M \in U({\mathfrak{g}}){\hbox{-}}$$mod.
• $$L\otimes M \in U({\mathfrak{b}}){\hbox{-}}$$mod via the tensor action.

The LHS is a $$U({\mathfrak{g}}){\hbox{-}}$$module via the tensor action, and the $$RHS$$ has an induced $$U({\mathfrak{g}}){\hbox{-}}$$action.

See proof in Knapp’s “Lie Groups, Lie Algebras, and Cohomology”. This is true more generally if $${\mathfrak{g}}$$ is any Lie algebra and $${\mathfrak{b}}\leq {\mathfrak{g}}$$ any Lie subalgebra.

Recall from page 18 that the functor $$\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}$$ is exact on finite-dimensional $${\mathfrak{b}}{\hbox{-}}$$modules. Assume $$L, M$$ are finite-dimensional, and set $$N \mathrel{\vcenter{:}}= L \otimes_{\mathbb{C}}M$$. Take a basis $$v_1, \cdots, v_n$$ of weight vectors for $$N$$ of weights $$\nu_1, \cdots, \nu_n$$. Order these such that $$\nu_i \leq \nu_j \implies i \leq j$$.

Set $$N_k \mathrel{\vcenter{:}}= U({\mathfrak{b}})\left\langle{v_k, \cdots, v_n}\right\rangle$$ for $$1\leq k \leq n$$; this is a decreasing filtration, since acting by $$U({\mathfrak{b}})$$ can only move weights up, i.e. to the right along this ordered list of vectors. By induction on $$n$$, this filtration induces a filtration on $$U({\mathfrak{g}})\otimes_{U({\mathfrak{b}})} N$$ whose sections are Verma modules.

This yields \begin{align*} \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_k / \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_{k+1} \cong M(\nu_k) .\end{align*}

The intermediate quotients will be 1-dimensional submodules, which will induce up to highest weight modules. We’ll finish the proof next time.

# Wednesday February 26th

We want to show the following identity:

\begin{align*} \qty{ U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} L } \otimes_{\mathbb{C}}M \cong U({\mathfrak{g}}) \otimes_{U({\mathfrak{b}})} \qty{ L \otimes_{\mathbb{C}}M } .\end{align*}

Assume $$L$$ and $$M$$ are finite dimensional. Then for $$N = L \otimes M$$, there is a basis of weight vectors $$v_1, \cdots, v_n$$ of weights $$\nu_1, \cdots, \nu_n$$, with $$\nu_i \leq \nu_j \implies i\leq j$$. Moreover \begin{align*}N_k = {\mathbb{C}}\left\langle{v_k, \cdots, v_n}\right\rangle = U({\mathfrak{b}})\left\langle{v_k, \cdots, v_n}\right\rangle,\end{align*} and we have a natural filtration

\begin{align*} 0 \subset N_n \subset \cdots \subset N_1 = N \end{align*} with $$N_i / N_{i+1} \cong {\mathbb{C}}_{\nu_i}$$ as $${\mathfrak{b}}{\hbox{-}}$$modules.

We thus obtain \begin{align*}\operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_i / \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}N_{i+1} \cong \operatorname{Ind}_{\mathfrak{b}}^{\mathfrak{g}}{\mathbb{C}}_{\nu_i} = M(\nu_i)\end{align*} by exactness of the $$\operatorname{Ind}$$ functor. Apply this to $$L = {\mathbb{C}}_\lambda$$: the LHS is $$M(\lambda) \otimes_{\mathbb{C}}M$$ with $$M$$ finite dimensional, and on the RHS, $$N = {\mathbb{C}}_\lambda \otimes M$$ has the same dimension as $$M$$ with weights $$\lambda + \mu$$ for $$\mu \in \Pi(M)$$. Thus $$M(\lambda) \otimes M$$ has a filtration with quotients $$M(\lambda + \mu)$$ over $$\mu \in \Pi(M)$$, which is the theorem from last time.
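The “compatible total order” on weights used in these filtrations is exactly a topological sort of the finite weight poset; a sketch, with weights written (our convention) as coefficient tuples in the simple roots:

```python
# Extend the partial order on finitely many weights to a compatible
# total order: mu <= nu in the partial order implies mu comes first.
# In the simple-root basis, mu <= nu iff nu - mu has all coords >= 0.
from graphlib import TopologicalSorter

def compatible_total_order(weights):
    ts = TopologicalSorter()
    for w in weights:
        ts.add(w)  # register every weight, even isolated ones
    for v in weights:
        for w in weights:
            if v != w and all(a <= b for a, b in zip(v, w)):
                ts.add(w, v)  # v < w, so v must precede w
    return list(ts.static_order())

ws = [(1, 1), (0, 0), (1, 0), (0, 1)]
order = compatible_total_order(ws)
# (0,0) is forced first and (1,1) last; the incomparable middle pair
# may come out in either order.
print(order[0], order[-1])  # (0, 0) (1, 1)
```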

Remark
The proof shows that $$M(\lambda) \otimes M$$ has a submodule $$M(\lambda + \mu)$$ for any maximal weight $$\mu$$ of $$M$$, and a quotient $$M(\lambda + \nu)$$ where $$\nu$$ is any minimal weight of $$M$$. We knew that every $$M\in {\mathcal{O}}$$ has a finite filtration, but here the quotients are Verma modules. This will help us study projectives later, which we need for higher Exts.

## Standard Filtrations

There are several main players in the theory of highest-weight categories, of which $${\mathcal{O}}_{\chi_\lambda}$$ is one:

• Simple modules: $$L(\lambda)$$
• Standard modules $$M(\lambda)$$
• Costandard modules $$M(\lambda)^\vee$$
• Indecomposable projectives $$P(\lambda)$$
• Tilting modules $$T(\lambda)$$.
Definition (Standard Filtration/Verma flag)
A standard filtration of $$M\in {\mathcal{O}}$$ is a filtration with subquotients isomorphic to Verma modules.

Note that when $$M$$ has a standard filtration, the submodules are not unique, but the length, subquotients, and multiplicities are unique. We can thus use $$K({\mathcal{O}})$$ or formal characters as an invariant, since the multiplicities $$(M: M(\lambda))$$ are well-defined.

If $$M, N$$ have standard filtration, then so does $$M \oplus N$$ by concatenation. In this case, $$(M\oplus N: M(\lambda)) = (M:M(\lambda)) + (N: M(\lambda))$$.

Proposition (Submodules and Direct Summands Also Have Standard Filtrations)

Let $$M\in {\mathcal{O}}$$ have a standard filtration. Then

1. If $$\lambda$$ is maximal in $$\Pi(M)$$, then $$M$$ has a submodule isomorphic to $$M(\lambda)$$, and $$M/M(\lambda)$$ has a standard filtration. (Write the standard filtration of $$M$$ as $$0 = M_0 \subset \cdots \subset M_n = M$$.)
2. If $$M = M' \oplus M''$$, then $$M'$$ and $$M''$$ have standard filtrations.
3. $$M$$ is free as a $$U({\mathfrak{n}}^-){\hbox{-}}$$module.
Proof (of (a))

By assumption on $$\lambda$$, $$M$$ has a maximal vector of weight $$\lambda$$, and thus the universal property yields a nonzero morphism $$\phi: M(\lambda) \to M$$.

The claim is that $$\phi$$ is injective, from which the proof follows. Proof of claim: choose a minimal index $$i$$ such that $$\phi(M(\lambda)) \subset M_i$$ in the filtration. Follow this with the subquotient map to yield \begin{align*}\psi: M(\lambda) \to M^i \mathrel{\vcenter{:}}= M_i/M_{i-1} \cong M(\mu),\end{align*} which is nonzero by minimality of $$i$$.

Thus $$\lambda \leq \mu$$, and by our assumption, this implies $$\lambda = \mu$$. But then $$\psi$$ sends a maximal vector to a maximal vector, and since $$U({\mathfrak{n}}^-)$$ acts freely, $$\psi$$ is an isomorphism. Thus $$\phi$$ is injective and $$M(\lambda) \subset M$$.

We can now write $$M(\lambda) \cap M_{i-1} = \ker \psi = 0$$, so we obtain a direct sum decomposition $$M_i \cong M_{i-1} \oplus M(\lambda)$$. We thus obtain a SES \begin{align*}0 \to M_{i-1} \to M/M(\lambda) \to M/M_i \to 0.\end{align*}

We can easily construct standard filtrations for $$M_{i-1}$$ and $$M/M_i$$, so the middle term also has a standard filtration. Thus $$M/M(\lambda)$$ has a standard filtration of length one less than that of $$M$$.

Proof (of(b))

By induction on the filtration length $$n$$ of $$M$$. If $$n=1$$, $$M$$ is a Verma module, hence indecomposable, and there’s nothing to show.

For $$n > 1$$, let $$\lambda \in \Pi(M)$$ be maximal (which we can always find for $$M\in {\mathcal{O}}$$), and WLOG $$M'_\lambda \neq 0$$.

By the universal property, we have a nonzero composition

Applying (a) to this composite map,

1. It must be injective, so $$M(\lambda) \hookrightarrow M'$$
2. $$M/M(\lambda) \cong M'/M(\lambda) \oplus M''$$ has a standard filtration of length $$n-1$$.

By induction, $$M'/M(\lambda)$$ and $$M''$$ have standard filtrations, and thus so does $$M'$$.

Proof (of (c))
By induction on $$n$$: if $$n=1$$, then $$M \cong M(\lambda)$$ is $$U({\mathfrak{n}}^-){\hbox{-}}$$free. Otherwise, if $$n > 1$$, by (a) $$M(\lambda) \subset M$$ and $$M/M(\lambda)$$ has a standard filtration of length $$n-1$$. By induction, $$M/M(\lambda)$$ is $$U({\mathfrak{n}}^-){\hbox{-}}$$free, and hence so is $$M$$.
Theorem (Multiplicities of Vermas)
If $$M$$ has a standard filtration, then $$(M: M(\lambda)) = \dim \hom_{\mathcal{O}}(M, M(\lambda)^\vee)$$.
Proof
By induction on the filtration length $$n$$. If $$n=1$$, $$M$$ is a Verma module, and $$(M(\mu) : M(\lambda)) = \delta_{\mu \lambda} = \dim \hom_{\mathcal{O}}(M(\mu), M(\lambda)^\vee)$$ by Theorem 3.3c.

For $$n>1$$, consider \begin{align*}0 \to M_{n-1} \to M \to M(\mu) \to 0.\end{align*} Apply the left-exact contravariant functor $$\hom_{\mathcal{O}}({\,\cdot\,}, M(\lambda)^\vee)$$ to obtain \begin{align*}0 \to \hom_{\mathcal{O}}(M(\mu), M(\lambda)^\vee) \to \hom_{\mathcal{O}}(M, M(\lambda)^\vee) \to \hom_{\mathcal{O}}(M_{n-1}, M(\lambda)^\vee) \to \operatorname{Ext}_{\mathcal{O}}(M(\mu), M(\lambda)^\vee) .\end{align*} The last term vanishes, so the dimensions of the $$\hom$$s add, and applying the inductive hypothesis to $$M_{n-1}$$ finishes the proof.

## Projectives in $${\mathcal{O}}$$

We want to show that $${\mathcal{O}}$$ has enough projectives, i.e. every $$M\in {\mathcal{O}}$$ is a quotient of a projective object. We’ll also want to show $${\mathcal{O}}$$ has enough injectives, i.e. every module embeds into an injective object.

Definition (Projective Objects)

If $${\mathcal{A}}$$ is an abelian category, an object $$P\in{\mathcal{A}}$$ is projective iff the left-exact functor $$\hom_{\mathcal{A}}(P, {\,\cdot\,})$$ is exact, or equivalently

In other words, for every epimorphism $$M \twoheadrightarrow N$$, the sequence \begin{align*}\hom(P, M) \to \hom(P, N) \to 0\end{align*} is exact, which precisely says that every $$f \in \hom(P, N)$$ has a lift $$\tilde f \in \hom(P, M)$$ by surjectivity.

Definition (Injective Objects)

An object $$Q\in{\mathcal{A}}$$ is injective iff $$\hom_{\mathcal{A}}({\,\cdot\,}, Q)$$ is exact, i.e.

i.e., for every monomorphism $$N \hookrightarrow M$$, the sequence \begin{align*}\hom_{\mathcal{A}}(M, Q) \to \hom_{\mathcal{A}}(N, Q) \to 0\end{align*} is exact, so every $$g \in \hom_{\mathcal{A}}(N, Q)$$ extends to some $$\tilde g \in \hom_{\mathcal{A}}(M, Q)$$.

In $${\mathcal{O}}$$, having enough projectives is equivalent to having enough injectives because $$({\,\cdot\,})^\vee$$ is an exact contravariant endofunctor, which sends projectives to injectives and vice-versa. Thus we’ll focus on projectives.

# Friday February 28th

Recall that $$\lambda \in {\mathfrak{h}}^\vee$$ is dominant iff for all $$\alpha \in \Phi^+$$, we have $$(\lambda + \rho, \alpha^\vee) \not\in{\mathbb{Z}}^{<0}$$. Equivalently, as in Proposition 3.5c, $$\lambda$$ is maximal in its $$W_{[\lambda]}\cdot$$ orbit.

## Constructing Projectives

Proposition (Dominant Weights Yield Projective Vermas, Projective Tensor Finite-Dimensional is Projective)
1. If $$\lambda \in {\mathfrak{h}}^\vee$$ is dominant, then $$M(\lambda)$$ is projective in $${\mathcal{O}}$$.

2. If $$P\in {\mathcal{O}}$$ is projective and $$\dim L < \infty$$, then $$P \otimes_{\mathbb{C}}L$$ is projective.

Proof
1. We want to find a $$\psi$$ making this diagram commute: \begin{center} \begin{tikzcd} & & v^+ \in M(\lambda) \arrow[dd, "\phi"] \arrow[lldd, "\psi", dashed] & & \\ & & & & \\ M \arrow[rr, "\pi"] & & \phi(v^+) \in N \arrow[rr] & & 0 \end{tikzcd} \end{center}

Assume $$\phi \neq 0$$. Since $$M(\lambda) \in {\mathcal{O}}_{\chi_\lambda}$$, we have $$\phi(M(\lambda)) \subset N^{\chi_\lambda}$$. WLOG, we can assume $$M, N \in {\mathcal{O}}_{\chi_\lambda}$$, and if $$v^+$$ is maximal, $$\phi(v^+)$$ is maximal. By surjectivity of $$\pi$$, there exists a $$v\in M$$ such that $$\pi(v) = \phi(v^+)$$. Then $$M \supset U({\mathfrak{n}}) v$$ is finite dimensional, so it contains a maximal vector whose weight is linked to $$\lambda$$ since $$M\in {\mathcal{O}}_{\chi_\lambda}$$.

But since $$\lambda$$ is dominant, there is no such weight greater than $$\lambda$$, so $$v$$ itself must be this maximal vector. Then by the universal property of $$M(\lambda)$$, there is a map $$\psi: M(\lambda) \to M$$ where $$v^+ \mapsto v$$ making the diagram commute.

Note the nice property: a Verma module $$M(\lambda)$$ is projective iff $$\lambda$$ is maximal in its (dot) orbit, i.e. iff $$\lambda$$ is dominant.

2. We want to show $$F = \hom_{\mathcal{O}}(P\otimes L, {\,\cdot\,})$$ is exact. But this is isomorphic to \begin{align*}\hom_{\mathcal{O}}(P, \hom_{\mathbb{C}}(L, {\,\cdot\,})) \cong \hom_{\mathcal{O}}(P, L^\vee\otimes_{\mathbb{C}}{\,\cdot\,}).\end{align*}

Thus $$F$$ is the composition of two exact functors: first do $$L^\vee\otimes_{\mathbb{C}}{\,\cdot\,}$$, which is exact since $${\mathbb{C}}$$ is a field, and $$\hom_{\mathcal{O}}(P,{\,\cdot\,})$$ is exact since $$P$$ is projective.

Example
Let $$M(-\rho)$$ be the Verma module of highest weight $$-\rho$$. This is irreducible because $$-\rho$$ is antidominant, and projective because $$-\rho$$ is also dominant. In fact $$W\cdot(-\rho) = \left\{{-\rho}\right\}$$ by a calculation. Thus \begin{align*}L(-\rho) = M(-\rho) = P(-\rho) = M(-\rho)^\vee,\end{align*} so all 4 members of the highest weight category here coincide.

By convention, one writes $$M(-\rho) = \Delta(-\rho)$$ and $$M(-\rho)^\vee= \nabla(-\rho)$$.

Note that we always have $$\operatorname{Ext}_{\mathcal{O}}(L(-\rho), L(-\rho)) = 0$$, and every $$M \in {\mathcal{O}}_{\chi_{-\rho}}$$ is of the form $$L(-\rho)^{\oplus n}$$.

So this is referred to as a semisimple category.

Theorem (O has Enough Projectives and Injectives)
$${\mathcal{O}}$$ has enough projectives and injectives.
Proof

Step 1

For all $$\lambda \in {\mathfrak{h}}^\vee$$, there exists a projective mapping onto $$L(\lambda)$$. Clearly $$\mu \mathrel{\vcenter{:}}=\lambda + n\rho$$ is dominant for $$n\gg 0$$, i.e. for $$n$$ large enough there are no negative integers resulting from inner products with coroots. Thus $$M(\mu)$$ is projective, and since $$n\rho \in \Lambda^+$$, we have $$\dim L(n\rho) < \infty$$. This implies $$P \mathrel{\vcenter{:}}= M(\mu) \otimes L(n\rho)$$ is projective by the previous proposition.

Applying $$w_0$$ reverses the weights, so $$w_0(n\rho) = -n\rho$$. Note that this doesn’t happen for all weights, so this property is somewhat special to $$\rho$$. In particular, since $$n\rho$$ was a highest weight, $$-n\rho$$ is a lowest weight.

By remark 3.6, $$P$$ has a quotient isomorphic to $$M(\mu - n\rho) = M(\lambda)$$. Thus $$P \twoheadrightarrow M(\lambda) \to L(\lambda)$$, and $$L(\lambda)$$ is a quotient of a projective. This establishes the result for simple modules.

Remark: By theorem 3.6, $$P$$ has a standard filtration with sections $$M(\mu + \nu)$$ for $$\nu \in \Pi(L(n\rho))$$. In particular $$M(\lambda)$$ occurs just once since \begin{align*}\dim L(n\rho)_{-n\rho} = \dim L(n\rho)_{w_0(n\rho)} = \dim L(n\rho)_{n\rho} = 1,\end{align*} with all $$\mu + \nu > \lambda$$.

Step 2

Use induction on Jordan–Hölder length to prove that any $$0\neq M\in {\mathcal{O}}$$ is a quotient of a projective. For $$\ell = 1$$, $$M$$ is simple, and by Step 1 this case holds.

Assume $$\ell > 1$$; then $$M$$ has a simple submodule $$L(\lambda)$$, obtained by taking the bottom of a Jordan–Hölder series, so there is a SES \begin{align*}0 \to L(\lambda) \xrightarrow{\alpha} M \xrightarrow{\beta} N \to 0.\end{align*}

By induction, since $$\ell(N) = \ell(M) - 1$$, there exists a projective $$Q$$ with a surjection $$\phi: Q \twoheadrightarrow N$$. By projectivity of $$Q$$, $$\phi$$ lifts along $$\beta$$ to a map $$\psi: Q \to M$$ with $$\beta \circ \psi = \phi$$.

If $$\psi$$ is surjective, we are done. Otherwise, composition lengths force $$\psi(Q) \cong N$$, and by commutativity there is a section $$\gamma: N \to \psi(Q) \subset M$$ splitting the SES. Thus $$M \cong L(\lambda) \oplus N$$, and by Step 1 and induction there are projectives $$P$$ and $$Q$$ mapping onto the two factors, so the projective $$P \oplus Q$$ maps onto $$M$$. Finally, applying the duality functor $$(-)^\vee$$ shows $${\mathcal{O}}$$ also has enough injectives.

## 3.9 Indecomposable Projectives

Definition (A Projective Cover)
A projective cover of $$M\in {\mathcal{O}}$$ is a map $$\pi_M: P_M \to M$$ where $$P_M$$ is projective and $$\pi_M$$ is an essential epimorphism, i.e. no proper submodule of $$P_M$$ is mapped surjectively onto $$M$$ by $$\pi_M$$.

It is an algebraic fact that in an Artinian (abelian) category with enough projectives, every module has a projective cover that is unique up to isomorphism.

See Curtis and Reiner, Section 6c.

Definition (The Projective Cover for a Weight)
For $$\lambda \in {\mathfrak{h}}^\vee$$, let $$\pi_\lambda: P(\lambda) \twoheadrightarrow L(\lambda)$$ denote a fixed projective cover of $$L(\lambda)$$.
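For orientation, here is the standard $${\mathfrak{sl}}(2, {\mathbb{C}})$$ example (a well-known computation, not from the lecture), identifying weights with scalars so that $$\rho = 1$$. In the principal block, $$P(0) = M(0)$$ since $$0$$ is dominant, while the projective cover of the antidominant simple $$L(-2) = M(-2)$$ fits into a non-split SES \begin{align*} 0 \to M(0) \to P(-2) \to M(-2) \to 0 ,\end{align*} so $$P(-2)$$ has a standard filtration with sections $$M(-2)$$ and $$M(0)$$, and Loewy layers $$L(-2), L(0), L(-2)$$ from head to socle.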

# ? March 2nd

\begin{align*} P(\lambda) = \substack{L(\lambda) \\ L(\mu)} ,\end{align*} in the usual stacked notation for a module with head $$L(\lambda)$$ lying above $$L(\mu)$$.

# Monday March 16th

Proposition (Chains of Containments of Vermas for Dominant Integral)

Suppose $$\lambda + \rho$$ is dominant integral, then

• $$M(w\cdot \lambda) \subset M(\lambda)$$ for all $$w\in W$$
• $$[M(\lambda): L(w\cdot \lambda)] > 0$$ for all $$w\in W$$

More precisely, if $$w = s_n \cdots s_1$$ is reduced with $$s_i = s_{\alpha_i}$$ for $$\alpha_i \in \Delta$$, and $$\lambda_k = s_k \cdots s_1 \cdot \lambda$$, then \begin{align*} M(w\cdot \lambda) = M(\lambda_n) \subset M(\lambda_{n-1}) \subset \cdots \subset M(\lambda_0) = M(\lambda) \end{align*}

Moreover $$(\lambda_k + \rho, \alpha_{k+1}^\vee) \in {\mathbb{Z}}^+$$ for $$0\leq k \leq n-1$$ and so \begin{align*} \lambda_n \leq \lambda_{n-1} \leq \cdots \leq \lambda_0 .\end{align*}

Proof
By induction on $$n = \ell(w)$$. The $$n=0$$ case is trivial. For $$\ell(w) = k+1$$, write $$w'= s_k \cdots s_1$$. From section 0.3, $$(w')^{-1}\alpha_{k+1} > 0$$. We can compute \begin{align*} (\lambda_k + \rho, \alpha_{k+1}^\vee) &= (w' \cdot \lambda + \rho, \alpha_{k+1}^\vee) \\ &= (w'(\lambda + \rho), \alpha_{k+1}^\vee) \\ &= (\lambda + \rho, (w')^{-1}\alpha_{k+1}^\vee) \\ &= (\lambda + \rho, ((w')^{-1}\alpha_{k+1})^\vee) \\ &\in {\mathbb{Z}}^{+} \end{align*} since $$\lambda + \rho \in \Lambda^+$$ and $$(w')^{-1}\alpha_{k+1} \in \Phi^+$$.

This means that $$\lambda_{k+1} = s_{k+1} \cdot \lambda_k \leq \lambda_k$$. By proposition 1.4, reformulated in terms of the dot action, there is a map $$M(\lambda_{k+1}) \hookrightarrow M(\lambda_k)$$, and nonzero morphisms between Vermas are injective by 4.2(a).
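As a sanity check, take $${\mathfrak{g}}= {\mathfrak{sl}}(2,{\mathbb{C}})$$ with $$W = \left\{{1, s_\alpha}\right\}$$, identifying weights with scalars so that $$\rho = 1$$. If $$\lambda \in {\mathbb{Z}}^{\geq 0}$$, then $$\lambda + \rho$$ is dominant integral, and \begin{align*} s_\alpha \cdot \lambda = s_\alpha(\lambda + 1) - 1 = -\lambda - 2 \leq \lambda, \qquad M(-\lambda - 2) \subset M(\lambda) ,\end{align*} where the submodule is generated by the maximal vector $$f^{\lambda+1}\cdot v^+$$, recovering the classical picture of the unique maximal submodule of $$M(\lambda)$$.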

Exercise (4.3)
If $$\lambda + \rho \in \Lambda^+$$, then $$\operatorname{Soc}\,M(\lambda) = M(w_0 \cdot \lambda)$$; moreover, if $$\lambda \in \Lambda_0^+$$ then the inclusions in the proposition are all proper.
Remark
For general $$\mu \in \Lambda$$, it is not so easy to decide when $$M(w\cdot \mu) \subset M(\mu)$$. The basic problem is that Proposition 1.4 only works for simple roots, whereas we can have $$s_\gamma \cdot \mu < \mu$$ for $$\gamma \in \Phi^+\setminus \Delta$$ with no obvious way to construct an embedding $$M(s_\gamma \cdot \mu) \subset M(\mu)$$. See the following example.
Example

Let $${\mathfrak{g}}= {\mathfrak{sl}}(3, {\mathbb{C}})$$.

We don’t know yet whether there is a diagonal map, indicated by a question mark in the lecture’s diagram (not reproduced in these notes).

In the next few sections we will see that any root reflection moving downward through the ordering induces a containment of Verma modules.

## (4.4) Simplicity Criterion: The Integral Case

Theorem (A Verma Equals its Simple Quotient iff its Weight is Antidominant)
Let $$\lambda \in {\mathfrak{h}}^\vee$$ be any weight. Then $$M(\lambda) = L(\lambda) \iff \lambda$$ is antidominant.

The proof for $$\lambda$$ integral is fairly easy, because antidominance reduces to a condition involving simple roots, where we can use our Verma module embedding criterion from Proposition 1.4.

Proof (Integral Case)

Assume $$\lambda \in \Lambda$$.

$$\implies$$: Assume $$M(\lambda)$$ is simple but $$\lambda$$ is not antidominant. Since $$\lambda \in \Lambda$$, $$(\lambda + \rho, \alpha^\vee)$$ is a positive integer for some $$\alpha \in \Delta$$. But then $$s_\alpha \cdot \lambda < \lambda$$, so $$M(s_\alpha \cdot \lambda) \subset M(\lambda)$$ is a proper nonzero submodule by 1.4 and 4.2. Thus $$N(\lambda) \neq 0$$, which contradicts irreducibility.

$$\impliedby$$: Assume $$\lambda$$ is antidominant. By proposition 3.5, $$\lambda \leq w\cdot \lambda$$ for all $$w\in W$$. On the other hand, every composition factor of $$M(\lambda)$$ is of the form $$L(w\cdot \lambda)$$ with $$w\cdot \lambda \leq \lambda$$. These two conditions force $$w\cdot \lambda = \lambda$$, so the only possible composition factor is $$L(\lambda)$$. Since $$[M(\lambda) : L(\lambda)]$$ is always equal to one, $$M(\lambda)$$ is simple.
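The $${\mathfrak{sl}}(2,{\mathbb{C}})$$ case makes the criterion concrete (a standard check). Identifying weights with scalars so that $$\rho = 1$$: \begin{align*} \lambda \text{ antidominant} \iff {\left\langle {\lambda + \rho},~{\alpha^\vee} \right\rangle} = \lambda + 1 \not\in {\mathbb{Z}}^{>0} \iff \lambda \not\in {\mathbb{Z}}^{\geq 0} ,\end{align*} and indeed $$M(\lambda)$$ is simple exactly when $$\lambda$$ is not a nonnegative integer: otherwise $$f^{\lambda + 1}\cdot v^+$$ is a maximal vector generating the proper nonzero submodule $$M(-\lambda - 2)$$.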

Remark
The reverse implication still works in general if $$W$$ is replaced by $$W_{[\lambda]}$$. To extend the forward implication, we need to understand embeddings $$M(s_\beta \cdot \lambda) \hookrightarrow M(\lambda)$$ when $$\beta$$ is not simple.

## Existence of Embeddings (Preliminaries)

Lemma (Commuting Nilpotents)
Let $${\mathfrak{a}}$$ be a nilpotent Lie algebra (e.g. $${\mathfrak{n}}^-$$) and $$x\in {\mathfrak{a}}, u\in U({\mathfrak{a}})$$, then for every $$n\in {\mathbb{Z}}^+$$ there exists a $$t\in {\mathbb{Z}}^+$$ such that $$x^t u \in U({\mathfrak{a}}) x^n$$.

See Engel’s theorem

Proof

Use the fact that $$\operatorname{ad}x$$ acts nilpotently on $$U({\mathfrak{a}})$$, so there exists a $$q\geq 0$$ such that $$\qty{\operatorname{ad}x}^{q+1}u = 0$$.

Let $$\ell_x, r_x$$ be left and right multiplication by $$x$$ on $$U({\mathfrak{a}})$$. Then $$\operatorname{ad}x = \ell_x - r_x$$, and $$\ell_x$$, $$r_x$$, and $$\operatorname{ad}x$$ all commute pairwise.

Choosing $$t \geq q + n$$, we have

\begin{align*} x^t u &= \ell_x^t u \\ &= (r_x + \operatorname{ad}x)^t u \\ &= \sum_{i=0}^t {t \choose i} r_x^{t-i} \qty{\operatorname{ad}x}^i u \\ &= \sum_{i=0}^q {t \choose i} \qty{\qty{\operatorname{ad}x}^iu }x^{t-i} \\ &\in U({\mathfrak{a}}) x^{t-q} \\ &\subset U({\mathfrak{a}}) x^n \end{align*}

This will be useful when moving things around by positive roots that are not simple.
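Since the lemma is purely formal, it can be sanity-checked numerically in a faithful nilpotent representation. Below is a small sketch (my own illustration, not from the lecture; the helper names are mine) using strictly upper triangular integer matrices in place of $${\mathfrak{a}}$$ and $$U({\mathfrak{a}})$$: it verifies that $$\operatorname{ad}x$$ acts nilpotently and checks the binomial identity from the proof for a sample $$t$$.

```python
import numpy as np
from math import comb

n = 6
rng = np.random.default_rng(1)

def strictly_upper():
    # a random strictly upper triangular integer matrix: a nilpotent element
    return np.triu(rng.integers(-3, 4, (n, n)), k=1)

x = strictly_upper()
u = strictly_upper() @ strictly_upper()  # a sample "element of U(a)"
ad = lambda a, b: a @ b - b @ a
mp = np.linalg.matrix_power

# ad(x) acts nilpotently: (ad x)^{2n-1} u = 0, since x^n = 0 and each term
# of the expansion of (l_x - r_x)^{2n-1} contains at least n factors of x
# on one side
v = u.copy()
for _ in range(2 * n - 1):
    v = ad(x, v)
assert np.all(v == 0)

# the binomial identity from the proof: since l_x = r_x + ad(x) and
# r_x, ad(x) commute,
#   x^t u = sum_{i=0}^{t} C(t, i) ((ad x)^i u) x^{t-i}
t = 3
lhs = mp(x, t) @ u
rhs = np.zeros_like(lhs)
w = u.copy()
for i in range(t + 1):
    rhs += comb(t, i) * (w @ mp(x, t - i))
    w = ad(x, w)
assert np.array_equal(lhs, rhs)
print("lemma identities verified")
```

The integer arithmetic is exact, so the assertions test the algebraic identities themselves rather than floating-point approximations.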

# Monday March 30th

Reminder of what we did already: we started chapter 4, going into more detail on the structure of Verma modules and the morphisms between them. We showed that the socle of a Verma module is an irreducible Verma module, that any nonzero morphism between Vermas is injective, and that the Hom space between two Vermas is at most one-dimensional. We ended with a lemma about commuting elements past powers in $$U({\mathfrak{a}})$$.

Proposition (Key Result: Containments of Vermas When Applying Weyl Elements)

Let $$\lambda, \mu \in {\mathfrak{h}}^\vee$$ and $$\alpha\in\Delta$$ be simple. Assume that $$n\mathrel{\vcenter{:}}=(\lambda + \rho, \alpha^\vee) \in {\mathbb{Z}}$$ and $$M(s_\alpha \cdot \mu) \subset M(\mu) \subset M(\lambda)$$. Then either

1. $$n\leq 0$$ and $$M(\lambda) \subset M(s_\alpha \cdot \lambda)$$, or
2. $$n>0$$ and $$M(s_\alpha \cdot \mu) \subset M(s_\alpha \cdot \lambda) \subset M(\lambda)$$.

In either case, $$M(s_\alpha \cdot \mu) \subset M(s_\alpha \cdot \lambda)$$.

Proof (of (a))
Use proposition 1.4 (exchanging $$\lambda$$ and $$s_\alpha \cdot \lambda$$).

Proof (of (b))

Assume $$n>0$$. Then $$M(s_\alpha \cdot \lambda) \subset M(\lambda)$$ by proposition 1.4. Set $$s = (\mu + \rho, \alpha^\vee) \in {\mathbb{Z}}^+$$. Denote maximal vectors as follows: