# 1 Finite-dimensional Semisimple Lie Algebras over $${\mathbf{C}}$$ (Wednesday, August 17)

## 1.1 Humphreys 1.1

Main goal: understand semisimple finite-dimensional Lie algebras over $${\mathbf{C}}$$. These are extremely well-understood, but there are open problems in infinite-dimensional representations, representations over other fields, and Lie superalgebras.

Recall that an associative algebra is a ring with the structure of a $$k{\hbox{-}}$$vector space, while "algebra" alone generally means a not-necessarily-associative algebra. Given any associative algebra $$A$$, one can define a new bilinear product \begin{align*} [{-}, {-}]: A\otimes_k A &\to A \\ a\otimes b &\mapsto ab-ba \end{align*} called the commutator bracket. This yields a new algebra $$A_L$$, which is an example of a Lie algebra.

A vector space $$L\in {}_{{ \mathbf{F} }}{\mathsf{Mod}}$$ equipped with an operation $$[{-}, {-}]: L\times L\to L$$ (called the bracket) is a Lie algebra if

1. $$[{-},{-}]$$ is bilinear,
2. $$[x, x] = 0$$ for all $$x\in L$$, and
3. the Jacobi identity holds: $$[x[yz]] + [y[zx]] + [z[xy]] = 0$$.

Check that $$[ab]\mathrel{\vcenter{:}}= ab-ba$$ satisfies the Jacobi identity.

• Expanding $$[x+y, x+y] = 0$$ yields $$[xy] = -[yx]$$. Note that this is equivalent to axiom 2 when $$\operatorname{ch}{ \mathbf{F} }\neq 2$$ (given axiom 1).

• The Jacobi identity can be rewritten as $$[x[yz]] = [[xy]z] + [y[xz]]$$, where the second term is an error term measuring the failure of associativity. Note that this is essentially the Leibniz rule.
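The axioms for the commutator bracket can be sanity-checked numerically; a minimal sketch in Python with numpy (illustrative only, not part of the source notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def br(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

# Three random 3x3 matrices, viewed in the Lie algebra A_L for A = Mat_3
x, y, z = (rng.integers(-5, 5, (3, 3)).astype(float) for _ in range(3))

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
assert np.allclose(br(x, br(y, z)) + br(y, br(z, x)) + br(z, br(x, y)), 0)
# Axiom 2: [x, x] = 0
assert np.allclose(br(x, x), 0)
```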

A Lie algebra $$L\in\mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$ is abelian if $$[xy]=0$$ for all $$x,y\in L$$.

A morphism in $$\mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$ is a morphism $$\phi\in {}_{{ \mathbf{F} }}{\mathsf{Mod}}(L, L')$$ satisfying $$\phi( [xy] ) = [ \phi(x) \phi(y) ]$$.

Check that if $$\phi$$ has an inverse in $${}_{{ \mathbf{F} }}{\mathsf{Mod}}$$, then $$\phi$$ automatically has an inverse in $$\mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$.

A vector subspace $$K\leq L$$ is a Lie subalgebra if $$[xy]\in K$$ for all $$x,y\in K$$.

• Note that any nonzero $$x\in L$$ determines a 1-dimensional Lie subalgebra $$K\mathrel{\vcenter{:}}={ \mathbf{F} }\cdot x$$, which is in fact abelian.
• A big source of Lie algebras: left-invariant vector fields on a Lie group.
• We’ll restrict to finite-dimensional algebras for the remainder of the class.

## 1.2 Humphreys 1.2: Linear Lie Algebras

For $$V\in {}_{{ \mathbf{F} }}{\mathsf{Mod}}$$, the endomorphisms $$A\mathrel{\vcenter{:}}={ \operatorname{End} }_{ \mathbf{F} }(V)$$ is an associative algebra over $${ \mathbf{F} }$$. Thus it can be made into a Lie algebra $${\mathfrak{gl}}(V) \mathrel{\vcenter{:}}= A_L$$ by defining $$[xy] = xy-yx$$ as above.

Any subalgebra $$K\leq {\mathfrak{gl}}(V)$$ is a linear Lie algebra.

After picking a basis for $$V$$, there is a noncanonical isomorphism $${ \operatorname{End} }_{ \mathbf{F} }(V) \cong \operatorname{Mat}_{n\times n}({ \mathbf{F} })$$ where $$n\mathrel{\vcenter{:}}=\dim_{ \mathbf{F} }V$$. The resulting Lie algebra is $${\mathfrak{gl}}_n({ \mathbf{F} }) \mathrel{\vcenter{:}}=\operatorname{Mat}_{n\times n}({ \mathbf{F} })_L$$.

By the Ado–Iwasawa theorem, any finite-dimensional Lie algebra is isomorphic to some linear Lie algebra.

The upper triangular matrices form a subalgebra $${\mathfrak{t}}_n({ \mathbf{F} }) \leq {\mathfrak{gl}}_n({ \mathbf{F} })$$. This is sometimes called the Borel subalgebra and denoted $${\mathfrak{b}}$$. There is also a subalgebra $${\mathfrak{n}}_n({ \mathbf{F} })$$ of strictly upper triangular matrices. The diagonal matrices form a maximal torus/Cartan subalgebra $${\mathfrak{h}}_n({ \mathbf{F} })$$ which is abelian.

• Type $$A_n \leadsto {\mathfrak{sl}}_{n+1}({ \mathbf{F} })$$ is the special linear Lie algebra, traceless matrices.
• Type $$B_n \leadsto {\mathfrak{so}}_{2n+1}({ \mathbf{F} })$$ is the odd orthogonal Lie algebra.
• Type $$C_n \leadsto {\mathfrak{sp}}_{2n}({ \mathbf{F} })$$ is the symplectic Lie algebra.
• Type $$D_n \leadsto {\mathfrak{so}}_{2n}({ \mathbf{F} })$$ is the even orthogonal Lie algebra.
• The remaining 3 are defined by matrices satisfying $$sx = -x^t s$$ where $$s$$ is one of the following:
• $${ \begin{bmatrix} {1} & {0} & {0} \\ {0} & {0} & {I_n} \\ {0} & {I_n} & {0} \end{bmatrix} }$$ corresponding to $${\mathfrak{so}}_{2n+1}$$,
• $${ \begin{bmatrix} {0} & {I_n} \\ {-I_n} & {0} \end{bmatrix} }$$ corresponding to $${\mathfrak{sp}}_{2n}$$,
• $${ \begin{bmatrix} {0} & {I_n} \\ {I_n} & {0} \end{bmatrix} }$$ corresponding to $${\mathfrak{so}}_{2n}$$.

These can be viewed as the matrices of a nondegenerate bilinear form: writing $$N$$ for the size of the matrices, the matrices act on $$V \mathrel{\vcenter{:}}={ \mathbf{F} }^N$$ by a bilinear form $$f: V\times V\to { \mathbf{F} }$$ given by $$f(v, w) = v^t s w$$. The form will be symmetric for $${\mathfrak{so}}$$ and skew-symmetric for $${\mathfrak{sp}}$$. The equation $$sx=-x^ts$$ is a version of preserving the bilinear form $$s$$. Note that these are the Lie algebras of the Lie groups $$G = {\operatorname{SO}}_{2n+1}({ \mathbf{F} }), {\operatorname{Sp}}_{2n}({ \mathbf{F} }), {\operatorname{SO}}_{2n}({ \mathbf{F} })$$ defined by the condition $$f(gv, gw) = f(v, w)$$ for all $$v,w\in { \mathbf{F} }^N$$ where $$G = \left\{{g\in \operatorname{GL}_N({ \mathbf{F} }) {~\mathrel{\Big\vert}~}f(gv, gw) = f(v, w)}\right\}$$. This is equivalent to the condition that $$f(gv, w) = f(v, g^{-1}w)$$.
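One can check numerically that the condition $$sx = -x^ts$$ is closed under the commutator bracket; a minimal sketch for $${\mathfrak{sp}}_2$$ (where, for this choice of $$s$$, the condition works out to tracelessness):

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.array([[0., 1.], [-1., 0.]])   # the skew form defining sp_2

def in_g(x):
    """Membership test for {x : s x = -x^t s}."""
    return np.allclose(s @ x, -x.T @ s)

def random_element():
    # For this s the condition is equivalent to tracelessness: sp_2 = sl_2
    a, b, c = rng.normal(size=3)
    return np.array([[a, b], [c, -a]])

x, y = random_element(), random_element()
assert in_g(x) and in_g(y)
# Closure under the bracket: [x, y] again satisfies s z = -z^t s
z = x @ y - y @ x
assert in_g(z)
```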

Philosophy: $$G\to {\mathfrak{g}}$$ sends products to sums.

Check that the definitions of $${\operatorname{SO}}_n({ \mathbf{F} }), {\operatorname{Sp}}_n({ \mathbf{F} })$$ yield Lie algebras.

# 2 Friday, August 19

## 2.1 Humphreys 1.3

Let $$A\in \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$, not necessarily associative (e.g. a Lie algebra). An $${ \mathbf{F} }{\hbox{-}}$$derivation is a linear map $$D: A \to A$$ such that $$D(ab) = D(a)b + aD(b)$$. Equipped with the commutator bracket, this defines a Lie algebra $$\mathop{\mathrm{Der}}_{ \mathbf{F} }(A) \leq {\mathfrak{gl}}_{ \mathbf{F} }(A)$$.

If $$D, D'$$ are derivations, then the composition $$D\circ D'$$ is not generally a derivation, though the commutator $$[D, D'] = D\circ D' - D'\circ D$$ is.

If $$L\in \mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$, for $$x\in L$$ fixed define the adjoint operator \begin{align*} { \operatorname{ad}}_x: L &\to L \\ y &\mapsto [x, y] .\end{align*} Note that $${ \operatorname{ad}}_x\in \mathop{\mathrm{Der}}_{ \mathbf{F} }(A)$$ by the Jacobi identity. Any derivation of this form is an inner derivation, and all other derivations are outer derivations.

Given a Lie subalgebra $$K\leq L$$ and $$x\in K$$, we’ll want to distinguish $${ \operatorname{ad}}_L x$$ and $${ \operatorname{ad}}_K x$$ (which may differ). Note that $${\mathfrak{gl}}_n({ \mathbf{F} }) \geq {\mathfrak{b}}= {\mathfrak{h}}\oplus {\mathfrak{n}}$$, where $${\mathfrak{b}}$$ are upper triangular, $${\mathfrak{h}}$$ diagonal, and $${\mathfrak{n}}$$ strictly upper triangular matrices. If $$x\in {\mathfrak{h}}$$ then note that $${ \operatorname{ad}}_{\mathfrak{h}}x = 0$$ since $${\mathfrak{h}}$$ is abelian, but $${ \operatorname{ad}}_{\mathfrak{g}}x \neq 0$$ in general.

## 2.2 Humphreys 2.1

Some notes:

• $$L\geq I$$ is an ideal if $$[\ell, i] \in I$$ for all $$\ell\in L, i\in I$$. Note that this is like a left ideal in a ring, but since $$I$$ is closed under scalar multiplication and $$[i, \ell] = -[\ell, i]$$, ideals are automatically two-sided.
• If $$A, B \subseteq L$$ define $$[A, B] \mathrel{\vcenter{:}}=\mathop{\mathrm{span}}_{ \mathbf{F} }\left\{{[a,b] {~\mathrel{\Big\vert}~}a\in A,b\in B}\right\}$$. Not taking the span generally won’t even yield a subalgebra.
• For ideals $$I,J{~\trianglelefteq~}L$$, $$I+J, [I,J] {~\trianglelefteq~}L$$.
• $$L/I \mathrel{\vcenter{:}}=\left\{{x+I {~\mathrel{\Big\vert}~}x\in L}\right\}$$ with $$[x+I, y+I] \mathrel{\vcenter{:}}=[x,y] + I$$.
• Ideals are subalgebras (since this only requires closure under bracketing), but subalgebras are not necessarily ideals.
• Centers: $$Z(L) = \left\{{x\in L {~\mathrel{\Big\vert}~}[xy] = 0 \forall y\in L}\right\} \leq L$$.
• Derived ideals $$L' \mathrel{\vcenter{:}}= L^1 \mathrel{\vcenter{:}}=[L, L] {~\trianglelefteq~}L$$.
• For $${\mathfrak{g}}\geq {\mathfrak{b}}= {\mathfrak{h}}\oplus {\mathfrak{n}}$$, one has $${\mathfrak{n}}{~\trianglelefteq~}{\mathfrak{b}}$$: products of upper-triangular matrices multiply the diagonals, and bracketing/subtracting cancels the diagonal off. Moreover $${\mathfrak{b}}/{\mathfrak{n}}\cong {\mathfrak{h}}$$.
• For $$K \subseteq L$$ a subspace, the normalizer is $$N_L(K) \mathrel{\vcenter{:}}=\left\{{x\in L{~\mathrel{\Big\vert}~}[x, K] \subseteq K}\right\}$$. If $$K = N_L(K)$$ then $$K$$ is self-normalizing.
• The centralizer of $$K$$ in $$L$$ is $$C_L(K) \mathrel{\vcenter{:}}=\left\{{x\in L{~\mathrel{\Big\vert}~}[x, K] = 0}\right\} \leq L$$, which is a subalgebra by the Jacobi identity.

Is $${\mathfrak{h}}{~\trianglelefteq~}{\mathfrak{b}}$$?

A Lie algebra $$L$$ is simple if $$L\neq 0$$, $$\operatorname{Id}(L) = \left\{{0, L}\right\}$$, and $$[L, L] \neq 0$$. Note that the condition $$[LL] \neq 0$$ only rules out the 1-dimensional Lie algebra: if $$[L, L] = 0$$ and $$\dim L > 1$$, then any subspace $$0 < K < L$$ satisfies $$K{~\trianglelefteq~}L$$ since $$[L,K] = 0$$, so $$L$$ already fails to have only trivial ideals.

Let $$L = {\mathfrak{sl}}_2({\mathbf{C}})$$, so $$\operatorname{tr}(x) = 0$$. This has standard basis \begin{align*} x = { \begin{bmatrix} {0} & {1} \\ {0} & {0} \end{bmatrix} }, \qquad y = { \begin{bmatrix} {0} & {0} \\ {1} & {0} \end{bmatrix} },\qquad h = { \begin{bmatrix} {1} & {0} \\ {0} & {-1} \end{bmatrix} }. \\ [xy]=h,\quad [hx] = 2x,\quad [hy] = -2y .\end{align*}
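The standard relations can be verified directly; a quick computational check with numpy (illustrative, not part of the source notes):

```python
import numpy as np

# Standard basis of sl_2
x = np.array([[0, 1], [0, 0]])
y = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

def br(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

assert np.array_equal(br(x, y), h)       # [x, y] = h
assert np.array_equal(br(h, x), 2 * x)   # [h, x] = 2x
assert np.array_equal(br(h, y), -2 * y)  # [h, y] = -2y
```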

Prove that $${\mathfrak{sl}}_2({\mathbf{C}})$$ is simple.

Show that for $$K\leq L$$, the normalizer $$N_L(K)$$ is the largest subalgebra of $$L$$ in which $$K$$ is an ideal.

Show that $${\mathfrak{h}}\subseteq {\mathfrak{g}}\mathrel{\vcenter{:}}={\mathfrak{sl}}_n({\mathbf{C}})$$ is a self-normalizing subalgebra of $${\mathfrak{g}}$$.

Hint: use $$[h, e_{ij}] = (h_i - h_j) e_{ij}$$ where $$h = \operatorname{diag}(h_1,\cdots, h_n)$$. The standard basis is $${\mathfrak{h}}= \left\langle{e_{11} - e_{22}, e_{22} - e_{33}, \cdots, e_{n-1, n-1} - e_{n,n} }\right\rangle$$.
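The formula in the hint can be checked mechanically; a small numpy sketch (the diagonal entries below are arbitrary):

```python
import numpy as np

n = 4
hvals = np.array([3., 1., -2., -2.])   # arbitrary diagonal entries
h = np.diag(hvals)

def e(i, j):
    """Matrix unit e_ij."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

# [h, e_ij] = (h_i - h_j) e_ij for every matrix unit
for i in range(n):
    for j in range(n):
        lhs = h @ e(i, j) - e(i, j) @ h
        assert np.allclose(lhs, (hvals[i] - hvals[j]) * e(i, j))
```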

What is $$\dim {\mathfrak{sl}}_3({\mathbf{C}})$$? What is the basis for $${\mathfrak{g}}$$ and $${\mathfrak{h}}$$?

## 2.3 Humphreys 2.2

Notes:

• Let $$L, L'\in \mathsf{C} \mathrel{\vcenter{:}}=\mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}, \phi \in \mathsf{C}(L, L'), \ker \phi = \left\{{x\in L{~\mathrel{\Big\vert}~}\phi(x) = 0}\right\} {~\trianglelefteq~}L$$.
• Note that if $$x\in \ker \phi, y\in L$$ then $$\phi([xy]) = [\phi(x)\phi(y)] = [0\phi(y)] = 0$$.
• A representation of $$L$$ is some $$\phi\in \mathsf{C}(L, {\mathfrak{gl}}(V))$$ for some $$V\in { \mathsf{Vect}}_{/ {{ \mathbf{F} }}}$$.
• The usual 3 isomorphism theorems for groups hold for Lie algebras.
• $${ \operatorname{ad}}: L\to {\mathfrak{gl}}(L)$$ where $$x\mapsto { \operatorname{ad}}x$$ is a representation.
• $$\ker { \operatorname{ad}}= Z(L)$$, so if $$L$$ is simple then $$Z(L) = 0$$ and $${ \operatorname{ad}}$$ is injective. Thus any simple Lie algebra is linear.
• Compare to: any finite dimensional Lie algebra is linear.

# 3 Monday, August 22

## 3.1 Humphreys 2.3: Automorphisms

Let $$L\in \mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$, then $$\mathop{\mathrm{Aut}}(L)$$ is the group of isomorphisms $$L { \, \xrightarrow{\sim}\, }L$$. Some important examples: if $$L$$ is linear and $$g\in \operatorname{GL}(V)$$, if $$gLg^{-1}= L$$ then $$x\mapsto gxg^{-1}$$ is an automorphism. This holds for example if $$L = {\mathfrak{gl}}_n({ \mathbf{F} })$$ or $${\mathfrak{sl}}_n({ \mathbf{F} })$$. Assume $$\operatorname{ch}{ \mathbf{F} }= 0$$ and let $$x\in L$$ with $${ \operatorname{ad}}x$$ nilpotent, say $$( { \operatorname{ad}}x)^k=0$$. Then the power series expansion $$e^{ { \operatorname{ad}}x} = \sum_{n\geq 0} { ( { \operatorname{ad}}x)^n \over n!}$$ is a polynomial in $${ \operatorname{ad}}x$$.

$$\exp( { \operatorname{ad}}x)\in \mathop{\mathrm{Aut}}(L)$$ is an automorphism. More generally, $$e^\delta\in \mathop{\mathrm{Aut}}(L)$$ for $$\delta$$ any nilpotent derivation.

\begin{align*} \delta^n(xy) = \sum_{i=0}^n {n\choose i} \delta^{n-i}(x) \delta^{i}(y) .\end{align*}

One can prove this by induction. Then check that $$\exp(\delta)(x)\,\exp(\delta)(y) = \exp(\delta)(xy)$$, and writing $$\exp(\delta) = 1+\eta$$ there is an inverse $$1-\eta +\eta^2 +\cdots \pm \eta^{k-1}$$. Automorphisms in the subgroup generated by the $$\exp( { \operatorname{ad}}x)$$ for $${ \operatorname{ad}}x$$ nilpotent are called inner automorphisms, and all others are outer automorphisms.
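As a sanity check that such exponentials preserve the bracket, here is a numerical sketch for $$\delta = { \operatorname{ad}}_x$$ with $$x$$ nilpotent in $${\mathfrak{gl}}_2$$, so the exponential series is a finite sum (illustrative only):

```python
import numpy as np
from math import factorial

def br(a, b):
    return a @ b - b @ a

x = np.array([[0., 1.], [0., 0.]])   # x^2 = 0, so ad_x is nilpotent

def exp_ad(x, y, terms=6):
    """phi(y) = sum_k (ad_x)^k(y) / k!  -- a finite sum since ad_x^3 = 0."""
    out, t = np.zeros_like(y), y.copy()
    for k in range(terms):
        out += t / factorial(k)
        t = br(x, t)
    return out

rng = np.random.default_rng(2)
y, z = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
# exp(ad_x) preserves the bracket: phi([y,z]) = [phi(y), phi(z)]
assert np.allclose(exp_ad(x, br(y, z)), br(exp_ad(x, y), exp_ad(x, z)))
```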

# 4 Solvable and Nilpotent Lie Algebras (Wednesday, August 24)

## 4.1 Humphreys 3.1

Recall that if $$L \subseteq {\mathfrak{g}}$$ is any subset, the derived algebra $$[LL]$$ is the span of $$[xy]$$ for $$x,y\in L$$. This is the analog of needing to take products of commutators to generate a commutator subgroup for groups. Define the derived series of $$L$$ as \begin{align*} L^{(0)} = L, \quad L^{(1)} = [LL], \quad L^{(2)} = [L^{(1)}, L^{(1)}], \quad \cdots, \quad L^{(i+1)} = [L^{(i)}, L^{(i)}] .\end{align*}

These are all ideals.

By induction on $$i$$ – it suffices to show that $$[x[ab]] \in L^{(i)}$$ for $$a,b \in L^{(i-1)}$$ and $$x\in L$$. Use the Jacobi identity and the induction hypothesis that $$L^{(i-1)} {~\trianglelefteq~}L$$: \begin{align*} [x,[ab]] = [[xa]b] + [a[xb]] \in [L^{(i-1)}, L^{(i-1)}] + [L^{(i-1)}, L^{(i-1)}] \subseteq L^{(i)} .\end{align*}

If $$L^{(n)} = 0$$ for some $$n\geq 0$$ then $$L$$ is called solvable.

Note that

• $$L$$ abelian implies solvable, since $$L^{(1)} = 0$$.
• $$L$$ simple implies non-solvable, since this forces $$L^{(1)} = L$$.

Let $${\mathfrak{b}}\mathrel{\vcenter{:}}={\mathfrak{b}}_n({ \mathbf{F} })$$ be upper triangular matrices, show that $${\mathfrak{b}}$$ is solvable.

Use that $$[{\mathfrak{b}}{\mathfrak{b}}] = {\mathfrak{n}}$$ is strictly upper triangular since diagonals cancel. More generally, bracketing two matrices that each vanish on their first $$k$$ superdiagonals yields a matrix vanishing on its first $$2k$$ superdiagonals, so the derived series terminates after roughly $$\log_2 n$$ steps.
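Both claims admit a quick numerical check with numpy (illustrative, not from the source): brackets of upper triangular matrices are strictly upper triangular, and the band of vanishing superdiagonals doubles under bracketing.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

def band(k):
    """A random matrix vanishing below its k-th superdiagonal."""
    return np.triu(rng.normal(size=(n, n)), k)

def br(a, b):
    return a @ b - b @ a

# [b, b] lands in n: the diagonals cancel
x, y = band(0), band(0)
assert np.allclose(np.triu(br(x, y), 1), br(x, y))

# Bracketing two matrices vanishing below the k-th superdiagonal
# gives one vanishing below the 2k-th
for k in (1, 2, 4):
    z = br(band(k), band(k))
    assert np.allclose(np.triu(z, 2 * k), z)
```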

Let $$L\in \mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$, then

• $$L$$ solvable implies solvability of all subalgebras and homomorphic images of $$L$$.
• If $$I{~\trianglelefteq~}L$$ and $$L/I$$ are solvable, then $$L$$ is solvable.
• If $$I,J{~\trianglelefteq~}L$$ are solvable then $$I+J$$ is solvable.

Prove these.

Every $$L\in \mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}^{{\mathrm{fd}}}$$ has a unique maximal solvable ideal, the sum of all solvable ideals, called the radical of $$L$$, denoted $$\mathop{\mathrm{Rad}}(L)$$. $$L$$ is semisimple if $$\mathop{\mathrm{Rad}}(L) = 0$$.

Prove that any simple algebra is semisimple, and in general $$L/\mathop{\mathrm{Rad}}(L)$$ is semisimple (if nonzero).

Assume $${\mathfrak{sl}}_n({\mathbf{C}})$$ is simple; then $$R\mathrel{\vcenter{:}}=\mathop{\mathrm{Rad}}({\mathfrak{gl}}_n({\mathbf{C}})) = Z({\mathfrak{g}}) = {\mathbf{C}}\operatorname{id}_n$$ for $${\mathfrak{g}}\mathrel{\vcenter{:}}={\mathfrak{gl}}_n({\mathbf{C}})$$.

$$\supseteq$$: Centers are always solvable ideals, since they are abelian, and the radical is the sum of all solvable ideals; thus $$Z({\mathfrak{g}}) \subseteq R$$.

$$\subseteq$$: Suppose the containment $$Z\subsetneq R$$ were proper, so there is a non-scalar matrix $$x\in R$$. Write $$x = aI_n + y$$ where $$a = {\mathrm{tr}}(x)/n$$ and $$0\neq y\in {\mathfrak{sl}}_n({\mathbf{C}})$$ is traceless. Consider the ideal $$I = \left\langle{x}\right\rangle {~\trianglelefteq~}{\mathfrak{gl}}_n({\mathbf{C}})$$ generated by $$x$$, i.e. the span of $$x$$, all brackets $$[zx]$$ for $$z\in {\mathfrak{g}}$$, and their iterated brackets containing $$x$$, e.g. $$[z_1[z_2x]]$$. Note that $$[zx]=[zy]$$ since $$aI_n$$ is central. Since $${\mathfrak{sl}}_n({\mathbf{C}})$$ is simple and $$y\neq 0$$, the ideal it generates is $$\left\langle{y}\right\rangle_{{\mathfrak{sl}}_n({\mathbf{C}})} = {\mathfrak{sl}}_n({\mathbf{C}})$$, and thus $${\mathfrak{sl}}_n({\mathbf{C}}) \subseteq I$$. Since $$x = aI_n + y \in I$$ and $$y\in I$$, also $$aI_n = x - y \in I$$; if $$a\neq 0$$ this forces $$I_n\in I$$, hence $${\mathbf{C}}\cdot I_n \subseteq I$$ and $$I = {\mathfrak{g}}$$, since every matrix in $${\mathfrak{gl}}_n({\mathbf{C}})$$ is a scalar multiple of the identity plus a traceless matrix. Either way $$I$$ contains $${\mathfrak{sl}}_n({\mathbf{C}})$$. But $$I \subseteq \mathop{\mathrm{Rad}}({\mathfrak{g}})$$, which is solvable, so $$I$$ must be solvable – while $${\mathfrak{sl}}_n({\mathbf{C}})$$ is not solvable: $${\mathfrak{g}}^{(1)} \mathrel{\vcenter{:}}=[{\mathfrak{g}}{\mathfrak{g}}] = {\mathfrak{sl}}_n({\mathbf{C}})$$ and $${\mathfrak{sl}}_n({\mathbf{C}})^{(1)} = {\mathfrak{sl}}_n({\mathbf{C}})$$ by simplicity, so the derived series never terminates. $$\contradiction$$

## 4.2 Humphreys 3.2

The descending/lower central series of $$L$$ is defined as \begin{align*} L^0 = L, \quad L^1 = [LL], \quad \cdots L^i = [L, L^{i-1}] .\end{align*} $$L$$ is nilpotent if $$L^n=0$$ for some $$n$$.

Check that $$L^i {~\trianglelefteq~}L$$.

Show that $$L$$ nilpotent is equivalent to there existing a finite $$n$$ such that for any set of elements $$\left\{{x_i}\right\}_{i=1}^n$$, \begin{align*} ( { \operatorname{ad}}_{x_1} \circ { \operatorname{ad}}_{x_2} \circ \cdots \circ { \operatorname{ad}}_{x_n})(y) = 0 \qquad \forall y\in L .\end{align*}

# 5 Friday, August 26

## 5.1 3.2: Engel’s theorem

Recall $$L$$ is nilpotent if $$L^{n} = 0$$ for some $$n\geq 0$$ (the descending central series) where $$L^{i+1} = [LL^{i}]$$. Equivalently, $$\prod_{i\leq n} { \operatorname{ad}}_{x_i} =0$$ for any $$\left\{{x_i}\right\}_{i\leq n} \subseteq L$$. Note that $$L^{i} \supseteq L^{(i)}$$ by induction on $$i$$ – these coincide for $$i=0,1$$, and one can check \begin{align*} L^{(i+1)} = [L^{(i)} L^{(i)}] \subseteq [L L^{i}] = L^{i+1} .\end{align*}

$${\mathfrak{b}}_n$$ is solvable but not nilpotent (for $$n\geq 2$$), since $${\mathfrak{b}}_n^1 = {\mathfrak{n}}_n$$ but $${\mathfrak{b}}_n^2 = [{\mathfrak{b}}_n, {\mathfrak{n}}_n] = {\mathfrak{n}}_n$$ again, so the lower central series never reaches zero.

$${\mathfrak{n}}_n$$ is nilpotent, since the number of diagonals with zeros adds when taking brackets $$[LL^{i}]$$.
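Concretely, nilpotency of $${\mathfrak{n}}_n$$ can be seen from the vanishing of $$n$$-fold products of strictly upper triangular matrices (any $$n$$-fold iterated bracket is a sum of such products); a quick numpy check, illustrative only:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(4)
n = 5

# n random strictly upper triangular n x n matrices
mats = [np.triu(rng.normal(size=(n, n)), 1) for _ in range(n)]

# Their product vanishes: each factor shifts the nonzero band up by one
assert np.allclose(reduce(np.matmul, mats), 0)
```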

$${\mathfrak{h}}$$ is also nilpotent, since any abelian algebra is nilpotent.

Let $$L\in\mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}$$, then

• If $$L$$ is nilpotent then any subalgebra or homomorphic image of $$L$$ is nilpotent.
• If $$L/Z(L)$$ is nilpotent then $$L$$ is nilpotent.
• If $$L\neq 0$$ is nilpotent, then $$Z(L) \neq 0$$.

Show that if $$L/I$$ and $$I{~\trianglelefteq~}L$$ are nilpotent, then $$L$$ need not be nilpotent.

Distinguish $${ \operatorname{End} }(L)$$ whose algebra structure is given by associative multiplication and $${\mathfrak{gl}}(L)$$ with the bracket multiplication.

An element $$x\in L$$ is ad-nilpotent if $${ \operatorname{ad}}_x \in { \operatorname{End} }(L)$$ is a nilpotent endomorphism.

If $$L$$ is nilpotent then $$x\in L$$ is ad-nilpotent by taking $$x_i = x$$ for all $$i$$. It turns out that the converse is true:

If all $$x\in L$$ are ad-nilpotent, then $$L$$ is nilpotent.

To be covered in an upcoming section.

Let $$x\in {\mathfrak{gl}}(V)$$ be a nilpotent linear transformation for $$V$$ finite-dimensional. Then \begin{align*} { \operatorname{ad}}_x: {\mathfrak{gl}}(V)&\to {\mathfrak{gl}}(V) \\ y &\mapsto x\circ y - y\circ x \end{align*} is a nilpotent operator.

Let $$\lambda_x, \rho_x \in { \operatorname{End} }({\mathfrak{gl}}(V))$$ be left and right multiplication by $$x$$, which are commuting nilpotent operators. The binomial theorem shows that if $$D_1, D_2$$ are any two commuting nilpotent endomorphisms of a vector space, then $$D_1\pm D_2$$ is again nilpotent. Since $${ \operatorname{ad}}_x = \lambda_x - \rho_x$$, it follows that $${ \operatorname{ad}}_x$$ is nilpotent.
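The index of nilpotency implicit in this argument can be checked numerically: if $$x^n = 0$$ then $$( { \operatorname{ad}}_x)^{2n-1} = 0$$, since every term $$\lambda_x^a \rho_x^b$$ with $$a+b = 2n-1$$ has $$a\geq n$$ or $$b\geq n$$. A sketch (not from the source):

```python
import numpy as np

n = 4
x = np.diag(np.ones(n - 1), 1)   # single nilpotent Jordan block: x^n = 0

def ad(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(7)
y = rng.normal(size=(n, n))
for _ in range(2 * n - 1):       # apply ad_x  (2n - 1) times
    y = ad(x, y)
assert np.allclose(y, 0)         # (ad_x)^(2n-1) kills every y
```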

If $$x\in {\mathfrak{gl}}(V)$$ is nilpotent then so is $${ \operatorname{ad}}_x$$. Conversely, if all $${ \operatorname{ad}}_x$$ for $$x\in L \leq {\mathfrak{gl}}(V)$$ are nilpotent operators then $$L$$ is nilpotent by Engel’s theorem.

The converse of the above lemma is not necessarily true: $$x$$ being ad-nilpotent does not imply that $$x$$ is nilpotent. As a counterexample, take $$x=I_n\in {\mathfrak{gl}}_n({\mathbf{C}})$$, then $${ \operatorname{ad}}_x = 0$$ but $$x^k=x$$ for any $$k\geq 1$$.
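This counterexample is easy to verify directly (numpy sketch, illustrative only):

```python
import numpy as np

n = 3
x = np.eye(n)   # x = I_n in gl_n

rng = np.random.default_rng(5)
y = rng.normal(size=(n, n))
# ad_x = 0 since the identity is central: [x, y] = 0 for all y
assert np.allclose(x @ y - y @ x, 0)
# ...but x itself is not nilpotent: x^k = x for all k >= 1
assert np.allclose(np.linalg.matrix_power(x, 10), x)
```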

## 5.2 3.3: Proof of Engel’s theorem

The following is related to the classical linear algebra theorem that commuting operators admit a simultaneous eigenvector:

Let $$L$$ be a Lie subalgebra of $${\mathfrak{gl}}(V)$$ for $$V$$ finite-dimensional. If $$L$$ consists of nilpotent endomorphisms, then there exists a nonzero $$v\in V$$ such that $$Lv=0$$.

Proceed by induction on $$n = \dim L$$ (assuming the result for all smaller-dimensional Lie algebras, over all vector spaces $$V$$). The $$n=1$$ case is clear: $$L$$ is spanned by a single nilpotent operator, whose characteristic polynomial is $$f(t) = t^{\dim V}$$, which has only the root $$t=0$$, and every field contains zero. Once one has an eigenvalue, there is at least one eigenvector.

For $$n > 1$$, suppose $$K \leq L$$ is a proper Lie subalgebra. By hypothesis, $$K$$ consists of nilpotent elements in $${ \operatorname{End} }(V)$$, so apply the previous lemma to see that $${ \operatorname{ad}}(K) \subseteq { \operatorname{End} }(L)$$ acts by nilpotent endomorphisms of $$L$$ since they are restrictions to $$L$$ of nilpotent endomorphisms of $${\mathfrak{gl}}(V)$$. Since $$[KK] \subseteq K$$, we can view $${ \operatorname{ad}}(K) \subseteq { \operatorname{End} }(L/K)$$ where $$L/K$$ is a vector space.

By the IH applied to the image of $${ \operatorname{ad}}(K)$$ in $${ \operatorname{End} }(L/K)$$, a Lie algebra of dimension at most $$\dim K < \dim L$$, one can find a nonzero $$x+K \in L/K$$ such that $${ \operatorname{ad}}(K)(x+K)=0$$. Hence one can find an $$x\in L\setminus K$$ such that for all $$y\in K$$ one has $$[yx] \in K$$, so $$x\in N_L(K)\setminus K$$. Thus $$K \subsetneq N_L(K)$$ is a proper containment.

To be continued.

# 6 Monday, August 29

## 6.1 Continuation of proof and corollaries

Recall: we were proving that if $$L \leq {\mathfrak{gl}}(V)$$ with $$V$$ finite dimensional and $$L$$ consists of nilpotent endomorphisms, then there exists a common eigenvector $$v$$, so $$Lv = 0$$.

We’re inducting on $$\dim L$$ (over all $$V$$). Assuming $$\dim L > 1$$, we showed that proper subalgebras are strictly contained in their normalizers: \begin{align*} K \lneq L \implies K \subsetneq N_L(K) .\end{align*} Let $$K$$ be a maximal proper subalgebra of $$L$$, then $$N_L(K) = L$$ by maximality and thus $$K$$ is a proper ideal of $$L$$. Then $$L/K$$ is a Lie algebra of some dimension, which must be 1 – otherwise the preimage in $$L$$ under $$L\twoheadrightarrow L/K$$ of a proper nonzero subalgebra of $$L/K$$ (e.g. a line) would be a subalgebra of $$L$$ properly between $$K$$ and $$L$$. Thus $$K$$ is a codimension 1 ideal in $$L$$. Choosing any $$z\in L\setminus K$$ yields a decomposition $$L = K \bigoplus { \mathbf{F} }z$$ as vector spaces. Let $$W \mathrel{\vcenter{:}}=\left\{{v\in V{~\mathrel{\Big\vert}~}Kv=0}\right\}$$, then $$W\neq 0$$ by the IH.

$$W$$ is an $$L{\hbox{-}}$$stable subspace.

To see this, let $$x\in L, y\in K, w\in W$$. A useful trick: \begin{align*} y.(x.w) = x.(y.w) - [xy].w = 0 ,\end{align*} since the first term is zero ($$y.w = 0$$ as $$w\in W$$) and $$[xy]\in K {~\trianglelefteq~}L$$, so the second term kills $$w$$ as well.

Since $$z\curvearrowright W$$ nilpotently, choose an eigenvector $$v$$ for $$z$$ in $$W$$ for the eigenvalue zero. Then $$z.v=0$$, so $$Lv=0$$.

If all elements of a Lie algebra $$L$$ are ad-nilpotent, then $$L$$ is nilpotent as an algebra.

Induct on $$\dim L$$. Note that $${ \operatorname{ad}}(L) \leq {\mathfrak{gl}}(L)$$ consists of nilpotent endomorphisms. Use the theorem to pick $$0\neq x\in L$$ such that $${ \operatorname{ad}}(L).x = 0$$, i.e. $$[L, x] = 0$$, i.e. $$x\in Z(L)$$, and thus $$Z(L)$$ is nonzero. Now $$\dim L/Z(L) < \dim L$$, and a fortiori its elements are still ad-nilpotent, so $$L/Z(L)$$ is nilpotent. By proposition 3.2b, $$L$$ is nilpotent.

Let $$0\neq L \leq {\mathfrak{gl}}(V)$$ with $$\dim V < \infty$$ be a Lie algebra of nilpotent endomorphisms (as in the theorem). Then $$V$$ has a basis in which the matrices of $$L$$ are all strictly upper triangular.

Induct on $$\dim V$$. Use the theorem to pick a nonzero $$v_1$$ with $$Lv_1=0$$. Consider $$W\mathrel{\vcenter{:}}= V/{ \mathbf{F} }v_1$$, and view $$L \subseteq { \operatorname{End} }(V)$$ as a subspace of $${ \operatorname{End} }(W)$$ – these are still nilpotent endomorphisms. By the IH, $$W$$ has a basis $$\left\{{\overline{v}_i}\right\}_{2\leq i \leq n}$$ with respect to which the matrices in $$L$$ (viewed as a subspace of $${ \operatorname{End} }(W)$$) are strictly upper triangular. Let $$\left\{{v_i}\right\} \subseteq V$$ be preimages of these; then $$\left\{{v_1, \cdots, v_n}\right\}$$ is a basis of $$V$$ with the desired properties.

## 6.2 Chapter 4: Theorems of Lie and Cartan

From now on, assume $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$ is algebraically closed and $$\operatorname{ch}({ \mathbf{F} }) = 0$$.

## 6.3 4.1: Lie’s Theorem

Let $$L\neq 0$$ be a solvable Lie subalgebra of $${\mathfrak{gl}}(V)$$ with $$\dim V < \infty$$. Then $$V$$ contains a common eigenvector for all of $$L$$.

Induct on $$\dim L$$. If $$\dim L = 1$$, then $$L$$ is spanned by a single linear operator $$x$$, and over an algebraically closed field $$x$$ has at least one eigenvector. For $$\dim L > 1$$, take the following strategy:

1. Identify $$K{~\trianglelefteq~}L$$ proper of codimension 1,
2. By IH, find a common eigenvector for $$K$$,
3. Verify that $$L$$ stabilizes “the” subspace of all such common eigenvectors (much harder than before!)
4. In this subspace, find an eigenvector for some $$z\in L\setminus K$$ with $$L = K \oplus { \mathbf{F} }z$$.

Off we go!

Step 1: Since $$L$$ is solvable, we have $$[LL]$$ properly contained in $$L$$. In the abelian algebra $$L/[LL]$$, choose any codimension 1 subspace – it is automatically an ideal, which lifts to a codimension 1 ideal $$K \subset L$$.

Step 2: Since subalgebras of solvable algebras are again solvable, $$K$$ is solvable. By the IH, pick a common nonzero eigenvector $$v$$ for $$K$$. There exists a linear map $$\lambda: K\to { \mathbf{F} }$$ such that $$x.v = \lambda(x) v$$ for all $$x\in K$$. Let $$W \mathrel{\vcenter{:}}=\left\{{v\in V {~\mathrel{\Big\vert}~}y.v = \lambda(y) v\,\,\forall y\in K}\right\}$$, which is nonzero.

Step 3: Note $$L.W \subseteq W$$. Let $$w\in W, x\in L, y\in K$$; we WTS $$y.(x.w) = \lambda(y)x.w$$. Write \begin{align*} y.(x.w) &= x.(y.w) - [xy].w \\ &= \lambda(y)(x.w) - \lambda([xy])w ,\end{align*} where the second line follows since $$[xy]\in K$$. We then need $$\lambda([xy]) = 0$$ for all $$x\in L$$ and $$y\in K$$. Since $$\dim V < \infty$$, choose $$n$$ minimal such that $$\left\{{w, x.w, x^2.w,\cdots, x^n.w}\right\}$$ is linearly dependent. Set $$W_i \mathrel{\vcenter{:}}=\mathop{\mathrm{span}}_{ \mathbf{F} }\left\{{w, x.w, \cdots, x^{i-1}.w}\right\}$$, so $$W_0 = 0, W_1 = \mathop{\mathrm{span}}_{ \mathbf{F} }\left\{{w}\right\}$$, and so on, noting that

• $$\dim W_n = n$$,
• $$W_{n+k} = W_n$$ for all $$k\geq 0$$,
• $$x.W_n \subseteq W_n$$.

For all $$y\in K$$,

\begin{align*} y.x^i.w \equiv \lambda(y)\, x^i.w \operatorname{mod}W_i .\end{align*}

To be continued!

# 7 Wednesday, August 31

## 7.1 Section 4.1, continuing the proof

Recall $$\dim L, \dim V < \infty$$, $${ \mathbf{F} }$$ is algebraically closed, and $$\operatorname{ch}{ \mathbf{F} }= 0$$. For $$L \leq {\mathfrak{gl}}(V)$$ solvable, we want a common eigenvector $$v\in V$$ for $$L$$. Steps for the proof:

1. Find a $$K{~\trianglelefteq~}L$$ proper of codimension 1.
2. Set $$W = \left\{{v\in V {~\mathrel{\Big\vert}~}x.v = \lambda(x) v \,\forall x\in K}\right\}\neq 0$$ for some linear $$\lambda: K\to { \mathbf{F} }$$.
3. Show $$L.W \subseteq W$$; we needed to show $$\lambda([LK] ) = 0$$.

Step 3: Fix $$x\in L, w\in W$$ and $$n$$ minimal such that $$\left\{{x^i w}\right\}_{i\leq n}$$ is linearly dependent. For $$i\geq 0$$ set $$W_i = { \mathbf{F} }\left\langle{w, xw, \cdots, x^{i-1}w}\right\rangle$$. Then $$\dim W_n = n, W_n = W_{n+i}$$ for $$i\geq 0$$, and $$xW_n \subseteq W_n$$.

For all $$y\in K$$, \begin{align*} y.x^i .w \equiv \lambda(y)\, x^i w \operatorname{mod}W_i .\end{align*}

This is proved by induction on $$i$$, where $$i=0$$ follows from how $$W$$ is defined. For $$i\geq 1$$, use the commuting trick: \begin{align*} yx^i . w &= yxx^{i-1}w \\ &= (xy - [xy]) x^{i-1} w \\ &= x(y x^{i-1} w) - [xy]x^{i-1}w \\ &\equiv \lambda(y) x^i w - \lambda([xy])x^{i-1} w \operatorname{mod}W_{i-1} \\ &\equiv \lambda(y) x^i w - \lambda([xy])x^{i-1} w \operatorname{mod}W_{i} \qquad \text{since } W_{i-1} \leq W_i \\ &\equiv \lambda(y) x^i w \operatorname{mod}W_i .\end{align*}

Given this claim, for $$i=n$$ this says that the matrix of any $$y\in K$$ with respect to the basis $$\left\{{x^iw}\right\}_{0\leq i \leq n-1}$$ is upper triangular with diagonal entries all equal to $$\lambda(y)$$. Thus $${ \left.{{\operatorname{tr}(y)}} \right|_{{W_n}} } = n \lambda(y)$$, and so $$[xy]\curvearrowright W_n$$ with trace $$n \lambda([xy])$$. On the other hand, $$x,y$$ both act on $$W_n$$ (e.g. by the formula in the claim for $$yx^i.w$$) and so \begin{align*} { \left.{{[xy]}} \right|_{{W_n}} } = { \left.{{xy}} \right|_{{W_n}} } - { \left.{{yx}} \right|_{{W_n}} } ,\end{align*} thus $${ \left.{{ \operatorname{tr}([xy])}} \right|_{{W_n}} } = 0$$. Since $${ \mathbf{F} }$$ is characteristic zero, we have $$n \lambda([xy]) = 0 \implies \lambda([xy]) = 0$$.

Step 4: By step 1, $$L = K \oplus { \mathbf{F} }z$$ for some $$z\in L\setminus K$$. Viewing $$z: W\to W$$ and using $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$, $$z$$ has an eigenvector $$v\in W$$. Since $$v\in W$$, it is also a common eigenvector for $$K$$ and thus an eigenvector for $$L$$ by additivity.

Let $$L\leq {\mathfrak{gl}}(V)$$ be a solvable subalgebra, then $$L$$ stabilizes some flag in $$V$$. In particular, there exists a basis for $$V$$ with respect to which the matrices in $$L$$ are all upper triangular.

Recall that for $$V \in{ \mathsf{Vect}}_{/ {{ \mathbf{F} }}}$$, a complete flag is an element of \begin{align*} \operatorname{Fl}(V) \mathrel{\vcenter{:}}=\left\{{ 0 = V^0 \subsetneq V^1 \subsetneq \cdots \subsetneq V^n = V {~\mathrel{\Big\vert}~}\dim V^i = i}\right\} .\end{align*} A subalgebra $$L$$ stabilizes a flag if $$LV^i \subseteq V^i$$ for all $$i$$, which implies there is a compatible basis (obtained by extending one vector at a time from a basis for $$V^1$$) for which $$L$$ acts by upper triangular matrices.

Use the theorem and induct on $$n=\dim V$$ as in Engel’s theorem: find a common eigenvector $$v_1$$ spanning $$V^1$$; since $$L$$ stabilizes $$V^1$$, one gets an action $$L\curvearrowright V/V^1$$ of smaller dimension. Then lift the resulting flag through the quotient.

Let $$L$$ be a solvable Lie algebra, then there exists a chain of ideals \begin{align*} 0 = L_0 \subsetneq L_1 \subsetneq \cdots \subsetneq L_n = L \end{align*} such that $$\dim L_i = i$$.

Consider $${ \operatorname{ad}}L \leq {\mathfrak{gl}}(L)$$. Apply Lie’s theorem: $$( { \operatorname{ad}}L)L_i \subseteq L_i \iff [LL_i] \subseteq L_i$$, making $$L_i{~\trianglelefteq~}L$$ an ideal.

Let $$L$$ be solvable, then $$x\in [LL]\implies { \operatorname{ad}}_L x$$ is nilpotent. Hence $$[LL]$$ is nilpotent by Engel’s theorem.

Find a flag of ideals using the previous corollary and let $$\left\{{x_1,\cdots, x_n}\right\}$$ be a compatible basis. Then the matrices $$\left\{{ { \operatorname{ad}}_x{~\mathrel{\Big\vert}~}x\in L}\right\}$$ are all upper triangular. If $$x\in [LL]$$, without loss of generality $$x = [yz]$$ for some $$y,z\in L$$. Then \begin{align*} { \operatorname{ad}}_x = [ { \operatorname{ad}}_y { \operatorname{ad}}_z] = { \operatorname{ad}}_y { \operatorname{ad}}_z - { \operatorname{ad}}_z { \operatorname{ad}}_y \end{align*} will be strictly upper triangular (since these are upper triangular and the commutator cancels diagonals) and hence nilpotent.

## 7.2 Section 4.3

We’ll come back to 4.2 next time. For this section, assume $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$ and $$\operatorname{ch}{ \mathbf{F} }= 0$$. Cartan’s criterion for a semisimple $$L$$ (i.e. $$\mathop{\mathrm{Rad}}(L) = 0$$) involves the Killing form, a certain nondegenerate bilinear form on $$L$$. Recall that if $$L$$ is solvable then $$[LL]$$ is nilpotent, or equivalently every $$x\in [LL]$$ is ad-nilpotent.

Let $$A \subseteq B$$ be subspaces of $${\mathfrak{gl}}(V)$$ (really $${ \operatorname{End} }(V)$$ as a vector space) with $$V$$ finite-dimensional. Let \begin{align*} M\mathrel{\vcenter{:}}=\left\{{w\in {\mathfrak{gl}}(V) {~\mathrel{\Big\vert}~}[wB] \subseteq A}\right\} \end{align*} and suppose some $$w\in M$$ satisfies $$\operatorname{tr}(wz) = 0$$ for all $$z\in M$$. Then $$w$$ is nilpotent.

Later!

A bilinear form is a map \begin{align*} \beta({-}, {-}): L\times L\to { \mathbf{F} } ,\end{align*} which is symmetric if $$\beta(x,y) = \beta(y,x)$$ and associative if $$\beta([xy], z) = \beta(x, [yz])$$ for all $$x,y,z\in L$$. The radical of $$\beta$$ is \begin{align*} \mathop{\mathrm{Rad}}(\beta) \mathrel{\vcenter{:}}=\left\{{w\in V{~\mathrel{\Big\vert}~}\beta(w, V) = 0}\right\} ,\end{align*} and $$\beta$$ is nondegenerate if $$\mathop{\mathrm{Rad}}(\beta) = 0$$.

For $$L = {\mathfrak{gl}}(V)$$, take $$\beta(x,y)\mathrel{\vcenter{:}}=\operatorname{tr}(xy)$$. One can check this is symmetric, bilinear, and associative – associativity follows from the following: \begin{align*} [xy]z &= xyz-yxz\\ x[yz] &= xyz - xzy .\end{align*} Then note that $$y(xz)$$ and $$(xz)y$$ have the same trace, since $$\operatorname{tr}(AB) = \operatorname{tr}(BA)$$.

If $$\beta$$ is associative, then $$\mathop{\mathrm{Rad}}(\beta) {~\trianglelefteq~}L$$.

Let $$z\in \mathop{\mathrm{Rad}}(\beta)$$ and $$x,y\in L$$. To see if $$[zx]\in \mathop{\mathrm{Rad}}(\beta)$$, check \begin{align*} \beta([zx], y) = \beta(z, [xy]) = 0 \end{align*} since $$z\in \mathop{\mathrm{Rad}}(\beta)$$. Thus $$[zx] \in \mathop{\mathrm{Rad}}(\beta)$$.

# 8 Friday, September 02

## 8.1 4.2: Jordan-Chevalley decomposition

Let $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$ of arbitrary characteristic and $$V\in{ \mathsf{Vect}}_{/ {{ \mathbf{F} }}}^{\mathrm{fd}}$$ with $$x\in { \operatorname{End} }_{ \mathbf{F} }(V)$$. The JCF of $$x$$ is of the form $$D+N$$ where $$D$$ is diagonal, $$N$$ is nilpotent, and $$D, N$$ commute. Recall $$x$$ is semisimple (diagonalizable) iff the minimal polynomial of $$x$$ has distinct roots.

If $$x\in { \operatorname{End} }(V)$$,

1. There is a decomposition $$x = x_s + x_n$$ where $$x_s$$ is semisimple and $$x_n$$ is nilpotent. This is unique subject to the condition that $$x_s, x_n$$ commute.

2. There are polynomials $$p(T), q(T)$$ without constant terms with $$x_s = p(x), x_n = q(x)$$. In particular, $$x_s, x_n$$ commute with any endomorphism which commutes with $$x$$.

Let $$x\in {\mathfrak{gl}}(V)$$ with Jordan decomposition $$x = x_s + x_n$$. Then $${ \operatorname{ad}}_x = { \operatorname{ad}}_{x_s} + { \operatorname{ad}}_{x_n}$$ is the Jordan decomposition of $${ \operatorname{ad}}_x$$ in $${ \operatorname{End} }({ \operatorname{End} }(V))$$.

If $$x\in {\mathfrak{gl}}(V)$$ is semisimple then so is $${ \operatorname{ad}}_x$$, since the eigenvalues of $${ \operatorname{ad}}_x$$ are differences of eigenvalues of $$x$$. I.e. if $$\left\{{v_1,\cdots, v_n}\right\}$$ is an eigenbasis for $$V$$ and $$x.v_i = a_i v_i$$ in this basis, we have $$[x e_{ij}] = (a_i - a_j) e_{ij}$$, so $$\left\{{e_{ij}}\right\}$$ is an eigenbasis for $${ \operatorname{ad}}_x$$. If $$x$$ is nilpotent then $${ \operatorname{ad}}_x$$ is nilpotent, since $${ \operatorname{ad}}_x(y) = \lambda_x(y) - \rho_x(y)$$ where $$\lambda, \rho$$ are left/right multiplication, which are commuting nilpotent operators, and sums of commuting nilpotents are nilpotent. One can check $$[ { \operatorname{ad}}_{x_s} { \operatorname{ad}}_{x_n}] = { \operatorname{ad}}_{[x_s x_n]} = 0$$ since they commute.
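
The eigenvalue claim can be checked numerically; the sketch below (an illustration, not from the notes, using `numpy` and an arbitrary choice of eigenvalues $$a_i$$) realizes $${ \operatorname{ad}}_x$$ as an $$n^2\times n^2$$ matrix via Kronecker products and compares its spectrum with the differences $$a_i - a_j$$.

```python
import numpy as np

a = np.array([1.0, 3.0, 7.0])   # eigenvalues a_i of a semisimple x (arbitrary)
x = np.diag(a)
n = len(a)

# ad_x(Y) = xY - Yx as an n^2 x n^2 matrix on gl_n (row-major vectorization)
ad_x = np.kron(x, np.eye(n)) - np.kron(np.eye(n), x.T)

expected = sorted(a[i] - a[j] for i in range(n) for j in range(n))
actual = sorted(np.linalg.eigvals(ad_x).real)
assert np.allclose(expected, actual)    # spectrum of ad_x = {a_i - a_j}
```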

One can show that if $$L$$ is semisimple then $${ \operatorname{ad}}(L) = \mathop{\mathrm{Der}}(L)$$, which is used to show that if $$L$$ is an arbitrary semisimple Lie algebra then every $$x\in L$$ admits a unique decomposition satisfying

• $$x = x_s + x_n$$,
• $$[x_s x_n] = 0$$,
• $${ \operatorname{ad}}_{x_s}$$ is semisimple and $${ \operatorname{ad}}_{x_n}$$ is nilpotent.

This gives a notion of semisimplicity and nilpotency for Lie algebras not of the form $${\mathfrak{gl}}(V)$$.

Let $$U\in \mathsf{Alg}_{/ {{ \mathbf{F} }}}^{\mathrm{fd}}$$, then $$\mathop{\mathrm{Der}}(U)$$ is closed under taking semisimple and nilpotent parts.

Let $$\delta\in \mathop{\mathrm{Der}}(U)$$ and write $$\delta = \sigma + v$$ for the Jordan decomposition of $$\delta$$ in $${ \operatorname{End} }(U)$$. It STS $$\sigma$$ is a derivation, so for $$a\in { \mathbf{F} }$$ define \begin{align*} U_a \mathrel{\vcenter{:}}=\left\{{x\in U {~\mathrel{\Big\vert}~}(\delta - a)^k x = 0 \,\,\text{for some } k}\right\} .\end{align*} Note $$U = \bigoplus _{a\in \Lambda} U_a$$ where $$\Lambda$$ is the set of eigenvalues of $$\delta$$, which are also the eigenvalues of $$\sigma$$ – this is because $$\sigma, v$$ are commuting operators, so eigenvalues of $$\delta$$ are sums of eigenvalues of $$\sigma$$ and $$v$$.

For any $$a,b\in { \mathbf{F} }$$, $$U_a U_b \subseteq U_{a+b}$$.

Assuming this, it STS $$\sigma(xy) = \sigma(x)y + x \sigma(y)$$ when $$x\in U_a, y\in U_b$$ where $$a,b$$ are eigenvalues. Using that eigenvalues of $$\delta$$ are also eigenvalues of $$\sigma$$, since $$xy\in U_{a+b}$$ by the claim, $$\sigma(xy) = (a+b)xy$$ and thus \begin{align*} \sigma(x)y + x \sigma(y) = axy + xby = (a+b)xy .\end{align*} So $$\sigma\in \mathop{\mathrm{Der}}(U)$$.

A sub-claim, a Leibniz-type formula: \begin{align*} (\delta - (a+b) 1)^n (xy) = \sum_{0\leq i\leq n} {n\choose i} \left( (\delta - a 1)^{n-i}x \right) \left( (\delta- b 1)^i y \right) .\end{align*}
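
This Leibniz-type formula can be spot-checked numerically (an illustration, not part of the notes): take $$\delta = { \operatorname{ad}}_D$$ for a matrix $$D$$, which is a derivation of the matrix algebra, and arbitrary scalars $$a, b$$ and test matrices $$x, y$$.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
D = np.diag([1.0, 2.0, 4.0])
delta = lambda u: D @ u - u @ D      # ad_D is a derivation of matrix mult.

def power(f, k, u):                  # apply the operator f a total of k times
    for _ in range(k):
        u = f(u)
    return u

a, b, n = 0.5, -1.25, 3              # arbitrary choices for the check
x, y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

da  = lambda u: delta(u) - a * u
db  = lambda u: delta(u) - b * u
dab = lambda u: delta(u) - (a + b) * u

lhs = power(dab, n, x @ y)
rhs = sum(comb(n, i) * power(da, n - i, x) @ power(db, i, y)
          for i in range(n + 1))
assert np.allclose(lhs, rhs)         # the binomial/Leibniz identity holds
```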

# 9 Wednesday, September 07

## 9.1 4.3: Cartan’s criterion for semisimplicity

For the rest of the course, $$V$$ is a vector space of finite dimension. Goal: get a criterion for semisimplicity.

Let $$L\leq {\mathfrak{gl}}(V)$$ be a linear Lie algebra and suppose $$\operatorname{tr}(xz)=0$$ for all $$x\in [LL]$$ and $$z\in L$$. Then $$L$$ is solvable.

Let $$A \subseteq B$$ be subspaces of $${ \operatorname{End} }(V) = {\mathfrak{gl}}(V)$$ and define \begin{align*} M = \left\{{w\in {\mathfrak{gl}}(V) {~\mathrel{\Big\vert}~}[w, B] \subseteq A}\right\} .\end{align*} Suppose that $$w\in M$$ satisfies $$\operatorname{tr}(wz) = 0$$ for all $$z\in M$$. Then $$w$$ is nilpotent.

To show $$L$$ is solvable, it STS that $$[LL]$$ is nilpotent, since the terms of the derived series are contained in the corresponding terms of the lower central series. By Engel’s theorem, it STS to show each $$w\in [LL]$$ is ad-nilpotent. Since $$L \leq {\mathfrak{gl}}(V)$$, it STS to show each $$w\in [LL]$$ is a nilpotent endomorphism. As in the setup of the lemma, set $$B = L, A = [LL]$$, then \begin{align*} M \mathrel{\vcenter{:}}=\left\{{z\in {\mathfrak{gl}}(V) {~\mathrel{\Big\vert}~}[zL] \subseteq [LL] }\right\} \supseteq L \supseteq[LL] .\end{align*} Let $$w\in [LL] \subseteq M$$; we know $$\operatorname{tr}(wz) = 0$$ for all $$z\in L$$, but we need this for all $$z\in M$$. Let $$z\in M$$ be arbitrary; by linearity of the trace it STS to check $$\operatorname{tr}(wz) = 0$$ on generators $$w = [xy]$$ of $$[LL]$$ with $$x,y\in L$$. We thus WTS $$\operatorname{tr}([xy]z) = 0$$: \begin{align*} \operatorname{tr}([xy]z) = \operatorname{tr}(x [yz] ) = \operatorname{tr}([yz] x) = 0 ,\end{align*} since $$z\in M$$ gives $$[yz] \in [LL]$$, and $$\operatorname{tr}([LL]\,L) = 0$$ by assumption. By the lemma, $$w$$ is nilpotent.

Let $$L\in \mathsf{Lie} \mathsf{Alg}$$ with $$\operatorname{tr}( { \operatorname{ad}}_x { \operatorname{ad}}_y) = 0$$ for all $$x \in [LL]$$ and $$y\in L$$. Then $$L$$ is solvable.

Use $${ \operatorname{ad}}: L\to {\mathfrak{gl}}(L)$$, a morphism of Lie algebras. Its image is solvable by Cartan’s criterion above, and $$\ker { \operatorname{ad}}= Z(L)$$ is abelian and hence a solvable ideal. Therefore $$L$$ is solvable.

Let $$w = s + n$$ be the Jordan-Chevalley decomposition of $$w$$. Choose a basis for $$V$$ such that this is the JCF of $$w$$, i.e. $$s = \operatorname{diag}(a_1,\cdots, a_n)$$ and $$n$$ is strictly upper triangular. Idea: show $$s=0$$ by showing $$A\mathrel{\vcenter{:}}={\mathbf{Q}}\left\langle{a_1,\cdots, a_n}\right\rangle = 0$$, by showing $$A {}^{ \vee }= 0$$, i.e. any $${\mathbf{Q}}{\hbox{-}}$$linear functional $$f: A\to {\mathbf{Q}}$$ is zero. If $$\sum a_i f(a_i) = 0$$ then \begin{align*} 0 = f\left(\sum a_i f(a_i)\right) = \sum f(a_i)^2 \implies f(a_i) = 0 \,\,\forall i ,\end{align*} so it STS to show $$\sum a_i f(a_i) = 0$$. Let $$y = \operatorname{diag}( f(a_1), \cdots, f(a_n) )$$; then $${ \operatorname{ad}}_y$$ is a polynomial without constant term in $${ \operatorname{ad}}_s$$ (explicitly constructed using Lagrange interpolation; see the exercise). Since $${ \operatorname{ad}}_s$$ is in turn a polynomial in $${ \operatorname{ad}}_w$$ with zero constant term, and since $${ \operatorname{ad}}_w: B\to A$$, we have $${ \operatorname{ad}}_s(B) \subseteq A$$ and the same is thus true for $${ \operatorname{ad}}_y$$. So $$y\in M$$, and applying the trace condition in the lemma with $$z\mathrel{\vcenter{:}}= y$$ we get \begin{align*} 0 = \operatorname{tr}(wy) = \sum a_i f(a_i) ,\end{align*} noting that $$w$$ is upper triangular and $$y$$ is diagonal. So $$s=0$$ and $$w=n$$ is nilpotent.

Show $${ \operatorname{ad}}_y$$ is a polynomial in $${ \operatorname{ad}}_s$$.

Recall that $$\mathop{\mathrm{Rad}}L$$ is the unique maximal (not necessarily proper) solvable ideal of $$L$$. This exists, e.g. because sums of solvable ideals are solvable. Note that $$L$$ is semisimple iff $$\mathop{\mathrm{Rad}}L = 0$$.

Let $$L\in \mathsf{Lie} \mathsf{Alg}^{\mathrm{fd}}$$ and define the Killing form \begin{align*} \kappa: L\times L &\to { \mathbf{F} }\\ \kappa(x, y) &= \operatorname{tr}( { \operatorname{ad}}_x \circ { \operatorname{ad}}_y) .\end{align*} This is a symmetric, associative bilinear form on $$L$$.

Let $$L = {\mathbf{C}}\left\langle{x, y}\right\rangle$$ with $$[xy] = x$$. In this ordered basis, \begin{align*} { \operatorname{ad}}_x = { \begin{bmatrix} {0} & {1} \\ {0} & {0} \end{bmatrix} } \qquad { \operatorname{ad}}_y = { \begin{bmatrix} {-1} & {0} \\ {0} & {0} \end{bmatrix} } ,\end{align*} and one can check $$\kappa(x,x) = \kappa(x, y) = \kappa(y, x) = 0$$ and $$\kappa(y,y) = 1$$. Moreover $$\mathop{\mathrm{Rad}}\kappa = {\mathbf{C}}\left\langle{x}\right\rangle$$.
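
This two-dimensional example is small enough to recompute by machine; the sketch below (illustrative, using `numpy`) builds the $${ \operatorname{ad}}$$ matrices above, forms the Gram matrix of $$\kappa$$ in the basis $$\{x, y\}$$, and checks that $$x$$ spans the radical.

```python
import numpy as np

# ad matrices for the basis {x, y} with [x, y] = x, in that ordered basis
ad_x = np.array([[0., 1.], [0., 0.]])
ad_y = np.array([[-1., 0.], [0., 0.]])

kappa = lambda A, B: np.trace(A @ B)
K = np.array([[kappa(ad_x, ad_x), kappa(ad_x, ad_y)],
              [kappa(ad_y, ad_x), kappa(ad_y, ad_y)]])
assert np.allclose(K, [[0., 0.], [0., 1.]])   # only kappa(y, y) = 1 survives

# Rad(kappa) is the kernel of the Gram matrix: here it is spanned by x
assert np.allclose(K @ np.array([1., 0.]), 0)
```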

See the text for $$\kappa$$ defined on $${\mathfrak{sl}}_2$$.

Let $$I {~\trianglelefteq~}L$$. If $$\kappa$$ is the Killing form of $$L$$ and $$\kappa_I$$ that of $$I$$, then \begin{align*} \kappa_I = { \left.{{\kappa}} \right|_{{I\times I}} } .\end{align*}

Let $$x\in I$$, then $${ \operatorname{ad}}_x(L) \subseteq I$$ since $$I$$ is an ideal. Choosing a basis for $$I$$ and extending it to a basis for $$L$$, the matrix of $${ \operatorname{ad}}_x$$ has block form \begin{align*} { \begin{bmatrix} { { \operatorname{ad}}_{I,x} } & {*} \\ {0} & {0} \end{bmatrix} } .\end{align*}

So if $$x,y\in I$$, we have \begin{align*} \kappa(x,y) &= \operatorname{tr}( { \operatorname{ad}}_x \circ { \operatorname{ad}}_y) \\ &= \operatorname{tr}( { \operatorname{ad}}_{I, x} \circ { \operatorname{ad}}_{I, y}) \\ &= \kappa_I(x, y) .\end{align*}

# 10 Friday, September 09

## 10.1 5.1: The Killing form

For the rest of the course: $$k = { \overline{k} }$$ and $$\operatorname{ch}k = 0$$. Theorem from last time: $$L$$ is semisimple iff its Killing form $$\kappa(x, y) \mathrel{\vcenter{:}}=\operatorname{tr}( { \operatorname{ad}}_x { \operatorname{ad}}_y)$$ is nondegenerate.

Let $$S = \mathop{\mathrm{Rad}}(\kappa)$$; then $$S {~\trianglelefteq~}L$$, which is easy to check using “invariance” (associativity) of the form. Given $$s,s'\in S$$, the restricted form satisfies $$\kappa_S(s, s') = \operatorname{tr}( { \operatorname{ad}}_{S, s} { \operatorname{ad}}_{S, s'}) = \operatorname{tr}( { \operatorname{ad}}_{L, s} { \operatorname{ad}}_{L, s'})$$, which was proved in a previous lemma. But this is equal to $$\kappa(s, s') = 0$$. In particular, this holds for $$s\in [SS]$$, so by (the corollary of) Cartan’s criterion for solvable Lie algebras, $$S$$ is solvable as a Lie algebra and thus solvable as an ideal in $$L$$.

$$\implies$$: Since $$\mathop{\mathrm{Rad}}(L)$$ is the sum of all solvable ideals, we have $$S \subseteq \mathop{\mathrm{Rad}}(L)$$, but since $$L$$ is semisimple $$\mathop{\mathrm{Rad}}(L) = 0$$ and thus $$S=0$$.

$$\impliedby$$: Assume $$S=0$$. If $$I {~\trianglelefteq~}L$$ is a solvable ideal, then $$I^{(n)} = 0$$ for some $$n\geq 0$$. If $$I^{(n-1)} \neq 0$$, it is a nonzero abelian ideal – since we want to show $$\mathop{\mathrm{Rad}}(L) = 0$$, we don’t want this to happen! Thus it STS to show that every abelian ideal is contained in $$S$$.

So let $$I {~\trianglelefteq~}L$$ be an abelian ideal, $$x\in I$$, $$y\in L$$. Define $$A_{xy} \mathrel{\vcenter{:}}= { \operatorname{ad}}_x { \operatorname{ad}}_y$$; then \begin{align*} A_{xy}^2 = ( { \operatorname{ad}}_x { \operatorname{ad}}_y)^2: L \xrightarrow{ { \operatorname{ad}}_y} L \xrightarrow{ { \operatorname{ad}}_x} I \xrightarrow{ { \operatorname{ad}}_y} I \xrightarrow{ { \operatorname{ad}}_x} 0 ,\end{align*} which is zero since $$[II] =0$$. Thus $$A_{xy}$$ is a nilpotent endomorphism, and nilpotent endomorphisms are traceless, so $$0 = \operatorname{tr}( { \operatorname{ad}}_x { \operatorname{ad}}_y) = \kappa(x, y)$$ for all $$y\in L$$, and so $$x\in S$$. Thus $$I \subseteq S$$.

$$\mathop{\mathrm{Rad}}(\kappa) \subseteq \mathop{\mathrm{Rad}}(L)$$ always, but the reverse containment is not always true – see exercise 5.4.

## 10.2 5.2: Simple Ideals of $$L$$

Let $$L_i\in \mathsf{Lie} \mathsf{Alg}_{/ {k}}$$, then their direct sum is the product $$L_1 \times L_2$$ with bracket \begin{align*} [x_1 \oplus x_2, y_1 \oplus y_2] \mathrel{\vcenter{:}}=[x_1 y_1] \oplus [x_2 y_2] .\end{align*}

In particular, $$[L_1, L_2] = 0$$, and thus any ideal $$I_1 {~\trianglelefteq~}L_1$$ yields an ideal $$I_1 \oplus 0 {~\trianglelefteq~}L_1 \oplus L_2$$. Moreover, if $$L = \bigoplus I_i$$ is a vector space direct sum of ideals of $$L$$, this is automatically a Lie algebra direct sum since $$[I_i, I_j] = I_i \cap I_j = 0$$ for all $$i\neq j$$.

This is not true for subalgebras! Also, in this theory, one should be careful about whether direct sums are as vector spaces or (in the stronger sense) as Lie algebras.

Let $$L$$ be a finite-dimensional semisimple Lie algebra. Then there exist ideals $$I_1, \cdots, I_n$$ of $$L$$ such that $$L = \bigoplus I_i$$ with each $$I_j$$ simple as a Lie algebra. Moreover every simple ideal is one of the $$I_j$$.

Let $$I {~\trianglelefteq~}L$$ and define \begin{align*} I^\perp \mathrel{\vcenter{:}}=\left\{{x\in L {~\mathrel{\Big\vert}~}\kappa(x, I) = 0 }\right\} ,\end{align*} the orthogonal complement of $$I$$ with respect to $$\kappa$$. This is an ideal by the associativity of $$\kappa$$. Set $$J\mathrel{\vcenter{:}}= I \cap I^\perp {~\trianglelefteq~}L$$; then $$\kappa([JJ], J) = 0$$, so by Cartan’s criterion $$J$$ is a solvable ideal, and thus $$J = 0$$ since $$L$$ is semisimple.

From the Erdmann–Wildon lemma in the appendix (posted on ELC, lemma 16.11), $$\dim L = \dim I + \dim I^\perp$$ and $$L = I \oplus I^\perp$$, so now induct on $$\dim L$$ to get the decomposition when $$L$$ is not simple. These summands are semisimple ideals, since solvable ideals in $$I, I^\perp$$ remain solvable in $$L$$. Finally let $$I{~\trianglelefteq~}L$$ be simple; then $$[I, L] \subseteq I$$ is an ideal (in both $$L$$ and $$I$$), which is nonzero since otherwise $$I \subseteq Z(L) = 0$$. Since $$I$$ is simple, this forces $$[I, L] = I$$. Writing $$L = \bigoplus I_i$$ as a sum of simple ideals, we have \begin{align*} I = [I, L] = [I, \bigoplus I_i] = \bigoplus [I, I_i] ,\end{align*} and by simplicity only one term can be nonzero, so $$I = [I, I_j]$$ for some $$j$$. Since $$I_j$$ is an ideal, $$[I, I_j] \subseteq I_j$$, and by simplicity of $$I_j$$ we have $$I = I_j$$.

Let $$L$$ be semisimple, then $$L = [LL]$$ and all ideals and homomorphic images (but not subalgebras) are again semisimple. Moreover, every ideal of $$L$$ is a sum of simple ideals $$I_j$$ of $$L$$.

Take the canonical decomposition $$L = \bigoplus I_i$$ and check \begin{align*} [L, L] = [\bigoplus I_i, \bigoplus I_j] = \bigoplus [I_i, I_i] = \bigoplus I_i ,\end{align*} where in the last step we’ve used that each $$I_i$$ is simple, hence non-abelian, so $$[I_i, I_i] = I_i$$. Let $$J {~\trianglelefteq~}L$$ and write $$L = J \bigoplus J^\perp$$, both of which are semisimple as Lie algebras. In particular, if $$\phi: L\to L'$$, set $$J \mathrel{\vcenter{:}}=\ker \phi {~\trianglelefteq~}L$$. Then $$\operatorname{im}\phi = L/J \cong J^\perp$$ as Lie algebras, using the orthogonal decomposition, so $$\operatorname{im}\phi$$ is semisimple. Finally if $$J {~\trianglelefteq~}L$$ then $$L = J \oplus J^\perp$$ with $$J$$ semisimple, so by the previous theorem $$J$$ decomposes as $$J = \oplus K_i$$ with $$K_i$$ simple ideals in $$J$$ – but these are simple ideals in $$L$$ as well since the sum is direct. Thus the $$K_i$$ are a subset of the $$I_j$$, since these are the only simple ideals of $$L$$.

# 11 Monday, September 12

Question from last time: does $$L$$ always factor as $$\mathop{\mathrm{Rad}}(L) \oplus L_{{\mathrm{ss}}}$$ with $$L_{{\mathrm{ss}}}$$ semisimple? Not always, instead there is a semidirect product decomposition $$L = \mathop{\mathrm{Rad}}(L) \rtimes{\mathfrak{s}}$$ where $${\mathfrak{s}}$$ is the Levi subalgebra. Consider $$L = {\mathfrak{gl}}_n$$, then $$\mathop{\mathrm{Rad}}(L) \neq {\mathfrak{h}}$$ since $$[h, e_{ij}] = (h_i - h_j)e_{ij}$$, so in fact this forces $$\mathop{\mathrm{Rad}}(L) = {\mathbf{C}}I_n = Z(L)$$ with complementary subalgebra $${\mathfrak{s}}= {\mathfrak{sl}}_n$$. Note that $${\mathfrak{gl}}_n = {\mathbf{C}}I_n \oplus {\mathfrak{sl}}_n$$ where $${\mathfrak{sl}}_n = [L L]$$ is a direct sum, and $${\mathfrak{gl}}_n$$ is reductive.

## 11.1 5.3: Inner Derivations

If $$L$$ is semisimple then $${ \operatorname{ad}}(L) = \mathop{\mathrm{Der}}L$$.

We know $${ \operatorname{ad}}(L) \leq \mathop{\mathrm{Der}}L$$ is a subalgebra, and $$L$$ semisimple implies $$0 = Z(L) = \ker { \operatorname{ad}}$$, so $${ \operatorname{ad}}: L { \, \xrightarrow{\sim}\, } { \operatorname{ad}}(L)$$ is an isomorphism and $${ \operatorname{ad}}(L)$$ is semisimple. The Killing form of a semisimple Lie algebra is always nondegenerate, so $$\kappa_{ { \operatorname{ad}}(L)}$$ is nondegenerate, while $$\kappa_{\mathop{\mathrm{Der}}L}$$ may be degenerate. Recall that $${ \operatorname{ad}}(L) {~\trianglelefteq~}\mathop{\mathrm{Der}}L$$, so $$[\mathop{\mathrm{Der}}L, { \operatorname{ad}}(L)] \subseteq { \operatorname{ad}}(L)$$. Define $${ \operatorname{ad}}(L)^\perp {~\trianglelefteq~}\mathop{\mathrm{Der}}L$$ to be the orthogonal complement in $$\mathop{\mathrm{Der}}(L)$$ with respect to $$\kappa_{\mathop{\mathrm{Der}}L}$$, which is an ideal by the associative property.

Claim: $${ \operatorname{ad}}(L) \cap { \operatorname{ad}}(L)^\perp = 0$$. Indeed, the intersection is an ideal of $${ \operatorname{ad}}(L)$$ on which $$\kappa_{\mathop{\mathrm{Der}}L}$$ vanishes; by the restriction lemma this restriction is $$\kappa_{ { \operatorname{ad}}(L)}$$, so the intersection lies in $$\mathop{\mathrm{Rad}}(\kappa_{ { \operatorname{ad}}(L)}) = 0$$ by nondegeneracy.

Note that $${ \operatorname{ad}}(L), { \operatorname{ad}}(L)^\perp$$ are both ideals, so $$[ { \operatorname{ad}}(L), { \operatorname{ad}}(L)^\perp] \subseteq { \operatorname{ad}}(L) \cap { \operatorname{ad}}(L)^\perp = 0$$. Let $$\delta \in { \operatorname{ad}}(L)^\perp$$ and $$x\in L$$, then $$0 = [\delta, { \operatorname{ad}}_x] = { \operatorname{ad}}_{ \delta(x) }$$ where the last equality follows from an earlier exercise. Since $${ \operatorname{ad}}$$ is injective, $$\delta(x) = 0$$ and so $$\delta = 0$$, thus $${ \operatorname{ad}}(L)^\perp = 0$$. So we have $$\mathop{\mathrm{Rad}}\kappa_{\mathop{\mathrm{Der}}L} \subseteq { \operatorname{ad}}(L)^\perp = 0$$ since any derivation orthogonal to all derivations is in particular orthogonal to inner derivations, and thus $$\kappa_{\mathop{\mathrm{Der}}L}$$ is nondegenerate. Finally, we can write $$\mathop{\mathrm{Der}}L = { \operatorname{ad}}(L) \oplus { \operatorname{ad}}(L)^\perp = { \operatorname{ad}}(L) \oplus 0 = { \operatorname{ad}}(L)$$.

## 11.2 5.4: Abstract Jordan Decompositions

Earlier: if $$A\in \mathsf{Alg}_{/ {{ \mathbf{F} }}}^{\mathrm{fd}}$$, not necessarily associative, $$\mathop{\mathrm{Der}}A$$ contains the semisimple and nilpotent parts of all of its elements. Applying this to $$A = L$$ a semisimple Lie algebra yields $$L \cong { \operatorname{ad}}(L) = \mathop{\mathrm{Der}}L$$ and $${ \operatorname{ad}}_x = s + n$$ with $$s, n \in \mathop{\mathrm{Der}}L = { \operatorname{ad}}(L)$$, so write $$s = { \operatorname{ad}}_{x_s}$$ and $$n = { \operatorname{ad}}_{x_n}$$; then $${ \operatorname{ad}}_x = { \operatorname{ad}}_{x_s} + { \operatorname{ad}}_{x_n} = { \operatorname{ad}}_{x_s + x_n}$$, so $$x = x_s + x_n$$ by injectivity of $${ \operatorname{ad}}$$, yielding a definition of the semisimple and nilpotent parts of $$x$$. If $$L \leq {\mathfrak{gl}}(V)$$, it turns out that these coincide with the usual decomposition – this is proved in section 6.4.

## 11.3 6.1: Modules (Chapter 6: Complete Reducibility of Representations)

Let $$L \in \mathsf{Lie} \mathsf{Alg}_{/ {{\mathbf{C}}}}^{\mathrm{fd}}$$, then a representation of $$L$$ on $$V$$ is a homomorphism of Lie algebras $$\phi: L \to {\mathfrak{gl}}(V)$$. A vector space $$V\in{ \mathsf{Vect}}_{/ {{\mathbf{C}}}}$$ with an action of $$L$$, i.e. an operation \begin{align*} L\times V &\to V \\ (x, v) &\mapsto x.v ,\end{align*} is an $$L{\hbox{-}}$$module iff for all $$a,b\in {\mathbf{C}}, x,y\in L, v,w\in V$$,

• (M1) $$(ax+by).v = a(x.v) + b(y.v)$$.
• (M2) $$x.(av+bw) = a(x.v) + b(x.w)$$.
• (M3) $$[xy].v = x.(y.v) - y.(x.v)$$.

An $$L{\hbox{-}}$$module $$V$$ is equivalent to a representation of $$L$$ on $$V$$. If $$\phi: L \to {\mathfrak{gl}}(V)$$ is a representation, define $$x.v \mathrel{\vcenter{:}}=\phi(x)v \mathrel{\vcenter{:}}=\phi(x)(v)$$. Conversely, for $$V\in {}_{L}{\mathsf{Mod}}$$ define $$\phi: L\to {\mathfrak{gl}}(V)$$ by $$\phi(x)(v) \mathrel{\vcenter{:}}= x.v$$.
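
The correspondence hinges on axiom (M3) holding automatically for $$x.v \mathrel{\vcenter{:}}=\phi(x)v$$; a quick numerical illustration (not from the notes, with arbitrary matrices standing in for $$\phi(x), \phi(y)$$):

```python
import numpy as np

rng = np.random.default_rng(4)
X, Y = rng.standard_normal((2, 3, 3))   # phi(x), phi(y) for some rep phi
v = rng.standard_normal(3)

# (M3): [x, y].v = x.(y.v) - y.(x.v), with the bracket in gl(V)
lhs = (X @ Y - Y @ X) @ v
rhs = X @ (Y @ v) - Y @ (X @ v)
assert np.allclose(lhs, rhs)
```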

$$L\in {}_{L}{\mathsf{Mod}}$$ via $${ \operatorname{ad}}$$; this yields the adjoint representation.

A morphism of $$L{\hbox{-}}$$modules is a linear map $$\psi: V\to W$$ such that $$\psi(x.v) = x.\psi(v)$$ for all $$x\in L, v\in V$$. It is an isomorphism as an $$L{\hbox{-}}$$module iff it is an isomorphism of the underlying vector spaces. In this case we say $$V, W$$ are equivalent representations.

Let $$L = {\mathbf{C}}x$$ for $$x\neq 0$$, then

• What is a representation of $$L$$ on $$V$$? This amounts to picking an element of $${ \operatorname{End} }(V)$$.

• When are 2 $$L{\hbox{-}}$$modules on $$V$$ equivalent? This happens iff the two linear transformations are conjugate in $${ \operatorname{End} }(V)$$.

Thus representations of $$L$$ on $$V$$ are classified by Jordan canonical forms when $$V$$ is finite dimensional.

For $$V\in {}_{L}{\mathsf{Mod}}$$, a subspace $$W \subseteq V$$ is a submodule iff it is an invariant subspace, i.e. $$x.w\in W$$ for all $$x\in L, w\in W$$. $$V$$ is irreducible or simple if $$V$$ has exactly two invariant subspaces, namely $$V$$ and $$0$$.

Note that this rules out $$0$$ as being a simple module.

For $$W\leq V \in {}_{L}{\mathsf{Mod}}$$ a submodule, the quotient module $$V/W$$ has underlying vector space $$V/W$$ with action $$x.(v+W) \mathrel{\vcenter{:}}=(x.v) + W$$. This is well-defined precisely when $$W$$ is a submodule.

$$I{~\trianglelefteq~}L \iff { \operatorname{ad}}(I) \leq { \operatorname{ad}}(L)$$, i.e. ideals correspond to submodules under the adjoint representation. However, a simple submodule (a minimal ideal) need not be simple as a Lie algebra.

# 12 Wednesday, September 14

## 12.1 6.1: Structure theory

Note that all of the algebras $${\mathfrak{g}}$$ we’ve considered naturally act on column vectors in some $${ \mathbf{F} }^n$$ – this is the natural representation of $${\mathfrak{g}}$$.

Letting $${\mathfrak{b}}_n$$ be the upper triangular matrices in $${\mathfrak{gl}}_n$$, this acts on $${ \mathbf{F} }^n$$. Taking a standard basis $${ \mathbf{F} }^n = V \mathrel{\vcenter{:}}=\left\langle{e_1,\cdots, e_n}\right\rangle_{ \mathbf{F} }$$, one gets submodules $$V_i = \left\langle{e_1,\cdots, e_i}\right\rangle_{ \mathbf{F} }$$ which correspond to upper triangular blocks got by truncating the first $$i$$ columns of the matrix. This yields a submodule precisely because the lower-left block is zero.
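
The invariance of the $$V_i$$ is just the statement that the lower-left block of an upper triangular matrix is zero; a small `numpy` check (illustrative, with a random element of $${\mathfrak{b}}_n$$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, i = 4, 2
B = np.triu(rng.standard_normal((n, n)))      # an element of b_n
v = np.zeros(n)
v[:i] = rng.standard_normal(i)                # v in V_i = <e_1, ..., e_i>
w = B @ v
assert np.allclose(w[i:], 0)                  # B.v remains in V_i
```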

Let $$\phi: L\to {\mathfrak{gl}}(V)$$ be a representation, noting that $${ \operatorname{End} }(V)$$ is an associative algebra. We can consider the associative unital algebra $$A$$ generated by the image $$\phi(L)$$. Note that the structure of $$V$$ as an $$L{\hbox{-}}$$module is the same as its $$A{\hbox{-}}$$module structure, so we can apply theorems/results from the representation theory of rings and algebras to study Lie algebra representations, e.g. the Jordan-Hölder theorem and Schur’s lemma.

Given $$V, W\in {}_{L}{\mathsf{Mod}}$$, their vector space direct sum admits an $$L{\hbox{-}}$$module structure using $$x.(v, w) \mathrel{\vcenter{:}}=(x.v, x.w)$$, which we’ll write as $$x.(v+w) \mathrel{\vcenter{:}}= xv + xw$$.

$$V\in {}_{L}{\mathsf{Mod}}$$ is completely reducible iff $$V$$ is a direct sum of irreducible $$L{\hbox{-}}$$modules. Equivalently, for each $$W\leq V$$ there is a complementary submodule $$W'$$ such that $$V = W \oplus W'$$.

“Not irreducible” is strictly weaker than “completely reducible,” since a submodule may not admit an invariant complement – for example, the flag in the first example above.

The natural representation of $${\mathfrak{h}}_n$$ (the diagonal matrices in $${\mathfrak{gl}}_n$$) is completely reducible, decomposing as $$V_1 \oplus V_2 \oplus \cdots \oplus V_n$$ where $$V_i = { \mathbf{F} }e_i$$.

A module $$V$$ is indecomposable iff $$V\neq W \oplus W'$$ for proper submodules $$W, W' \leq V$$. This is weaker than irreducibility.

Consider the natural representation $$V$$ for $$L \mathrel{\vcenter{:}}={\mathfrak{b}}_n$$. Every nonzero submodule of $$V$$ must contain $$e_1$$, so $$V$$ is indecomposable if $$n\geq 1$$.

Recall that the socle of $$V$$ is the (direct) sum of all of its irreducible submodules. If $$\mathop{\mathrm{Soc}}(V)$$ is simple (so one irreducible) then $$V$$ is indecomposable, since every summand must contain this simple submodule “at the bottom.” For $$L = {\mathfrak{b}}_n$$, note that $$\mathop{\mathrm{Soc}}(V) = { \mathbf{F} }e_1$$.

For the remainder of chapters 6 and 7, we assume all modules are finite-dimensional over $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$.

Let $$L\in\mathsf{Lie} \mathsf{Alg}_{/ {{ \mathbf{F} }}}^{\mathrm{fd}}$$ and $$V \in {}_{L}{\mathsf{Mod}}^{\mathrm{fd}}$$. Then there exists a composition series, a sequence of submodules $$0 = V_0 \subseteq V_1 \subseteq \cdots \subseteq V_n = V$$ such that each composition factor (sometimes called a section) $$V_i/V_{i-1}$$ is irreducible/simple. Moreover, any two composition series admit the same composition factors with the same multiplicities, up to rearrangement and isomorphism.

If $$V = W \oplus W'$$ with $$W, W'$$ simple, there are two composition series:

• $$0 \subseteq W \subseteq V$$ with factors $$W, V/W \cong W'$$,
• $$0 \subseteq W' \subseteq V$$ with factors $$W', V/W' \cong W$$.

These aren’t equal, since they’re representations on different coset spaces, but are isomorphic.

If $$\phi: L\to {\mathfrak{gl}}(V)$$ is an irreducible representation, then $${ \operatorname{End} }_L(V)\cong { \mathbf{F} }$$.

If $$V$$ is irreducible then every $$f\in {}_{L}{\mathsf{Mod}}(V, V)$$ is either zero or an isomorphism since $$f(V) \leq V$$ is a submodule. Thus $${ \operatorname{End} }_L(V)$$ is a division algebra over $${ \mathbf{F} }$$, but the only such algebra is $${ \mathbf{F} }$$ since $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$.

Alternatively: let $$f\in { \operatorname{End} }_L(V)$$, then $$f$$ has an eigenvalue $$\lambda \in { \mathbf{F} }$$, again since $${ \mathbf{F} }= \overline{{ \mathbf{F} }}$$. Then $$f - \lambda I \in { \operatorname{End} }_L(V)$$ has a nontrivial kernel, the $$\lambda{\hbox{-}}$$eigenspace, which is a submodule. So $$f - \lambda I = 0 \implies f = \lambda I$$.

Schur’s lemma is not always true for Lie superalgebras.

The trivial $$L{\hbox{-}}$$module is $${ \mathbf{F} }\in {}_{L}{\mathsf{Mod}}$$ equipped with the zero map $$\varphi: L\to {\mathfrak{gl}}({ \mathbf{F} })$$ where $$x.1 \mathrel{\vcenter{:}}= 0$$ for all $$x\in L$$. Note that this is irreducible, and any two such 1-dimensional trivial modules are isomorphic by sending a basis $$\left\{{e_1}\right\}$$ to $$1\in { \mathbf{F} }$$.

More generally, a module $$V \in {}_{L}{\mathsf{Mod}}$$ is trivial iff $$x.v = 0$$ for all $$x\in L, v\in V$$; such a $$V$$ is completely reducible, being a direct sum of copies of the above trivial module.

Let $$V, W\in {}_{L}{\mathsf{Mod}}$$, then the following are all $$L{\hbox{-}}$$modules:

• $$V\otimes_{ \mathbf{F} }W$$: the action is $$x.(v \otimes w) = (x.v)\otimes w + v\otimes(x.w)$$.
• $$\mathop{\mathrm{Hom}}_{ \mathbf{F} }(V, W)$$: the action is $$(x.f)(v) = x.(f(v)) - f(x.v) \in W$$.
• $$V {}^{ \vee }\mathrel{\vcenter{:}}=\mathop{\mathrm{Hom}}_{ \mathbf{F} }(V, { \mathbf{F} })$$: the action is a special case of the above since $$x.w = 0$$, so \begin{align*} (x.f)(v) = -f(x.v) .\end{align*}
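
That the $$\mathop{\mathrm{Hom}}$$ action really defines an $$L{\hbox{-}}$$module structure (i.e. satisfies (M3)) can be spot-checked numerically; in the sketch below (illustrative, not from the notes) $$f$$ is a matrix, and $$x.f = \phi_W(x) f - f \phi_V(x)$$ encodes $$(x.f)(v) = x.(f(v)) - f(x.v)$$ for arbitrary stand-in matrices.

```python
import numpy as np

rng = np.random.default_rng(5)
Xv, Yv = rng.standard_normal((2, 3, 3))   # action of x, y on V
Xw, Yw = rng.standard_normal((2, 2, 2))   # action of x, y on W
F = rng.standard_normal((2, 3))           # f in Hom(V, W) as a matrix

# (x.f)(v) = x.(f(v)) - f(x.v), i.e. x.F = Xw F - F Xv
act = lambda Aw, Av, G: Aw @ G - G @ Av

lhs = act(Xw @ Yw - Yw @ Xw, Xv @ Yv - Yv @ Xv, F)              # [x,y].f
rhs = act(Xw, Xv, act(Yw, Yv, F)) - act(Yw, Yv, act(Xw, Xv, F)) # x.(y.f) - y.(x.f)
assert np.allclose(lhs, rhs)
```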

These structures come from the Hopf algebra structure on the universal associative algebra $$U({\mathfrak{g}})$$, called the universal enveloping algebra. Note that we also have \begin{align*} \mathop{\mathrm{Hom}}_{\mathbf{C}}(V, W)\underset{ {}_{L}{\mathsf{Mod}}}{\cong} V {}^{ \vee }\otimes_{ \mathbf{F} }W .\end{align*}

# 13 Friday, September 16

## 13.1 6.2: Casimir element of a representation

Last time: $$L$$ semisimple over $${\mathbf{C}}$$ implies $$\kappa(x,y)\mathrel{\vcenter{:}}=\operatorname{tr}( { \operatorname{ad}}_x { \operatorname{ad}}_y)$$ is nondegenerate. Using Cartan’s criterion, we can show that for any faithful representation $$\phi: L\to {\mathfrak{gl}}(V)$$ we can define a symmetric bilinear form $$\beta_\phi$$ on $$L$$ defined by $$\beta_\phi(x, y) = \operatorname{tr}(\phi(x) \phi(y))$$. Note that $$\beta_{ { \operatorname{ad}}} = \kappa$$. Since $$\mathop{\mathrm{Rad}}(\beta_\phi) = 0$$, it is nondegenerate. This defines an isomorphism $$L { \, \xrightarrow{\sim}\, }L {}^{ \vee }$$ by $$x\mapsto \beta(x, {-})$$, so given a basis $${\mathcal{B}}\mathrel{\vcenter{:}}=\left\{{x_i}\right\}_{i\leq n}$$ for $$L$$ there is a unique dual basis $${\mathcal{B}}' = \left\{{y_i}\right\}_{i\leq n}$$ for $$L$$ such that $$\beta(x_i, y_j) = \delta_{ij}$$. Note that the $$y_i \in L$$ are dual to the basis $$\beta(x_i, {-}) \in L {}^{ \vee }$$.

For $$L\mathrel{\vcenter{:}}={\mathfrak{sl}}_2({\mathbf{C}})$$, the matrix of $$\kappa$$ is given by $${ \begin{bmatrix} {0} & {0} & {4} \\ {0} & {8} & {0} \\ {4} & {0} & {0} \end{bmatrix} }$$ with respect to the ordered basis $${\mathcal{B}}=\left\{{x,h,y}\right\}$$. Thus $${\mathcal{B}}' = \left\{{{1\over 4}y, {1\over 8}h, {1\over 4}x}\right\}$$.
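
The Gram matrix and the dual basis can be recomputed by machine; the `numpy` sketch below (illustrative) writes out the $${ \operatorname{ad}}$$ matrices from the $${\mathfrak{sl}}_2$$ relations $$[hx]=2x$$, $$[hy]=-2y$$, $$[xy]=h$$ and verifies both claims.

```python
import numpy as np

# ad matrices for sl2 in the ordered basis (x, h, y),
# using [h,x] = 2x, [h,y] = -2y, [x,y] = h
ad_x = np.array([[0., -2., 0.],
                 [0.,  0., 1.],
                 [0.,  0., 0.]])
ad_h = np.diag([2., 0., -2.])
ad_y = np.array([[ 0., 0., 0.],
                 [-1., 0., 0.],
                 [ 0., 2., 0.]])

ads = [ad_x, ad_h, ad_y]
K = np.array([[np.trace(A @ B) for B in ads] for A in ads])
assert np.allclose(K, [[0., 0., 4.],
                       [0., 8., 0.],
                       [4., 0., 0.]])

# dual basis check: K applied to the coordinates of y/4 gives e_1,
# i.e. kappa(x, y/4) = 1 and kappa(h, y/4) = kappa(y, y/4) = 0
assert np.allclose(K @ np.array([0., 0., 0.25]), [1., 0., 0.])
```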

Now let $$\phi: L\to {\mathfrak{gl}}(V)$$ be a faithful irreducible representation. Fix a basis $${\mathcal{B}}$$ of $$L$$ and define the Casimir element \begin{align*} c_\phi = c_\phi(\beta) \mathrel{\vcenter{:}}=\sum_{i\leq n} \phi(x_i) \circ \phi(y_i) \in { \operatorname{End} }_{\mathbf{C}}(V) .\end{align*}

One can show that $$c_\phi$$ commutes with $$\phi(L)$$. Supposing $$\phi$$ is irreducible, $${ \operatorname{End} }_L(V) = {\mathbf{C}}$$ by Schur’s lemma, so $$c_\phi$$ acts on $$V$$ as a scalar. The scalar follows from \begin{align*} \operatorname{tr}(c_\phi) = \sum_{i\leq n} \operatorname{tr}( \phi(x_i) \phi(y_i) ) = \sum_{i\leq n} \beta(x_i, y_i) = n = \dim L \implies c_\phi = {\dim L \over \dim V} \operatorname{id}_V ,\end{align*} since a scalar operator on $$V$$ has trace equal to the scalar times $$\dim V$$. In particular, $$c_\phi$$ is independent of the choice of $${\mathcal{B}}$$. This will be used to prove Weyl’s theorem, one of the main theorems of semisimple Lie theory over $${\mathbf{C}}$$. If $$L$$ is semisimple and $$\phi$$ is not faithful, replace $$L$$ by $$L/\ker \phi$$. Since $$\ker \phi {~\trianglelefteq~}L$$ and $$L$$ is semisimple, $$\ker \phi$$ is a direct sum of certain simple ideals of $$L$$ and the quotient is isomorphic to the sum of the remaining ideals. This yields a representation $$\overline{\phi}: L/\ker \phi \to {\mathfrak{gl}}(V)$$ which is faithful and can be used to define a Casimir operator.

Let $$L = {\mathfrak{sl}}_2({\mathbf{C}})$$ and let $$V = {\mathbf{C}}^2$$ be the natural representation, so $$\phi: L\to {\mathfrak{gl}}(V)$$ is the identity. Fix $${\mathcal{B}}= \left\{{x,h,y}\right\}$$, then $$\beta(u, v) = \operatorname{tr}( u v )$$ since $$\phi(u) = u$$ and $$\phi(v) = v$$. We get the following products:

| $$uv$$ | $$x = { \begin{bmatrix} {0} & {1} \\ {0} & {0} \end{bmatrix} }$$ | $$h = { \begin{bmatrix} {1} & {0} \\ {0} & {-1} \end{bmatrix} }$$ | $$y = { \begin{bmatrix} {0} & {0} \\ {1} & {0} \end{bmatrix} }$$ |
|---|---|---|---|
| $$x$$ | $$0$$ | $${ \begin{bmatrix} {0} & {-1} \\ {0} & {0} \end{bmatrix} }$$ | $${ \begin{bmatrix} {1} & {0} \\ {0} & {0} \end{bmatrix} }$$ |
| $$h$$ |  | $$I$$ | $${ \begin{bmatrix} {0} & {0} \\ {-1} & {0} \end{bmatrix} }$$ |
| $$y$$ |  |  | $$0$$ |

Only the upper triangle is listed, since $$\beta$$ is symmetric: $$\operatorname{tr}(uv) = \operatorname{tr}(vu)$$.

Thus $$\beta = { \begin{bmatrix} {0} & {0} & {1} \\ {0} & {2} & {0} \\ {1} & {0} & {0} \end{bmatrix} }$$, and $${\mathcal{B}}' = \left\{{y, {1\over 2}h, x}\right\}$$, so \begin{align*} c_\phi = xy + {1\over 2}h^2 + yx = { \begin{bmatrix} {1} & {0} \\ {0} & {0} \end{bmatrix} } + {1\over 2}I + { \begin{bmatrix} {0} & {0} \\ {0} & {1} \end{bmatrix} } = {3\over 2}I = {\dim {\mathfrak{sl}}_2({\mathbf{C}}) \over \dim {\mathbf{C}}^2} I .\end{align*}
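The trace-form and Casimir computation above can be verified directly; a minimal numpy sketch, assuming the same basis $$\left\{{x,h,y}\right\}$$ and dual basis $$\left\{{y, {1\over 2}h, x}\right\}$$:

```python
import numpy as np

x = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
y = np.array([[0., 0.], [1., 0.]])

# Trace form beta(u, v) = tr(uv) on the natural representation;
# the dual basis to {x, h, y} with respect to beta is {y, h/2, x}.
basis = [x, h, y]
dual = [y, h / 2, x]

# Duality check: beta(b_i, b'_j) = delta_ij
B = np.array([[np.trace(b @ d) for d in dual] for b in basis])
assert np.allclose(B, np.eye(3))

c = sum(b @ d for b, d in zip(basis, dual))
print(c)  # (3/2) * identity, i.e. (dim sl2)/(dim C^2) * id
```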

## 13.2 6.3: Complete reducibility

Let $$\phi: L\to {\mathfrak{gl}}(V)$$ be a representation of a semisimple Lie algebra, then $$\phi(L) \subseteq {\mathfrak{sl}}(V)$$. In particular, $$L$$ acts trivially on any 1-dimensional $$L{\hbox{-}}$$module since a $$1\times 1$$ traceless matrix is zero. The proof follows from $$L = [LL]$$.
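The key point — that a commutator is always traceless, which is why $$\phi(L) = \phi([LL]) \subseteq {\mathfrak{sl}}(V)$$ — can be spot-checked numerically; a small sketch with random matrices (the sizes and tolerance are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal((2, 4, 4))

# tr(ab) = tr(ba), so every commutator [a, b] = ab - ba is traceless;
# this is why the image of a semisimple L (where L = [LL]) lies in sl(V).
comm = a @ b - b @ a
print(abs(np.trace(comm)) < 1e-9)  # True
```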

Arbitrary reductive Lie algebras can have nontrivial 1-dimensional representations.

Let $$\phi: L\to {\mathfrak{gl}}(V)$$ be a finite-dimensional representation of a finite-dimensional semisimple Lie algebra over $${\mathbf{C}}$$. Then $$\phi$$ is completely reducible.

This is not true in characteristic $$p$$, nor for infinite-dimensional representations in characteristic 0.

### 13.2.1 Proof of Weyl’s theorem

Replace $$L$$ by $$L/\ker \phi$$ if necessary to assume $$\phi$$ is faithful, since these yield the same module structures. Define a Casimir operator $$c_\phi$$ as before, and recall that complete reducibility of $$V$$ is equivalent to every $$L{\hbox{-}}$$submodule $$W\leq V$$ admitting a complementary submodule $$W''$$ such that $$V = W \oplus W''$$. We proceed by induction on $$\dim V$$, where the dimension 1 case is clear.

Case I: $$\operatorname{codim}_V W = 1$$, i.e. $$\dim (V/W) = 1$$. Take the SES $$W \hookrightarrow V \twoheadrightarrow V/W$$.

Case 1: Suppose $$W' \leq W$$ is a proper nonzero $$L{\hbox{-}}$$submodule, giving a chain of submodules $$0 < W' < W < V$$.

Using the 2nd isomorphism theorem, there is a SES $$W/W' \hookrightarrow V/W' \twoheadrightarrow V/W$$. Since $$\dim W' > 0$$, $$\dim V/W' < \dim V$$, so by the IH there is a 1-dimensional complement to $$W/W'$$ in $$V/W'$$. This lifts to $$\tilde W \leq V$$ with $$W' \leq \tilde W$$ and $$\dim \tilde W/W' = 1$$, and moreover $$V/W' = W/W' \oplus \tilde W/W'$$. Take the SES $$W' \hookrightarrow\tilde W \twoheadrightarrow\tilde W/W'$$ with $$\dim \tilde W < \dim V$$. Apply the IH again to get a submodule $$X \leq \tilde W \leq V$$ with $$\tilde W = W' \oplus X$$. We’ll continue by showing $$V = W \oplus X$$.

# 14 Monday, September 19

## 14.1 Proof of Weyl’s theorem (continued)

Recall: we’re proving Weyl’s theorem, i.e. every finite-dimensional representation of a semisimple Lie algebra over $${\mathbf{C}}$$ is completely reducible. Strategy: show every $$W \leq_L V$$ has a complement $$W'' \leq_L V$$ such that $$V = W \oplus W''$$; induct on $$\dim V$$.

Case I: $$\dim V/W = 1$$.

Case 1: $$W$$ is reducible. We got $$0 < W' < W < V$$ (proper submodules), represented schematically by a triangle. We showed $$V/W' \cong W/W' \oplus \tilde W/W'$$, since

• $$\tilde W \cap W \subseteq W'$$,
• $$V= W + \tilde W + W' = W + \tilde W$$ since $$W' \subseteq W$$.
• $$\tilde W = W' \oplus X$$ for some submodule $$X \leq_L \tilde W \leq_L V$$.

Thus replacing $$\tilde W$$ in the second point yields $$V = W + \tilde W = W + W' +X = W + X$$; we want to prove this sum is direct. Since $$X$$ is contained in $$\tilde W$$, we can write \begin{align*} X \cap W &= (X \cap\tilde W) \cap W \\ &= X \cap(\tilde W \cap W) \qquad\text{by 1}\\ &\subseteq X \cap W' = 0 \qquad \text{by 2} ,\end{align*} so $$V = W \oplus X$$.

Case 2: Let $$c_\phi$$ be the Casimir element of $$\phi$$, and note that $$c_\phi(W) \subseteq W$$ since $$c_\phi$$ is built out of endomorphisms in $$\phi(L)$$ sending $$W$$ to $$W$$ (since $$W$$ is a submodule). In fact, $$\phi(L)(V) \subseteq W$$ since $$V/W$$ is a 1-dimensional representation of the semisimple Lie algebra $$L$$, hence trivial. Thus $$c_\phi(V) \subseteq W$$, and thus $$\ker c_\phi \neq 0$$ since $$W < V$$ is proper. Note also that $$c_\phi$$ commutes with everything in $$\phi(L)$$ on $$V$$, and so defines a morphism $$c_\phi \in {}_{L}{\mathsf{Mod}}(V, V)$$ with $$\ker c_\phi \leq_L V$$. On the other hand, $$c_\phi$$ induces an element of $${ \operatorname{End} }_{L}(W)$$, and since $$W$$ is irreducible, $${ \left.{{c_\phi}} \right|_{{W}} } = \lambda \operatorname{id}_W$$ for some scalar $$\lambda$$. This can’t be zero: since $$c_\phi(V) \subseteq W$$, computing the trace in a basis extending one for $$W$$ gives $$\operatorname{tr}(c_\phi) = \lambda \dim W$$, so $$\lambda = {\dim L \over \dim W} > 0$$, and hence $$\ker c_\phi \cap W = 0$$. Since $$\operatorname{codim}_V W = 1$$, i.e. $$\dim W = \dim V - 1$$, we must have $$\dim \ker c_\phi = 1$$ and we have a direct sum decomposition $$V = W \oplus \ker c_\phi$$.

Use of the Casimir element in basic Lie theory: producing a complement to an irreducible submodule.

Case II: Suppose $$0 < W < V$$ with $$W$$ any proper nonzero $$L{\hbox{-}}$$submodule; there is a SES $$W \hookrightarrow V \twoheadrightarrow V/W$$. Consider $$H \mathrel{\vcenter{:}}=\hom_{\mathbf{C}}(V, W)$$, then $$H \in {\mathsf{L}{\hbox{-}}\mathsf{Mod}}$$ by $$(x.f)(v) \mathrel{\vcenter{:}}= x.(f(v)) - f(x.v)$$ for $$f\in H, x\in L, v\in V$$. Let $${\mathcal{V}}\mathrel{\vcenter{:}}=\left\{{f \in H {~\mathrel{\Big\vert}~}{ \left.{{f}} \right|_{{W}} } = \alpha \operatorname{id}_W \,\text{ for some } \alpha \in {\mathbf{C}}}\right\} \subseteq H$$. For $$f\in {\mathcal{V}}$$ and $$w\in W$$, we have \begin{align*} (x.f)(w) = x.f(w) - f(x.w) = \alpha x.w - \alpha x.w = 0 .\end{align*} So let $${\mathcal{W}}\mathrel{\vcenter{:}}=\left\{{f\in {\mathcal{V}}{~\mathrel{\Big\vert}~}f(W) = 0}\right\} \subseteq {\mathcal{V}}$$; then we’ve shown that $$L.{\mathcal{V}}\subseteq {\mathcal{W}}$$. Now roughly, an element of $${\mathcal{V}}$$ is determined modulo $${\mathcal{W}}$$ by its scalar. Rigorously, $$\dim {\mathcal{V}}/{\mathcal{W}}= 1$$: any $$f\in {\mathcal{V}}$$ with $${ \left.{{f}} \right|_{{W}} } = \alpha \operatorname{id}_W$$ satisfies $$f-\alpha \chi_W \in {\mathcal{W}}$$, where $$\chi_W \in {\mathcal{V}}$$ is any extension of $$\operatorname{id}_W$$ to $$V$$ which is zero on a complement of $$W$$, e.g. obtained by extending a basis of $$W$$ to $$V$$ and letting $$\chi_W$$ kill the new basis elements.

Now $${\mathcal{W}}\hookrightarrow{\mathcal{V}}\twoheadrightarrow{\mathcal{V}}/{\mathcal{W}}\in {\mathsf{L}{\hbox{-}}\mathsf{Mod}}$$ with $$\operatorname{codim}_{\mathcal{V}}{\mathcal{W}}= 1$$. By Case I, $${\mathcal{V}}= {\mathcal{W}}\oplus {\mathcal{W}}''$$ for some complementary $$L{\hbox{-}}$$submodule $${\mathcal{W}}''$$. Let $$f: V\to W$$ span $${\mathcal{W}}''$$, then $${ \left.{{f}} \right|_{{W}} }$$ is a nonzero scalar – a scalar since $$f\in{\mathcal{V}}$$, and nonzero since $$f$$ lies in the complement of $${\mathcal{W}}$$. By rescaling, we can assume the scalar is 1, so $${ \left.{{f}} \right|_{{W}} } = \operatorname{id}_W$$, hence $$\operatorname{im}f = W$$ and by rank-nullity $$\dim \ker f = \dim V - \dim W$$. Thus $$\ker f$$ has the right dimension to be the desired complement. It is an $$L{\hbox{-}}$$submodule: $$L.f \subseteq {\mathcal{W}}''$$ since $${\mathcal{W}}''$$ is an $$L{\hbox{-}}$$submodule, and $$L.f \subseteq {\mathcal{W}}$$ since $$L.{\mathcal{V}}\subseteq {\mathcal{W}}$$, so $$L.f \subseteq {\mathcal{W}}'' \cap{\mathcal{W}}= 0$$. But $$x.f = 0$$ means $$x.(f(v)) = f(x.v)$$ for all $$v$$, making $$f$$ a morphism in $${\mathsf{L}{\hbox{-}}\mathsf{Mod}}$$, and hence $$W'' \mathrel{\vcenter{:}}=\ker f \leq_L V$$. Since $${ \left.{{f}} \right|_{{W}} } = \operatorname{id}_W$$, we get $$W \cap W'' = 0$$, and since the dimensions add up correctly, $$V = W \oplus W''$$.

## 14.2 6.4: Preservation of Jordan decomposition

Let $$L \leq {\mathfrak{gl}}(V)$$ be a subalgebra with $$L$$ semisimple and finite-dimensional. Given $$x\in L$$, there are two decompositions: the usual JCF $$x = s + n$$, and the abstract decomposition $${ \operatorname{ad}}_x = { \operatorname{ad}}_{x_s} + { \operatorname{ad}}_{x_n}$$. $$L$$ contains the semisimple and nilpotent parts of all of its elements, and in particular the two above decompositions coincide.

The proof is technical, but here’s a sketch:

• Construct a subspace $$L \leq L' \leq_L {\mathfrak{gl}}(V)$$ such that $$L'$$ contains the semisimple and nilpotent parts of all of its elements, where $$L\curvearrowright{\mathfrak{gl}}(V)$$ by $${ \operatorname{ad}}: L\to {\mathfrak{gl}}({\mathfrak{gl}}(V))$$.
• Check $$L' \subseteq N_{{\mathfrak{gl}}(V)}(L)$$ (the normalizer), so $$[LL'] \subseteq L$$.
• Show $$L' = L$$:
• If $$L'\neq L$$, use Weyl’s theorem to get a complement with $$L' = L \oplus M$$.
• Check $$[LM] \subseteq [LL'] \subseteq L$$ and $$[LM] \subseteq M$$ since $$M\leq_L M$$, forcing $$[LM] \subseteq L \cap M = 0$$.
• Using Weyl’s theorem, split all of $$V$$ into a sum of irreducibles, bracket against the irreducibles, and use specific properties of this $$L'$$ to show $$M = 0$$.
• Since $$s, n\in L$$ when $$x=s+n$$ and $${ \operatorname{ad}}_x = { \operatorname{ad}}_s + { \operatorname{ad}}_n = { \operatorname{ad}}_{x_s} + { \operatorname{ad}}_{x_n}$$, the result follows from uniqueness of the abstract JCF that $$s=x_s, n=x_n$$ (using that $${ \operatorname{ad}}$$ is injective when $$L$$ is semisimple since $$Z(L) = 0$$).

If $$L \in \mathsf{Lie} \mathsf{Alg}_{/ {{\mathbf{C}}}}^{{\mathrm{ss}}}$$ (not necessarily linear) and $$\phi: L\to {\mathfrak{gl}}(V)$$ is a finite-dimensional representation, then writing $$x=s+n$$ for the abstract Jordan decomposition, $$\phi(x) = \phi(s) + \phi(n)$$ is the usual JCF of $$\phi(x)\in {\mathfrak{gl}}(V)$$.

Consider $${ \operatorname{ad}}_{\phi(L)}\phi(s)$$ and $${ \operatorname{ad}}_{\phi(L)}\phi(n)$$, which are semisimple (acts on a vector space and decomposes into a direct sum of eigenspaces) and nilpotent respectively and commute, yielding the abstract Jordan decomposition of $${ \operatorname{ad}}_{\phi(x)}$$. Now apply the theorem.

# 15 Ch. 7: Representations of $${\mathfrak{sl}}_2({\mathbf{C}})$$ (Wednesday, September 21)

If $$L\in \mathsf{Lie} \mathsf{Alg}^{{\mathrm{ss}}}$$ and $$s\in L$$ is a semisimple element, then $$\phi(s)$$ is semisimple in any finite-dimensional representation $$\phi$$ of $$L$$. In particular, taking the natural representation, this yields a semisimple operator. For the same reason, $${ \operatorname{ad}}_s$$ is semisimple. Similar statements hold for nilpotent elements.

## 15.1 7.1: Weights and Maximal Vectors

Let $$L \mathrel{\vcenter{:}}={\mathfrak{sl}}_2({\mathbf{C}})$$, then recall

• $$x = { \begin{bmatrix} {0} & {1} \\ {0} & {0} \end{bmatrix} }$$
• $$h = { \begin{bmatrix} {1} & {0} \\ {0} & {-1} \end{bmatrix} }$$
• $$y= { \begin{bmatrix} {0} & {0} \\ {1} & {0} \end{bmatrix} }$$
• $$[xy] = h$$
• $$[hx] = 2x$$
• $$[hy] = -2y$$

Goal: classify $${\mathsf{L}{\hbox{-}}\mathsf{Mod}}^{\mathrm{fd}}$$. By Weyl’s theorem, they’re all completely reducible, so it’s enough to describe the simple objects.

Note that $$L \leq {\mathfrak{gl}}_2({\mathbf{C}}) = {\mathfrak{gl}}({\mathbf{C}}^2)$$, and since $$h$$ is semisimple, $$\phi(h)$$ acts semisimply on any finite-dimensional representation $$V$$ with $$\phi: L\to {\mathfrak{gl}}(V)$$. I.e. $$\phi(h)$$ acts diagonalizably on $$V$$. Thus $$V = \bigoplus _{ \lambda} V_{ \lambda}$$, a direct sum of eigenspaces for the $$\phi(h)$$ action, where \begin{align*} V_\lambda \mathrel{\vcenter{:}}=\left\{{v\in V {~\mathrel{\Big\vert}~}h.v = \lambda v}\right\}, \qquad \lambda\in {\mathbf{C}} .\end{align*} If $$V_\lambda \neq 0$$ we say $$\lambda$$ is a weight of $$h$$ in $$V$$ and $$V_\lambda$$ is the corresponding weight space.

If $$v\in V_ \lambda$$, then $$x.v \in V_ {\lambda+2}$$ and $$y.v \in V_{ \lambda- 2}$$.

Prove this using the commutation relations.

Note that if $$V$$ is finite-dimensional then there can not be infinitely many nonzero $$V_\lambda$$, so there exists a $$\lambda\in {\mathbf{C}}$$ such that $$V_{ \lambda} \neq 0$$ but $$V_{ \lambda+ 2} = 0$$. We call $$\lambda$$ a highest weight (h.w.) of $$V$$ (which will turn out to be unique) and any nonzero $$v\in V_\lambda$$ a highest weight vector.

## 15.2 7.2: Classification of Irreducible $${\mathfrak{sl}}_2({\mathbf{C}}){\hbox{-}}$$Modules

Let $$V \in {\mathsf{L}{\hbox{-}}\mathsf{Mod}}^{{\mathrm{fd}}, {\mathrm{irr}}}$$ and let $$v_0\in V_{ \lambda}$$ be a h.w. vector. Set $$v_{-1} = 0$$ and $$v_i \mathrel{\vcenter{:}}={1\over i!}y^i v_0$$ for $$i\geq 0$$; then for $$i\geq 0$$,

1. $$h.v_i = (\lambda- 2i)v_i$$
2. $$y.v_i = (i+1)v_{i+1}$$
3. $$x.v_i = ( \lambda- i + 1) v_{i-1}$$.

In parts:

1. By the lemma and induction on $$i$$.

2. Clear!

3. Follows from $$i x.v_i = x.(y.v_{i-1}) = y.(x.v_{i-1}) + [xy].v_{i-1}$$ and induction on $$i$$.
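The formulas of the lemma can be turned into explicit matrices on the basis $$v_0, \cdots, v_m$$ and the $${\mathfrak{sl}}_2$$ relations checked by machine; a numpy sketch (the function name `sl2_irrep` is an illustrative label, not notation from the notes):

```python
import numpy as np

def sl2_irrep(m):
    # Matrices of x, h, y on the (m+1)-dimensional module with basis
    # v_0, ..., v_m, using the lemma's formulas:
    #   h.v_i = (m - 2i) v_i,  y.v_i = (i+1) v_{i+1},  x.v_i = (m - i + 1) v_{i-1}.
    n = m + 1
    H = np.diag([float(m - 2 * i) for i in range(n)])
    X = np.zeros((n, n)); Y = np.zeros((n, n))
    for i in range(n - 1):
        Y[i + 1, i] = i + 1      # y.v_i = (i+1) v_{i+1}
        X[i, i + 1] = m - i      # x.v_{i+1} = (m - (i+1) + 1) v_i
    return X, H, Y

X, H, Y = sl2_irrep(4)
br = lambda a, b: a @ b - b @ a
# Verify [x,y] = h, [h,x] = 2x, [h,y] = -2y on this module:
print(np.allclose(br(X, Y), H), np.allclose(br(H, X), 2 * X), np.allclose(br(H, Y), -2 * Y))  # True True True
```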

Some useful facts:

• The nonzero $$v_i$$ are linearly independent since they are eigenvectors of $$h$$ with different eigenvalues – this is a linear algebra fact.

• The subspace of $$V$$ spanned by the nonzero $$v_i$$ is an $$L{\hbox{-}}$$submodule, but since $$V$$ is irreducible the $$v_i$$ must form a basis for $$V$$.

• Since $$V$$ is finite-dimensional, there must be some $$m\geq 0$$ such that $$v_m \neq 0$$ but $$v_{m+1} = 0$$, and thus $$v_{m+k} = 0$$ for all $$k\geq 1$$. Thus $$\dim_{\mathbf{C}}V = m+1$$ with basis $$\left\{{v_0, v_1, \cdots, v_m}\right\}$$.

• Since $$v_{m+1} = 0$$, we have $$0 = x.v_{m+1} = ( \lambda- m) v_m$$ where $$v_m\neq 0$$, so $$\lambda = m \in {\mathbf{Z}}_{\geq 0}$$. Thus its highest weight is a non-negative integer, equal to 1 less than the dimension. We’ll reserve $$\lambda$$ to denote a highest weight and $$\mu$$ an arbitrary weight.

• Thus the weights of $$V$$ are $$\left\{{m, m-2, \cdots, \star, \cdots, -m+2, -m}\right\}$$ where $$\star = 0$$ or $$1$$ depending on if $$m$$ is even or odd respectively, each occurring with multiplicity one (using that $$\dim V_{\mu} = 1$$ if $$\mu$$ is a weight of $$V$$).

Let $$V \in {\mathsf{L}{\hbox{-}}\mathsf{Mod}}^{{\mathrm{fd}}, {\mathrm{irr}}}$$ for $$L\mathrel{\vcenter{:}}={\mathfrak{sl}}_2({\mathbf{C}})$$, then

1. Relative to $$h$$, $$V$$ is a direct sum of weight spaces $$V_\mu$$ for $$\mu \in \left\{{m, m-2,\cdots, -m+2, -m}\right\}$$ where $$m+1=\dim V$$ and each weight space is 1-dimensional.

2. $$V$$ has a unique (up to nonzero scalar multiples) highest weight vector whose weight (the highest weight of $$V$$) is $$m\in {\mathbf{Z}}_{\geq 0}$$

3. The action $$L\curvearrowright V$$ is given explicitly as in the lemma if the basis is chosen in a prescribed fashion. In particular, there exists a unique finite-dimensional irreducible $${\mathfrak{sl}}_2{\hbox{-}}$$module of dimension $$m+1$$ up to isomorphism.

Let $$V \in {\mathsf{L}{\hbox{-}}\mathsf{Mod}}^{{\mathrm{fd}}}$$ (not necessarily irreducible), then the eigenvalues of $$h\curvearrowright V$$ are all integers, and each occurs along with its negative with the same multiplicity. Moreover, in a decomposition of $$V$$ into a direct sum of irreducibles, the number of simple summands is $$\dim V_0 + \dim V_1$$.

Existence of irreducible highest weight modules of highest weight $$m \geq 0$$:

• $$m=0$$: take the trivial representation $$V={\mathbf{C}}$$.
• $$m=1$$: take $$V = {\mathbf{C}}^2$$ with the natural representation.
• $$m=2$$: take $$V = L$$ with the adjoint representation.

The formula in the lemma can be used to construct an irreducible representation of $$L$$ having highest weight $$\lambda= m$$ for any $$m\in {\mathbf{Z}}_{\geq 0}$$, which is unique up to isomorphism and denoted $$L(m)$$ (or $$V(m)$$ in Humphreys) which has dimension $$m+1$$. In fact, the formulas can be used to define an infinite-dimensional representation of $$L$$ with highest weight $$\lambda$$ for any $$\lambda\in {\mathbf{C}}$$, which is denoted $$M( \lambda)$$ – we just don’t decree that $$v_{m+1} = 0$$, yielding a basis $$\left\{{v_0, v_1,\cdots}\right\}$$. This yields a decomposition into 1-dimensional weight spaces $$M( \lambda) = \bigoplus _{i=0}^\infty M_{ \lambda-2i}$$ where $$M_{ \lambda-2i} = \left\langle{v_i}\right\rangle_{\mathbf{C}}$$; these $$M(\lambda)$$ are the Verma modules.

# 16 Ch. 8: Root space decompositions (Friday, September 23)

Recall that the relations from last time can produce an infinite-dimensional module with basis $$\left\{{v_0,v_1,\cdots}\right\}$$. Note that if $$\lambda = m\in {\mathbf{Z}}_{\geq 0}$$, then $$x.v_{m+1} = ( \lambda- m ) v_m = 0$$. This says that one can’t raise $$v_{m+1}$$ back to $$v_m$$, so $$\left\{{v_{m+1},v_{m+2}, \cdots}\right\}$$ spans a submodule isomorphic to $$M(-m-2)$$. Quotienting yields $$L(m) \mathrel{\vcenter{:}}= M(m) / M(-m-2)$$, also called $$V(m)$$, spanned by $$\left\{{v_0, \cdots, v_m}\right\}$$. Note that $$M(-m-2)$$ and $$L(m)$$ are irreducible.
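The submodule structure of $$M(m)$$ is visible directly from the coefficient formulas; a small pure-Python sketch (the helper name is hypothetical):

```python
def verma_x_coeff(lam, i):
    # Coefficient in the lemma's raising formula: x.v_i = (lam - i + 1) v_{i-1}.
    return lam - i + 1

m = 3  # an illustrative highest weight
# In M(m) the raising operator kills v_{m+1}: x.v_{m+1} = (m - m) v_m = 0,
# so span{v_{m+1}, v_{m+2}, ...} is a submodule.
print(verma_x_coeff(m, m + 1))  # 0
# Its top vector v_{m+1} has h-eigenvalue m - 2(m+1) = -m - 2, identifying
# the submodule as (isomorphic to) M(-m-2):
print(m - 2 * (m + 1))  # -5, i.e. -m-2 for m = 3
```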

Let $$L\in \mathsf{Lie} \mathsf{Alg}_{/ {{\mathbf{C}}}}^{{\mathrm{fd}}, {\mathrm{ss}}}$$ for this chapter.

## 16.1 8.1: Maximal toral subalgebras and roots

Let $$x \in L$$ have abstract Jordan decomposition $$x = x_s + x_n$$. If $$x=x_n$$ for every $$x\in L$$, then every element of $$L$$ is ad-nilpotent and Engel’s theorem would force $$L$$ to be nilpotent, contradicting semisimplicity. Thus there exists some $$x\in L$$ with $$x_s\neq 0$$.

A toral subalgebra is any nonzero subalgebra spanned by semisimple elements.

The name comes from the algebraic torus $$({\mathbf{C}}^{\times})^n$$, which has Lie algebra $${\mathbf{C}}^n$$, thought of as diagonal matrices.

Let $$H$$ be a maximal toral subalgebra of $$L$$. Lemma: any toral subalgebra of $$L$$ is abelian.

Let $$T \leq L$$ be toral and let $$x\in T$$ be a basis element. Since $$x$$ is semisimple, it STS $${ \operatorname{ad}}_{T, x} = 0$$. Semisimplicity of $$x$$ implies $${ \operatorname{ad}}_{L, x}$$ is diagonalizable, so we want to show $${ \operatorname{ad}}_{T, x}$$ has no nonzero eigenvalues. Suppose that there exists a nonzero $$y\in T$$ such that $${ \operatorname{ad}}_{T, x}(y) = \lambda y$$ for $$\lambda \neq 0$$. Then $${ \operatorname{ad}}_{T, y}(x) = [yx] = -[xy] = - \lambda y\neq 0$$, and since $${ \operatorname{ad}}_{T, y}(y) = [yy] = 0$$, $$y$$ is an eigenvector of $${ \operatorname{ad}}_{T, y}$$ with eigenvalue zero. Since $${ \operatorname{ad}}_{T, y}$$ is also diagonalizable and $$x\in T$$, write $$x = \sum a_i v_i$$ as a linear combination of eigenvectors for it. Then $${ \operatorname{ad}}_{T, y}(x) = \sum \lambda_i a_i v_i$$, and the terms with $$\lambda_i = 0$$ vanish, so $${ \operatorname{ad}}_{T, y}(x)$$ is a sum of eigenvectors of $${ \operatorname{ad}}_{T, y}$$ with nonzero eigenvalues. But $${ \operatorname{ad}}_{T, y}(x) = -\lambda y$$ is a nonzero eigenvector with eigenvalue zero. $$\contradiction$$

If $$L = {\mathfrak{sl}}_n({\mathbf{C}})$$ then define $$H$$ to be the set of diagonal matrices. Then $$H$$ is toral and in fact maximal: if $$H' = H \oplus {\mathbf{C}}z$$ were a larger toral subalgebra with $$z\in L\setminus H$$, then one can find an $$h\in H$$ such that $$[hz] \neq 0$$, making $$H'$$ nonabelian, but toral subalgebras must be abelian.

Recall that a commuting family of diagonalizable operators on a finite-dimensional vector space can be simultaneously diagonalized. Letting $$H \leq L$$ be maximal toral, this applies to $${ \operatorname{ad}}_L(H)$$, and thus there is a basis in which all operators in $${ \operatorname{ad}}_L(H)$$ are diagonal. Set $$L_{ \alpha} \mathrel{\vcenter{:}}=\left\{{x\in L {~\mathrel{\Big\vert}~}[hx] = \alpha(h) x\,\,\forall h\in H}\right\}$$ where $$\alpha: H\to {\mathbf{C}}$$ is linear and thus an element of $$H {}^{ \vee }$$. Note that $$L_0 = C_L(H)$$, and the set $$\Phi\mathrel{\vcenter{:}}=\left\{{\alpha\in H {}^{ \vee }{~\mathrel{\Big\vert}~}\alpha\neq 0, L_\alpha\neq 0}\right\}$$ is called the roots of $$H$$ in $$L$$, and $$L_\alpha$$ is called a root space. Note that $$L_0$$ is not considered a root space. This induces a root space decomposition \begin{align*} L = C_L(H) \oplus \bigoplus_{ \alpha\in \Phi } L_\alpha .\end{align*}

Note that for classical algebras, we’ll show $$C_L(H) = H$$ and corresponds to the standard bases given early in the book.

Type $$A_n$$ yields $${\mathfrak{sl}}_{n+1}({\mathbf{C}})$$ and $$\dim H = n$$ for $$H$$ defined to be the diagonal traceless matrices. Define $${\varepsilon}_i\in H {}^{ \vee }$$ as $${\varepsilon}_i \operatorname{diag}(a_1,\cdots, a_{n+1}) \mathrel{\vcenter{:}}= a_i$$; then $$\Phi \mathrel{\vcenter{:}}=\left\{{{\varepsilon}_i - {\varepsilon}_j {~\mathrel{\Big\vert}~}1\leq i\neq j\leq n+1}\right\}$$ and $$L_{{\varepsilon}_i - {\varepsilon}_j} = {\mathbf{C}}e_{ij}$$. Why: \begin{align*} [h, e_{ij}] = \left[ \sum a_k e_{kk}, e_{ij}\right] = a_i e_{ii} e_{ij} - a_j e_{ij} e_{jj} = (a_i - a_j) e_{ij} \mathrel{\vcenter{:}}=({\varepsilon}_i -{\varepsilon}_j)(h) e_{ij} .\end{align*}
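This eigenvalue computation can be spot-checked numerically; a sketch for $$n = 2$$, i.e. $${\mathfrak{sl}}_3({\mathbf{C}})$$ (the particular diagonal element $$h$$ is an arbitrary illustrative choice):

```python
import numpy as np

n = 3  # working in sl_3, type A_2

def E(i, j):
    # Elementary matrix e_ij (0-indexed)
    M = np.zeros((n, n)); M[i, j] = 1.0; return M

h = np.diag([2.0, -0.5, -1.5])   # a traceless diagonal matrix in H
a = np.diag(h)                   # its diagonal entries a_1, ..., a_n

# Each e_ij (i != j) is a simultaneous eigenvector for ad_h with
# eigenvalue (eps_i - eps_j)(h) = a_i - a_j:
for i in range(n):
    for j in range(n):
        if i != j:
            lhs = h @ E(i, j) - E(i, j) @ h
            assert np.allclose(lhs, (a[i] - a[j]) * E(i, j))
print("each e_ij spans a root space for the root eps_i - eps_j")
```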

# 17 Monday, September 26

## 17.1 8.1 Continued

A toral subalgebra is one consisting of semisimple elements – such a subalgebra exists because any semisimple Lie algebra contains a nonzero semisimple element, and one can take its span. Let $$H$$ be a fixed maximal toral subalgebra, then we have a root space decomposition \begin{align*} L = C_L(H) \oplus \bigoplus _{\alpha\in \Phi \subseteq H {}^{ \vee }} L_\alpha, \qquad L_\alpha \mathrel{\vcenter{:}}=\left\{{ x\in L {~\mathrel{\Big\vert}~}[hx] = \alpha(h) x \,\forall h\in H}\right\} .\end{align*} Let $$L$$ be semisimple and finite dimensional over $${\mathbf{C}}$$ from now on.

1. \begin{align*} [L_\alpha, L_\beta] \subseteq L_{ \alpha + \beta} \qquad \forall \alpha, \beta\in H {}^{ \vee } .\end{align*}
2. \begin{align*} x\in L_{\alpha}, \alpha\neq 0 \implies { \operatorname{ad}}_x \text{ is nilpotent} .\end{align*}
3. If $$\alpha, \beta\in H {}^{ \vee }$$ and $$\alpha + \beta\neq 0$$ then $$L_ \alpha \perp L_ \beta$$ relative to $$\kappa_L$$, the Killing form for $$L$$.
1. Follows from the Jacobi identity.

2. Follows from (1), that $$\dim L < \infty$$, and the root space decomposition: $${ \operatorname{ad}}_x^k(L_\beta) \subseteq L_{\beta + k\alpha}$$ for each $$\beta$$, and since there are only finitely many nonzero root spaces, $${ \operatorname{ad}}_x^k = 0$$ for $$k$$ sufficiently large.

3. If $$\alpha + \beta\neq 0$$ then $$\exists h\in H$$ such that $$(\alpha + \beta)(h) \neq 0$$. For $$x\in L_\alpha, y\in L_ \beta$$, \begin{align*} \alpha(h) \kappa(x,y) &= \kappa([hx], y) \\ &= -\kappa([xh], y) \\ &= -\kappa(x, [hy]) \\ &= -\beta(h)\kappa(x,y) \\ &\implies ( \alpha + \beta)(h) \kappa(x,y) = 0 \\ &\implies \kappa(x,y)=0 \text{ since } (\alpha + \beta)(h) \neq 0 .\end{align*}

$${ \left.{{\kappa}} \right|_{{L_0}} }$$ is nondegenerate: $$L_0 \perp L_{\alpha}$$ for all $$\alpha\in \Phi$$ by (3), but $$\kappa$$ is nondegenerate on $$L$$, so no nonzero element of $$L_0$$ can also be orthogonal to $$L_0$$. Moreover, if $$L_{ \alpha} \neq 0$$ then $$L_{- \alpha}\neq 0$$, again by (3) and nondegeneracy.
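The orthogonality statement (3) can be checked by machine for $${\mathfrak{sl}}_3({\mathbf{C}})$$, computing $$\kappa$$ from scratch via $${ \operatorname{ad}}$$; a numpy sketch (the basis ordering and the least-squares coordinate trick are implementation choices, not notation from the notes):

```python
import numpy as np

n = 3
def E(i, j):
    M = np.zeros((n, n)); M[i, j] = 1.0; return M

# Basis of sl3: the e_ij with i != j, plus h_k = e_kk - e_{k+1,k+1}
basis = [E(i, j) for i in range(n) for j in range(n) if i != j]
basis += [E(k, k) - E(k + 1, k + 1) for k in range(n - 1)]
B = np.column_stack([b.flatten() for b in basis])  # 9 x 8, full column rank

def ad(u):
    # Coordinates of [u, b] in the basis above, recovered by least squares
    # (exact here, since each bracket lies in the span of the basis).
    cols = [np.linalg.lstsq(B, (u @ b - b @ u).flatten(), rcond=None)[0] for b in basis]
    return np.column_stack(cols)

kappa = lambda u, v: np.trace(ad(u) @ ad(v))

# Root spaces L_alpha, L_beta are orthogonal unless alpha + beta = 0:
vals = [kappa(E(0, 1), E(0, 2)),   # roots (e1-e2), (e1-e3): orthogonal
        kappa(E(0, 1), E(2, 0)),   # roots (e1-e2), (e3-e1): orthogonal
        kappa(E(0, 1), E(1, 0))]   # opposite roots: nondegenerate pairing
print([int(round(v)) for v in vals])  # [0, 0, 6]
```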

## 17.2 8.2: $$C_L(H)$$

Let $$H \leq L$$ be maximal toral, then $$H = C_L(H)$$.

Skipped, about 1 page of dense text broken into 7 steps. Uses the last corollary along with Engel’s theorem.

If $$L$$ is a classical Lie algebra over $${\mathbf{C}}$$ and we choose $$H$$ to be the diagonal matrices in $$L$$, then for any non-diagonal $$x\in L\setminus H$$ there exists an $$h\in H$$ such that $$[hx]\neq 0$$. Note that toral implies abelian and nonabelian implies nontoral, thus there is no abelian subalgebra of $$L$$ properly containing $$H$$ – adding any nontoral element at all to $$H$$ makes it nonabelian. This same argument shows $$C_L(H) = H$$ since nothing else commutes with $$H$$. This implies that $$L = H \oplus \bigoplus_{ \alpha \in \Phi} L_\alpha$$.

$${ \left.{{\kappa}} \right|_{{H}} }$$ is nondegenerate.

As a result, $$\kappa$$ induces an isomorphism $$H { \, \xrightarrow{\sim}\, }H {}^{ \vee }$$ by $$h\mapsto \kappa(h, {-})$$ and $$H {}^{ \vee } { \, \xrightarrow{\sim}\, }H$$ by $$\phi\mapsto t_\phi$$, the unique element such that $$\kappa(t_\phi, {-}) = \phi({-})$$. In particular, given $$\alpha\in \Phi \subset H {}^{ \vee }$$ there is some $$t_\alpha\in H$$. The next 3 sections are about properties of $$\Phi$$:

• Orthogonality,
• Integrality,
• Rationality.

## 17.3 8.3: Orthogonality properties of $$\Phi$$.

1. $$\Phi$$ spans $$H {}^{ \vee }$$.
2. If $$\alpha\in \Phi$$ is a root then $$-\alpha\in \Phi$$ is also a root.
3. Let $$\alpha\in \Phi, x\in L_{ \alpha}, y\in L_{- \alpha}$$, then $$[xy] = \kappa(x, y) t_\alpha$$.
4. If $$\alpha\in \Phi$$ then $$[L_{ \alpha}, L_{- \alpha}] = {\mathbf{C}}t_\alpha$$ is 1-dimensional with basis $$t_\alpha$$.
5. For any $$\alpha \in \Phi$$, we have $$\alpha(t_\alpha) = \kappa(t_ \alpha, t_ \alpha) \neq 0$$.
6. (Important) If $$\alpha\in \Phi, x_ \alpha\in L_{ \alpha}\setminus\left\{{0}\right\}$$ then there exists some $$y_ \alpha \in L_{- \alpha}$$ in the opposite root space such that $$x_ \alpha, y_ \alpha, h_ \alpha \mathrel{\vcenter{:}}=[x_ \alpha, y_ \alpha]$$ span a 3-dimensional subalgebra $${\mathfrak{sl}}(\alpha) \leq L$$ isomorphic to $${\mathfrak{sl}}_2({\mathbf{C}})$$.
7. $$h_\alpha = {2 t_\alpha \over \kappa(t_ \alpha, t_ \alpha)}$$, $$\alpha(h_ \alpha) = 2$$, and $$h_{ \alpha} = -h_{- \alpha}$$.
1. If it does not span, choose $$h \in H\setminus\left\{{0}\right\}$$ with $$\alpha(h) = 0$$ for all $$\alpha\in \Phi$$. Then $$[h, L_ \alpha] = 0$$ for all $$\alpha$$, but $$[h H] = 0$$ since $$H$$ is abelian. Using the root space decomposition, $$[h L] =0$$ and so $$h\in Z(L) = 0$$ since $$L$$ is semisimple. $$\contradiction$$

2. Follows from proposition 8.2 and $$\kappa(L_ \alpha, L_ \beta) = 0$$ when $$\beta\neq -\alpha$$.

3. Let $$h\in H$$, then \begin{align*} \kappa(h, [xy]) &= \kappa([hx], y) \\ &= \alpha(h) \kappa(x, y)\\ &= \kappa(t_\alpha, h) \kappa(x, y) \\ &= \kappa( \kappa(x, y) t_ \alpha, h) \\ &= \kappa( h, \kappa(x, y) t_ \alpha) \\ &\implies \kappa(h, [xy] - \kappa(x,y)t_\alpha) = 0 \\ &\implies [xy] = \kappa (x,y) t_ \alpha ,\end{align*} where we’ve used that $$[xy]\in H$$ and $$\kappa$$ is nondegenerate on $$H$$ and $$[L_{ \alpha}, L_{ - \alpha}] \subseteq L_0 = H$$.

4. By (c), $$[L_{ \alpha}, L_{ - \alpha}]$$ is spanned by $$t_ \alpha$$ if it is nonzero. Let $$x\in L_ \alpha\setminus\left\{{0}\right\}$$; if $$\kappa(x, L_{ - \alpha}) = 0$$ then $$\kappa$$ would have to be degenerate, a contradiction. So there is some $$y\in L_{ - \alpha}$$ with $$\kappa(x, y) \neq 0$$. Moreover $$t_\alpha\neq 0$$ since $$\alpha\neq 0$$ and $$\alpha\mapsto t_\alpha$$ is an isomorphism. Thus $$[xy] = \kappa(x,y) t_ \alpha \neq 0$$.
5. Suppose $$\alpha(t_\alpha) = \kappa(t_{ \alpha}, t_{ \alpha}) = 0$$, then for $$x\in L_{\alpha}, y\in L_{ - \alpha}$$, we have $$[t_ \alpha, x] = \alpha(t_ \alpha)x = 0$$ and similarly $$[t_ \alpha, y] = 0$$. As before, find $$x\in L_{ \alpha}, y\in L_{ - \alpha}$$ with $$\kappa(x,y)\neq 0$$ and scale one so that $$\kappa(x, y) = 1$$. Then by (c), $$[x, y] = t_ \alpha$$, so combining this with the previous formula yields that $$S \mathrel{\vcenter{:}}=\left\langle{x, y, t_ \alpha}\right\rangle$$ is a 3-dimensional solvable subalgebra. Take $${ \operatorname{ad}}: L\hookrightarrow{\mathfrak{gl}}(L)$$, which is injective by semisimplicity, and restrict to $${ \left.{{ { \operatorname{ad}}}} \right|_{{S}} }: S { \, \xrightarrow{\sim}\, } { \operatorname{ad}}(S) \leq {\mathfrak{gl}}(L)$$. We’ll use Lie’s theorem to show $$t_\alpha$$ acts by a commutator of upper triangular matrices, thus by a strictly upper triangular, thus nilpotent matrix, and reach a contradiction.

# 18 Wednesday, September 28

## 18.1 Continued proof

Recall the proposition from last time:

1. $$\Phi$$ spans $$H {}^{ \vee }$$.
2. If $$\alpha\in \Phi$$ is a root then $$-\alpha\in \Phi$$ is also a root.
3. Let $$\alpha\in \Phi, x\in L_{ \alpha}, y\in L_{- \alpha}$$, then $$[xy] = \kappa(x, y) t_\alpha$$.
4. If $$\alpha\in \Phi$$ then $$[L_{ \alpha}, L_{- \alpha}] = {\mathbf{C}}t_\alpha$$ is 1-dimensional with basis $$t_\alpha$$.
5. For any $$\alpha \in \Phi$$, we have $$\alpha(t_\alpha) = \kappa(t_ \alpha, t_ \alpha) \neq 0$$.
6. (Important) If $$\alpha\in \Phi, x_ \alpha\in L_{ \alpha}\setminus\left\{{0}\right\}$$ then there exists some $$y_ \alpha \in L_{- \alpha}$$ in the opposite root space such that $$x_ \alpha, y_ \alpha, h_ \alpha \mathrel{\vcenter{:}}=[x_ \alpha, y_ \alpha]$$ span a 3-dimensional subalgebra $${\mathfrak{sl}}(\alpha) \leq L$$ isomorphic to $${\mathfrak{sl}}_2({\mathbf{C}})$$.
7. $$h_\alpha = {2 t_\alpha \over \kappa(t_ \alpha, t_ \alpha)}$$, $$\alpha(h_ \alpha) = 2$$, and $$h_{ \alpha} = -h_{- \alpha}$$.

Part e: We have $$\alpha(t_ \alpha ) = \kappa(t_ \alpha, t_ \alpha)$$, so suppose this is zero. Pick $$x\in L_{ \alpha}, y\in L_{ - \alpha}$$ such that $$\kappa(x, y) = 1$$, then

• $$[t_ \alpha, x] = 0$$,
• $$[t_ \alpha, y] = 0$$,
• $$[x, y] = t_ \alpha$$.

Set $$S \mathrel{\vcenter{:}}={\mathfrak{sl}}( \alpha) \mathrel{\vcenter{:}}={\mathbf{C}}\left\langle{x,y,t_ \alpha}\right\rangle$$ and restrict $${ \operatorname{ad}}: L\hookrightarrow{\mathfrak{gl}}(L)$$ to $$S$$. Then $${ \operatorname{ad}}(S) \cong S$$ by injectivity, and this is a solvable linear subalgebra of $${\mathfrak{gl}}(L)$$. Apply Lie’s theorem to choose a basis for $$L$$ such that the matrices for $${ \operatorname{ad}}(S)$$ are upper triangular. Then use that $${ \operatorname{ad}}_L([SS]) = [ { \operatorname{ad}}_L(S), { \operatorname{ad}}_L(S)]$$, which is strictly upper triangular and thus nilpotent. In particular, $${ \operatorname{ad}}_L (t_ \alpha)$$ is nilpotent; but $$t_\alpha\in H$$ is a semisimple element, so $${ \operatorname{ad}}_L( t_ \alpha)$$ is also semisimple. The only endomorphism that is both semisimple and nilpotent is zero, so $${ \operatorname{ad}}_L( t_ \alpha) = 0 \implies t_\alpha = 0$$. This contradicts that $$\alpha\in H {}^{ \vee }\setminus\left\{{0}\right\}$$. $$\contradiction$$

Part f: Given $$x_ \alpha\in L_ \alpha\setminus\left\{{0}\right\}$$, choose $$y_ \alpha \in L_{ - \alpha}$$ and rescale it so that \begin{align*} \kappa(x_ \alpha, y _{\alpha} ) = {2\over \kappa(t_ \alpha, t_ \alpha)} .\end{align*} Set $$h_ \alpha \mathrel{\vcenter{:}}={2t_ \alpha\over \kappa(t_ \alpha, t_ \alpha) }$$, then by (c), $$[x_ \alpha, y_ \alpha] = \kappa( x_ \alpha, y_ \alpha) t_ \alpha = h_ \alpha$$. So \begin{align*} [ h_ \alpha, x_ \alpha] = {2\over \alpha(t_ \alpha) }[ t_ \alpha, x_ \alpha] = {2\over \alpha(t_ \alpha)} \alpha(t_ \alpha) x_ \alpha = 2x_ \alpha ,\end{align*} and similarly $$[h_ \alpha, y_ \alpha] = -2 y_ \alpha$$. Now the span $$\left\langle{x_ \alpha, h_ \alpha, y_ \alpha}\right\rangle \leq L$$ is a subalgebra with the same multiplication table as $${\mathfrak{sl}}_2({\mathbf{C}})$$, so $$S \cong {\mathfrak{sl}}_2({\mathbf{C}})$$.
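For $$L = {\mathfrak{sl}}_2({\mathbf{C}})$$ itself this construction can be carried out by hand, with $$H = {\mathbf{C}}h$$ and $$\alpha(h) = 2$$ (since $$[hx] = 2x$$); a small arithmetic sketch using the value $$\kappa(h,h) = 8$$ computed earlier in these notes:

```python
# Sanity check of part (f) for L = sl2(C): we expect h_alpha = h.
kappa_hh = 8.0        # kappa(h, h), from the Killing form matrix of sl2
alpha_of_h = 2.0      # alpha(h) = 2 for the single positive root alpha

# t_alpha = c*h is pinned down by kappa(t_alpha, h) = alpha(h), i.e. c*8 = 2:
c = alpha_of_h / kappa_hh           # c = 1/4, so t_alpha = h/4
kappa_tt = c * c * kappa_hh         # kappa(t_alpha, t_alpha) = 1/2
coeff = 2 * c / kappa_tt            # h_alpha = coeff * h = 2 t_alpha / kappa(t_alpha, t_alpha)
print(coeff, coeff * alpha_of_h)    # 1.0 2.0: h_alpha = h, and alpha(h_alpha) = 2
```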

Part g: Since the isomorphism $$H {}^{ \vee } { \, \xrightarrow{\sim}\, }H$$ sending $$\alpha\mapsto t_\alpha$$ is linear, we have $$t_{- \alpha} = -t_{ \alpha}$$, and thus $$h_{ - \alpha} = {2 t_{ - \alpha} \over \kappa( t_ { - \alpha}, t_{- \alpha} ) } = {-2 t_{ \alpha} \over \kappa( t_ { \alpha}, t_{ \alpha} ) } = - h_ \alpha$$.

$$L$$ is generated as a Lie algebra by the root spaces $$\left\{{L_ \alpha{~\mathrel{\Big\vert}~}\alpha\in \Phi}\right\}$$.

It STS $$H \subseteq \left\langle{\left\{{L_\alpha}\right\}_{\alpha\in \Phi}}\right\rangle$$. Given $$\alpha\in \Phi$$, \begin{align*} \exists x_\alpha\in L_ \alpha, y_ \alpha\in L_{- \alpha} \quad\text{such that}\quad \left\langle{x_ \alpha, y_ \alpha, h_ \alpha\mathrel{\vcenter{:}}=[x_ \alpha, y_ \alpha] }\right\rangle \cong {\mathfrak{sl}}_2({\mathbf{C}}) .\end{align*} Each $$h_\alpha = [x_\alpha, y_\alpha]$$ lies in the subalgebra generated by the root spaces, and $$h_\alpha\in {\mathbf{C}}^{\times}t_ \alpha$$ is a nonzero multiple of $$t_\alpha$$. By (a), $$\Phi$$ spans $$H {}^{ \vee }$$, so $$\left\{{t_\alpha}\right\}_{\alpha\in \Phi}$$ spans $$H$$.

## 18.2 8.4: Integrality properties of $$\Phi$$

Any $$\alpha\in \Phi$$ yields $${\mathfrak{sl}}(\alpha) \cong {\mathfrak{sl}}_2({\mathbf{C}})$$, and in fact the generators are entirely determined by the choice of $$x_\alpha$$. View $$L\in {}_{{\mathfrak{sl}}(\alpha)}{\mathsf{Mod}}$$ via $${ \operatorname{ad}}$$.

If $$M \leq L \in {}_{{\mathfrak{sl}}(\alpha)}{\mathsf{Mod}}$$ then all eigenvalues of $$h_\alpha\curvearrowright M$$ are integers.

Apply Weyl’s theorem to decompose $$M$$ into a finite direct sum of irreducibles in $${}_{ {\mathfrak{sl}}_2({\mathbf{C}}) }{\mathsf{Mod}}$$. The weights of $$h_\alpha$$ are of the form $$m, m-2,\cdots, -m+2, -m\in {\mathbf{Z}}$$.16
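
The weight structure quoted here can be made concrete. The following sketch (not from the lecture; the helper name `sl2_irrep` is ad hoc) builds the irreducible $${\mathfrak{sl}}_2{\hbox{-}}$$module $$V(m)$$ explicitly and checks that the $$h{\hbox{-}}$$eigenvalues form the unbroken string $$m, m-2, \cdots, -m$$:

```python
import numpy as np

def sl2_irrep(m):
    """x, h, y acting on the (m+1)-dimensional irreducible sl2-module V(m)
    with basis v_0, ..., v_m, where h v_i = (m - 2i) v_i."""
    d = m + 1
    h = np.diag([float(m - 2 * i) for i in range(d)])
    x, y = np.zeros((d, d)), np.zeros((d, d))
    for i in range(m):
        y[i + 1, i] = i + 1      # y v_i = (i+1) v_{i+1}
        x[i, i + 1] = m - i      # x v_{i+1} = (m-i) v_i
    return x, h, y

for m in range(6):
    x, h, y = sl2_irrep(m)
    # the defining relations of sl2 hold ...
    assert np.allclose(x @ y - y @ x, h)
    assert np.allclose(h @ x - x @ h, 2 * x)
    assert np.allclose(h @ y - y @ h, -2 * y)
    # ... and the h-eigenvalues form the unbroken string m, m-2, ..., -m
    assert np.allclose(sorted(np.diag(h)), list(range(-m, m + 1, 2)))
```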

Let $$M = H + {\mathfrak{sl}}(\alpha) \leq L \in {}_{{\mathfrak{sl}}( \alpha)}{\mathsf{Mod}}$$, which one can check is actually a submodule since bracketing either lands in $${\mathfrak{sl}}(\alpha)$$ or kills elements. What does Weyl’s theorem say about this submodule? Note that the sum is not direct, since $$h_\alpha\in H \cap {\mathfrak{sl}}(\alpha)$$. Set $$K \mathrel{\vcenter{:}}=\ker\alpha \subseteq H$$, so $$\operatorname{codim}_H K = 1$$ by rank-nullity. Note that $$h_\alpha \not\in K$$, so $$M = K \oplus {\mathfrak{sl}}(\alpha)$$. Moreover $${\mathfrak{sl}}(\alpha)\curvearrowright K$$ by zero, since bracketing acts by $$\alpha$$, which vanishes on $$K$$. So $$K$$ decomposes into $$\dim K$$ copies of the trivial module.

Let $$\beta\in \Phi\cup\left\{{0}\right\}$$ and define $$M \mathrel{\vcenter{:}}=\bigoplus _{c\in {\mathbf{C}}} L_{\beta + c\alpha}$$; then $$M \leq L \in {}_{{\mathfrak{sl}}( \alpha)}{\mathsf{Mod}}$$. It will turn out that $$L_{\beta+ c \alpha} \neq 0 \iff c\in [-r, q] \cap {\mathbf{Z}}$$ for some $$r, q\in {\mathbf{Z}}_{\geq 0}$$.

Let $$\alpha\in \Phi$$. Then the root spaces $$L_{\pm \alpha}$$ are one-dimensional, and the only multiples of $$\alpha$$ which are in $$\Phi$$ are $$\pm \alpha$$.

Note $$L_\alpha$$ can only pair nondegenerately with $$L_{- \alpha}$$ under the Killing form. Set \begin{align*} M \mathrel{\vcenter{:}}=\bigoplus _{c\in {\mathbf{C}}} L_{c \alpha} = H \oplus \bigoplus _{c \alpha\in \Phi} L_{c \alpha} .\end{align*} By Weyl’s theorem, this decomposes into irreducibles. This allows us to take a complement of the decomposition from before to write $$M = K \oplus {\mathfrak{sl}}(\alpha) \oplus W$$, and we WTS $$W = 0$$, since $$W$$ contains all $$L_{c \alpha}$$ with $$c\neq 0, \pm 1$$. Since $$H \subseteq K \oplus {\mathfrak{sl}}( \alpha)$$, we have $$W \cap H = 0$$. If $$c \alpha$$ is a root of $$L$$, then $$h_ \alpha$$ has $$(c \alpha)(h_ \alpha) = 2c$$ as an eigenvalue, which must be an integer by a previous lemma. So $$c\in {\mathbf{Z}}$$ or $$c\in {\mathbf{Z}}+ {1\over 2}$$.

Suppose $$W\neq 0$$, and let $$V(s)$$ (or $$L(s)$$ in modern notation) be an irreducible $${\mathfrak{sl}}( \alpha){\hbox{-}}$$submodule of $$W$$ for $$s\in {\mathbf{Z}}_{\geq 0}$$. If $$s$$ is even, $$V(s)$$ contains an eigenvector $$w$$ for $$h_ \alpha$$ of eigenvalue zero, obtained by applying $$y_\alpha$$ to a maximal vector $$s/2$$ times. We can then write $$w = \sum_{c\in {\mathbf{C}}} v_{c \alpha}$$ with $$v_{ c\alpha } \in L_{ c \alpha}$$, and by finiteness of direct sums we have $$v_{c \alpha} = 0$$ for almost every $$c\in {\mathbf{C}}$$. Then \begin{align*} 0 &= [h_ \alpha, w] \\ &= \sum_{c\in {\mathbf{C}}} [h_ \alpha, v_{ c \alpha} ] \\ &= \sum_{c\in {\mathbf{C}}} (c\alpha)( h _{\alpha} )v_{c \alpha} \\ &= \sum 2c v_{c \alpha} \\ &\implies v_{c \alpha} = 0 \text{ when } c\neq 0 ,\end{align*} forcing $$w\in H$$, the zero eigenspace. But $$w\in W$$, so $$w\in W \cap H = 0$$. $$\contradiction$$

# 19 Friday, September 30

## 19.1 8.4

$$\alpha\in \Phi\implies \dim L_{\pm \alpha} = 1$$, and $$\alpha, \lambda \alpha\in \Phi\implies \lambda = \pm 1$$.

Consider $$M \mathrel{\vcenter{:}}=\bigoplus _{ c\in {\mathbf{C}}} L_{c \alpha} \leq L\in {}_{{\mathfrak{sl}}( \alpha)}{\mathsf{Mod}}$$. Writing $${\mathfrak{sl}}(\alpha) = \left\langle{ x_ \alpha, h_ \alpha, y_ \alpha}\right\rangle$$, we decomposed $$M = K \oplus {\mathfrak{sl}}(\alpha) \oplus W$$ where $$K \mathrel{\vcenter{:}}=\ker \alpha \leq H$$ and $$W \cap H = 0$$. WTS: $$W = 0$$. So far, we’ve shown that if $$L(s) \subseteq W$$ for $$s\in {\mathbf{Z}}_{\geq 0}$$ (which guarantees finite dimensionality), then $$s$$ can’t be even – otherwise it has a weight zero eigenvector, forcing it to be in $$H$$, but $$W \cap H = 0$$.

Aside: $$\alpha\in \Phi \implies 2\alpha\not\in \Phi$$, since $$L_{2\alpha}$$ would have weight $$(2\alpha)(h_ \alpha) = 2\,\alpha( h_ \alpha) = 4$$, but weights in irreducible modules have the same parity as the highest weight and no such weights exist in $$M$$ (only $$0, \pm 2$$ in $$K \oplus {\mathfrak{sl}}(\alpha)$$ and only odd in $$W$$). Suppose $$L(s) \subseteq W$$ and $$s\geq 1$$ is odd. Then $$L(s)$$ has a weight vector for $$h_ \alpha$$ of weight 1. This must come from $$c=1/2$$, since $$(1/2) \alpha (h_ \alpha) = (1/2)\cdot 2 = 1$$, so this vector lies in $$L_{\alpha/2}$$. However, by the aside applied to $$\alpha/2$$, if $$\alpha\in \Phi$$ then $$\alpha/2\not\in\Phi$$.

Thus $$W$$ can’t contain any summands of odd or even highest weight, so $$W = 0$$. Note also that $$L_{\pm\alpha}\not\subset K \oplus W$$, forcing $$L_{\pm \alpha} \subseteq {\mathfrak{sl}}( \alpha)$$, so $$L_{ \alpha} = \left\langle{x_ \alpha}\right\rangle$$ and $$L_{- \alpha} = \left\langle{y _{\alpha} }\right\rangle$$.

Let $$\alpha, \beta\in \Phi$$ with $$\beta\neq \pm \alpha$$ and consider $$\beta + k \alpha$$ for $$k\in {\mathbf{Z}}$$.

1. $$\beta(h_ \alpha) \in {\mathbf{Z}}$$.
2. $$\exists r,q\in {\mathbf{Z}}_{\geq 0}$$ such that for $$k\in {\mathbf{Z}}$$, the combination $$\beta + k \alpha\in \Phi \iff k \in [-r, q] \ni 0$$. The set $$\left\{{ \beta + k \alpha {~\mathrel{\Big\vert}~}k\in [-r, q]}\right\} \subseteq \Phi$$ is the $$\alpha{\hbox{-}}$$root string through $$\beta$$.
3. If $$\alpha + \beta\in \Phi$$ then $$[L_ \alpha L_ \beta ] = L_{ \alpha + \beta}$$.
4. $$\beta- \beta(h_ \alpha) \alpha\in \Phi$$.

Consider \begin{align*} M \mathrel{\vcenter{:}}=\bigoplus _{k\in {\mathbf{Z}}} L_{ \beta + k \alpha} \leq L \quad\in {}_{{\mathfrak{sl}}( \alpha)}{\mathsf{Mod}} .\end{align*}

1. $$\beta(h _{\alpha} )$$ is the eigenvalue of $$h_ \alpha$$ acting on $$L_ \beta$$, and by the lemma, $$\beta(h_ \alpha)\in {\mathbf{Z}}$$.

2. By the previous proposition, $$\dim L_{ \beta+ k \alpha} = 1$$ if nonzero, and the weight of $$h_\alpha$$ acting on it is $$\beta( h _{\alpha} ) + 2k$$, all distinct for distinct $$k$$. By $${\mathfrak{sl}}_2{\hbox{-}}$$representation theory, the number of irreducible summands is the sum of the dimensions of the zero and one weight spaces; here that sum is at most 1 since each weight space is at most one-dimensional and only one parity of weights occurs, and thus $$M$$ is a single irreducible $${\mathfrak{sl}}(\alpha){\hbox{-}}$$module. So write $$M \cong L(d)$$ for some $$d\in {\mathbf{Z}}_{\geq 0}$$; then $$h_ \alpha\curvearrowright M$$ with eigenvalues $$\left\{{d,d-2,\cdots, -d+2, -d}\right\}$$. But $$h_ \alpha\curvearrowright M$$ with eigenvalues $$\beta( h_ \alpha) + 2k$$ for those $$k\in {\mathbf{Z}}$$ with $$L_{\beta + k \alpha}\neq 0$$. Since the first list is an unbroken string of integers of the same parity, the $$k$$ that appear must also form an unbroken string. Define $$r$$ and $$q$$ by setting $$d = \beta(h_\alpha) + 2q$$ and $$-d =\beta( h_ \alpha ) - 2r$$ to obtain $$[-r, q]$$. Adding these yields $$0 = 2\beta( h_ \alpha) + 2q-2r$$, i.e. $$\beta(h_ \alpha) = r-q$$.

3. Let $$M\cong L(d) \in {}_{{\mathfrak{sl}}(\alpha)}{\mathsf{Mod}}$$ and $$x_\beta \in L_ \beta\setminus\left\{{0}\right\}\subseteq M$$ with $$x_ \alpha\in L_{ \alpha}$$. If $$[x_ \alpha x_ \beta] = 0$$ then $$x_ \beta$$ is a maximal $${\mathfrak{sl}}(\alpha){\hbox{-}}$$vector in $$L(d)$$ and thus $$d = \beta(h_ \alpha)$$. But $$\alpha + \beta\in \Phi \implies \beta(h_ \alpha) + 2$$ is a weight in $$M$$ bigger than $$d$$, a contradiction. Thus $$\alpha + \beta\in \Phi \implies [x_ \alpha x_ \beta] \neq 0$$. Since $$0 \neq [x_ \alpha x_ \beta] \in L_{ \alpha+ \beta}$$ and $$\dim L_{ \alpha + \beta} = 1$$, we get $$[L_ \alpha L_ \beta] = L_{ \alpha + \beta}$$.

4. Use that $$q\geq 0, r\geq 0$$ to write $$-r \leq q - r \leq q$$. Then \begin{align*} \beta - \beta(h_ \alpha) \alpha = \beta - (r-q) \alpha = \beta + (q - r) \alpha\mathrel{\vcenter{:}}=\beta + \ell\alpha \end{align*} where $$\ell\in [-r, q]$$. Thus $$\beta + \ell\alpha\in \Phi$$, since the root string is unbroken by (b).
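
These statements can be checked on a small example. The sketch below (not from the lecture; it borrows the type $$B_2$$ root system that appears later in these notes) computes the $$\alpha{\hbox{-}}$$string through $$\beta$$ and verifies $$\beta(h_\alpha) = r - q$$ and part (d):

```python
import numpy as np

# the type B2 root system in R^2: short roots ±e1, ±e2 and long roots ±e1±e2
roots = {(1,0), (-1,0), (0,1), (0,-1), (1,1), (1,-1), (-1,1), (-1,-1)}

def pairing(beta, alpha):
    """beta(h_alpha) = <beta, alpha> = 2(beta, alpha)/(alpha, alpha)."""
    b, a = np.array(beta), np.array(alpha)
    return int(2 * b.dot(a) / a.dot(a))

def root_string(beta, alpha):
    """The k in Z with beta + k*alpha a root -- an unbroken interval [-r, q]."""
    return [k for k in range(-5, 6)
            if tuple(np.array(beta) + k * np.array(alpha)) in roots]

alpha, beta = (1, 0), (-1, 1)
ks = root_string(beta, alpha)
r, q = -min(ks), max(ks)
assert ks == list(range(-r, q + 1))        # (b): the string is unbroken
assert pairing(beta, alpha) == r - q       # beta(h_alpha) = r - q
# (d): beta - beta(h_alpha) * alpha is again a root
assert tuple(np.array(beta) - pairing(beta, alpha) * np.array(alpha)) in roots
```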

Is it true that $$\bigoplus_{k\in {\mathbf{Z}}} L_{\beta+ k \alpha} = \bigoplus _{c\in {\mathbf{C}}} L_{\beta + c\alpha}$$? The issue is that $$c\in {\mathbf{Z}}+ {1\over 2}$$ is still possible.

## 19.2 8.5: Rationality properties of $$\Phi$$

Recall that $$\kappa$$ restricts to a nondegenerate bilinear form on $$H$$ inducing $$H {}^{ \vee } { \, \xrightarrow{\sim}\, }H$$ via $$\phi\mapsto t_\phi$$ where $$\kappa(t_\phi, {-}) = \phi({-})$$. Transfer to a nondegenerate symmetric bilinear form on $$H {}^{ \vee }$$ by $$(\lambda, \mu) \mathrel{\vcenter{:}}=\kappa(t_\lambda, t_\mu)$$. By prop 8.3 we know $$H {}^{ \vee }$$ is spanned by $$\Phi$$, so choose a $${\mathbf{C}}{\hbox{-}}$$basis $$\left\{{ \alpha_1,\cdots, \alpha_n}\right\} \subseteq \Phi$$. Given $$\beta\in\Phi$$, write $$\beta = \sum c_i \alpha_i$$ with $$c_i\in {\mathbf{C}}$$.

$$c_i \in {\mathbf{Q}}$$ for all $$i$$!

# 20 Monday, October 03

## 20.1 Integrality and Rationality Properties

Setup:

• Decompose $$L = H \oplus \bigoplus _{ \alpha\in \Phi} L_{ \alpha}$$

• Use the isomorphism \begin{align*} H & { \, \xrightarrow{\sim}\, }H {}^{ \vee }\\ \varphi &\mapsfrom t_{ \varphi} \end{align*} to define $$(\lambda, \mu) \mathrel{\vcenter{:}}=\kappa(t_ \lambda, t_ \mu)$$ on $$H$$.

• Choose a basis $$\left\{{ \alpha_i}\right\} \subseteq \Phi \subseteq H {}^{ \vee }$$

• For any $$\beta \in \Phi$$, write $$\beta= \sum c_i \alpha_i$$ with $$c_i\in {\mathbf{C}}$$. Then

\begin{align*} c_i\in {\mathbf{Q}} .\end{align*}

Write $$( \beta, \alpha_j) = \sum c_i (\alpha_i, \alpha_j)$$ and thus \begin{align*} {2 (\beta, \alpha_j) \over (\alpha_j, \alpha_j) } = \sum c_i {2 (\alpha_i, \alpha_j) \over (\alpha_j, \alpha_j) } ,\end{align*} where the LHS is in $${\mathbf{Z}}$$, as is $$2( \alpha_i, \alpha_j) \over (\alpha_j, \alpha_j)$$. On the other hand \begin{align*} {2 (\beta, \alpha_j) \over (\alpha_j, \alpha_j) } = {2 \kappa(t_ \beta, t_{\alpha_j} ) \over \kappa(t_{ \alpha_j}, t_{ \alpha_j} ) } = \kappa(t_ \beta, h_{\alpha_j} ) = \beta(h_{ \alpha_j}) \end{align*} using that $$( \alpha_j, \alpha_j) = \kappa( t_{ \alpha_j}, t_{ \alpha_j} )\neq 0$$ from before.17 Since $$\left\{{ \alpha_i}\right\}$$ is a basis for $$H {}^{ \vee }$$ and $$({-}, {-})$$ is nondegenerate, the matrix $$[ ( \alpha_i, \alpha _j) ]_{1\leq i, j\leq n}$$ is invertible. Thus so is $$\left[ 2 ( \alpha_i, \alpha_j) \over (\alpha_j, \alpha_j ) \right]_{1\leq i,j \leq n}$$, since it’s obtained by multiplying each column by a nonzero scalar, and one can solve for the $$c_i$$ by inverting it. By Cramer’s rule, the only denominators that appear come from the determinant, a nonzero integer, so the $$c_i$$ lie in $${\mathbf{Q}}$$.
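
The Cramer's-rule argument can be carried out exactly in rational arithmetic. A sketch (not from the lecture; the labels `a1, a2` are ad hoc), using a basis of $${\mathbb{E}}$$ consisting of two roots of type $$B_2$$ and expanding a third root $$\beta$$ in that basis:

```python
from fractions import Fraction

# A basis of E = R^2 consisting of roots of B2 (not necessarily simple roots),
# and a third root beta to expand in that basis
a1, a2, beta = (1, 0), (1, 1), (0, 1)

def ip(u, v):                        # the inner product ( - , - )
    return sum(x * y for x, y in zip(u, v))

def bracket(u, v):                   # <u, v> = 2(u, v)/(v, v), in Z for roots
    return Fraction(2 * ip(u, v), ip(v, v))

# beta = c1*a1 + c2*a2; pairing with each a_j gives the integer linear system
# <beta, a_j> = sum_i c_i <a_i, a_j>, solved exactly by Cramer's rule
m11, m12 = bracket(a1, a1), bracket(a2, a1)
m21, m22 = bracket(a1, a2), bracket(a2, a2)
r1, r2 = bracket(beta, a1), bracket(beta, a2)
det = m11 * m22 - m12 * m21          # a nonzero integer
c1 = (r1 * m22 - m12 * r2) / det
c2 = (m11 * r2 - r1 * m21) / det
# the coefficients are rational (here integral), recovering beta = -a1 + a2
```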

Given $$\lambda, \mu \in H {}^{ \vee }$$ then \begin{align*} (\lambda, \mu) = \kappa(t_ \lambda, t_\mu) = \operatorname{Trace}( { \operatorname{ad}}_{t_ \lambda} \circ { \operatorname{ad}}_{t_\mu} ) = \sum_{ \alpha\in \Phi} \alpha(t_ \lambda) \cdot \alpha(t_\mu) ,\end{align*} using that both ads are diagonal in this basis, so their product is given by the products of their diagonal entries. One can write this as $$\sum_{ \alpha\in \Phi} \kappa(t_ \alpha, t_ \lambda) \kappa(t_ \alpha, t_\mu)$$, so we get a formula \begin{align*} ( \lambda, \mu ) = \sum_{ \alpha\in \Phi} ( \alpha, \lambda) (\alpha, \mu), \qquad (\lambda, \lambda) = \sum_{ \alpha\in \Phi} (\alpha, \lambda)^2 .\end{align*} Setting $$\lambda = \beta$$ and dividing by $$(\beta, \beta)^2$$ yields \begin{align*} {1\over (\beta, \beta)} = \sum_{ \alpha\in \Phi} {(\alpha, \beta)^2 \over (\beta, \beta)^2} \in {1\over 4}{\mathbf{Z}} ,\end{align*} since $$(\alpha, \beta)\over (\beta, \beta)\in {1\over 2} {\mathbf{Z}}$$. So $$(\beta, \beta)\in {\mathbf{Q}}$$ and thus $$(\alpha, \beta)\in {\mathbf{Q}}$$ for all $$\alpha, \beta\in \Phi$$. It follows that the pairings $$(\lambda, \mu)$$ on the $${\mathbf{Q}}{\hbox{-}}$$subspace $${\mathbb{E}}_{\mathbf{Q}}$$ of $$H {}^{ \vee }$$ spanned by $$\left\{{ \alpha_i}\right\}$$ are all rational.

$$({-}, {-})$$ on $${\mathbb{E}}_{\mathbf{Q}}$$ is still nondegenerate.

If $$\lambda\in {\mathbb{E}}_{\mathbf{Q}}$$ and $$( \lambda, \mu) =0$$ for all $$\mu\in {\mathbb{E}}_{\mathbf{Q}}$$, then $$( \lambda, \alpha_i) = 0$$ for all $$i$$, so $$(\lambda, \nu) = 0$$ for all $$\nu\in H {}^{ \vee }$$ and thus $$\lambda= 0$$.

Similarly, $$(\lambda, \lambda) = \sum_{ \alpha\in \Phi \subseteq {\mathbb{E}}_{\mathbf{Q}}} ( \alpha, \lambda)^2$$ is a sum of squares of rational numbers, and is thus non-negative. Since $$( \lambda, \lambda) = 0 \iff \lambda= 0$$, the form on $${\mathbb{E}}_{\mathbf{Q}}$$ is positive definite. Write $${\mathbb{E}}\mathrel{\vcenter{:}}={\mathbb{E}}_{\mathbf{Q}}\otimes_{\mathbf{Q}}{\mathbf{R}}= {\mathbf{R}}\left\{{\alpha_i}\right\}$$, then $$({-}, {-})$$ extends in the obvious way to an $${\mathbf{R}}{\hbox{-}}$$valued positive definite bilinear form on $${\mathbb{E}}$$, making it a real inner product space.

Let $$L, H, \Phi, {\mathbb{E}}_{/ {{\mathbf{R}}}}$$ be as above, then

1. $$\Phi$$ is a finite set which spans $${\mathbb{E}}$$ and does not contain zero.
2. If $$\alpha\in \Phi$$ then $$-\alpha\in \Phi$$ and thus is the only other scalar multiple in $$\Phi$$.
3. If $$\alpha, \beta \in \Phi$$ then \begin{align*} \beta - \beta(h_ \alpha) \alpha = \beta - {2 (\beta, \alpha) \over ( \alpha, \alpha) } \alpha\in \Phi ,\end{align*} which only depends on $${\mathbb{E}}$$. Note that this swaps $$\pm \alpha$$.
4. If $$\alpha, \beta \in \Phi$$ then $$\beta(h_\alpha) = {2(\beta, \alpha) \over (\alpha, \alpha)}\in {\mathbf{Z}}$$.

Thus $$\Phi$$ satisfies the axioms of a root system in $${\mathbb{E}}$$.

Recall that for $${\mathfrak{sl}}_3({\mathbf{C}})$$, $$\kappa(x,y) = 6 \operatorname{Trace}(xy)$$. Taking the standard basis $$\left\{{v_i}\right\} \mathrel{\vcenter{:}}=\left\{{x_i, h_i, y_i \mathrel{\vcenter{:}}= x_i^t}\right\}$$, the matrix $$\operatorname{Trace}(v_i v_j)$$ is of the form \begin{align*} { \begin{bmatrix} {0} & {0} & {I} \\ {0} & {A} & {0} \\ {I} & {0} & {0} \end{bmatrix} }\qquad A \mathrel{\vcenter{:}}={ \begin{bmatrix} {2} & {-1} \\ {-1} & {2} \end{bmatrix} } .\end{align*} This is far from the matrix of an inner product, but the middle block corresponds to the form restricted to $$H$$, which is positive definite. One can quickly check this is positive definite by checking positivity of the upper-left $$k\times k$$ minors, which here yields $$\operatorname{det}(2) = 2, \operatorname{det}A = 4-1 = 3$$.
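
The block structure can be verified directly. A sketch (not from the lecture) computing the matrix $$[\operatorname{Trace}(v_iv_j)]$$ for the standard basis of $${\mathfrak{sl}}_3$$, ordered as $$\{x_1,x_2,x_3,h_1,h_2,y_1,y_2,y_3\}$$:

```python
import numpy as np

def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

# standard basis of sl_3(C): positive root vectors, Cartan part, negatives
xs = [E(0, 1), E(1, 2), E(0, 2)]
hs = [E(0, 0) - E(1, 1), E(1, 1) - E(2, 2)]
ys = [x.T for x in xs]
basis = xs + hs + ys

T = np.array([[np.trace(u @ v) for v in basis] for u in basis])

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
assert np.allclose(T[0:3, 5:8], np.eye(3))   # the off-diagonal I blocks
assert np.allclose(T[5:8, 0:3], np.eye(3))
assert np.allclose(T[3:5, 3:5], A)           # the middle block on H
assert np.allclose(T[0:3, 0:3], 0)           # x's pair to zero with x's
assert np.allclose(T[5:8, 5:8], 0)
# A is positive definite: leading principal minors 2 and 3
assert round(np.linalg.det(A[:1, :1])) == 2 and round(np.linalg.det(A)) == 3
```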

## 20.3 Ch. 9, Axiomatics. 9.1: Reflections in a Euclidean Space

Let $${\mathbb{E}}$$ be a fixed real finite-dimensional Euclidean space with inner product $$(\alpha, \beta)$$, we consider property (c) from the previous theorem: \begin{align*} \beta - {2( \beta, \alpha) \over (\alpha, \alpha)} \alpha \in \Phi \qquad\forall \alpha, \beta\in \Phi .\end{align*}

A reflection in $${\mathbb{E}}$$ is an invertible linear map on an $$n{\hbox{-}}$$dimensional Euclidean space that fixes pointwise a hyperplane $$P$$ (of dimension $$n-1$$) and sending any vector $$v\perp P$$ to $$-v$$:

If $$\sigma$$ is a reflection sending $$\alpha\mapsto - \alpha$$, then \begin{align*} \sigma_\alpha(\beta) = \beta - {2( \beta, \alpha) \over (\alpha, \alpha)} \alpha \qquad \forall \beta\in {\mathbb{E}} .\end{align*} One can check that $$\sigma_\alpha^2 = \operatorname{id}$$. Some notes on notation:

• Humphreys writes $${\left\langle { \beta},~{ \alpha} \right\rangle} \mathrel{\vcenter{:}}={2 ( \beta, \alpha) \over (\alpha, \alpha)}$$. This is linear in $$\beta$$ but not in $$\alpha$$!
• More modern: $$(\beta, \alpha {}^{ \vee }) \mathrel{\vcenter{:}}={\left\langle { \beta},~{\alpha} \right\rangle}$$ where $$\alpha {}^{ \vee }\mathrel{\vcenter{:}}={2\alpha\over (\alpha, \alpha)}$$ corresponds to $$h_\alpha$$.
• Modern notation for the map: $$s_\alpha$$ instead of $$\sigma_\alpha$$.
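
A minimal numerical sketch (not from the lecture) of the reflection formula, checking that $$\sigma_\alpha$$ negates $$\alpha$$, fixes the hyperplane $$P_\alpha$$ pointwise, and squares to the identity:

```python
import numpy as np

def reflect(beta, alpha):
    """s_alpha(beta) = beta - 2(beta, alpha)/(alpha, alpha) * alpha."""
    beta, alpha = np.asarray(beta, float), np.asarray(alpha, float)
    return beta - 2 * beta.dot(alpha) / alpha.dot(alpha) * alpha

alpha = np.array([1.0, 2.0])
# s_alpha negates alpha ...
assert np.allclose(reflect(alpha, alpha), -alpha)
# ... fixes P_alpha = the hyperplane orthogonal to alpha, pointwise ...
p = np.array([2.0, -1.0])            # p is orthogonal to alpha
assert np.allclose(reflect(p, alpha), p)
# ... and squares to the identity
v = np.array([3.0, 5.0])
assert np.allclose(reflect(reflect(v, alpha), alpha), v)
```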

# 21 Wednesday, October 05

## 21.1 Reflections in $${\mathbb{E}}^n$$

Recall the formula \begin{align*} s_\alpha( \lambda) = \lambda- (\lambda, \alpha {}^{ \vee })\alpha, \qquad \alpha {}^{ \vee }\mathrel{\vcenter{:}}={2\alpha\over (\alpha, \alpha)}, \alpha\neq 0 ,\end{align*} which is a reflection through the hyperplane $$P_\alpha\mathrel{\vcenter{:}}=\alpha^\perp$$:

Let $$\Phi \subseteq {\mathbb{E}}$$ be a set that spans $${\mathbb{E}}$$, and suppose all of the reflections $$s_\alpha$$ for $$\alpha \in \Phi$$ leave $$\Phi$$ invariant. If $$\sigma\in \operatorname{GL}({\mathbb{E}})$$ leaves $$\Phi$$ invariant, fixes a hyperplane $$P$$ pointwise, and sends some $$\alpha\in \Phi\setminus\left\{{0}\right\}$$ to $$-\alpha$$, then $$\sigma = s_\alpha$$ and $$P = P_\alpha$$.

Let $$\tau = \sigma s_ \alpha =\sigma s_{ \alpha}^{-1}\in \operatorname{GL}({\mathbb{E}})$$, noting that every $$s_\alpha$$ has order 2. Then $$\tau( \Phi) = \Phi$$ and $$\tau( \alpha) = \alpha$$, so $$\tau$$ acts as the identity on the subspace $${\mathbf{R}}\alpha$$ and the quotient space $${\mathbb{E}}/{\mathbf{R}}\alpha$$, since there are two decompositions $${\mathbb{E}}= P_ \alpha \oplus {\mathbf{R}}\alpha = P \oplus {\mathbf{R}}\alpha$$ using $$s_\alpha$$ and $$\sigma$$ respectively. So $$\tau - \operatorname{id}$$ acts as zero on $${\mathbb{E}}/{\mathbf{R}}\alpha$$, and so maps $${\mathbb{E}}$$ into $${\mathbf{R}}\alpha$$ and $${\mathbf{R}}\alpha$$ to zero, so $$(\tau - \operatorname{id})^2 = 0$$ on $${\mathbb{E}}$$ and its minimal polynomial $$m_\tau(t)$$ divides $$f(t) \mathrel{\vcenter{:}}=(t-1)^2$$.

Note that $$\Phi$$ is finite, so the vectors $$\beta, \tau \beta, \tau^2 \beta, \tau^3 \beta,\cdots$$ cannot all be distinct. Since $$\tau$$ is invertible, for each $$\beta$$ there is some $$k$$ with $$\tau^k \beta = \beta$$. Taking the least common multiple of all such $$k$$ yields a uniform $$k$$ that works for all $$\beta$$ simultaneously, so $$\tau^k \beta = \beta$$ for all $$\beta \in \Phi$$. Since $${\mathbf{R}}\Phi = {\mathbb{E}}$$, $$\tau^k$$ acts as $$\operatorname{id}$$ on all of $${\mathbb{E}}$$, so $$\tau^k - 1 = 0$$ and $$m_\tau(t) \divides t^k - 1$$. Therefore $$m_\tau(t) \divides \gcd( (t-1)^2, t^k-1 ) = t-1$$, forcing $$\tau = \operatorname{id}$$, so $$\sigma = s_ \alpha$$ and $$P = P_\alpha$$.

## 21.2 Abstract root systems

A subset $$\Phi \subseteq {\mathbb{E}}$$ of a real Euclidean space is a root system iff

• R1: $${\sharp}\Phi < \infty$$, $${\mathbf{R}}\Phi = {\mathbb{E}}$$, and $$0\not\in \Phi$$,
• R2: $$\alpha\in \Phi \implies -\alpha\in \Phi$$ and no other scalar multiples of $$\alpha$$ are in $$\Phi$$,
• R3: If $$\alpha\in \Phi$$ then $$s_\alpha( \Phi) = \Phi$$,
• R4: If $$\alpha, \beta\in \Phi$$ then $$(\beta, \alpha {}^{ \vee }) = {2(\beta, \alpha) \over ( \alpha, \alpha) } \in {\mathbf{Z}}$$.

Notably, $$\beta - s_\alpha(\beta) = (\beta, \alpha {}^{ \vee })\alpha$$ is an integer multiple of $$\alpha$$:

The Weyl group $$W$$ associated to a root system $$\Phi$$ is the subgroup $$\left\langle{s_\alpha, \alpha\in \Phi}\right\rangle \leq \operatorname{GL}({\mathbb{E}})$$.

Note that $${\sharp}W < \infty$$: $$W$$ permutes $$\Phi$$ by (R3), so there is an injective group morphism $$W \hookrightarrow\mathrm{Perm}(\Phi)$$, which is a finite group – this is injective because if $$w\curvearrowright\Phi$$ as $$\operatorname{id}$$, since $${\mathbf{R}}\Phi = {\mathbb{E}}$$, by linearity $$w\curvearrowright{\mathbb{E}}$$ by $$\operatorname{id}$$ and $$w=\operatorname{id}$$. Recalling that $$s_ \alpha( \lambda) = \lambda- (\lambda, \alpha {}^{ \vee }) \alpha$$, we have $$(s_ \alpha(\lambda), s_ \alpha(\mu)) = ( \lambda, \mu)$$ for all $$\lambda, \mu \in {\mathbb{E}}$$. So in fact $$W \leq {\operatorname{O}}({\mathbb{E}}) \leq \operatorname{GL}({\mathbb{E}})$$, whose elements have determinant $$\pm 1$$ – in particular, $$\operatorname{det}s_\alpha = -1$$ since it can be written as a block matrix $$\operatorname{diag}(1, 1, \cdots, 1, -1)$$ by choosing a basis for $$P_\alpha$$ and extending it by $$\alpha$$.

Note that one can classify finite subgroups of $${\operatorname{SO}}_n$$.

Let $$\Phi = \left\{{ {\varepsilon}_i - {\varepsilon}_j {~\mathrel{\Big\vert}~}1\leq i,j \leq n+1, i\neq j}\right\}$$ be a root system of type $$A_n$$ where $$\left\{{{\varepsilon}_i}\right\}$$ form the standard basis of $${\mathbf{R}}^{n+1}$$ with the standard inner product, so $$({\varepsilon}_i, {\varepsilon}_j) = \delta_{ij}$$. One can compute \begin{align*} s_{{\varepsilon}_i - {\varepsilon}_j}({\varepsilon}_k) = {\varepsilon}_k - {2 ({\varepsilon}_k, {\varepsilon}_i - {\varepsilon}_j) \over ({\varepsilon}_i - {\varepsilon}_j, {\varepsilon}_i - {\varepsilon}_j)}({\varepsilon}_i - {\varepsilon}_j) = {\varepsilon}_k - ({\varepsilon}_k, {\varepsilon}_i - {\varepsilon}_j)({\varepsilon}_i - {\varepsilon}_j) = \begin{cases} {\varepsilon}_j & k=i \\ {\varepsilon}_i & k=j \\ {\varepsilon}_k & \text{otherwise}. \end{cases} = {\varepsilon}_{(ij).k} \end{align*} where $$(ij) \in S_{n+1}$$ is a transposition, acting as a function on the index $$k$$. Thus there is a well-defined group morphism \begin{align*} W &\to S_{n+1} \\ s_{{\varepsilon}_i - {\varepsilon}_j} &\mapsto (ij) .\end{align*} This is injective since $$w$$ acting by the identity on every $${\varepsilon}_k$$ implies acting by the identity on all of $${\mathbb{E}}$$ by linearity, and surjective since transpositions generate $$S_{n+1}$$. So $$W\cong S_{n+1}$$, and $$A_n$$ corresponds to $${\mathfrak{sl}}_{n+1}({\mathbf{C}})$$ using that \begin{align*} [h, e_{ij}] = (h_i - h_j) e_{ij} = ({\varepsilon}_i - {\varepsilon}_j)(h) e_{ij} .\end{align*} In $$G = {\operatorname{SL}}_{n+1}({\mathbf{C}})$$ the Weyl group can be realized as $$N_G(T)/C_G(T)$$ for $$T$$ a maximal torus.
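
The identification $$W(A_2)\cong S_3$$ can be verified by brute force. A sketch (not from the lecture) checking that $$s_{{\varepsilon}_i - {\varepsilon}_j}$$ is the coordinate swap $$(ij)$$ and that the generated group has order $$3! = 6$$:

```python
import numpy as np

n = 2   # type A_2, ambient space R^{n+1} = R^3

def s(i, j):
    """Matrix of s_{eps_i - eps_j} on R^{n+1}: the coordinate swap (i j)."""
    m = np.eye(n + 1)
    m[[i, j]] = m[[j, i]]                        # swap rows i and j
    a = np.zeros(n + 1)
    a[i], a[j] = 1, -1                           # a = eps_i - eps_j
    refl = np.eye(n + 1) - 2 * np.outer(a, a) / a.dot(a)
    assert np.allclose(refl, m)                  # reflection = swap matrix
    return m

# generate the Weyl group by closing the generators under multiplication;
# entries are exact 0/1 floats, so tobytes() is a reliable dictionary key
gens = [s(i, j) for i in range(n + 1) for j in range(i + 1, n + 1)]
W = {m.tobytes(): m for m in gens + [np.eye(n + 1)]}
while True:
    new = {(a @ b).tobytes(): a @ b for a in W.values() for b in W.values()}
    if set(new) <= set(W):
        break
    W.update(new)
assert len(W) == 6                               # |W(A_2)| = |S_3| = 3!
```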

What are the Weyl groups of other classical types?

Let $$\Phi \subseteq {\mathbb{E}}$$ be a root system. If $$\sigma\in \operatorname{GL}({\mathbb{E}})$$ leaves $$\Phi$$ invariant, then for all $$\alpha\in \Phi$$, \begin{align*} \sigma s_{ \alpha} \sigma^{-1} = s_{ \sigma( \alpha)}, \qquad (\beta, \alpha {}^{ \vee }) = (\sigma( \beta), \sigma(\alpha) {}^{ \vee }) \,\,\forall \alpha, \beta\in \Phi .\end{align*} Thus conjugating a reflection yields another reflection.

Note that $$\sigma s_ \alpha \sigma^{-1}$$ sends $$\sigma( \alpha)$$ to its negative and fixes pointwise the hyperplane $$\sigma(P_\alpha)$$. If $$\beta \in \Phi$$ then $$s_{ \alpha}( \beta) \in \Phi$$, so $$\sigma s_ \alpha ( \beta) \in \Phi$$ and \begin{align*} (\sigma s_ \alpha \sigma^{-1}) ( \sigma( \beta)) = \sigma s_ \alpha(\beta) \in \sigma\Phi ,\end{align*} so $$\sigma s_ \alpha \sigma^{-1}$$ leaves invariant the set $$\left\{{ \sigma( \beta) {~\mathrel{\Big\vert}~}\beta\in \Phi}\right\} = \Phi$$. By the previous lemma, it must equal $$s_{ \sigma( \alpha)}$$, and so \begin{align*} ( \sigma( \beta), \sigma( \alpha) {}^{ \vee }) = (\beta, \alpha {}^{ \vee }) \end{align*} by applying both sides to $$\sigma(\beta)$$.

This does not imply that $$(\sigma( \beta), \sigma( \alpha) ) = (\beta, \alpha)$$! With the duals/checks, this bracket involves a ratio, which is preserved, but the individual round brackets are not.

# 22 Friday, October 07

Let $$\Phi \subseteq {\mathbb{E}}$$ be a root system with Weyl group $$W$$. If $$\sigma\in \operatorname{GL}({\mathbb{E}})$$ leaves $$\Phi$$ invariant then \begin{align*} \sigma s_{\alpha} \sigma^{-1}= s_{ \sigma( \alpha)} \qquad\forall \alpha\in \Phi \end{align*} and \begin{align*} ( \beta, \alpha {}^{ \vee }) = ( \sigma(\beta), \sigma(\alpha) {}^{ \vee }) \qquad \forall \alpha, \beta \in \Phi .\end{align*}

In general \begin{align*} ( \sigma( \beta), \sigma( \alpha) ) \neq (\beta, \alpha) ,\end{align*} i.e. the $$({-}) {}^{ \vee }$$ is important here since it involves a ratio. Without the ratio, one can easily scale to make these unequal.

Two root systems $$\Phi \subseteq {\mathbb{E}}, \Phi' \subseteq {\mathbb{E}}'$$ are isomorphic iff there exists an isomorphism $$\phi: {\mathbb{E}}\to {\mathbb{E}}'$$ of vector spaces with $$\phi(\Phi) = \Phi'$$ such that \begin{align*} (\varphi( \beta), \varphi(\alpha) {}^{ \vee }) = (\beta, \alpha {}^{ \vee }) \mathrel{\vcenter{:}}={2 (\beta, \alpha) \over (\alpha, \alpha)} \qquad\forall \alpha, \beta \in \Phi .\end{align*}

One can scale a root system to get an isomorphism:

Note that if $$\phi: \Phi { \, \xrightarrow{\sim}\, }\Phi'$$ is an isomorphism, then \begin{align*} \varphi(s_{ \alpha}( \beta)) = s_{ \varphi( \alpha)}( \varphi(\beta)) \qquad \forall \alpha, \beta\in \Phi \implies \varphi \circ s_{ \alpha} \circ \varphi^{-1}= s_{ \varphi( \alpha)} .\end{align*} So $$\phi$$ induces an isomorphism of Weyl groups \begin{align*} W & { \, \xrightarrow{\sim}\, }W' \\ s_{\alpha} &\mapsto s_{ \varphi( \alpha)} .\end{align*}

By the lemma, an automorphism of $$\Phi$$ is the same as an automorphism of $${\mathbb{E}}$$ leaving $$\Phi$$ invariant. In particular, $$W\hookrightarrow\mathop{\mathrm{Aut}}( \Phi)$$.

If $$\Phi \subseteq {\mathbb{E}}$$ is a root system then the dual root system is \begin{align*} \Phi {}^{ \vee }\mathrel{\vcenter{:}}=\left\{{ \alpha {}^{ \vee }{~\mathrel{\Big\vert}~}\alpha\in \Phi}\right\}, \qquad \alpha {}^{ \vee }\mathrel{\vcenter{:}}={2\alpha\over (\alpha, \alpha)} .\end{align*}

Show that $$\Phi {}^{ \vee }$$ is again a root system in $${\mathbb{E}}$$.

One can show $$W( \Phi) = W( \Phi {}^{ \vee })$$ and $${\left\langle {\lambda},~{ \alpha {}^{ \vee }} \right\rangle} \alpha {}^{ \vee }= {\left\langle { \lambda},~{ \alpha} \right\rangle} \alpha$$ for all $$\alpha\in \Phi, \lambda\in {\mathbb{E}}$$, so $$s_{\alpha {}^{ \vee }} = s_{\alpha}$$ as linear maps on $${\mathbb{E}}$$.
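
For type $$B_2$$ one can check numerically that $$\Phi^\vee$$ satisfies (R3) and (R4) and that $$s_{\alpha^\vee} = s_\alpha$$. A sketch (not from the lecture; here $$\Phi^\vee$$ turns out to be the $$C_2$$ system):

```python
import numpy as np

roots = [np.array(v, float) for v in
         [(1,0), (-1,0), (0,1), (0,-1), (1,1), (1,-1), (-1,1), (-1,-1)]]

def coroot(a):
    return 2 * a / a.dot(a)

def reflect(b, a):
    return b - 2 * b.dot(a) / a.dot(a) * a

def contains(S, v):
    return any(np.allclose(u, v) for u in S)

duals = [coroot(a) for a in roots]

for a in duals:
    # R3: each reflection s_a permutes the dual system
    assert all(contains(duals, reflect(b, a)) for b in duals)
    # R4: all pairings <b, a> are integers
    assert all(float(2 * b.dot(a) / a.dot(a)).is_integer() for b in duals)

# s_{coroot(a)} = s_a as linear maps: check on the standard basis
for a in roots:
    for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        assert np.allclose(reflect(e, coroot(a)), reflect(e, a))
```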

## 22.1 9.3: Example(s)

Let $$\Phi \subseteq {\mathbb{E}}$$ be a root system, then $$\ell \mathrel{\vcenter{:}}=\dim_{\mathbf{R}}{\mathbb{E}}$$ is the rank of $$\Phi$$.

Rank 1 root systems are given by a choice of a single nonzero vector $$\alpha$$, yielding $$\Phi = \left\{{\pm\alpha}\right\}$$:

Recall $${2( \beta, \alpha) \over (\alpha, \alpha)} \in {\mathbf{Z}}$$, and from linear algebra, $$( v, w ) = {\left\lVert {v} \right\rVert} \cdot {\left\lVert {w} \right\rVert} \cos( \theta)$$ and $${\left\lVert {\alpha} \right\rVert}^2 = ( \alpha, \alpha)$$. We can thus write \begin{align*} {\left\langle { \beta},~{ \alpha} \right\rangle} = {2( \beta, \alpha) \over (\alpha, \alpha)} = 2{{\left\lVert {\beta} \right\rVert}\over {\left\lVert {\alpha} \right\rVert}} \cos (\theta), \qquad {\left\langle {\alpha },~{\beta} \right\rangle}= 2{{\left\lVert {\alpha} \right\rVert}\over {\left\lVert {\beta} \right\rVert}} \cos( \theta) ,\end{align*} and so \begin{align*} L_{\alpha, \beta}\mathrel{\vcenter{:}}={\left\langle {\alpha },~{\beta} \right\rangle}{\left\langle {\beta },~{\alpha } \right\rangle}= 4\cos^2( \theta) ,\end{align*} noting that $${\left\langle {\alpha },~{\beta} \right\rangle}$$ and $${\left\langle {\beta },~{\alpha} \right\rangle}$$ are integers of the same sign. If positive, $$\theta$$ is acute; if negative, obtuse. This massively restricts what the angles can be, since $$0 \leq \cos^2( \theta) \leq 1$$.

First, an easy case: suppose $$L_{ \alpha, \beta} = 4$$, so $$\cos^2( \theta) = 1\implies \cos( \theta) = \pm 1\implies \theta= 0, \pi$$.

• If $$0$$, then $$\alpha,\beta$$ are in the same 1-dimensional subspace and thus $$\beta = \alpha$$. In this case, $${\left\langle {\beta },~{\alpha } \right\rangle}= 2 = {\left\langle {\alpha },~{\beta} \right\rangle}$$.
• If $$\pi$$, then $$\alpha = - \beta$$. Here $${\left\langle {\beta },~{\alpha } \right\rangle}= -2$$.

So assume $$\beta\neq \pm \alpha$$, and without loss of generality $${\left\lVert {\beta} \right\rVert}\geq {\left\lVert {\alpha} \right\rVert}$$, or equivalently $${\left\langle {\alpha },~{\beta } \right\rangle}\leq {\left\langle {\beta },~{\alpha} \right\rangle}$$. Note that if $${\left\langle {\alpha },~{\beta} \right\rangle}\neq 0$$ then \begin{align*} { {\left\langle {\beta },~{\alpha} \right\rangle}\over {\left\langle {\alpha },~{\beta} \right\rangle}} = {{\left\lVert { \beta} \right\rVert}^2 \over {\left\lVert { \alpha} \right\rVert}^2} .\end{align*}

The other possibilities are as follows:

| $${\left\langle {\alpha},~{\beta} \right\rangle}$$ | $${\left\langle {\beta},~{\alpha} \right\rangle}$$ | $$\theta$$ | $${\left\lVert {\beta} \right\rVert}^2/{\left\lVert {\alpha} \right\rVert}^2$$ |
|---|---|---|---|
| $$0$$ | $$0$$ | $$\pi/2$$ | undetermined |
| $$1$$ | $$1$$ | $$\pi/3$$ | $$1$$ |
| $$-1$$ | $$-1$$ | $$2\pi/3$$ | $$1$$ |
| $$1$$ | $$2$$ | $$\pi/4$$ | $$2$$ |
| $$-1$$ | $$-2$$ | $$3\pi/4$$ | $$2$$ |
| $$1$$ | $$3$$ | $$\pi/6$$ | $$3$$ |
| $$-1$$ | $$-3$$ | $$5\pi/6$$ | $$3$$ |
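
The seven remaining cases can be re-derived mechanically by enumerating integer pairs $$({\left\langle {\alpha},~{\beta} \right\rangle}, {\left\langle {\beta},~{\alpha} \right\rangle})$$ of the same sign with product $$4\cos^2\theta \in \{0,1,2,3\}$$; a sketch (not from the lecture), with angles reported in degrees:

```python
import math

# pairs (<a,b>, <b,a>): integers of the same sign, product 4cos^2(theta) in
# {0,1,2,3}, normalized so |<a,b>| <= |<b,a>| (i.e. ||a|| <= ||b||); the
# excluded product 4 corresponds to beta = ±alpha
rows = []
for m in range(-3, 4):
    for n in range(-3, 4):
        p = m * n
        if p not in (0, 1, 2, 3):
            continue
        if (m == 0) != (n == 0) or (m >= 0) != (n >= 0) or abs(m) > abs(n):
            continue
        cos = math.copysign(math.sqrt(p) / 2, m if m else 1)
        theta = round(math.degrees(math.acos(cos)))
        ratio = n / m if m else None    # ||b||^2/||a||^2, undetermined if 0
        rows.append((m, n, theta, ratio))

# exactly the 7 rows of the table: 90, 60, 120, 45, 135, 30, 150 degrees
assert len(rows) == 7
assert sorted(r[2] for r in rows) == [30, 45, 60, 90, 120, 135, 150]
```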

Cases for the norm ratios:

• $$1: A_2$$
• $$2: B_2 = C_2$$
• $$3: G_2$$

These are the only three irreducible rank 2 root systems.

Let $$\alpha, \beta\in\Phi$$ lie in distinct linear subspaces of $${\mathbb{E}}$$. Then

1. If $$(\alpha, \beta) > 0$$, i.e. their angle is strictly acute, then $$\alpha - \beta$$ is a root.
2. If $$(\alpha, \beta) < 0$$ then $$\alpha + \beta$$ is a root.

Note that (2) follows from (1) by replacing $$\beta$$ with $$-\beta$$. Assume $$(\alpha, \beta) > 0$$, then by the chart $${\left\langle {\alpha },~{\beta } \right\rangle}=1$$ or $${\left\langle {\beta },~{\alpha } \right\rangle}= 1$$. In the former case, \begin{align*} \Phi\ni s_{ \beta}( \alpha) = \alpha - {\left\langle {\alpha },~{\beta } \right\rangle}\beta = \alpha- \beta .\end{align*} In the latter, \begin{align*} s_{ \alpha}(\beta) = \beta- \alpha \in \Phi\implies - (\beta- \alpha) = \alpha- \beta\in \Phi .\end{align*}
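
The lemma can be checked exhaustively on a small root system; a sketch (not from the lecture) over type $$B_2$$:

```python
import numpy as np

# the type B2 root system
roots = [np.array(v, float) for v in
         [(1,0), (-1,0), (0,1), (0,-1), (1,1), (1,-1), (-1,1), (-1,-1)]]

def in_phi(v):
    return any(np.allclose(u, v) for u in roots)

for a in roots:
    for b in roots:
        if np.allclose(b, a) or np.allclose(b, -a):
            continue                     # the lemma assumes beta != ±alpha
        if a.dot(b) > 0:                 # strictly acute angle
            assert in_phi(a - b)         # (1): the difference is a root
        elif a.dot(b) < 0:               # strictly obtuse angle
            assert in_phi(a + b)         # (2): the sum is a root
```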

Suppose $$\operatorname{rank}( \Phi) = 2$$. Letting $$\alpha\in \Phi$$ be a root of shortest length, since $${\mathbf{R}}\Phi = {\mathbb{E}}$$ there is some $$\beta \in \Phi$$ not equal to $$\pm \alpha$$. Without loss of generality assume $$\angle_{\alpha, \beta}$$ is obtuse by replacing $$\beta$$ with $$-\beta$$ if necessary:

Also choose $$\beta$$ such that $$\angle_{ \alpha, \beta}$$ is maximal.

Case 0: If $$\theta = \pi/2$$, one gets $${\mathbf{A}}_1\times {\mathbf{A}}_1$$:

We’ll continue this next time.

# 23 Monday, October 10

## 23.1 Classification of Rank 2 Root Systems

If $$\beta\neq \pm \alpha$$,

• $$(\alpha, \beta) > 0 \implies \alpha - \beta\in \Phi$$
• $$(\alpha, \beta) < 0 \implies \alpha + \beta\in \Phi$$

Rank 2 root systems: let $$\alpha$$ be a root of shortest length, and $$\beta$$ a root with angle $$\theta$$ between $$\alpha,\beta$$ with $$\theta \geq \pi/2$$ as large as possible.

• If $$\theta = \pi/2$$: $$A_1 \times A_1$$.