--- title: "Problem Sets: Lie Algebras" subtitle: "Problem Set 1" author: - name: D. Zack Garza affiliation: University of Georgia email: dzackgarza@gmail.com date: Fall 2022 order: 1 --- # Problem Set 1 ## Section 1 :::{.problem title="Humphreys 1.1"} Let $L$ be the real vector space $\mathbf{R}^{3}$. Define $[x y]=x \times y$ (cross product of vectors) for $x, y \in L$, and verify that $L$ is a Lie algebra. Write down the structure constants relative to the usual basis of $\mathbf{R}^{3}$. ::: :::{.solution} It suffices to check the 3 axioms given in class that define a Lie algebra: - **L1 (Bilinearity)**: This can be quickly seen from the formula \[ a\cross b = \norm{a}\cdot \norm{b}\sin(\theta_{ab}) \hat{n}_{ab} \] where $\hat{n}_{ab}$ is the vector orthogonal to both $a$ and $b$ given by the right-hand rule. The result follows readily from a direct computation: \[ (ra)\cross (tb) &= \norm{ra} \cdot \norm{tb} \sin(\theta_{ra, tb}) \hat{n}_{ra, tb} \\ &= (rt) \norm{a} \cdot \norm{b} \sin(\theta_{a, b}) \hat{n}_{a, b} \\ &= (rt)\qty{a\cross b} ,\] where we've used the fact that the angle between $a$ and $b$ is the same as the angle between any of their scalar multiples, as is their normal. - **L2**: that $a\times a = 0$ readily follows from the same formula, since $\sin( \theta_{a, a}) = \sin(0) = 0$. - **L3 (The Jacobi identity)**: This is most easily seen from the "BAC - CAB" formula, \[ a\times (b\times c) = b\inner{a}{c} - c\inner{a}{b} .\] We proceed by expanding the Jacobi expression: \[ a\times(b\times c) + c\times (a\times b) + b\times (c\times a) &= {\color{blue} b\inner{a}{c} } - {\color{red} c\inner{a}{b} }\\ &\quad + {\color{green} a \inner{ c }{ b } } - {\color{blue} b \inner{ c }{ a } } \\ &\quad + {\color{red} c \inner{ a }{ b } } - {\color{green} a \inner{ b }{ c } } \\ &= 0 .\] For the structure constants, let $\ts{e_1, e_2, e_3}$ be the standard Euclidean basis for $\RR^3$; we can then write $e_i\times e_j = \sum_{k=1}^3 c_{ij}^k e_k$ and we would like to determine the $c_{ij}^k$. One can compute the following multiplication table: | $e_i\cross e_j$ | $e_1$ | $e_2$ | $e_3$ | |----------------- |-------- |-------- |-------- | | $e_1$ | $0$ | $e_3$ | $-e_2$ | | $e_2$ | $-e_3$ | $0$ | $e_1$ | | $e_3$ | $e_2$ | $-e_1$ | $0$ | Thus the structure constants are given by the antisymmetric Levi-Cevita symbol, \[ c_{ij}^k = \eps^{ijk} \da \begin{cases} 0 & \text{if any index $i,j,k$ is repeated} \\ \sgn \sigma_{ijk} & \text{otherwise}, \end{cases} \] where $\sigma_{ijk} \in S_3$ is the permutation associated to $(i, j, k)$ in cycle notation and $\sgn \sigma$ is the sign homomorphism. :::{.remark} An example to demonstrate how the Levi-Cevita symbol works: \[ e_1\times e_2 = c_{12}^1 e_1 + c_{12}^2 e_2 + c_{12}^3 e_3 = 0 e_1 + 0 e_2 + 1e_3 \] since the first two terms have a repeated index and \[ c_{12}^3 = \eps_{1,2,3} = \sgn(123) = \sgn(12)(23) = (-1)^2 = 1 \] using that $\sgn \sigma = (-1)^m$ where $m$ is the number of transpositions in $\sigma$. ::: ::: :::{.problem title="Humphreys 1.6"} Let $x \in \liegl_n(\FF)$ have $n$ distinct eigenvalues $a_{1}, \ldots, a_{n}$ in $\FF$. Prove that the eigenvalues of $\ad_x$ are precisely the $n^{2}$ scalars $a_{i}-a_{j}$ for $1 \leq i, j \leq n$, which of course need not be distinct. ::: :::{.solution} For a fixed $n$, let $e_{ij} \in \liegl_n(\FF)$ be the matrix with a 1 in the $(i, j)$ position and zeros elsewhere. 
We will use the following fact:
\[
e_{ij} e_{kl} = \delta_{jk} e_{il}
,\]
where $\delta_{jk} = 1 \iff j=k$, which implies that
\[
[e_{ij} e_{kl} ] = e_{ij}e_{kl} - e_{kl}e_{ij} = \delta_{jk} e_{il} - \delta_{li}e_{kj}
.\]

Suppose without loss of generality[^diag_wlog] that $x$ is diagonal and of the form $x = \diag(a_1, a_2, \cdots, a_n)$.
Then each $e_{ij}$ is an eigenvector for left multiplication by $x$, since a direct check via matrix multiplication shows $xe_{ij} = a_i e_{ij}$.

We claim that every $e_{ij}$ is an eigenvector of $\ad_x$ with eigenvalue $a_i - a_j$.
Noting that the $e_{ij}$ also satisfy $e_{ij}x = a_j e_{ij}$, one readily computes
\[
\ad_x e_{ij} \da [x, e_{ij}] = xe_{ij} - e_{ij} x = a_i e_{ij} - a_j e_{ij} = (a_i - a_j)e_{ij}
,\]
yielding the $n^2$ eigenvalues $a_i - a_j$.
Since the $e_{ij}$ form a basis of $\liegl_n(\FF)$ consisting of eigenvectors of $\ad_x$, the matrix of $\ad_x$ in this basis is diagonal (of size $n^2\times n^2$), so these scalars exhaust all possible eigenvalues.

[^diag_wlog]: If $x$ is not diagonal, one can use that $x$ is diagonalizable over $\FF$ since $x$ has $n$ distinct eigenvalues in $\FF$.
So one can reduce to the diagonal case by a change-of-basis of $\FF^n$ that diagonalizes $x$; conjugation by the change-of-basis matrix is an automorphism of $\liegl_n(\FF)$ carrying $\ad_x$ to the $\ad$ of the diagonalized matrix, so the eigenvalues of $\ad_x$ are unchanged.
:::

:::{.problem title="Humphreys 1.9, one Lie type only"}
When $\characteristic \FF =0$, show that each classical algebra $L=\mathrm{A}_{\ell}, \mathrm{B}_{\ell}, \mathrm{C}_{\ell}$, or $\mathrm{D}_{\ell}$ is equal to $[L L]$.
(This shows again that each algebra consists of trace 0 matrices.)
:::

:::{.solution}
We check this for type $A_n$, corresponding to $L \da \liesl_{n+1}$.
Since $[LL] \subseteq L$, it suffices to show $L \subseteq [LL]$, and we can further reduce to writing every basis element of $L$ as a commutator of elements of $L$.
Note that $L$ has a standard basis given by the matrices

- $\ts{e_{ij} \st i > j}$, corresponding to $\lien^-$,
- $\ts{h_i \da e_{ii} - e_{i+1, i+1} \st 1\leq i \leq n}$, corresponding to $\lieh$, and
- $\ts{e_{ij} \st i < j}$, corresponding to $\lien^+$.

Considering the equation $[e_{ij} e_{kl} ] = \delta_{jk} e_{il} - \delta_{li}e_{kj}$, one can choose $j=k$ to preserve the first term and $l\neq i$ to kill the second term.
So letting $t, i, j$ be arbitrary with $i\neq j$, we have
\[
[e_{it} e_{tj}] = \delta_{tt} e_{ij} - \delta_{ij}e_{tt} = e_{ij}
,\]
yielding all of the off-diagonal basis elements $e_{ij}$ with $i\neq j$.
But in fact we are done, using the fact that $h_i = [e_{i, i+1}\, e_{i+1, i}]$.
:::

:::{.problem title="Humphreys 1.11"}
Verify that the commutator of two derivations of an $\FF\dash$algebra is again a derivation, whereas the ordinary product need not be.
:::

:::{.solution}
We want to show that $[\Der(L) \Der(L)] \subseteq \Der(L)$, so let $f,g\in \Der(L)$.
The result follows from a direct computation; letting $D \da [fg]$, we have
\[
D(ab) = [fg](ab)
&= (fg-gf)(ab) \\
&= fg(ab) - gf(ab) \\
&= f\qty{g(a)b + ag(b) } - g\qty{ f(a)b + af(b)} \\
&= f\qty{g(a)b} + f\qty{ag(b)} - g\qty{f(a)b} - g\qty{af(b)} \\
&= { {\color{blue} (fg)(a)b } + {\color{red} g(a)f(b)} } \\
&\quad + { {\color{red} f(a)g(b) } + {\color{green} a (fg)(b)} } \\
&\quad - { {\color{blue} (gf)(a) b } + {\color{red} f(a)g(b)} } \\
&\quad - { {\color{red} g(a)f(b) } + {\color{green} a(gf)(b)} } \\
&= {\color{blue} [fg](a) b} + {\color{green} a [fg](b) } \\
&= D(a)b + aD(b)
,\]
which is precisely the Leibniz rule, so $D$ is a derivation.

To see that ordinary products of derivations need not be derivations, consider the operators $D_x \da \dd{}{x}$ and $D_y \da \dd{}{y}$ acting on the $\RR\dash$algebra $\RR[x,y]$.
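As a quick symbolic cross-check of the explicit counterexample worked out by hand next (a minimal sketch assuming `sympy` is available; the polynomials are the ones used below):

```python
# Check that the ordinary product D_x D_y fails the Leibniz rule on f = x + y, g = xy.
import sympy as sp

x, y = sp.symbols("x y")
f, g = x + y, x * y

DxDy = lambda p: sp.diff(sp.diff(p, y), x)   # the ordinary product D_x D_y

lhs = DxDy(f * g)                        # (D_x D_y)(fg)
rhs = DxDy(f) * g + f * DxDy(g)          # what the Leibniz rule would require
print(sp.expand(lhs), sp.expand(rhs))    # 2*x + 2*y  vs  x + y
assert sp.simplify(lhs - rhs) != 0       # so D_x D_y is not a derivation
```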
Take $f(x,y) = x+y$ and $g(x,y) = xy$, so that the product is $fg = x^2 y+ xy^2$.
If $D_x D_y$ were a derivation, it would satisfy the Leibniz rule on this product; however,
\[
D_x D_y (fg) = D_x \qty{x^2 + 2xy} = 2x + 2y
,\]
while
\[
(D_x D_y f)\cdot g + f\cdot (D_x D_y g) = 0\cdot g + f\cdot 1 = x + y \neq 2x + 2y
,\]
so the ordinary product $D_x D_y$ is not a derivation.
(Note that $[D_x D_y] = 0$ since mixed partial derivatives are equal, and the zero operator is trivially a derivation, consistent with the first part.)
:::

:::{.problem title="Humphreys 1.12"}
Let $L$ be a Lie algebra over an algebraically closed field $\FF$ and let $x \in L$.
Prove that the subspace of $L$ spanned by the eigenvectors of $\ad_x$ is a subalgebra.
:::

:::{.solution}
Let $E_x \subseteq L$ be the subspace spanned by eigenvectors of $\ad_x$; it suffices to show $[E_x E_x] \subseteq E_x$.
By bilinearity of the bracket, it is enough to check this on eigenvectors, so let $y_1, y_2$ be eigenvectors with $\ad_x(y_i) = \lambda_i y_i$ for some scalars \( \lambda_i \in \FF \); we want to show $[y_1 y_2] \in E_x$.
Note that the Jacobi identity is equivalent to $\ad$ acting as a derivation with respect to the bracket, i.e.
\[
\ad_x([yz]) = [\ad_x(y) z] + [y \ad_x(z)]
,\]
which unwinds to $[x[yz]] = [[xy]z] + [y[xz]]$.
The result then follows from a direct computation:
\[
\ad_x([y_1y_2])
&= [[xy_1]y_2] + [y_1 [xy_2]] \\
&= [ \lambda_1 y_1 y_2] + [y_1 \lambda_2 y_2] \\
&= (\lambda_1 + \lambda_2)[y_1 y_2]
,\]
so $[y_1 y_2]$ is either zero or an eigenvector of $\ad_x$ with eigenvalue $\lambda_1 + \lambda_2$; in either case $[y_1 y_2] \in E_x$.
:::

## Section 2

:::{.problem title="Humphreys 2.1"}
Prove that the set of all inner derivations $\ad_x, x \in L$, is an ideal of $\Der L$.
:::

:::{.solution}
It suffices to show $[\Der(L) \Inn(L)] \subseteq \Inn(L)$, so let $f\in \Der(L)$ and $\ad_x \in \Inn(L)$.
The result follows from the following check, valid for any $l\in L$:
\[
[f\ad_x](l)
&= (f\circ \ad_x)(l) - (\ad_x \circ f)(l) \\
&= f([xl]) - [x f(l)] \\
&= [f(x) l] + [x f(l)] - [x f(l)] \\
&= [f(x) l] \\
&= \ad_{f(x)}(l), \qquad \text{and } \ad_{f(x)} \in \Inn(L)
.\]
:::

:::{.problem title="Humphreys 2.2"}
Show that $\mathfrak{s l}_n( \FF)$ is precisely the derived algebra of $\mathfrak{g l}_n( \FF)$ (cf. Exercise 1.9).
:::

:::{.solution}
We want to show $\liegl_n(\FF)^{(1)} \da [\liegl_n(\FF) \liegl_n(\FF)] = \liesl_n(\FF)$.

$\subseteq$: This is immediate from the fact that for any matrices $A$ and $B$,
\[
\tr([AB]) = \tr(AB -BA) = \tr(AB) - \tr(BA) = \tr(AB) - \tr(AB) = 0
.\]

$\supseteq$: From a previous exercise, we know that $[\liesl_n(\FF) \liesl_n(\FF)] = \liesl_n(\FF)$, and since $\liesl_n(\FF) \subseteq \liegl_n(\FF)$ we have
\[
\liesl_n(\FF) = \liesl_n(\FF)^{(1)} \subseteq \liegl_n(\FF)^{(1)}
.\]
:::

:::{.problem title="Humphreys 2.5"}
Suppose $\operatorname{dim} L=3$ and $L=[L L]$.
Prove that $L$ must be simple.
Observe first that any homomorphic image of $L$ also equals its derived algebra.
Recover the simplicity of $\mathfrak{s l}_2( \FF)$ when $\characteristic \FF \neq 2$.
:::

:::{.solution}
Suppose toward a contradiction that $I\normal L$ is a proper nonzero ideal; then $0 < \dim L/I < \dim L$ forces $\dim L/I \in \ts{1, 2}$.
Since $L\surjects L/I$, the latter is a homomorphic image of $L$ and thus $(L/I)^{(1)} = L/I$ by the hint: if $\phi: L\surjects M$, then $M^{(1)} = \phi(L)^{(1)} = \phi(L^{(1)}) = \phi(L) = M$.
Note that in particular, $L/I$ is not abelian.

We proceed by cases:

- $\dim L/I = 1$:
  In this case, $L/I = \FF x$ is generated by a single element $x$.
  Since $[xx] = 0$ in any Lie algebra, we have $(\FF x)^{(1)} = 0$, contradicting that $L/I$ is not abelian. $\contradiction$

- $\dim L/I = 2$:
  Write $L/I = \FF x + \FF y$ for linearly independent generators $x,y$, and consider the multiplication table for the bracket.

  - If $[xy] = 0$, then $L/I$ is abelian, a contradiction. $\contradiction$
  - Otherwise, without loss of generality $[xy] = x$ as described at the end of section 1.4.
    In this case, $(L/I)^{(1)} \subseteq \FF x \subsetneq L/I$, again a contradiction. $\contradiction$

So no proper nonzero ideals $I$ can exist, and since $L = [LL] \neq 0$ the algebra $L$ is not abelian, forcing $L$ to be simple.
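Before specializing to $\liesl_2$, here is a quick numerical check that the brackets of the standard basis of $\liesl_2(\RR)$ already span a $3\dash$dimensional space, i.e. $\liesl_2^{(1)} = \liesl_2$, which the next paragraph uses (a minimal sketch assuming `numpy` is available):

```python
# Check numerically that [sl_2, sl_2] spans all of sl_2 (dimension 3).
import numpy as np

X = np.array([[0., 1.], [0., 0.]])    # standard basis x, h, y of sl_2
H = np.array([[1., 0.], [0., -1.]])
Y = np.array([[0., 0.], [1., 0.]])
basis = [X, H, Y]

brackets = [A @ B - B @ A for A in basis for B in basis]
span = np.array([M.flatten() for M in brackets])    # each bracket as a vector in R^4
print(np.linalg.matrix_rank(span))                  # 3 = dim sl_2, so [L, L] = L
```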
Applying this to $L \da \liesl_2(\FF)$ with $\characteristic \FF \neq 2$: we have $\dim_\FF \liesl_2(\FF) = 2^2-1 = 3$, and from a previous exercise (which uses $\characteristic \FF \neq 2$) we know $\liesl_2(\FF)^{(1)} = \liesl_2(\FF)$, so the above argument applies and shows simplicity.
:::

:::{.problem title="Humphreys 2.10"}
Let $\sigma$ be the automorphism of $\mathfrak{s l}_2(\FF)$ defined in (2.3).
Verify that

- $\sigma(x)=-y$,
- $\sigma(y)=-x$,
- $\sigma(h)=-h$.

Note that this automorphism is defined as
\[
\sigma = \exp(\ad_x)\circ \exp(\ad_{-y}) \circ \exp(\ad_x)
.\]
:::

:::{.solution}
We recall that $\exp \ad_x(y) \da \sum_{n\geq 0}{1\over n!} \ad_x^n(y)$, where the exponent denotes an $n\dash$fold composition of operators.
To compute these power series, first note that $\ad_t(t) = 0$ for $t=x,y,h$ by axiom **L2**, so
\[
(\exp \ad_t)(t) = 1(t) + \ad_t(t) + {1\over 2} \ad_t^2(t) + \cdots = 1(t) = t
,\]
where $1$ denotes the identity operator.
It is also worth noting that if $\ad_t^n(t') = 0$ for some $n$ and some fixed $t,t'$, then $\ad_t^m(t') = 0$ for all $m \geq n$, since each successive term is obtained by bracketing with the previous one:
\[
\ad^{n+1}_t(t') = [t\, \ad_t^n(t')] = [t\, 0] = 0
.\]

We first compute some individual nontrivial terms that will appear in $\sigma$.
The first order terms are given by standard formulas, which we collect into a multiplication table for the bracket, where the entry in row $a$ and column $b$ is $[ab]$:

|      | $x$   | $h$    | $y$    |
|------|-------|--------|--------|
| $x$  | $0$   | $-2x$  | $h$    |
| $h$  | $2x$  | $0$    | $-2y$  |
| $y$  | $-h$  | $2y$   | $0$    |

We can thus read off the following:

- $\ad_x(y) = h$
- $\ad_x(h) = -2x$
- $\ad_{-y}(x) = [-y x] = [xy] = h$
- $\ad_{-y}(h) = [-yh] = [hy] = -2y$

For reference, we compute and collect higher order terms:

- $\ad_x^n(y)$:
  - $\ad_x^1(y) = h$ from above,
  - $\ad_x^2(y) = \ad_x([xy]) = \ad_x(h) = [xh] = -[hx] = -2x$,
  - $\ad_x^3(y) = \ad_x(-2x) = 0$, so $\ad_x^{\geq 3}(y) = 0$.
- $\ad_x^n(h)$:
  - $\ad_x^1(h) = -2x$ from above,
  - $\ad_x^2(h) = \ad_x(-2x) = 0$, so $\ad_x^{\geq 2}(h) = 0$.
- $\ad_{-y}^n(x)$:
  - $\ad_{-y}^1(x) = h$ from above,
  - $\ad_{-y}^2(x) = \ad_{-y}(h) = [-yh] = [hy] = -2y$,
  - $\ad_{-y}^3(x) = \ad_{-y}(-2y) = 0$, so $\ad_{-y}^{\geq 3}(x) = 0$.
- $\ad_{-y}^n(h)$:
  - $\ad_{-y}^1(h) = -2y$ from above, and so $\ad_{-y}^{\geq 2}(h) = 0$.

Finally, we can compute the individual terms of $\sigma$:
\[
(\exp \ad_x)(x) &= x \\ \\
(\exp \ad_x)(h) &= 1(h) + \ad_x(h) \\
&= h + (-2x) \\
&= h-2x \\ \\
(\exp \ad_x)(y) &= 1(y) + \ad_x(y) + {1\over 2}\ad_x^2(y) \\
&= y + h + {1\over 2}(-2x) \\
&= y+h-x \\ \\
(\exp \ad_{-y})(x) &= 1(x) + \ad_{-y}(x) + {1\over 2} \ad^2_{-y}(x) \\
&= x + h +{1\over 2}(-2y) \\
&= x+h-y \\ \\
(\exp \ad_{-y})(h) &= 1(h) + \ad_{-y}(h) \\
&= h - 2y \\ \\
(\exp \ad_{-y})(y) &= y
,\]
and assembling everything together yields
\[
\sigma(x) &= (\exp \ad_x \circ \exp \ad_{-y} \circ \exp \ad_x)(x) \\
&= (\exp \ad_x \circ \exp \ad_{-y})(x) \\
&= (\exp \ad_x)(x+h-y) \\
&= (x) + (h-2x) - (y+h-x) \\
&= -y \\ \\
\sigma(y) &= (\exp \ad_x \circ \exp \ad_{-y} \circ \exp \ad_x)(y) \\
&= (\exp \ad_x \circ \exp \ad_{-y} )(y+h-x) \\
&= \exp \ad_x\qty{(y) + (h-2y) - (x+h-y) } \\
&= \exp \ad_x\qty{-x} \\
&= -x \\ \\
\sigma(h) &= (\exp \ad_x \circ \exp \ad_{-y} \circ \exp \ad_x)(h) \\
&= (\exp \ad_x \circ \exp \ad_{-y} )(h-2x) \\
&= (\exp \ad_x )( (h-2y) - 2(x+h-y) ) \\
&= (\exp \ad_x )(-2x -h ) \\
&= -2(x) - (h-2x) \\
&= -h
.\]
:::
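:::{.remark}
As a cross-check of the computation above, $\sigma$ can also be assembled numerically from the $3\times 3$ matrices of $\ad_x$ and $\ad_{-y}$ in the ordered basis $(x, h, y)$, read off from the bracket table (a minimal sketch assuming `numpy` is available; since both operators are nilpotent, the exponentials reduce to the finite sums used above):

```python
# Numerical verification of sigma = exp(ad_x) exp(ad_{-y}) exp(ad_x) on sl_2,
# using the 3x3 matrices of ad_x and ad_{-y} in the ordered basis (x, h, y).
import numpy as np

def ad_matrix(bracket_columns):
    # columns are the coordinates of [t, b] as b runs over the basis (x, h, y)
    return np.array(bracket_columns, dtype=float).T

ad_x  = ad_matrix([[0, 0, 0], [-2, 0, 0], [0, 1, 0]])   # [x,x]=0, [x,h]=-2x, [x,y]=h
ad_my = ad_matrix([[0, 1, 0], [0, 0, -2], [0, 0, 0]])   # [-y,x]=h, [-y,h]=-2y, [-y,y]=0

def exp_nilpotent(N):
    # exact exponential: both adjoint matrices here cube to zero
    return np.eye(3) + N + N @ N / 2

sigma = exp_nilpotent(ad_x) @ exp_nilpotent(ad_my) @ exp_nilpotent(ad_x)
print(sigma)   # columns give sigma(x) = -y, sigma(h) = -h, sigma(y) = -x
```
:::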