--- title: "Problem Sets: Lie Algebras" subtitle: "Problem Set 3" author: - name: D. Zack Garza affiliation: University of Georgia email: dzackgarza@gmail.com date: Fall 2022 order: 3 --- # Problem Set 3 ## Section 5 :::{.problem title="5.1"} Prove that if $L$ is nilpotent then the Killing form of $L$ is identically zero. ::: :::{.solution} Note that if $L$ is nilpotent than every $\ell \in L$ is ad-nilpotent, so letting $x,y\in L$ be arbitrary, their commutator $\ell \da [xy]$ is ad-nilpotent. Thus $\ad_{[xy]} \in \Endo(L)$ is a nilpotent endomorphism of $L$, which are always traceless. The claim is the following: for any $x,y\in L$, \[ \Trace(\ad_{[xy]}) = 0\implies \Trace(\ad_x \circ \ad_y) = 0 ,\] from which it follows immediately that $\beta$ is identically zero. First we can use the fact that $\ad: L\to \liegl(L)$ preserves brackets, and so \[ \ad_{[xy]_L} = [\ad_x \ad_y]_{\liegl(L)} = \ad_x \circ \ad_y - \ad_y \circ\ad_x ,\] and so \[ 0 = \Trace(\ad_{[xy]}) = \Trace(\ad_x \ad_y - \ad_y \ad_x) = \Trace(\ad_x \ad_y) - \Trace(\ad_y \ad_x) .\] where we've used that the trace is an $\FF\dash$linear map $\liegl(L) \to \FF$. This forces \[ \Trace(\ad_x \ad_y) = - \Trace(\ad_y \ad_x) ,\] but by the cyclic property of traces, we always have \[ \Trace(\ad_x \ad_y) = \Trace(\ad_y \ad_x) .\] Combining these yields $\Trace(\ad_x \ad_y) = 0$. ::: :::{.problem title="5.7"} Relative to the standard basis of $\liesl_3(\FF)$, compute $\det \kappa$. What primes divide it? > Hint: use 6.7, which says $\kappa_{\liegl_n}(x, y) = 2n \Trace(xy)$. ::: :::{.solution} We have the following standard basis: \[ x_1=\left[\begin{array}{ccc} \cdot & 1 & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \end{array}\right] \quad &x_2=\left[\begin{array}{ccc} \cdot & \cdot & 1 \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & . \end{array}\right] &x_3=\left[\begin{array}{ccc} \cdot & \cdot & \cdot \\ \cdot & \cdot & 1 \\ \cdot & \cdot & \cdot \end{array}\right]\\ h_1=\left[\begin{array}{ccc} 1 & \cdot & \cdot \\ \cdot & -1 & \cdot \\ \cdot & \cdot & \cdot \end{array}\right] \quad &h_2=\left[\begin{array}{ccc} \cdot & \cdot & \cdot \\ \cdot & 1 & \cdot \\ \cdot & \cdot & -1 \end{array}\right] &\\ y_1=\left[\begin{array}{ccc} \cdot & \cdot & \cdot \\ 1 & \cdot & \cdot \\ \cdot & \cdot & \cdot \end{array}\right] \quad &y_2=\left[\begin{array}{lll} \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ 1 & \cdot & \cdot \end{array}\right] &y_3=\left[\begin{array}{ccc} \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & 1 & \cdot \end{array}\right] .\] For notational convenience, let $\ts{v_1,\cdots, v_8}$ denote this ordered basis. Direct computations show - $[x_1 v_1] = [x_1 x_1] = 0$ - $[x_1 v_2] = [x_1 x_2] = 0$ - $[x_1 v_3] = [x_1 x_3] = e_{13} = x_2 = v_2$ - $[x_1 v_4] = [x_1 h_1] = -2e_{12} = -2x_2 = -2 v_2$ - $[x_1 v_5] = [x_1 h_2] = e_{12} = x_1 = v_1$ - $[x_1 v_6] = [x_1 y_1] = e_{11} - e_{22} = h_1 = v_4$ - $[x_1 v_7] = [x_1 y_2] = -e_{31} = -y_2 = v_6$ - $[x_1 v_8] = [x_1 y_3] = 0$ Let $E_{ij}$ denote the elementary $8\times 8$ matrices with a 1 in the $(i, j)$ position. 
We then have, for example,
\[
\ad_{x_1} &= 0 + 0 + E_{2,3} - 2E_{1, 4} + E_{1, 5} + E_{4, 6} - E_{8, 7} + 0 \\
&= \left(\begin{array}{rrrrrrrr}
\cdot & \cdot & \cdot & -2 & 1 & \cdot & \cdot & \cdot \\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & -1 & \cdot
\end{array}\right)
.\]
The remaining computations can be readily automated on a computer, yielding the following matrices for all of the $\ad_{v_i}$:

- $\ad_{x_1} = 0 + 0 + E_{2,3} - 2E_{1, 4} + E_{1, 5} + E_{4, 6} - E_{8, 7} + 0$
- $\ad_{x_2} = 0 + 0 + 0 - E_{2, 4} - E_{2, 5} - E_{3, 6} + (E_{4, 7} + E_{5, 7}) + E_{1, 8}$
- $\ad_{x_3} = -E_{2,1} + 0 + 0 + E_{3, 4} - 2E_{3, 5} + 0 + E_{6, 7} + E_{5, 8}$
- $\ad_{h_1} = 2E_{1,1} + E_{2, 2} - E_{3, 3} + 0 + 0 - 2E_{6, 6} - E_{7,7} + E_{8, 8}$
- $\ad_{h_2} = -E_{1, 1} + E_{2,2} + 2E_{3,3} + 0 + 0 + E_{6,6} - E_{7,7} - 2E_{8,8}$
- $\ad_{y_1} = -E_{4, 1} + E_{3, 2} + 0 + 2E_{6, 4} - E_{6, 5} + 0 + 0 - E_{7, 8}$
- $\ad_{y_2} = E_{8,1} - (E_{4, 2} + E_{5, 2}) - E_{6, 3} + E_{7, 4} + E_{7, 5} + 0 + 0 + 0$
- $\ad_{y_3} = 0 - E_{1, 2} - E_{5, 3} - E_{8, 4} + 2E_{8, 5} + E_{7, 6} + 0 + 0$

Now forming the matrix $(\beta)_{ij} \da \Trace(\ad_{v_i} \ad_{v_j})$ yields
\[
\beta = \left(\begin{array}{rrrrrrrr}
\cdot & \cdot & \cdot & \cdot & \cdot & 6 & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 6 & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 6 \\
\cdot & \cdot & \cdot & 12 & -6 & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & -6 & 12 & \cdot & \cdot & \cdot \\
6 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & 6 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & 6 & \cdot & \cdot & \cdot & \cdot & \cdot
\end{array}\right)
,\]
which is $6$ times the trace-form matrix computed in Problem 6.1 below, consistent with the hint for $n=3$. Each antidiagonal pair of $6$s contributes a factor of $-6^2$ to the determinant, and the middle $2\times 2$ block contributes $12^2 - 6^2$, whence
\[
\det(\beta) = -(6\cdot 6\cdot 6)^2(12^2 - 6^2) = -2^8\, 3^9
,\]
so the only primes dividing $\det \kappa$ are $2$ and $3$.
:::

## Section 6

:::{.problem title="6.1"}
Using the standard basis for $L=\liesl_2(\FF)$, write down the Casimir element of the adjoint representation of $L$ *(cf. Exercise 5.5)*. Do the same thing for the usual (3-dimensional) representation of $\liesl_3(\FF)$, first computing dual bases relative to the trace form.
:::

:::{.solution}
A computation shows that in the basis $\ts{e_i} \da \ts{x,h,y}$, the Killing form is represented by
\[
\beta = \mattt 004 080 400 \implies \beta^{-T} = \mattt 00{1\over 4} 0 {1\over 8}0 {1\over 4} 0 0
,\]
yielding the dual basis $\ts{e_i\dual}$ read from the columns of $\beta^{-T}$:

- $x\dual = {1\over 4}y$,
- $h\dual = {1\over 8} h$,
- $y\dual = {1\over 4}x$.

Thus letting $\phi = \ad$, we have
\[
c_\phi &= \sum \phi(e_i)\phi(e_i\dual) \\
&= \ad(x) \ad(x\dual) + \ad(h) \ad(h\dual) + \ad(y)\ad(y\dual) \\
&= \ad(x) \ad(y/4) + \ad(h) \ad(h/8) + \ad(y)\ad(x/4) \\
&= {1\over 4} \ad_x \ad_y + {1\over 8} \ad_h^2 + {1\over 4} \ad_y \ad_x
.\]

For $\liesl_3$, first take the ordered basis $\ts{v_1,\cdots, v_8} = \ts{x_1, x_2, x_3, h_1, h_2, y_1, y_2, y_3}$ as in the previous problem.
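As in Problem 5.7, the necessary products and traces are easily scripted. A minimal sketch, assuming `sympy` and reusing the `basis` list from the sketch in that problem:

```python
# Sketch (assuming sympy, with `basis` as in the Problem 5.7 sketch):
# trace form, dual basis, and Casimir element of the standard rep of sl_3.
from sympy import Matrix, zeros, eye, Rational

B = Matrix(8, 8, lambda i, j: (basis[i] * basis[j]).trace())
Binv = B.inv()  # B is symmetric, so B^{-T} = B^{-1}
dual = [sum((Binv[i, j] * basis[j] for j in range(8)), zeros(3, 3))
        for i in range(8)]  # dual basis relative to the trace form
casimir = sum((basis[i] * dual[i] for i in range(8)), zeros(3, 3))
assert casimir == Rational(8, 3) * eye(3)
```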
So we form the matrix $(\beta)_{ij} \da \Trace(v_i v_j)$ by computing various products and traces on a computer to obtain
\[
\beta = \left(\begin{array}{rrrrrrrr}
\cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 \\
\cdot & \cdot & \cdot & 2 & -1 & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & -1 & 2 & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot
\end{array}\right)
\implies
\beta^{-T} = \left(\begin{array}{rrrrrrrr}
\cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 \\
\cdot & \cdot & \cdot & \frac{2}{3} & \frac{1}{3} & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \frac{1}{3} & \frac{2}{3} & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot
\end{array}\right)
,\]
which yields the dual basis

- $x_i\dual = y_i$
- $h_1\dual = {2\over 3}h_1 + {1\over 3}h_2$
- $h_2\dual = {1\over 3}h_1 + {2\over 3}h_2$
- $y_i\dual = x_i$

We can thus compute the Casimir element of the standard representation $\phi$ on a computer as
\[
c_{\phi} &= \sum_i \phi(x_i)\phi(x_i\dual) + \phi(h_1)\phi(h_1\dual) + \phi(h_2)\phi(h_2\dual) + \sum_i \phi(y_i)\phi(y_i\dual) \\
&= \sum_i x_i y_i + h_1 h_1\dual + h_2 h_2\dual + \sum_i y_i x_i \\
&= \sum_i \qty{x_i y_i + y_i x_i} + \qty{h_1 h_1\dual + h_2 h_2\dual} \\
&= 2I + {2\over 3}I \\
&= {8\over 3}I
,\]
which is consistent with the general fact that $c_\phi$ acts on an irreducible representation by the scalar $\dim L/\dim V = 8/3$.
:::

:::{.problem title="6.3"}
If $L$ is solvable, every irreducible representation of $L$ is one dimensional.
:::

:::{.solution}
Let $\phi: L\to \liegl(V)$ be an irreducible finite-dimensional representation of $L$. By Lie's theorem, $L$ stabilizes a flag in $V$, say $F^\bullet: 0 = F^0 \subset F^1 \subset \cdots \subset F^n = V$ where $F^k = \gens{v_1,\cdots, v_k}$ for some basis $\ts{v_i}_{i\leq n}$. Since $\phi$ is irreducible, the only $L\dash$invariant subspaces of $V$ are $0$ and $V$ itself. However, each $F^k$ is an $L\dash$invariant subspace, which forces $n=1$ and $F^1 = V$. Thus $V$ is 1-dimensional.
:::

:::{.problem title="6.5"}
A Lie algebra $L$ for which $\operatorname{Rad} L=Z(L)$ is called reductive.[^reductive_examples]

(a) If $L$ is reductive, then $L$ is a completely reducible $\ad L\dash$module. [^6.5a_hint] In particular, $L$ is the direct sum of $Z(L)$ and $[L L]$, with $[L L]$ semisimple.

(b) If $L$ is a classical linear Lie algebra (1.2), then $L$ is semisimple. *(Cf. Exercise 1.9.)*

(c) If $L$ is a completely reducible $\ad(L)\dash$module, then $L$ is reductive.

(d) If $L$ is reductive, then all finite dimensional representations of $L$ in which $Z(L)$ is represented by semisimple endomorphisms are completely reducible.

[^reductive_examples]: Examples: $L$ abelian, $L$ semisimple, $L=\liegl_n(\FF)$.

[^6.5a_hint]: If $\ad L \neq 0$, use Weyl's Theorem.
:::

:::{.solution .foldopen}
**Part 1**: If $\ad(L) \neq 0$, as hinted, we can apply Weyl's theorem to the representation $\phi: \ad(L) \hookrightarrow \liegl(L)$: if we can show $\ad(L)$ is semisimple, then $L$ will be a completely reducible $\ad(L)\dash$module. (If instead $\ad(L) = 0$, then $L$ is abelian, every subspace is a submodule, and complete reducibility is automatic.) Assume $L$ is reductive, so $\ker(\ad) = Z(L) = \Rad(L)$, and by the first isomorphism theorem $\ad(L) \cong L/\Rad(L)$.
We can now use the fact stated in Humphreys on page 11 that for an arbitrary Lie algebra $L$, the quotient $L/\Rad(L)$ is semisimple. This follows from the fact that $\Rad(L/\Rad(L)) = 0$: the preimage in $L$ of a solvable ideal of $L/\Rad(L)$ is a solvable ideal of $L$ containing $\Rad(L)$ (an extension of a solvable algebra by a solvable algebra is again solvable), so by maximality of $\Rad(L)$ it equals $\Rad(L)$, and the original ideal is zero. Thus $\ad(L)$ is semisimple, and Weyl's theorem implies that $L$ is a completely reducible $\ad(L)\dash$module.

To show that $L = Z(L) \oplus [LL]$, we first show that it decomposes as a sum $L = Z(L) + [LL]$, and then that the sum is direct. We recall that a semisimple Lie algebra equals its own derived algebra. Since $L/Z(L)$ is semisimple, this yields
\[
L/Z(L) = [\, L/Z(L), L/Z(L)\,] = \qty{[LL] + Z(L)}/Z(L)
,\]
where the last equality holds because brackets in the quotient are represented by brackets in $L$. Pulling back along the quotient map gives $L = [LL] + Z(L)$.

To see that the sum is direct, note that $Z(L) \leq L$ is an $\ad(L)\dash$invariant submodule, and by complete reducibility it has an $\ad(L)\dash$invariant complement $W$, so $L = W \oplus Z(L)$. Since $W$ is a submodule and $Z(L)$ is central, $[LL] = [W + Z(L), W + Z(L)] = [WW] \subseteq W$. Comparing dimensions in $L = [LL] + Z(L) = W \oplus Z(L)$ then forces $W = [LL]$, and so $L = [LL] \oplus Z(L)$.

Finally, to see that $[LL]$ is semisimple, note that the above decomposition allows us to write $[LL] \cong L/Z(L) = L/\Rad(L)$, which was shown above to be semisimple.

**Part 2**: Omitted for time.

**Part 3**: Omitted for time.

**Part 4**: Omitted for time.
:::

:::{.problem title="6.6"}
Let $L$ be a simple Lie algebra. Let $\beta(x, y)$ and $\gamma(x, y)$ be two symmetric associative bilinear forms on $L$. If $\beta, \gamma$ are nondegenerate, prove that $\beta$ and $\gamma$ are proportional.

> Hint: Use Schur's Lemma.
:::

:::{.solution}
The strategy will be to define an irreducible $L\dash$module $V$ and use the two bilinear forms to produce an element of $\Endo_L(V)$, which will be 1-dimensional by Schur's lemma.

The representation we'll take will be $\phi \da \ad: L\to \liegl(L)$. Since $L$ is simple (hence nonabelian), $\ker \ad = Z(L)$ is a proper ideal of $L$ and is therefore zero. Since this is a faithful representation, we will identify $L$ with its image $V \da \ad(L) \subseteq \liegl(L)$ and regard $V$ as an $L\dash$module; it is irreducible, since a nonzero proper submodule would yield a nonzero proper ideal of $L$.

As a matter of notation, let $\beta_x(y) \da \beta(x, y)$ and similarly $\gamma_x(y) \da \gamma(x, y)$, so that $\beta_x, \gamma_x$ can be regarded as linear functionals on $V$ and thus elements of $V\dual$. This gives an $\FF\dash$linear map
\[
\Phi_1: V &\to V\dual \\
x &\mapsto \beta_x
,\]
which we claim is an $L\dash$module morphism. Assuming this for the moment, note that by the general theory of bilinear forms on vector spaces, since $\beta$ and $\gamma$ are nondegenerate, the assignments $x\mapsto \beta_x$ and $x\mapsto \gamma_x$ induce vector space isomorphisms $V\iso V\dual$. Accordingly, for any linear functional $f\in V\dual$, there is a unique element $z(f) \in V$ such that $f(v) = \gamma(z(f), v)$ for all $v\in V$. So define a map using the representing element for $\gamma$:
\[
\Phi_2: V\dual &\to V \\
f &\mapsto z(f)
,\]
which we claim is also an $L\dash$module morphism.
We can now define their composite
\[
\Phi \da \Phi_2 \circ \Phi_1: V &\to V \\
x &\mapsto z(\beta_x)
,\]
which sends an element $x\in V$ to the element $z = z(\beta_x) \in V$ such that $\beta_x(\wait) = \gamma_z(\wait)$ as functionals. An additional claim is that $\Phi$ commutes with every element of $\ad(L) \subseteq \liegl(L)$. Given this, by Schur's lemma we have $\Phi\in \Endo_L(V) = \FF\,\id_V$ (where we've used that a composition of morphisms is again a morphism), and so $\Phi = \lambda \id_L$ for some scalar $\lambda\in \FF$. To see why this implies the result, we have equalities of functionals
\[
\beta(x, \wait) &= \beta_x(\wait) \\
&= \gamma_{z(\beta_x) }(\wait) \\
&= \gamma( z(\beta_x), \wait)\\
&= \gamma( \Phi(x), \wait) \\
&= \gamma(\lambda x, \wait) \\
&= \lambda\gamma(x, \wait)
,\]
and since this holds for all $x$ we have $\beta(\wait, \wait) = \lambda \gamma(\wait, \wait)$ as desired.

---

:::{.claim}
$\Phi_1$ is an $L\dash$module morphism.
:::

:::{.proof title="?"}
We recall that a morphism of $L\dash$modules $\phi: V\to W$ is an $\FF\dash$linear map satisfying
\[
\phi(\ell .\vector x) = \ell.\phi(\vector x) \qquad\forall \ell\in L,\,\forall \vector x\in V
.\]
In our case, the left-hand side is
\[
\Phi_1(\ell . \vector x) \da \Phi_1( \ad_\ell(\vector x) ) = \Phi_1([\ell,\vector x]) = \beta_{[\ell, \vector x]} = \beta( [\ell, \vector x], \wait)
,\]
and the right-hand side, using the standard dual module structure, is
\[
\ell.\Phi_1(\vector x) \da \ell.\beta_{\vector x} \da (\vector y\mapsto -\beta_{\vector x}( \ell. \vector y)) \da (\vector y\mapsto -\beta_{\vector x}( [\ell, \vector y] )) = -\beta(\vector x, [\ell, \wait])
.\]
By anticommutativity of the bracket, along with $\FF\dash$linearity and associativity of $\beta$, we have
\[
\beta([\ell, \vector x], \vector y) = -\beta([\vector x, \ell], \vector y) = -\beta(\vector x, [\ell, \vector y]) \qquad \forall \vector y\in V
,\]
and so the above two sides do indeed coincide.
:::

:::{.claim}
$\Phi_2$ is an $L\dash$module morphism.
:::

:::{.proof title="?"}
Omitted for time; this proceeds similarly.
:::

:::{.claim}
$\Phi$ commutes with $\ad(L)$.
:::

:::{.proof title="?"}
Letting $x\in L$, we want to show that $\Phi \circ \ad_x = \ad_x \circ \Phi \in \liegl(L)$, i.e. that these two endomorphisms of $L$ commute. Fixing $\ell\in L$, the LHS expands to
\[
\Phi(\ad_x(\ell)) = z(\beta_{\ad_x(\ell) }) = z(\beta_{[x\ell]})
,\]
while the RHS is
\[
\ad_x(\Phi(\ell)) = \ad_x(z(\beta_\ell)) = [x, z(\beta_\ell)]
.\]
Recalling that $\Phi(t) = z(\beta_t)$ is defined to be the unique element of $L$ such that $\beta(t, \wait) = \gamma(z(\beta_t), \wait)$, for the above two to be equal it suffices to show that
\[
\beta([x, \ell], \wait) = \gamma( [x, z(\beta_\ell)], \wait)
\]
as linear functionals. Starting with the RHS of this expression, we have
\[
\gamma( [ x, z(\beta_\ell) ], \wait ) &= -\gamma( [z(\beta_\ell), x], \wait ) \quad\text{by antisymmetry}\\
&= -\gamma(z(\beta_\ell), [x, \wait]) \quad\text{by associativity of }\gamma \\
&= -\beta(\ell, [x, \wait]) \quad\text{by definition of } z(\beta_\ell) \\
&= -\beta([\ell, x], \wait) \\
&= \beta([x, \ell], \wait)
.\]
:::
:::

:::{.problem title="6.7"}
It will be seen later on that $\liesl_n(\FF)$ is actually simple. Assuming this and using Exercise 6.6, prove that the Killing form $\kappa$ on $\liesl_n(\FF)$ is related to the ordinary trace form by $\kappa(x, y)=2 n \Trace(x y)$.
:::

:::{.solution}
By the previous exercise, the trace pairing $(x,y)\mapsto \Trace(xy)$ is related to the Killing form by $\kappa(x,y) = \lambda \Trace(xy)$ for some $\lambda\in \FF$. Here we've used the fact that the trace pairing is nondegenerate: its radical $\Rad(\Trace)$ is an ideal of $\liesl_n(\FF)$ by associativity, it is proper since the pairing is nonzero (e.g. $\Trace(h_1^2) = 2$, computed below), and so $\Rad(\Trace) = 0$ since $\liesl_n(\FF)$ is simple; the Killing form is likewise nondegenerate since $\liesl_n(\FF)$ is semisimple.

Since the scalar only depends on the bilinear forms and not on any particular inputs, it suffices to compute both forms on any single pair $(x, y)$ with $\Trace(xy)\neq 0$, and in fact we can take $x=y$. For $\liesl_n$, we can take advantage of the fact that in the standard basis, $\ad_{h_i}$ will be diagonal for any standard generator $h_i\in \lieh$, making $\Trace( \ad_{h_i}^2)$ easier to compute for general $n$.

Take the standard $h_{1} \da e_{11} - e_{22}$, and consider the matrix of $\ad_{h_1}$ in the ordered basis $\ts{x_1,\cdots, x_k, h_1,\cdots, h_{n-1}, y_1,\cdots, y_k }$, which has $k + (n-1) + k = n^2-1$ elements where $k= (n^2-n)/2$. We'll compute $\kappa(h_1, h_1) = \Trace(\ad_{h_1}^2)$ with respect to this basis.

In order to compute the various $[h_1, v_i]$, we recall the formula $[e_{ij}, e_{kl}] = \delta_{jk} e_{il} - \delta_{li} e_{kj}$. Applying this to $h_{1}$ yields
\[
[h_{1}, e_{ij}] = [e_{11} - e_{22}, e_{ij}] = [e_{11}, e_{ij}] - [e_{22}, e_{ij}] = (\delta_{1i} e_{1j} - \delta_{1j} e_{i1}) - (\delta_{2i} e_{2j} - \delta_{2j} e_{i2})
,\]
and $[h_1, h_i] = 0$ for every $i$, since diagonal matrices commute. We proceed to check all of the possibilities for the results as $i, j$ vary with $i\neq j$ using the following schematic:
\[
\left[\begin{array}{ c | c | c }
\cdot & a & R_1 \, \cdots \\
\hline
b & \cdot & R_2\, \cdots \\
\hline
\overset{C_1}{\vdots} & \overset{C_2}{\vdots} & M
\end{array}\right]
.\]
The possible cases are:

- $a: i=1, j=2 \implies [h_1, e_{ij}] = 2e_{12}$, covering 1 case.
- $b: i=2, j=1 \implies [h_1, e_{ij}] = -2 e_{21}$, covering 1 case.
- $R_1: i=1, j>2 \implies [h_1, e_{ij}] = e_{1j}$, covering $n-2$ cases.
- $R_2: i=2, j>2 \implies [h_1, e_{ij}] = -e_{2j}$, covering $n-2$ cases.
- $C_1: j=1, i>2 \implies [h_1, e_{ij}] = -e_{i1}$, covering $n-2$ cases.
- $C_2: j=2, i>2 \implies [h_1, e_{ij}] = e_{i2}$, covering $n-2$ cases.
- $M: i, j > 2 \implies [h_1, e_{ij}] = 0$, covering the remaining cases.

Thus $\ad_{h_1}$ is diagonal, with $2(n-2)$ diagonal entries equal to $1$, another $2(n-2)$ equal to $-1$, a single $2$ and a single $-2$, and zeros elsewhere. Hence $\ad_{h_1}^2$ is diagonal with $4(n-2)$ ones and two $4$s, yielding
\[
\Trace(\ad_{h_1}^2) = 4(n-2) + 2(4) = 4n
.\]
On the other hand, computing the standard trace form yields
\[
\Trace(h_1^2) = \Trace(\diag(1,1,0,0,\cdots)) = 2
,\]
and so
\[
\Trace(\ad_{h_1}^2) = 4n = 2n \cdot 2 = 2n\cdot \Trace(h_1^2) \implies \lambda = 2n
.\]
:::
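As a sanity check, the identity $\kappa(x,y) = 2n\Trace(xy)$ can be verified by machine for small $n$. A minimal sketch, assuming `sympy`: it computes $\ad$ on all of $\liegl_n$ in the elementary-matrix basis, which suffices because $\liegl_n = \liesl_n \oplus \FF I$ with $\ad_x$ killing scalar matrices for $x$ traceless, so the $\liegl_n\dash$trace of $\ad_x \ad_y$ agrees with the $\liesl_n$ Killing form.

```python
# Sketch (assuming sympy): check kappa(x, y) = 2n Tr(xy) on sl_n for small n.
from sympy import Matrix, zeros

n = 4
idx = [(i, j) for i in range(n) for j in range(n)]

def E(i, j):  # n x n elementary matrix e_{ij} (0-indexed here)
    M = zeros(n, n)
    M[i, j] = 1
    return M

def ad(x):  # matrix of ad_x acting on gl_n, columns indexed by the e_{ij}
    return Matrix.hstack(*[Matrix([(x * E(i, j) - E(i, j) * x)[r, c]
                                   for (r, c) in idx])
                           for (i, j) in idx])

# a basis of sl_n: off-diagonal e_{ij} together with e_{ii} - e_{i+1,i+1}
sl = [E(i, j) for (i, j) in idx if i != j] \
   + [E(i, i) - E(i + 1, i + 1) for i in range(n - 1)]
ads = [ad(x) for x in sl]
assert all((ads[a] * ads[b]).trace() == 2 * n * (sl[a] * sl[b]).trace()
           for a in range(len(sl)) for b in range(len(sl)))
```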