# Wednesday October 23

Recall from last time: for $\lieg$ a Lie algebra, we defined $T(\lieg)$ the tensor algebra, and the universal enveloping algebra $U(\lieg) = T(\lieg)/\sim$ where $x\tensor y - y\tensor x \sim [x, y]$. We also stated the *PBW Theorem*, which provides a basis for $U(\lieg)$.

## Proof of PBW Theorem

*Proof of PBW Theorem:*

We first show that the PBW monomials span $U(\lieg)$. We have $T(\lieg) = \spanof\theset{x_{j_1} \tensor \cdots \tensor x_{j_k} \suchthat j_1, \cdots, j_k \in I}$, where we note that the indices are not required to be ordered. Thus $U(\lieg) = \spanof\theset{y_{j_1} \cdots y_{j_k} \suchthat j_1, \cdots, j_k \in I}$ where $y_i \coloneqq \iota(x_i)$, and the indices are again not required to be ordered. We would thus like to express every such term as a linear combination of monomials in the $y_i$ with non-decreasing indices.

We proceed by inducting on $k$, the number of tensor factors occurring. The base case is clear. For $k > 1$, if the element is *not* a PBW monomial, then there is some inversion in the indices $(j_1, \cdots, j_k)$, i.e. there is at least one $i$ such that $j_{i+1} < j_i$. Now for any two indices $a, b \in I$, we have
$$
\iota(x_b \tensor x_a) = \iota(x_a \tensor x_b + [x_b, x_a]) \implies y_b y_a = y_a y_b + \iota([x_b, x_a])
,$$
and since $[x_b, x_a] = \sum_t c_t x_t$ for some scalars $c_t \in F$, we have $\iota([x_b, x_a]) = \sum_t c_t y_t$, a term of degree one. For instance, in $\mathfrak{sl}_2$ with ordered basis $x_1 = e$, $x_2 = h$, $x_3 = f$, this relation reads $y_3 y_1 = y_1 y_3 + \iota([f, e]) = y_1 y_3 - y_2$. Repeatedly applying this relation to remove inversions (a further induction on the number of inversions), we obtain
$$
y_{j_1} \cdots y_{j_k} = y_{i_1} y_{i_2} \cdots y_{i_k} + \text{ lower degree terms}
,$$
where $i_1 \leq i_2 \leq \cdots \leq i_k$ is the non-decreasing rearrangement of the $j_i$. By the inductive hypothesis, the lower degree terms are spanned by PBW monomials, so we're done.

*Proof of linear independence:*

*Claim:* Let $\vector x \coloneqq x_{j_1} \tensor \cdots \tensor x_{j_n}$ for an arbitrary indexing sequence, let $\vector x_{(k)}$ denote this tensor with the $j_k$ and $j_{k+1}$ terms swapped, and let $\vector x_{[k]}$ denote this tensor with $x_{j_k} \tensor x_{j_{k+1}}$ replaced by $[x_{j_k}, x_{j_{k+1}}]$. Then there exists a linear map
\begin{align*}
f: T(\lieg) &\to R \coloneqq F[\theset{z_i}_{i\in I}] \\
f(x_{i_1} \tensor \cdots \tensor x_{i_n}) &= z_{i_1} \cdots z_{i_n} \quad\text{whenever } i_1 \leq \cdots \leq i_n \\
f(\vector x - \vector x_{(k)}) &= f(\vector x_{[k]}) \quad\text{for all } \vector x \text{ and } k
.\end{align*}

By collecting terms, we can write
$$
\vector x - \vector x_{(k)} - \vector x_{[k]} = x_{j_1} \tensor \cdots \tensor x_{j_{k-1}} \tensor \left( x_{j_k} \tensor x_{j_{k+1}} - x_{j_{k+1}} \tensor x_{j_k} - [x_{j_k}, x_{j_{k+1}}] \right) \tensor x_{j_{k+2}} \tensor \cdots \tensor x_{j_n}
.$$

So we can take $J$ to be the ideal generated by all elements of this form, and we find that $J \subset \ker f$, and thus $f$ descends to a map $\overline f$ on $U(\lieg) = T(\lieg)/J$. Since $\overline f$ sends distinct PBW monomials to distinct monomials $z_{i_1}^{r_1} \cdots z_{i_n}^{r_n}$, which are linearly independent in $R$, the PBW monomials are linearly independent in $U(\lieg)$.

*Proof of claim:*

For each $\vector x$, define an *index*
$$
\lambda(\vector x) = \#\theset{ (a,b) \in \theset{1, \cdots, n}^2 \suchthat a < b \text{ but } j_a > j_b }
,$$
the number of inversions in the indexing sequence.

**Step 1:** We first define $f$, inducting on the degree $n$. At a fixed degree $n > 0$, set $f(\vector x) = z_{j_1} \cdots z_{j_n}$ if $\lambda(\vector x) = 0$. We now induct on the index $k \coloneqq \lambda(\vector x)$ at a fixed degree $n > 0$; the base case $k = 0$ is the definition just given. For $k > 0$, there exists an inversion $(\ell, \ell+1)$, i.e. some indices $j_{\ell} > j_{\ell+1}$. Set
$$
f(\vector x) = f(\vector x_{(\ell)}) + f(\vector x_{[\ell]})
,$$
where the argument on the LHS is in $T^{n, k}$ and the arguments on the RHS are in $T^{n, k-1}$ and $T^{n-1}(\lieg)$ respectively (writing $T^{n,k}$ for the span of degree $n$ monomials with index at most $k$), so both terms on the RHS are already defined.
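As a quick illustration of this recursion (an example chosen for concreteness, not from the statement itself): take $n = 2$ and $\vector x = x_2 \tensor x_1$, so that $\lambda(\vector x) = 1$ with its single inversion at $\ell = 1$. Then
$$
f(x_2 \tensor x_1) = f(x_1 \tensor x_2) + f([x_2, x_1]) = z_1 z_2 + \sum_t c_t z_t
,$$
where $[x_2, x_1] = \sum_t c_t x_t$ with $c_t \in F$: the first term comes from the $\lambda = 0$ case, and the second is already defined since it lives in degree $n - 1 = 1$.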
**Step 2:** We'll check that $f$ is well-defined: in the above definition, $f(\vector x)$ can be computed using different inversions of the indices, and we'd like to show that these yield the same value.

Let $(\ell, \ell+1)$ and $(\ell', \ell'+1)$ be two distinct inversions with $\ell < \ell'$, and set
\begin{align*}
a &= x_{j_\ell} \\
b &= x_{j_{\ell+1}} \\
c &= x_{j_{\ell'}} \\
d &= x_{j_{\ell'+1}}
.\end{align*}
Then we have several cases:

**Case 1: $\ell + 1 < \ell'$.** The two inversions are disjoint, and expanding along either one first yields the same four terms:
\begin{align*}
f(\vector x_{(\ell)}) + f(\vector x_{[\ell]})
&= f( \cdots b\tensor a \cdots c\tensor d \cdots ) + f( \cdots [a, b] \cdots c\tensor d \cdots ) \\
&= f( \cdots b\tensor a \cdots d\tensor c \cdots ) + f( \cdots b\tensor a \cdots [c, d] \cdots ) \\
&\qquad + f( \cdots [a, b] \cdots d\tensor c \cdots ) + f( \cdots [a, b] \cdots [c, d] \cdots ) \\
&= f(\vector x_{(\ell')}) + f(\vector x_{[\ell']})
,\end{align*}
where the last equality follows by expanding $f(\vector x_{(\ell')}) + f(\vector x_{[\ell']})$ in the same way.

**Case 2: $\ell + 1 = \ell'$.** The two inversions overlap in the middle factor, so the relevant factors are $a \tensor b \tensor c$ where now $c = d = x_{j_{\ell+2}}$ and $j_\ell > j_{\ell+1} > j_{\ell+2}$. Suppressing the remaining tensor factors, we compute
\begin{align*}
f(\vector x_{(\ell)}) + f(\vector x_{[\ell]})
&= f(b\tensor a\tensor c) + f([a,b] \tensor c) \\
&= f(b\tensor c \tensor a) + f(b\tensor [a,c]) + f(c\tensor [a,b]) + f([[a,b], c]) \\
&= f(c\tensor b \tensor a) + f(c\tensor [a,b]) + f(b \tensor [a,c]) + f([[a,b], c]) + f(a \tensor [b, c]) + f([[b,c], a])\\
&= f(\vector x_{(\ell')}) + f(\vector x_{[\ell']})
,\end{align*}
where the last equality is found by expanding $f(\vector x_{(\ell')}) + f(\vector x_{[\ell']}) = f(a\tensor c\tensor b) + f(a\tensor [b,c])$ in the same way and applying the Jacobi identity in the form $[[a,b],c] + [[b,c],a] = [[a,c],b]$.
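To see the Jacobi step concretely (a sanity check, using the standard basis $e, h, f$ of $\mathfrak{sl}_2$ with the ordering $x_1 = e$, $x_2 = h$, $x_3 = f$ from before, so that $a = f$, $b = h$, $c = e$ has the required inversions):
$$
[[f,h],e] + [[h,e],f] = [2f, e] + [2e, f] = -2h + 2h = 0 = [-h, h] = [[f,e],h]
,$$
using $[h,e] = 2e$, $[h,f] = -2f$, and $[e,f] = h$, in agreement with $[[a,b],c] + [[b,c],a] = [[a,c],b]$.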