§ Classification of Lie algebras, Dynkin diagrams
§ Classification of complex Lie algebras
- $L$ is a complex vector space with a Lie bracket $[., .]$.
- For example, the Lie algebra of a complex Lie group $G$ is a complex Lie algebra. (For a complex manifold, the transition functions are holomorphic.)
§ Theorem (Levi)
- Every finite-dimensional complex Lie algebra $(L, [.,.])$ can be decomposed as $L = R \oplus_s (L_1 \oplus \dots \oplus L_n)$, where $\oplus$ is the direct sum and $\oplus_s$ is the semidirect sum.
- $R$ is a solvable Lie algebra.
- To define solvable, form the derived series $R_0 = R$, $R_1 = [R_0, R_0]$, $R_2 = [R_1, R_1]$, and so on; that is, $R_2 = [[R, R], [R, R]]$.
- We have that $R_{i+1} \subseteq R_i$.
- If this sequence eventually terminates in zero, i.e., there is an $n$ such that $R_n = \{ 0 \}$, then $R$ is solvable.
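As a concrete sketch of the derived series (my own illustration, not from the notes): the Lie algebra of $2 \times 2$ upper-triangular matrices is solvable, and we can watch the dimensions of the $R_i$ drop to zero.

```python
# Derived series R_0, R_1 = [R_0, R_0], ... for the Lie algebra of
# 2x2 upper-triangular matrices; it terminates in {0}, so the algebra
# is solvable. (Illustrative sketch; names are my own.)
import numpy as np

def bracket(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

def derived_series_dims(spanning_set, max_steps=5):
    """Dimensions of R_0, R_1, ..., computed from spanning sets:
    the derived algebra of span(S) is spanned by brackets of elements of S."""
    dims, current = [], list(spanning_set)
    for _ in range(max_steps):
        dim = int(np.linalg.matrix_rank(np.array([m.flatten() for m in current])))
        dims.append(dim)
        if dim == 0:
            break
        current = [bracket(x, y) for x in current for y in current]
    return dims

# Basis of upper-triangular 2x2 matrices.
E11 = np.array([[1., 0.], [0., 0.]])
E12 = np.array([[0., 1.], [0., 0.]])
E22 = np.array([[0., 0.], [0., 1.]])

print(derived_series_dims([E11, E12, E22]))  # [3, 1, 0] -> solvable
```

Here $[E_{11}, E_{12}] = E_{12}$ and $[E_{12}, E_{22}] = E_{12}$, so the derived algebra is the 1-dimensional span of $E_{12}$, and the next step is zero.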
- In the decomposition of $L$, the $R$ is the solvable part.
- We have $L_1, \dots, L_n$ which are simple. This means that $L_i$ is non-abelian and $L_i$ contains no ideals other than $\{0\}$ and $L_i$ itself. An ideal of a Lie algebra is a vector subspace $I \subseteq L$ such that $[I, L] \subseteq I$. (It's like a ring ideal, except with the Lie bracket.)
- The direct sum $L_1 \oplus L_2$ of Lie algebras is the direct sum of vector spaces with the Lie bracket on the bigger space determined by requiring $[L_1, L_2] = 0$.
- The semidirect sum $R \oplus_s L_2$ as a vector space is $R \oplus L_2$. The Lie bracket satisfies $[R, L_2] \subseteq R$, so $R$ is an ideal. (This is analogous to the internal semidirect product of groups.)
Remarks
- It is very hard to classify solvable Lie algebras.
- A Lie algebra that has no solvable part, i.e., can be written as $L = L_1 \oplus \dots \oplus L_n$, is called semi-simple.
- It is possible to classify the simple Lie algebras.
- We focus on the simple/semi-simple Lie algebras. Simple Lie algebras are the independent building blocks we classify.
§ Adjoint Map
- Let $(L, [., .])$ be a complex Lie algebra. Let $h \in L$ be an element of the Lie algebra.
- Define $ad(h): L \to L$ by $ad(h)(l) \equiv [h, l]$. This can be written as $ad(h) \equiv [h, -]$. This is the adjoint map wrt $h \in L$.
§ Killing form
- $K: L \times L \to \mathbb C$ is a bilinear map, defined as $K(a, b) \equiv tr(ad(a) \circ ad(b))$.
- See that $ad(a) \circ ad(b): L \to L$. the trace will be complex because $L$ is complex.
- Since $L$ is a finite-dimensional vector space, $tr$ is cyclic. So $tr(ad(a) \circ ad(b)) = tr(ad(b) \circ ad(a))$. This means that $K(a, b) = K(b, a)$, i.e., the Killing form is symmetric!
- Cartan criterion: $L$ is semi-simple iff the Killing form $K$ is non-degenerate. That is, $K(a, b) = 0$ for all $a$ implies $b = 0$.
§ Calculation wrt basis: $ad$ map.
- Consider for actual calculation the components of $ad(h)$ and $K$ with respect to a basis $E_1, \dots, E_{dim L}$.
- Write down a dual basis $\epsilon^1, \dots, \epsilon^{\dim L}$.
- $ad(E_i)^j_k \equiv \epsilon^j (ad(E_i)(E_k))$.
- We know that $ad(E_i)(E_k) = [E_i, E_k]$ by definition.
- We write $[E_i, E_k] = C^m_{ik} E_m$ where the $C^m_{ik}$ are the structure constants.
- This gives us $ad(E_i)^j_k = \epsilon^j (C^m_{ik} E_m)$
- Pull out structure coefficient to get $ad(E_i)^j_k = C^m_{ik} \epsilon^j (E_m)$
- Use the fact that $E_m$ and $\epsilon^j$ are dual to get $ad(E_i)^j_k = C^m_{ik} \delta^j_m$
- Contract over repeated index $m$ to get $m=j$: $ad(E_i)^j_k = C^j_{ik}$
- This makes sense, since the $ad$ map is just a fancy way to write the bracket in coordinate free fashion.
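In code (my own sketch, not from the notes): if the structure constants are stored as an array $C[m, i, k] = C^m_{ik}$, then the matrix of $ad(E_i)$ is literally the slice $C[:, i, :]$. Illustrated with $sl(2, \mathbb C)$ in the basis $(H, E, F)$ with $[H,E] = 2E$, $[H,F] = -2F$, $[E,F] = H$.

```python
# ad(E_i)^j_k = C^j_{ik}: the matrix of ad(E_i) is a slice of the
# structure-constant array. Example: sl(2, C) in the basis (H, E, F).
import numpy as np

dim = 3
C = np.zeros((dim, dim, dim))  # C[m, i, k] = C^m_{ik}, i.e. [E_i, E_k] = C^m_{ik} E_m
H, E, F = 0, 1, 2
C[E, H, E], C[E, E, H] = 2., -2.   # [H, E] = 2E, [E, H] = -2E
C[F, H, F], C[F, F, H] = -2., 2.   # [H, F] = -2F, [F, H] = 2F
C[H, E, F], C[H, F, E] = 1., -1.   # [E, F] = H,  [F, E] = -H

def ad(i):
    return C[:, i, :]  # row index j, column index k

print(ad(H))  # diag(0, 2, -2): H, E, F are eigenvectors of ad(H)
```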
§ Calculation wrt basis: Killing form.
- $K(E_i, E_j) = tr(ad(E_i) \circ ad(E_j))$
- Plug in the components of $ad$: $K(E_i, E_j) = tr(C^l_{im} C^m_{jk})$ [the object inside the trace is a matrix with indices $l$ and $k$].
- Execute the trace by setting $l = k = n$ and summing: $K(E_i, E_j) = C^n_{im} C^m_{jn}$. This is easy to calculate from the structure coefficients.
- This matrix is non-degenerate iff the Lie algebra is semi-simple.
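A sketch in code (my own illustration, using $sl(2, \mathbb C)$ with basis $(H, E, F)$, $[H,E] = 2E$, $[H,F] = -2F$, $[E,F] = H$): the Killing form components are one contraction of the structure constants, and the Cartan criterion is a determinant check.

```python
# K_ij = C^n_{im} C^m_{jn} from structure constants, then the Cartan
# criterion (K non-degenerate <=> semi-simple) via the determinant.
import numpy as np

dim = 3
C = np.zeros((dim, dim, dim))  # C[m, i, k] = C^m_{ik}
H, E, F = 0, 1, 2
C[E, H, E], C[E, E, H] = 2., -2.   # [H, E] = 2E
C[F, H, F], C[F, F, H] = -2., 2.   # [H, F] = -2F
C[H, E, F], C[H, F, E] = 1., -1.   # [E, F] = H

K = np.einsum('nim,mjn->ij', C, C)  # trace of ad(E_i) o ad(E_j)
print(K)                 # [[8,0,0],[0,0,4],[0,4,0]] in the basis (H, E, F)
print(np.linalg.det(K))  # -128, nonzero => sl(2, C) is semi-simple
```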
§ $ad$ is anti-symmetric with respect to the Killing form.
- Recall that $\phi$ is called anti-symmetric wrt a non-degenerate bilinear form $B$ iff $B(\phi(v), w) = - B(v, \phi(w))$.
- Fact: $ad(h)$ is anti-symmetric wrt the Killing form. For the Killing form to be non-degenerate we need $L$ to be semi-simple.
§ Key Definition for classification: Cartan subalgebra
- If $(L, [.,.])$ is a Lie algebra, then a Cartan subalgebra, denoted $H$ ($C$ is already taken for the structure coefficients), is a maximal subalgebra of $L$ with the following property: there exists a basis $h_1, \dots, h_m$ of $H$ that can be extended to a basis $h_1, \dots, h_m, e_1, \dots, e_{\dim(L)-m}$ of $L$ such that the extension vectors $e_\alpha$ are eigenvectors of $ad(h)$ for every $h \in H$.
- This means that $ad(h)(e_\alpha) = \lambda_\alpha(h) e_\alpha$.
- This can be written as $[h, e_\alpha] = \lambda_\alpha(h) e_\alpha$.
- Does this exist?
§ Existence of Cartan subalgebra
- Thm: Any finite-dimensional Lie algebra possesses a Cartan subalgebra.
- If $L$ is simple, then $H$ is abelian . That is, $[H, H] = 0$.
- Thus, the $ad(h)$ are simultaneously diagonalized by the $e_\alpha$ since they all commute.
§ Analysis of Cartan subalgebra.
- $ad(h)(e_\alpha) = \lambda_\alpha(h) e_\alpha$.
- $[h, e_\alpha] = \lambda_\alpha(h) e_\alpha$.
- Since the LHS is linear in $h$, the RHS must also be linear in $h$. But on the RHS, only $\lambda_\alpha(h)$ depends on $h$.
- This means that $\lambda_\alpha: H \to \mathbb C$ is a linear map !
- This is to say that $\lambda_\alpha \in H^*$ is an element of the dual space!
- The elements $\lambda_1, \lambda_2, \dots, \lambda_{\dim L - m}$ are called the roots of the Lie algebra.
- We write $\Phi \equiv \{ \lambda_1, \dots, \lambda_{\dim L - m} \}$, the root set of the Lie algebra.
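For $sl(2, \mathbb C)$ (a sketch of my own), the Cartan subalgebra is spanned by $H$ alone, and the roots can be read off as the nonzero eigenvalues of $ad(H)$, i.e. the values $\lambda_\alpha(H)$:

```python
# Roots of sl(2, C): ad(H) = diag(0, 2, -2) in the basis (H, E, F)
# (from [H,H] = 0, [H,E] = 2E, [H,F] = -2F). The nonzero eigenvalues
# 2 and -2 are the values lambda_alpha(H) of the two roots.
import numpy as np

ad_H = np.diag([0., 2., -2.])
roots_at_H = sorted(v for v in np.linalg.eigvals(ad_H).real if v != 0)
print(roots_at_H)  # [-2.0, 2.0] -- note the root set is closed under negation
```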
§ Root set is closed under negation
- We found that $ad(h)$ is anti-symmetric with respect to the Killing form.
- An anti-symmetric map (wrt a non-degenerate bilinear form) has a spectrum that is symmetric under negation. Thus, if $\lambda \in \Phi$ is a root, $-\lambda$ is also a root.
§ Root set is not linearly independent
- We can show that $\Phi$ is not LI.
§ Fundamental roots
- Subset of roots $\Pi \subseteq \Phi$ such that $\Pi$ is linearly independent.
- Let the elements of $\Pi$ be called $\pi_1, \dots, \pi_r$.
- We are saying that $\forall \lambda \in \Phi, \exists n_1, \dots, n_r \in \mathbb N, \exists \epsilon \in \{ -1, +1 \}$ such that $\lambda = \epsilon \sum_{i=1}^r n_i \pi_i$.
- That is, we can generate each $\lambda$ as a natural-number combination of the $\pi_i$, up to an overall global sign factor.
- Fact: such a set of fundamental roots can always be found.
§ complex span of fundamental roots is the dual of the cartan subalgebra
- In symbols, this is $span_{\mathbb C}(\Pi) = H^*$.
- Since $\Pi$ is linearly independent and spans $H^*$ over $\mathbb C$, it is in fact a basis of $H^*$.
- $\Pi$ is not unique, just as a basis is not unique: different choices of fundamental roots are possible.
§ Defn: $H_{\mathbb R}^*$
- Real span of fundamental roots: $span_{\mathbb R}(\Pi)$.
- We have that $\Phi \subseteq span_{\pm \mathbb N}(\Pi)$, i.e., every root is a $\pm\mathbb N$-combination of fundamental roots.
- Thus $\Phi$ is contained in $span_{\mathbb R}(\Pi)$, which is contained in $span_{\mathbb C}(\Pi)$.
§ Defn: Killing form on $H^*$
- We restrict $K: L \times L \to \mathbb C$ to $K_H: H \times H \to \mathbb C$.
- What we want is $K^*: H^* \times H^* \to \mathbb C$.
- Define $i: H \to H^*$ given by $i(h) = K(h, \cdot)$.
- $i$ is invertible if $K$ is non-degenerate.
- $K^*(\mu, \nu) \equiv K(i^{-1}(\mu), i^{-1}(\nu))$.
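In components this is just a matrix inverse: $i$ has matrix $K|_H$, so $K^*$ in the dual basis is $(K|_H)^{-1}$. A sketch for $sl(2, \mathbb C)$, where the Cartan subalgebra is spanned by $H$ with $K(H, H) = 8$, and the root is $\alpha = 2\epsilon$ ($\epsilon$ dual to $H$):

```python
# K* in the dual basis is the inverse of the matrix of K restricted to H,
# since i: H -> H* has matrix K|_H. Sketch for sl(2, C).
import numpy as np

K_H = np.array([[8.]])       # K(H, H) = 8 for sl(2, C)
K_star = np.linalg.inv(K_H)  # components of K* on H*
alpha = np.array([2.])       # the root alpha = 2*eps, since [H, E] = 2E
print(alpha @ K_star @ alpha)  # 0.5 = K*(alpha, alpha)
```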
§ $K^*$ on $H^*_{\mathbb R}$
- The restricted action of $K^*$ on $H^*_{\mathbb R}$ will always spit out real numbers.
- Also, $K^*(\alpha, \alpha) \geq 0$, with equality iff $\alpha = 0$.
- See that $K$ was merely non-degenerate, but $K^*$ restricted to $H^*_{\mathbb R}$ is a real, bona fide inner product!
- This means we can calculate length and angles of fundamental roots.
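For example (a sketch with the Euclidean inner product standing in for $K^*$): the two fundamental roots of $A_2 = sl(3, \mathbb C)$, embedded in $\mathbb R^3$ as $\pi_1 = (1, -1, 0)$ and $\pi_2 = (0, 1, -1)$, have equal length and meet at $120°$.

```python
# Lengths and the angle of the A2 fundamental roots, with the Euclidean
# inner product standing in for K* on H*_R.
import numpy as np

pi1, pi2 = np.array([1., -1., 0.]), np.array([0., 1., -1.])
len1, len2 = np.sqrt(pi1 @ pi1), np.sqrt(pi2 @ pi2)
cos_angle = (pi1 @ pi2) / (len1 * len2)
print(len1, len2)                        # equal lengths sqrt(2)
print(np.degrees(np.arccos(cos_angle)))  # ~120 degrees
```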
§ Recovering $\Phi$ from $\Pi$
- How to recover all roots from fundamental roots?
- For any $\lambda \in \Phi$, define the Weyl transformation $s_\lambda: H^*_{\mathbb R} \to H^*_{\mathbb R}$.
- The map is given by $s_\lambda(\mu) = \mu - 2 \frac{K^*(\lambda, \mu)}{K^*(\lambda, \lambda)} \lambda$.
- This is linear in $\mu$, but not in $\lambda$.
- Such $s_\lambda$ are called Weyl transformations.
- Define the group $W$ generated by the $s_\lambda$. This is called the Weyl group.
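A minimal sketch of $s_\lambda$ (my own illustration, with the standard Euclidean inner product on $\mathbb R^2$ standing in for $K^*$):

```python
# s_lambda(mu) = mu - 2 K*(lambda, mu)/K*(lambda, lambda) * lambda,
# with the Euclidean dot product standing in for K*.
import numpy as np

def weyl_reflection(lam, mu):
    return mu - 2 * (lam @ mu) / (lam @ lam) * lam

lam = np.array([1., 0.])
print(weyl_reflection(lam, lam))                 # [-1, 0]: s_lambda sends lambda to -lambda
print(weyl_reflection(lam, np.array([0., 1.])))  # [0, 1]: vectors orthogonal to lambda are fixed
```

Note that each $s_\lambda$ is a reflection, hence an involution: applying it twice gives the identity.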
§ Theorem: Weyl group is generated by fundamental roots
- It's enough to take the $s_\pi$ for $\pi \in \Pi$ to generate $W$.
§ Theorem: Roots are produced by action of Weyl group on fundamental roots
- Any $\lambda \in \Phi$ can be produced by the action of some $w \in W$ on some $\pi \in \Pi$.
- So $\forall \lambda \in \Phi, \exists \pi \in \Pi, \exists w \in W$ such that $\lambda = w(\pi)$.
- This means we can create all roots from the fundamental roots: first produce the Weyl group, then apply the Weyl group to the fundamental roots to find all roots.
- The Weyl group maps roots to roots, so $W(\Phi) \subseteq \Phi$.
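Putting the last two theorems together for $A_2 = sl(3, \mathbb C)$ (my own sketch; the two fundamental roots are embedded in $\mathbb R^3$ as $\pi_1 = (1,-1,0)$, $\pi_2 = (0,1,-1)$, with the Euclidean inner product standing in for $K^*$): closing $\Pi$ under Weyl reflections recovers all six roots.

```python
# Recover the full root set of A2 = sl(3, C) from the two fundamental
# roots by closing under Weyl reflections. The Euclidean dot product
# stands in for K*; all arithmetic here is exact in floating point.
import numpy as np

def reflect(lam, mu):
    return mu - 2 * (lam @ mu) / (lam @ lam) * lam

pi1, pi2 = np.array([1., -1., 0.]), np.array([0., 1., -1.])

roots = {tuple(pi1), tuple(pi2)}
changed = True
while changed:  # keep reflecting until the set is closed
    changed = False
    for lam in list(roots):
        for mu in list(roots):
            new = tuple(reflect(np.array(lam), np.array(mu)))
            if new not in roots:
                roots.add(new)
                changed = True

print(sorted(roots))  # +-pi1, +-pi2, +-(pi1 + pi2)
print(len(roots))     # 6
```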
§ Showdown
- Consider $s_{\pi_i}(\pi_j)$ for $\pi_i, \pi_j \in \Pi$.