Lie Algebras, 649, lectures from 2002 - Cornell...

84
Lie Algebras, 649, lectures from 2002 1. Introduction 1.1. Lie algebras. Definition. A Lie algebra is a vector space g over a field K with a Lie bracket [ , ]: g × g g which is (1) bilinear, i.e. [λx, y]= λ[x, y]=[x, λy] (2) skew symmetric [x, y]= -[y,x] (3) satisfies the Jacobi identity [[x, y],z ] + [[z,x],y] + [[y,z ],x]=0. 1.2. Examples. (1) Let A be any associative algebra. Then define [a, b] := ab - ba. The main such example is g = gl(n, K) (= M n (K)) n × n matrices with matrix multiplication. More generally, for any vector space V, we can define gl(V ) := { A : V V linear } with the same bracket as before. (2) Let sl(2, K) := { X = a b c d gl(2, K) : trX =0 } with the Lie algebra structure inherited from gl(2). Then sl(2) is a Lie algebra. We also say that it is a subalgebra of gl(2). (3) Let p 1 ,...,p n ,q 1 ,...,q n ,z be a basis of K 2n+1 . The Heisenberg al- gebra is defined by the bracket relations [p i ,q j ]= δ i,j z, [p i ,p j ]=[q i ,q j ]=[p i ,z ]=[q j ,z ]=0. Some of you remember the operators with relations {p 1 ,...,p n ,q 1 ,...,q n }, [p i ,p j ]=0, [q i ,q j ]=0, [p i ,q i ]= ij 1

Transcript of Lie Algebras, 649, lectures from 2002 - Cornell...

Page 1: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

Lie Algebras, 649, lectures from 2002

1. Introduction

1.1. Lie algebras.

Definition. A Lie algebra is a vector space g over a field K with a Liebracket

[ , ] : g× g→ g

which is

(1) bilinear, i.e. [λx, y] = λ[x, y] = [x, λy](2) skew symmetric [x, y] = −[y, x](3) satisfies the Jacobi identity

[[x, y], z] + [[z, x], y] + [[y, z], x] = 0.

1.2. Examples.

(1) Let A be any associative algebra. Then define

[a, b] := ab− ba.

The main such example is

g = gl(n,K) (= Mn(K))

n × n matrices with matrix multiplication. More generally, for anyvector space V, we can define gl(V ) := { A : V → V linear } withthe same bracket as before.

(2) Let

sl(2,K) := { X =

[a bc d

]∈ gl(2,K) : trX = 0 }

with the Lie algebra structure inherited from gl(2). Then sl(2) is aLie algebra. We also say that it is a subalgebra of gl(2).

(3) Let p1, . . . , pn, q1, . . . , qn, z be a basis of K2n+1. The Heisenberg al-gebra is defined by the bracket relations

[pi, qj ] = δi,jz, [pi, pj ] = [qi, qj ] = [pi, z] = [qj , z] = 0.

Some of you remember the operators with relations

{p1, . . . , pn, q1, . . . , qn}, [pi, pj ] = 0, [qi, qj ] = 0, [pi, qi] = hδij1

Page 2: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

2

from mathematical physics. The p’s represent positions and q’s mo-mentums. A representation is

0 p1 . . . pn z0 0 0 qn...

......

......

0 0 0 0 q1

0 0 0 0 0

(4) Let U ⊆ Rn be an open set. A vector field is a linear map

X : C∞(U)→ C∞(U)

satisfying X (f1 · f2) = X (f1) · f2 + f1 · X (f2).

Exercise 1. Show that there are ai ∈ C∞(U) such that

X (f) =∑

ai∂if.

The set of vector fields V ec(U) is a vector space /C and a moduleover C∞(U) under pointwise addition and multiplication. It is alsoa Lie algebra under

[X1,X2] := X1 ◦ X2 −X2 ◦ X1.

The algebra generated by V ec(U) is called the algebra of differentialoperators.

In dynamical systems/control theory . . ., one often considers theLie subalgebras generated by a set of vector fields. They are usuallyinfinite dimensional.

1.3. Structure constants. The previous example suggests that we can de-scribe a Lie algebra by choosing a basis {eα}. The Lie bracket is determinedby

[eα, eβ] =∑

ckαβek.

The ckα,β are called the structure constants. They have to satisfy ckαβ = −ckβαand relations coming from the Jacobi identity.

Exercise 2. Translate the Jacobi identity into a relation among the ckij.

Definition. Let g and h be two Lie algebras. A Lie (algebra) homomorphismis a linear map φ : g −→ h which commutes with the brackets, i.e.

φ([x, y]) = [φ(x), φ(y)].

It is called an isomorphism if φ−1 (exists and) is also a Lie algebra homo-morphism.

If φ has an inverse as a linear map, then the inverse is automaticallya Lie homomorphism. To classify Lie algebras means to list them up toisomorphism.

Page 3: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

3

Exercise 3. Assume K = C (or algebraically closed of characteristic zero).Classify all Lie algebras of dimension less than or equal to 3.

Besides the abelian ones, in dimension 2 there is an algebra generatedby {h, e} with structure constants [h, e] = e. You should also find theHeisenberg algebra H3

span {x, y, z}, [x, y] = z, [x, z] = [y, z] = 0.

A realization is given by 3× 3 matrices0 x z0 0 y0 0 0

x, y, z ∈ C

with the usual bracket.Another interesting algebra is sl(2,C), given by

span {e, h, f}, [h, e] = 2e, [h, f ] = −2f, [e, f ] = h,

{(a bc −a

)}with the usual bracket

h =

(1 00 −1

), e =

(0 10 0

), f =

(0 01 0

).

1.4. Generators and relations. Another way to describe Lie algebras isby generators and relations. Recall that the free algebra (with unit) gen-erated by a set X is a pair (F , i) consisting of an algebra F and a mapi : X −→ F such that if θ : X −→ G is any map to an associative algebrawith unit G, then there is a unique algebra homomorphism Θ : F −→ G suchthat θ = Θ ◦ i, and Θ(1) = 1. It is straightforward to show that the freealgebra on X exists and is unique up to isomorphism. For the existence, onetakes V, a vector space with basis indexed by X. Let T i(V ) = V ⊗ · · · ⊗ V︸ ︷︷ ︸

i

.

Then

T (V ) := K⊕ V ⊕ T 2(V )⊕ . . .with the obvious i and multiplication

(1.4.1) (x1 ⊗ · · · ⊗ xn) · (y1 ⊗ . . . ym) = x1 ⊗ · · · ⊗ xn ⊗ y1 ⊗ · · · ⊗ ym.

is the desired free algebra. The algebra with generators X and relations R(a subset of elements of FX) is (by definition) the quotient of FX by thetwo sided ideal generated by R. For example the symmetric algebra S(V ) isthe algebra with generators X and relations x1 ⊗ x2 − x2 ⊗ x1 = 0.

For a Lie algebra, the analogous definitions apply. But it is a bit moreawkward to show existence.

Exercise 4. Let LX be the Lie subalgebra generated by i(X) ⊂ FX . Showthat (L, i) is a free algebra.

Page 4: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

4

Here is an important set of examples of Lie algebras given by generatorsand relations. Let A be an n× n matrix of rank `. It is called a generalizedCartan matrix if it also satisfies

(1) aii = 2,(2) aij are non-positive integers,(3) if aij = 0 then aji = 0.

Let h be a vector space of dimension 2n − `, and Π = {α1, . . . αn} ⊂ h∗,be a set of independent functionals, and Π∨ = {α∨1 , . . . α∨n} ⊂ h be linearlyindependent sets in duality with respect to the standard pairing〈 , 〉 : h× h∗ −→ C satisfying 〈α∨i , αj〉 = δij . Define the Lie algebra g(A) tobe the Lie algebra with generators {ei, h, fj} subject to the relations

[ei, fj ] = δijα∨i , [h, h′] = 0, [h, ej ] = 〈h, αi〉ej , [h, fj ] = −〈h, αi〉fj .

Define g(A) := g(A)/r, where r is the largest ideal so that r ∩ h = (0).

Exercise 5. Show that r exists.

Examples. The matrix A = (2) gives g(A) ∼= sl(2). The matrix A =[2 −1−1 2

]gives rise to g(A) ∼= sl(3). But A =

[2 −2−2 2

]gives rise to an

infinite dimensional algebra called the affine Kac-Moody algebra A1. Theprevious examples have been known and studied for at least 100 years. TheKac-Moody algebras have been studied very intensely only in the last 40years and they have important applications in number theory and mathe-matical physics. Quantum groups are variants of the above ideas.

Exercise 6. Verify the assertion for A = (2).

1.5. Relations to Lie groups. One of the main uses of Lie algebras isthrough their representations. If G is a group we say (π, V ) is a representa-tion if V is a vector space/K and π is a group homomorphism

π : G→ Aut(V ).

If g is a Lie algebra, (π, V ) is a representation if

π : g→ End(V )

is linear and π([x, y]) = [π(x), π(y)].Usually the representations we consider in this course are finite

dimensional.A representation is called irreducible if there is no 0 6⊆ W 6⊆ V which

is invariant under π(g). It is called completely reducible if for any π(g)–invariant W ⊆ V , there is a π(g)-invariant W ′ such that W ∩W ′ = {0} andg = W +W ′.

Page 5: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

5

Examples.

(1) g = C, π(11) =

[r 00 −r

]is completely reducible.

(2) g = C, π(one) =

[0 r0 0

]is not.

A representation typically will have a Jordan-Holder type composition series

0 ⊆ V1 ⊆ V2 ⊆ · · · ⊆ Vn ⊆such that

(a) Vi is invariant(b) Vi+1/Vi is irreducible.There may be more than one such decomposition.

There is a strong relation between Lie groups and their Lie algebras. LetGL(n,R) := {g ∈ gl(n,R) | det(g) 6= 0} and G ⊆ GL(n,R) be a closedsubgroup. Recall

(1.5.1)

exp : gl(n,R)→ GL(n,R)

expX = eX :=∞∑n=0

1

n!Xn

Define

g :={X ∈ gl(n,R) | etX ∈ G for all t ∈ R

}.

Proposition. g is a Lie algebra.

A proof can be found in Godement’s notes on Lie groups and algebras,[Gd]. Here is a sketch. We need to show that if X,Y ∈ g, then λX,X +Y, [X,Y ] ∈ g as well. The first one is clear. For the second one, we computeetX · etY :

(1 +t

1X +

t2

2!X2 + t3 . . . )((1 +

t

1Y +

t2

2!Y 2 + t3 . . . ) =

1 + t(X + Y ) +t2

2(X2 + 2XY + Y 2) + t3 . . .

With a little work we can rewrite this relation as

(1.5.2) etXetY = et(X+Y )+O(tt).

(See Helgason for details in the case of an arbitrary Lie group). Then

[etnXe

tnY ]n = et(X+Y )+ 1

nO(t2)]

As n → ∞, the RHS tends to et(X+Y ) while the LHS is always in G. Be-cause G is assumed closed, the claim about X + Y follows. For the claimabout [X,Y ], one computes etXetY e−tXe−tY and applies a similar trick. Therelevant relation is

(1.5.3)(etnXe

tnY e−

tnXe−

tnY)n2

= et2[X,Y ]+O( 1

n).

Page 6: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

6

So to each closed subgroup one can attach a Lie algebra. Building onthese ideas one can show that closed subgroups of GL(n,R) are Lie groups,i.e. groups that have a differentiable structure so that multiplication and in-verse map are differentiable. There is a close connection between Lie groupsand their Lie algebras. For example, in order to study representations ofthe Lie group, one can study the corresponding Lie algebra representations.These are much easier to deal with because you are doing linear algebra.

Summary of properties of exp and log

eX =∑ Xn

n!.

(1) if X is a matrix, its norm is ||X|| =(∑|xij |2

)1/2or max|xij |. The

main point is that we need ||X + Y || ≤ ||X||+ ||Y ||, and||XY || ≤ ||X|| · ||Y ||.

(2) e0 = I.

(3)(X)∗

= eX∗.

(4) eX+Y = eX · eY if [X,Y ] = 0.

(5) CeXc−1 = eCXC−1.

(6) ||eX || ≤ e||X||.

logA =∑

(−1)m+1Am

mfor ||A− I|| < 1

(1) || log(I +B)−B|| ≤ c||B||2 for some c > 0 and any ||B|| < 1/2.

Some consequences are that all closed subgroups of GL(n,R) have a C∞

structure, in fact are analytic (Lie Groups). They are studied via theiralgebras. These are called linear groups.

Note: Not all Lie groups are of this form. The metaplectic group Mp(2,R)is

{(g, ε) : g ∈ SL(2,R), ε analytic function on the upper half plane, cz+d = ε2}

The multiplication rule is (g1, ε1)(g2, ε2) = (g1g2, ε1(g2 · z)ε1(z)). Here

g =

[a bc d

]g · z =

az + b

cz + d.

The fact that this is a group follows from the proerty of j(g, z) := cz + dsatisfying j(g1g2, z) = j(g1, g2 · z)j(g2, z).

Example. Recall that SO(3) is the group that preserves the form

〈v, w〉 :=∑

1≤i≤3

viwi,

in the sense that

〈π(g)v, π(g)w〉 = 〈v, w〉.

Page 7: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

7

It is a closed subgroup so the previous proposition applies.

g := {X : etX ∈ G for all t}(1.5.4)

〈etX · v, etXw〉 = 〈v, w〉 ∀ t ∈ R.(1.5.5)

Differentiate in t:

〈Xv,w〉+ 〈v,Xw〉 = 0

soX is skew hermitian. The converse is also true, i.e. ifX is skew symmetric,then eX is orthogonal.Example. Let U(n) ⊂ GL(n,C) be the unitary group, the subgroup ofmatrices that preserve the sesquilinear form

〈v, w〉 :=∑

1≤i≤nviwi,

i.e.

U(n) = {g ∈ GL(n,C) : 〈g · v, g · w〉 = 〈v, w〉}.

Its Lie algebra is the Lie algebra of skew hermitian matrices,

u(2) := { X ∈ gl(n,C) : X +X∗ = 0 }.

1.6. Invariants of group actions. Suppose (π, V ) is a representation of atopological group on a finite dimensional space. We would like to describethe orbits π(G)·v. One way to do this is to observe that G acts on R := P[V ],polynomial functions on V, by

g · f(v) := f(g−1v).

Suppose f ∈ R is such that g · f = f for all g ∈ G. Denote by RG the spaceof all such polynomials. Such f are constant on the orbits. We could hopethat the “levels” of RG actually completely describe the orbits.

Example. Suppose G = GL(n) is acting on V = gl(n) by

π(g)X := gXg−1.

Describing the orbits is the same as classifying matrices up to similarity. weknow that this is given by Jordan canonical forms. The ring of invariantsRG is generated by the polynomials pi(X) which occur in the expansion

det(tI −X) = tn − pn−1(X)tn−1 + · · ·+ (−1)np0(X).

Recall that pn−1(X) = trX and p0(X) = det(X). The “levels” of thesepolynomials classify semisimple matrices.

Page 8: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

8

Example. Let Vn be the space of homogeneous polynomials in two variablesx, y. These are called n-ary forms. Then SL(2) acts on Vn by

πn( [a bc d

] )xrys = (ax+ cy)r(bx+ dy)s.

One of the main problems of classical invariant theory is to describe RG.For n = 2 and 3 this is well known, but beyond that it is unknown. Oneof the reasons noetherian rings were invented is to prove that RG is finitelygenerated.

We show that the representation Vn is an irreducible representation ofSL(2) and that (πn, Vn) are all irreducible representations up to equivalence.We compute the representation of sl(2) attached to (πn, Vn). If X ∈ g, define

(1.6.1) π(X)v :=d

dt|t=0πn(etX)v.

We get(1.6.2)πn(e)xrys = sxr+1ys−1, πn(h)xrys = (r−s)xrys, πn(f)xrys = rxr−1ys+1.

A change of basis (actually just a rescaling) shows that this representationis equivalent to the one in section 1.11. Suppose that W ⊂ V is a G-invariant subspace. Then W is also g invariant, so either (0) or Vn. Thus Vnis irreducible as a G module. On the other hand, if V is a finite dimensionalirreducible representation of G, then a general theorem on Lie groups impliesthat the representation is C∞ so it differentiates to a representation of g.If it is not irreducible, then let (0) ⊂ W ⊂ V be a g invariant subspace.Because the exponential map is onto, W is a G invariant subspace so eitherequal to (0) or V. But then section 1.11 shows that V = Vn.

1.7. Spherical harmonics. Let ∆ = ∂21 + ∂2

2 + ∂23 . This acts on C∞

functions on R3 by differentiation. One problem that shows up a lot inanalysis is to solve

∆f = λf.

It is known that such solutions have to be analytic (called regularity the-orem). Let Cω be the vector space of analytic functions. Then the groupSO(3) acts

π : G→ Aut(Cω)(1.7.1)

(π(g)f)(x) := f(g−1 · x).(1.7.2)

You can check easily that π(g1g2) = π(g1)π(g2), π(11) = Id and so on.Furthermore,

∆(π(g)f)(x) = ∆(f)(g−1x).

Thus the space Cωλ := {f ∈ Cω | ∆f = λf} is invariant under all π(g), so itis a representation.

Problem: Describe the representation of G on Cωλ .

Page 9: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

9

Exercise 7. If X is skew hermitian, then eX is orthogonal. �

We can then define a representation of g on Cωλ :

π(X)f(x) :=d

dt

∣∣∣t=0

f(e−tX · x).

Now g is generated by three elements

e1 =

0 1 0−1 0 00 0 0

, e2 =

0 0 10 0 0−1 0 0

, e3 =

0 0 00 0 10 −1 0

.satisfying

[e1, e2] = −e3, [e2, e3] = −e1, [e3, e1] = −e2.

Their exponentials are

ete1 =

cos t sin t 0− sin t cos t 0

0 0 1

, ete2 =

cos t 0 sin t0 1 0

− sin t 0 cos t

,(1.7.3)

(1.7.4)

ete3 =

1 0 00 cos t sin t0 − sin t cos t

(1.7.5)

f

e−te1x1

x2

x3

= f(cos t · x1 − sin tx2,+ sin tx1 + cos tx2, x3)

So

e1 · f(x) = −x2∂1 + x1∂2 = (x1∂2 − x2∂1)(f)(1.7.6)

e2 · f(x) = (x1∂3 − x3∂1)(f)(1.7.7)

e3 · f(x) = (x2∂3 − x3∂2)(f).(1.7.8)

Definition. Let g be a Lie algebra over a field K, and let K ⊆ L be a fieldextension. Then we can define the extension

gL := g⊗K L.

If K = R ⊆ L = C we write gc and call it the complexification of g.

If {eα} is a basis for g, then gL has a basis {eα} with the same

[eα, eβ] =∑

ciαβei

but elements x =∑mαeα with mα ∈ ` instead of k.

Exercise 8. Show that so(3)C ' sl(2,C).√−1e1 ↔ h.

Page 10: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

10

1.8. We return to the example. We want to describe the action of ∆ =∂2

1 + ∂22 + ∂2

3 on C∞(Rn), analytic functions with complex values. We willanalyze

P := C[x1, x2, x3], polynomials with complex coefficients.

We know that ∆ commutes with xi∂j−xj∂i. Let Q(x) = x21 +x2

2 +x23. Then

(xi∂j − xj∂i)[Q · f ] = Q · (xi∂jf − xj∂if).

We can express this as

[xi∂j − xj∂i,mQ] = 0

where mQ is the operator given by multiplication by Q. Since so(3) com-mutes with ∆ and mQ, it also commutes with all the brackets of theseelements. We find

[∆,mQ] = 4(x1∂1 + x2∂2 + x3∂3 +

3

2

).

We find that1

2∆←→ f, −1

2mQ ←→ e, E +

3

2←→ h

forms an sl(2,C).

1.9. The six elements form a Lie algebra,

so(3)× sl(2) ⊆ End(P)

or sl(2)× sl(2) if we complexify so(3):

h↔ −2i(x1∂2 − x2∂1) = −2ie1(1.9.1)

e↔ e2 + ie3(1.9.2)

f ↔ e2 − ie3.(1.9.3)

1.10. Suppose g1 and g2 are Lie algebras. We can form a new algebra

g1 × g2 = {(x1, x2) : xi ∈ gi}

with operations

α1(x1, y1) + α2(x2, y2) = (α1x1 + α2x2, α1y1 + α2y2)(1.10.1)

[(x1, y1), (x2, y2)] = ([x1, x2], [y1, y2]).(1.10.2)

Question: What do representations of g1 × g2 look like?

For example if (πi, Vi) are representations of gi, then (π1 × π2, V1 � V2),

(π1 � π2)(x, y) = π1(x) � 11 + 11 � π2(y)

defines a new representation. A general theorem states that irreduciblerepresentations of a product are exterior tensor products of irreducible rep-resentations of the individual factors.

Page 11: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

11

1.11. Irreducible representations of sl(2). Say we are talking aboutfinite dimensional representations π, F ). Then π(h) has an eigenvector v =vλ 6= 0

π(h)v = λv.

Then π(e)v satisfies

π(h)π(e)v = π(e)π(h)v + π([h, e])v = (λ+ 2)π(e)v.

So since dimF <∞, there is v0 6= 0 such that

π(h)v0 = Λv0 π(e)v0 = 0.

Let W := span{π(f)kv0 := vk}.

Claim: W is g-invariant.

Proof. The following relations hold:

π(h)vk = π(h)π(f)kv0 =

(1.11.1)

= π(f)π(h)π(f)k−1v0 − 2π(f)kv0 = · · · = (Λ− 2k)π(f)kv0,

So vk is a (Λ− 2k) eigenvector for π(h). Then

π(e)vk = π(e)π(f)kv0 = π(f)π(e)π(f)k−1v0 + π(h)π(f)k−1v0

(1.11.2)

= π(f)π(e)π(f)k−1v0 + (Λ− 2k + 2)vk−1

(1.11.3)

= · · · (Λ + Λ− 2 + · · ·+ Λ− 2k + 2)vk−1 =(kΛ− 2 · k(k − 1)

2

)vk−1

(1.11.4)

= k(Λ− k + 1)vk−1.(1.11.5)

Since W 6= 0 it follows that W = V . Next recall that we want V to be finitedimensional. Let N be such that v−N 6= 0 but π(f)v−N = 0. Then

(1.11.6) (Λ−2N)v−N = π(h)v−N = −π(f)π(e)v−N = −N(Λ−N + 1)v−N

So we get the equation (N + 1)Λ = N(N + 1) and therefore Λ = N.The conclusion is that for each integer N ≥ 0 there is an irreducible

module

FN = {v−N , v−N+2, . . . , vN}.

In fact we get more. For each ν ∈ C we can construct a module

Vν = {vν−2n}

Page 12: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

12

with action

π(h)vν−2n := (ν − 2n)vν−2n(1.11.7)

π(f)vν−2n := vν−2n−2(1.11.8)

π(e)vν−2n := n(ν − n+ 1)vν−2n+2.(1.11.9)

If ν = N ∈ N, then this module has an irreducible quotient FN and asubmodule

0→ V−N−2 → VN → FN → 0.

The modules Vν are “characterized” by the existence of a vector vν satisfying

π(h)vν = νvν π(e)vν = 0.

This is called a highest weight vector. �

Remark: General irreducible representations of sl(2,C) are much morecomplicated.

1.12. Back to the problem of decomposing P. We look for eigenvectors of

−2i(x1∂2 − x2∂1),(x1∂1 + x2∂2 + x3∂3 +

3

2

)annihilated by

∆ and x1∂3 − x3∂1 + i(x2∂3 − x3∂2).

Note: ∆ acts locally nilpotently on P. This means that for any elementp ∈ P, there is n (depending on p) such that ∆np = 0. The second operatorin (a) has eigenvalue r + 3

2 on Cr[x1, x2, x3], homogeneous polynomials ofdegree r. The first operator in (a) acts by 2(−β + α) on

(x1 − ix2)α(x1 + ix2)βxγ3 .

It is better to use the basis

(x1 − ix2)α(x21 + x2

2 + x23)m (x1 − ix2)αx3(x2

1 + x22 + x2

3)mx3.

We find that the second operator in (b) annihilates

(x1 + ix2)α(x21 + x2

2 + x23)m

and ∆ annihilates these only when m = 0.

Exercise 9. Show the following.

(1) P decomposes

⊕Fα ⊗ Vα+ 32

(2) The spaces Hα generated by (x1 + ix2)α are the only solutions to∆f = Λf where Λ = 0. �

1.13. Some basic constructions of representations.

1.13.1. Direct sums. Let (π, V ), (ρ,W ) be representations of g. Then (π⊕ρ, V ⊕W ) given by (π ⊕ ρ)(x) = (π(x), ρ(x)).

Page 13: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

13

1.13.2. Exterior product. g1, g2 Lie algebras. We can form g1 × g2. If(πi, Vi) are representations, (π1�π2, V1�V2) is (π1�π2)(x1, x2) := π1(x1)⊗1 + 1 ⊗ π2(x2). If g1 ' g2 ' g, then g ↪→ g1 × g2 as the diagonal. Therestriction of π1 � π2 to g is called π1 ⊗ π2

(π1 ⊗ π2)(x) := π1(x)⊗ 1 + 1⊗ π2(x)

and is sometimes referred to as interior tensor product.

1.13.3. Adjoint representation.

ad : g→ End(g)(1.13.1)

adx(y) := [x, y].(1.13.2)

The Jacobi identity plays two roles:(a) ad is a Lie homomorphism, ad[x, y] = [adx, ady].(b) the image of ad is in Der(g) : adx([y, z]) = [adx(y), z] + [y, adx(z))].

Page 14: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

14

2. Universal Enveloping Algebra

References: Dixmier, Enveloping Algebras, Serre Lie Algebras

2.1. An associative algebra U(g) with unit together with a Lie algebra ho-momorphism ε : g → U(g) is called a universal enveloping algebra if forevery Lie algebra map

(2.1.1) φ : g→ Awhere A is an assocative algebra with unit, there is a unique extension

gφ−→ A

ε↘

ρ

↗U(g)

that makes the diagram commutative and satisfying ρ(1) = 1.

Lemma. If (ε,U(g)) exists, it is unique.

Proof. If there were two such objects (ε1,U1) and (ε2,U2), we would havemaps ρ1 : U1 −→ U2 and ρ2 : U2 −→ U1 which would make the diagramscorresponding to (2.1.1) commute. But then the uniqueness implies ρ1◦ρ2 =id, ρ2 ◦ ρ1 = id. �

2.2. Existence. Form

Tg := C⊕ T 1(g)⊕ T 2(g)⊕ · · · ,(recall T 0(g) = C and T 1(g) = g). This is the “universal associative algebra”;if

ϕ = g→ A

is any linear map, it extends uniquely to an algebra homomorphism

Φ : T (g)→ A

such that Φ(11) = 11. Let J be the two sided ideal generated by

x⊗ y − y ⊗ x− [x, y],

and define U(g) : T (g)/J . If ϕ is a Lie algebra homomorphism, then Φfactors to an algebra homomorphism

ρ : U(g)→ A, ρ(11) = 11.

Since U(g) is generated by g, ρ is unique.

Lemma. The canonical map from C to U(g) is an injection.

Remark: If g is the Lie algebra of a Lie group, then U(g) identifies withleft invariant differential operators.

Example: If g is abelian, then U(g) is the symmetric algebra S(g),

S(g) := T (g)/{x⊗ y − y ⊗ x}.

Page 15: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

15

Note that S(g) is graded because T (g)is graded, T (g) = ⊕Tn(g)

Tn(g) · Tm(g) ⊆ Tn+m(g)

and J is generated by homogeneous elements of order 2.In general this is not the case; U(g) only admits a filtration

Un(g) := image of T i(g) with i ≤ n.Then

(1) Un(g) · Um(g) ⊆ Un+m(g)(2) if a ∈ Un, b ∈ Um, [a, b] := ab− ba ∈ Un+m−1.

In such a case one defines the graded object,

Gr U(g) := ⊕ Un/Un−1.

Then Gr U(g) is a graded algebra and (2) implies it is commutative. Inaddition there is a well-defined map

Φ : U(g)→ Gr U(g) (also written x 7→ x)

This is not an algebra map. Now consider gab, the algebra with the sameunderlying space but trivial bracket. The composition map

gab → g→ U(g)→ Gr U(g)

is a Lie algebra map, so we get an algebra map

ψ : S(g)→ Gr U(g).

Theorem (Poincare-Birkhoff-Witt, PBW). ψ is an isomorphism. Actuallywe can define a map

ψ : S(g)→ U(g)

by the formula

x1 · · ·xn 7→1

n!

∑σ∈Sn

xσ(1) · · ·xσ(n).

The claim is equivalent to the fact that ψ is an isomorphism.

Exercise 10. (1) Check that ψ is well defined.

(2) Check that ψ = Φ ◦ ψ.

Proof. Theorem (2.2) can be rephrased as follows. Let {ei}i∈I be a basis forg. Choose a total ordering on I. If M = (i1, . . . , in) with ij ≤ ij+1 is ann-tuple, let

eM := ei1 · · · ein ∈ Un (eφ = 11).

Then theorem 2.2 is equivalent to

(2.2.1) {eM} is a k-basis for U(g).

We need to show that if∑cMeM = 0 then all cM = 0.

We define a representation of g on S(g) as follows. For M as before, letxM = ei1 · · · ein ∈ S(g)n. It is clear that these form a basis. Then defineπ(ei) by induction on |M |, the number of terms in M.

Page 16: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

16

(1) π(ei)xφ = x(i).(2) Suppose |M | = n− 1. If i ≤ i1, then set

π(ei)xM = x(i,i1,...,in−1).

If i > i1, then define

π(ei)xM = π(ei1) · (π(ei)x(i2···in)) + π([ei, ei1 ]) · x(i2···in).

It is “clear” that this is a Lie algebra homomorphism into End(S(g)).Thus we get a Lie homomorphism

π : U(g)→ End(S(g)).

If∑cMeM = 0, then

0 =∑

cMπ(eM )xφ =∑

cMxM ⇒ cM = 0.

2.3. Note that g acts on itself by ad :

ad : g→ End(g).

This extends to an action on T (g)

adx(x1 ⊗ · · · ⊗ xn) :=n∑i=1

x1 ⊗ · · · ⊗ adx(xi)⊗ · · · ⊗ xn.

Then the ideals generated by

I := {x⊗ y − y ⊗ x}, J := {x⊗ y − y ⊗ x− [x, y]}

are invariant subspaces. So we get actions

ad : g→ End(S(g))

g→ End(U(g)).

Exercise 11. Show that the second action coincides with

adx(u) := xu− ux �

Since the action preserves the Un, we also get

ad : g→ End(Gr U(g)).

Proposition. The maps

ψ : S(g)→ U(g), Φ : U(g)→ Gr U(g)

are intertwining maps, i.e. ,

ψ ◦ adx = adx ◦ ψ, Φ ◦ adx = adx ◦ Φ ∀x.

Proof. Clear. �

Page 17: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

17

2.4. Some consequences.

I: If g ' g1 × g2, then

U(g) ' U(g1)⊗ U(g2).

II: We can identify Gr U(g) with S(g). The map g→ U(g) is injective.III: Suppose h, k ⊆ g are subalgebras such that g = h+k. Let ` = h∩k.

There exists a unique linear map

ϕ : U(h)⊗U(`) U(k)→ U(g),

ϕ(v ⊗ w) = v · w.Here U(h) is a left, U(k) a right module for U(`).

IV: There exists a unique antiautomorphism T of U(g) such that xT =−x for x ∈ g (1T = 1).

V: If g is finite dimensional then U(g) is noetherian.VI: S(g)g ' U(g)g.VII: The algebra U(g) has a Hopf algebra structure.

Examples:

(a): g = sl(2). Any element in U(g) can be written uniquely as∑ci,j,kf

ihjek.

Let b be the subalgebra of upper triangular matrices, n the subal-gebra of strictly upper triangular matrices and n the subalgebra ofstrictly lower triangular matrices. Then

U(n)⊗C U(b)∼→ U(g).

(b): g = Hn, the Heisenberg algebra with basis p1 · · · pn, q1 · · · qn, z,and Lie bracket [pi, pj ] = [qi, qj ] = 0, [pi, qj ] = δijz. U(g) has a basis

{pAqBzC} where pA = pi11 · · · pinn and so on.

2.5. Proof of (V). Let A =⊕∞

n=0An be a graded noetherian ring (abelianor at least such that A0 is central).

Lemma. (1) A0 is noetherian.(2) A is a finitely generated algebra.

Proof. For (1), let A+ :=⊕∞

n=1An. Then A0 = A/A+, and the claim followsfrom standard ring theory. For (2), let x1, . . . , xs be homogeneous generators(for A as a left A-module) and di = deg(xi). Let B be the A0-subalgebragenerated by the xi. We claim that An ⊂ A by induction. Indeed, A0 ⊂ B.let y ∈ An. Then y =

∑yixi and deg(yi) = n− di < n. By induction yi ∈ B

so y ∈ B. �

Let now D be an associative ring with identity and an increasing filtrationDn of subspaces such that

(1) Dn = (0) for n < 0,(2)

⋃0≤nDn = D,

(3) 1 ∈ D0,

Page 18: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

18

(4) Dn ·Dm ⊂ Dn+m,(5) [Dn, Dm] ⊂ Dn+m−1,(6) Gr(D) is noetherian,(7) Gr1(D) generates Gr(D) as a D0-algebra.

Let A := Gr(D). By (4-6) it satisfies the conditions of the lemma. So D0 isa noetherian ring and we also get

Grn+1(D) = Gr1(D) ·Grn(D), Dn+1 = D1 ·Dn.

Proposition. D is a left and right noetherian ring.

Proof. Let I be a left ideal. We need to show it is finitely generated. It hasa filtration by In := I ∩Dn which satisfies

(2.5.1) Dn · Im ⊂ In+m, In = (0) for n < 0,∪In = I.

Then Gr(I) is an ideal in Gr(D), so it must be finitely generated as anA-module. This implies the following for the filtration In :

(1) In is finitely generated as a D0-module,(2) there is p > 0 such that Dn · Im = In+m for all n > 0 and all m > p.

These two properties imply that I is finitely generated. �

Remark: This setup is motivated by the theory of rings of differential op-erators. The universal enveloping algebra is such a ring, the left invariantoperators on a Lie group.

A filtration with these properties along with (2.5.1), is called a good filtra-tion.

2.6.

Lemma (Schur’s Lemma). Suppose (π, V ) is an irreducible representationof an algebra A (with 11), such that V has a countable basis. Then the centerof A acts by scalars.

Proof. Let a ∈ A and λ ∈ C. Then if a − λI has a nontrivial kernel, it isg-invariant so either equal to 0 or the whole space. If it is the whole space,done. So assume all a− λI are invertible. Let v 6= 0. Then

{(a− λ)−1v}λ∈Care linearly independent, a contradiction. Say

n∑i=1

ci(a− λi)−1v = 0.

Multiply by Π(a − λi). We find that there is a polynomial p(t) 6= 0 suchthat p(a)v = 0. It follows (since p(t) = Π(t− µj)) that some (a− µj)v = 0,a contradiction. �

Page 19: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

19

2.7. Note that g = sl(2) has trivial center but U(g) does not. First recallthat in general g acts on U(g) via

adx(u) :=∑

u1 · · · adx(ui) · · ·un = x · u− u · x.

So the center of U is the same as U(g)g, the elements centralized by theadjoint action of g. g also acts on S(g) by the same formula

x · (v1 · · · vn) :=∑i

v1 · · · adx(vi) · · · vn.

Exercise 12. (1) Prove property (VI) in section (2.4).(2) Show that S(g)g is a polynomial algebra generated by ef + 1

2h2.

(3) Show that U(g)g is generated by ef + fe+ h2 (use the map S(g)→U(g)). Check that this acts by a scalar on F (n). �

2.8. The Weyl algebra. Let Hn be the associative algebra generated by

{p1, . . . , pn q1, . . . , qn}

[pi, pj ] = [qi, qj ] = 0, [pi, qj ] = δijz.

Let

An ' U(Hn)/(z − 1).

Exercise 13. Show that An is isomorphic to the algebra of differential op-erators on Cn with polynomial coefficients.

Schur’s lemma implies that z must act by a scalalr on any irreduciblerepresentation of Hn.

Proposition. Every two sided ideal of An is either 0 or An. We say An issimple.

Corollary. If ϕ : An → An is an algebra homomorphism, then ϕ is injec-tive.

2.9. Relation to PDE’s. To a system of equations S := {Pif = 0} forsome Pi ∈ An we can associate the left moduleM := An/{

∑AnPi}. Then

HomAn [M,C[x1, . . . , xn]] ' Sol(S)

as vector spaces. Here

Sol(S) := {f ∈ C[x1, . . . , xn] : Pif = 0}.

Examples:

(1) Pi = xi. M = An/∑Anxi ' C[∂1 · · · ∂n]. The solution to this is

the delta function, not in Sol(S) which is 0. But still it is importantto consider such solutions.

(2) Pi = ∂i. M = An/∑An∂i ' C[x1 · · ·xn]. Sol(S) = C1.

Page 20: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

20

(3) The Fourier transform can be defined algebraically on An by theformulas

xi 7→ −√−1∂i

∂i 7→ +√−1xi.

It is an isomorphism of An which interchanges examples (1) and (2).

2.10. Jacobian Conjecture. If

F = (F1, . . . , Fn) : C[x1, . . . , xn]→ C[x1, . . . , xn]

is such that ∆F := det(∂Fi∂xj

)≡ 1, then F has a polynomial inverse. �

Such an F gives rise to a

ϕF : An → An.For (f1, . . . , fn), write

J(f1, . . . , fn) :=( ∂fi∂xj

).

Then the definition is in the next exercise.

Exercise 14. Show that the assignments

xi 7→ Fi

∂i : f 7→ det J(F1 · · · f · · ·Fn).

define a homomorphism ϕ : An → An.

Conjecture. Any nonzero homomorphism is onto. This conjecture impliesthe Jacobian conjecture ( cf. Coutinho, a primer of algebraic D-modules).

2.11. Fundamental Solutions of PDE with Constant Coefficients.Let p ∈ R[x1, . . . , xn] be such that p(x) ≥ 0 for all x. We can define adistribution

Tλ(f) :=

∫Rnp(x)λf(x)dx.

This is tempered for Reλ ≥ 0. Assume that Tλ extends meromorphically inλ in the complex plane. At λ = −1

Tλ(f) =∑n≥−N

(λ+ 1)nSn(f).

Then p · Tλ(f) := Tλ(pf) = Tλ+1(f) is well defined at λ = 0 and equals 1.Thus

p · Sn = 0 n < 0

pS0 = 1.

If p ∈ C[x1, . . . , xn], let q = p · p. Then q · S0 = 1 implies p · (pS0) = 1.Taking Fourier transforms

∂pS0 = δ0, p(x) = p(−√−1x)

Page 21: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

21

Say p(x) = x21 + · · ·+x2

n. Then one shows that Tλ extends meromorphicallywith poles at −1,−2 · · · . ( cf. Gelfand-Shilov, Generalized functions volume1).

Theorem (Bernstein). Let p ∈ C[x1, . . . , xn]. There is D(λ) ∈ An polyno-mial in λ and b(λ) ∈ C[λ] such that

D(λ)pλ+1 = b(λ)pλ.

Corollary. Tλ extends meromorphically with poles on finitely many arith-metic progressions {λj − k}k∈N.

Proof. The relation

Tλ(f) =

∫pλf =

1

b(λ)

∫pλ+1D(λ)∗f

follows from theorem (2.11). By iterating we can see that Tα can be ex-pressed in terms of pλ+n, which (as a distribution) is well defined and holo-morphic in λ for large enough n. The only poles can come from zeroes ofthe denominator which is a product of b(λ+ k). The claimn follows. �

2.12. Some Remarks on Exercise 14. Recall g ' sl(2,C). We arelooking for g–invariants in S(g). First observe that we can realize g as 2× 2matrices with trace 0.

Definition. [Dual Representation] If (π, V ) is a representation of g, wedefine (π∗, V ∗) where V ∗ is the (linear) dual of V and the action is

(π∗(x)f)(y) := −f(π(x)(y)).

It is easy to check that

π∗([a, b]) = [π∗(a), π∗(b)].

Lemma. If V ' g ' sl(2) and π = ad, then (π, V ) ' (π∗, V ∗).

Proof. The bilinear form(x, y) 7→ tr(xy)

is nondegenerate and satisfies

(ada · x, y) + (x, ada · y) = 0.

As a result, S(g) ' S(g∗) = P (g). Next observe that G = SL(2,C) :={2 × 2 matrices with determinant 1} also acts on g (and therefore on S(g)and P (g)) by

Ad : (g, x) 7→ gxg−1.

Then sl(2,C) is the Lie algebra of SL(2,C) and the action differentiates to

d

dt

∣∣∣t=0

etX · x · e−tX = Xx− xX = [X,x].

In other words, the differential of Ad is ad.

Page 22: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

22

Proposition. S(g)G = S(g)g. Clearly S(g)G ⊆ S(g)g.

To show the converse,

Lemma. exp : g→ G is a local diffeomorphism from an open set 0 ∈ U ⊆ gonto an open set 11 ∈ V ⊆ G.

Proof. Exercise. �

Hint: The differential of exp at 0 is I. use the inverse function theorem. �

Let

h :=

{[a 00 −a

]}⊂ g = sl(2).

Lemma. f ∈ P(g) 7→ f∣∣h

is an injection. If f ∈ P (g)G, then f∣∣h

is

symmetric.

Proof. A polynomial function is determined by its values on an open set. Usethe open set given by the diagonalizable matrices. Any matrix in this set canbe conjugated to one in h. An invariant polynomial is therefore determinedby its values on h. The fact that the image of the restriction map consistsof symmetric polynomials follows from the fact that the subgroup W ⊆ G

spanned by

[0 1−1 0

]leaves h invariant and acts by[

0 1−1 0

] [a 00 −a

] [0 −11 0

]=

[−a 00 a

]�

Proposition. The restriction map

Res : P(g)G −→ P(h)W

is an isomorphism.

Proof. The previous lemma shows that the map is injective and the imageis contained in the space of symmetric polynomials. This is a polynomialalgebra in the generator p(a) = a2. We need to see that this comes from anelement in P (g)G. Note that

P (X) := −det(X) = −det

[a bc −a

]= a2 + bc

is SL(2) invariant and restricts to p. �

2.13. Highest weight modules for SL(2). We use the notation in ex-ample (b) of section 2.4. Let λ ∈ C, and define a representation Cλ := CvΛ

of b byπ(e)vΛ = 0, π(h)vΛ = ΛvΛ.

Then we can form the module M(Λ) := U(g) ⊗U(b) CΛ. Then U(g) acts bymultiplication on the left. It is a highest weight module for g. This means

Page 23: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

23

that it is generated by an eigenvector for h which is annihilated by e. It isalso universal, in the sense that if any module (π, V ) of g has a vector vwhich transforms like CΛ, then there is a map

φ : M(Λ) −→ V, φ(vΛ) = v, φ(X · w) = X · φ(w)

This is clear, the map is φ(x⊗ vΛ) := π(x)v.We analyze the module M(Λ). The PBW theorem implies that a basis is

given byfn ⊗ vΛ.

The following relations hold in the universal enveloping algebra.

(2.13.1) hfn = fn(h− 2n), efn = fne+ nfn−1(h− n+ 1).

Thus in M(Λ),

(2.13.2) h ·fn⊗vΛ = (Λ−2n)fn⊗vΛ, e ·fn⊗vΛ = n(Λ−n+1)fn⊗vΛ.

We would like to know when M(Λ) is reducible. Let (0) ⊂ W ⊂ V be anontrivial submodule. Any vector w ∈W has a decomposition

(2.13.3) w =∑

wi, h · wi = λiwi.

We claim that wi ∈W as well. Indeed suppose not. Let w := w1+· · ·+wn ∈W, so that none of the wi ∈,W wi have distinct eigenvalues and the n isminimal with these properties. Then

(2.13.4) h ·∑

wi =∑

λiwi ∈W.

Assume as we may that λ1 6= 0. Then h·w−λ1w ∈W has the same propertiesas w but has fewer terms, a contradiction. Next we claim W must have ahighest weight different from vΛ. Indeed, e raises the eigenvalue of h by 2,all the eigenvalues of h on M(Λ) are of the form Λ − 2n with n ≥ 0, so ife has no kernel, it must eventually contain vΛ, But then W = M(Λ). ThusW contains a vΛ−2n which is annihilated by e. It follows that Λ = n− 1. Inthis case we get an exact sequence

(2.13.5) 0 −→M(−n− 2) −→M(n) −→ F (n) −→ 0,

and M(−n− 2) is irreducible.

Exercise 15. Show that the exact sequence in (2.13.5) does not split.

Page 24: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

24

3. Hopf Algebras

If A is an algebra, then A⊗A is also an algebra. the multiplication is

(a1 ⊗ a2) · (b1 ⊗ b2) := a1b1 ⊗ a2b2

Definition. An algebra A with an algebra homomorphism

∆ : A→ A⊗A

is called a coalgebra. It is called co–associative if the diagram

A⊗A 1⊗∆−−−−→ A⊗ (A⊗A)yid ycanA⊗A ∆×1−−−−→ (A⊗A)⊗A

commutes. The map is

can : a⊗ (b⊗ c)→ (a⊗ b)⊗ c.

It has a co-unit if htere is a map ε : A→ k making the diagram

A∆−−−−→ A⊗Ayid y1⊗ε

ACan−−−−→ A⊗ k

A∆−−−−→ A×Ay yε×1

ACan−−−−→ k ⊗A

commutative. The maps can are

can : a 7→ a⊗ 1 can : a 7→ 1⊗ a

Definition. An algebra A with unit which is at the same time a coassociativecoalgebra such that ε and ∆ are algebra maps is called a bialgebra. Anantipode S : A→ A is an algebra antihomomorphism satisfying

A∆−−−−→ A⊗A

i◦εy y1⊗S

Am←−−−− A⊗A

A∆−−−−→ A⊗Ay1◦ε

yS⊗1

Am←−−−− A⊗A

Here

ι : K→ A ι(x) := x11.

Definition. A bialgebra with an antipode (A,m, u,∆, ε, S) such that ∆ iscoassociative is called a Hopf algebra.

Page 25: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

25

Examples.

(I): The group algebra of a finite group G is

A = K[G] :=∑x∈G

cxδx cx ∈ C

where the multiplication is

δx · δy := δxy

and the comultiplication is

∆ : δx 7→ δx ⊗ δxforms a Hopf algebra. The antipode is S(δx) := S(δx−1 and ε : δx 7→1.

(II): The universal enveloping algebra A = U(g) with

ε : X 7→ 0, S(X) = −X, X ∈ g

(so S(X1 · · ·Xn) = (−1)nXn · · ·X1) and

∆(X) = 1⊗X +X ⊗ 1, X ∈ g

forms a Hopf algebra.(III): A = S(g), a special case of (II).(IV): Let A = P (V ) be the algebra of polynomial functions on a vector

space V. The multiplication, counit and antipode are

m(f ⊗ g) = f · g ε(f) = f(0) Sf(x) := f(−x).

Since P (V )⊗P (V ) ' P (V ⊕V ) the comultiplication can be writtenas

∆(f)(x1, x2) := f(x1 + x2)

LetP : A⊗A→ A⊗A P (a⊗ b) := b⊗ a.

A is called co-commutative if P ◦∆ = ∆.All our examples have this property.

Definition.

(1) An element a ∈ A is called primitive if ∆(a) = 1⊗ a+ a⊗ 1.(2) An element a ∈ A is called group like if ∆(a) = a⊗ a.

Lemma. Primitive elements P form a Lie algebra. Group like elements Gare closed under multiplication.

Proof. If a, b ∈ G,

∆(ab) = ∆(a) ·∆(b) = (a⊗ a) · (b⊗ b) = ab⊗ ab.If a, b ∈ P, then

∆(ab) = ∆(a)·∆(b) = (a⊗1+1⊗a)·(b⊗1+1⊗b) = ab⊗1+a⊗b+b⊗a+1⊗abso ∆(ab− ba) = (ab− ba)⊗ 1 + 1⊗ (ab− ba). �

Page 26: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

26

3.1. The original model for a Hopf algebra comes from cohomology. Let Xbe a topological space (compact manifold). The the cohomology H∗(X) :=⊕H i(X) is an algebra under cup product

H i ⊗Hj ∪−→ H i+j .

The Kunneth formula states that

Hk(X ×X) '∑i+j=k

H i(X)⊗Hj(X)

If there is a map µ : X ×X → X, we get a map

∆ : H∗(X)→ H∗(X ×X) ' H∗(X)⊗H∗(X).

If X has a (preferred) base point pt, we get a

ε : H∗(X)→ k ' H∗(pt).

To get a bialgebra we require that XiL−→ X × X µ−→ X, iL(x) = (pt, x),

iR(x) = (x, pt) induce identity on cohomology.

Definition. If A = ⊕n≥0An is a graded algebra, i.e. Am ·An ⊆ Am+n, then

for a graded bialgebra we require that

∆An ⊆ ⊕i+j=k

Ai ⊗Aj , ε|An = 0, n > 0, 11 ∈ A0.

We say that A is supercommutative if a · b = (−1)deg a·deg bb · a

Remark: The model for a supercommutative algebra is the exterior algebraΛ∗V = ⊕ΛiV , a · b := a ∧ b.

Theorem (Hopf). Any graded associative supercommutative algebra is anexterior algebra over generators of odd dimension.

Example: X = G a compact (connected) Lie group. µ : G × G → Gmultiplication. pt = e ∈ G is the base point. For example if G = SU(n), anontrivial result states that

H∗(G) ' Λ∗(x3, x5, . . . , x2n−1).

We consider the spacial case when the algebra is generated by a single ele-ment z ∈ An. Say A0 ' K as well. Then

∆z =∑

x0 ⊗ yn + · · ·+∑

xk ⊗ yn−k + · · ·+∑

xn ⊗ y0.

Now (ε⊗ id) ◦∆ = id and (id⊗ ε) ◦∆ = id. This implies

∆z = 1⊗ z +∑

0<k<n

xk ⊗ yn−k + z ⊗ 1

where each∑

0<k<n xk ⊗ yn−k represents a sum of elements in Ak ⊗ An−k.So any z is “almost” primitive.

Page 27: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

27

Suppose A is generated by a single element z, homogeneous of degree n.Then A` = 0 for 0 < ` < n. So

∆z = 1⊗ z + z ⊗ 1.

If n is odd, z2 = −z2 so z2 = 0. We get A = ΛCz and (∆z)2 = 0 aswell. If n is even, let P be the minimal polynomial of z. Then P must behomogeneous because z ∈ An. Thus zk = 0. Then

(3.1.1) 0 = ∆(zk) =∑(

k

i

)zi ⊗ zk−i.

If k > 0 this gives a contradiction. Because the {zi} with ` < k are linearlyindependent, every term in (3.1.1) must be zero. This contradicts the def-inition of k. It follows that A = K[z] with z of even degree n. This is notfinite dimensional.

Page 28: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

28

4. Nilpotent and solvable algebras

References:Humphreys, I.2 and I.3 , Jacobson, II.3, II.4, II.6 , Helgason III.1, III.2.

A subspace I ⊆ g is called an ideal if [g, I] ⊆ I.

Definition. The derived series is defined inductively by

(1) D1g = [g, g],(2) Di+1g = [Dig,Dig].

Lemma. Dig is an ideal.

Proof. We do an induction on i. D1g is an ideal by the Jacobi identity.Similarly let X ∈ Di, Y ∈ Di and Z ∈ g. Then

ad[X,Y ] = adX ◦ adY − adY ◦ adX

so[[X,Y ], Z] = [X, [Y, Z]]− [Y, [X,Z]]

Then apply the induction hypothesis. �

Definition. A Lie algebra is called solvable if Dng = 0 for some n > 0.

The lower central series is defined as

(1) C1g = [g, g],(2) Ci+1g = [g, Cig].

Definition. An algebra is called nilpotent if Cng = 0 for some n > 0.

Since Cng ⊇ Dng nilpotent implies solvable.

Proposition.

(1) Every subalgebra and every quotient of a solvable algebra is solvable.(2) If I ⊆ g is a solvable ideal such that g/I is solvable, then g is solvable.(3) The sum of two solvable ideals is solvable.

Proof. (1) If h ⊆ g then Dih ⊆ Dig. If 0 → I → gπ−→ h → 0, then

π−1Dih ⊆ Dig.(2) If I and h are solvable, π(Dnh) = 0 for some n. So Dng ⊆ I. But

then Dn+mg ⊆ DmI = 0 for large m.(3) follows from (1) and (2): 0 → I1 → I1 + I2 → I2/I1 ∩ I2 → 0,

I1 + I2/I1 ' I2/I1 ∩ I2. �

Examples: b =

∗ ∗. . .

0 ∗

, n =

0 ∗. . .

0 0

are solvable and nilpotent

respectively. sl(n) on the other hand is neither.

Proposition. The sum of two nilpotent ideals is nilpotent.

Page 29: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

29

Proof. Let X1 ∈ I1 and X2 ∈ I2. Then

(4.0.2) (adX1 + adX2)m(Y ) =∑

adXi1 ◦ · · · ◦ adXim(Y )

with ij = 1 or 2. If mi are such that CmiIi = (0), then (4.0.2) is containedin one of CmiIi so it is zero. Then apply Engel’s theorem which is provedlater. You can also prove directly that Cm(I1 + I2) = (0) but this is moretedious. �

Let S ⊂ g be a solvable ideal of maximal dimension. If I is any othersolvable ideal, S + I is solvable. Since S ⊆ S + I the maximality impliesS = S + I, or I ⊆ S. Thus S is unique; it is called the solvable radical.If S = (0), we say g is semisimple. If g has no proper nonzero ideals andDg 6= 0, we say g is simple.

Corollary. g contains a nilpotent ideal of maximal dimension N , called thenilradical.

Note that N ⊆ S.

Lemma.

(1) g/S is semisimple.(2) If g is simple, then g is semisimple.

Proof. (2) Let S ⊆ g be the radical. Either S = g or S = 0. If S = g, thenDS = Dg 6⊆ g so DS = 0. But then Dg = 0 which contradicts the definitionof simple. Thus S = (0), i.e. g is semisimple.

(1) Let I ⊆ g/S be solvable. Its inverse image contains S and is solvableby proposition 4. Thus the inverse image is equal to S, i.e. I = (0). �

Lemma. A finite dimensional Lie algebra g is solvable if and only if everynonzero ideal contains a subideal of codimension 1.

Proof. If g is solvable, Dg 6= g. Any linear subspace Dg ⊆ I ⊆ g is anideal. Since g is solvable, D1g 6= g, we can find an ideal I of codimension1. Conversely, since g contains an ideal of codimension 1, D1g 6= g. Byinduction Di+1 6= Di+1, and the claim follows from the fact that g is finitedimensional. �

Note: If I1 is a subideal of I, i.e. [I, I1] ⊆ I1, this doesn’t mean it is anideal of g.

4.1. Flags and Engel’s theorem.

Notes and talk prepared by Todd Kemp.

Proposition. If g/Z(g) is nilpotent, so is g.

Proof. There is an n such that gn ⊆ Z(g), since (g/Z(g))n = Z(g). Thusgn+1 = [ggn] ⊆ [gZ(g)] = 0. �

Lemma. If g is nilpotent, adx is nilpotent for any x ∈ g.

Page 30: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

30

Proof. Indeed, for n such that Cng = (0),

(4.1.1) [x1, [x2, [· · · [xny]] · · · ]]] = 0

for all xi, yi ∈ g. In other words, adx1adx2 · · · adxn(y) = 0, ∀ xi, yi ∈ g.Thus, letting xi = x ∀ i, (adx)n = 0, so adx a nilpotent endomorphism. �

We say x is ad-nilpotent if adx is a nilpotent endomorphism. Thus, if g isnilpotent, then all x ∈ g are ad-nilpotent. The converse is Engel’s theorem.

Theorem. If x ∈ gl(V ) is a nilpotent endomorphism, then adx is nilpotent.

Proof. Define Lx(y) = xy, Rx(y) = yx. Then if xn = 0, Lnx(y) = xny = 0,and Rnx(y) = yxn = 0. Thus, Lx, Rx are nilpotent endomorphisms. Sinceg`(V ) is associative, Lx and Rx commute.

Let a, b be nilpotent in a ring R, and ab = ba. Then

(4.1.2) (a+ b)n =∑(

n

m

)an−mbm

So suppose ak = 0, b` = 0. Choose n > ` + k. Then, in the term an−mbm,if n −m < k, then m > n − k ≥ `. So bm = 0. Otherwise n −m ≥ k, soan−m = 0. Thus (a+ b)n = 0.

So, Lx −Rx is nilpotent. But adx = Lx −Rx, so adx is nilpotent. �

Theorem. Let g be a subalgebra of g`(V ), V 6= 0 finite dimensional. Ifevery element of g is nilpotent, then ∃ v ∈ V \{0} with g · V = 0.

Proof. We do an induction on dim(g). In the case dim(g) = 1, this is just theclaim that a single nilpotent linear transformation always has an eigenvector(with eigenvalue 0). This is standard linear algebra.

So let dim(g) > 1. Let h ⊂ g be a subalgebra. By lemma (4.1), any adx isnilpotent, so acts on g as an algebra of nilpotent linear transformations. Inparticular h acts on g/h by nilpotent trasnformations. By induction, thereis x ∈ g not in h such that

(4.1.3) [x, h] ⊂ h.

This means that the normalizer of h is strictly larger than h. Choose amaximal proper subalgebra h. For such an algebra,

(4.1.4) Ng(h) = g,

So h is an ideal of g and therefore g/h is an algebra.We claim h is codimension 1. Indeed any subspace h ⊂ k ⊂ g is a subal-

gebra contradicting the maximality of h. Then we can find an element z /∈ gso that

(4.1.5) g = h + Kz.By the induction hypothesis, as dim h < dim g, there is 0 6= v ∈ V so thath · v = 0.; In other words, W = {v ∈ V ; h · v = 0} is nonzero. But h is anideal, so if x ∈ g, y ∈ h, [x, y] ∈ h. Thus

(4.1.6) (yx)w = xy · w − [xy]w = 0 · 0 = 0 ∀ w ∈W.

Page 31: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

31

So xW ⊆W . In particular z ∈ g \ h maps W →W . It is nilpotent, so thereis 0 6= v ∈W such that z · v = 0, The claim follows. �

Theorem (Engel). If all elements of g are ad-nilpotent, then g is nilpotent.

Proof. We do an induction on the dimension of g. The claim is true fordim g = 1. The assumptions imply that adg ⊆ g`(g) consists of nilpotentendomorphisms. So there is 0 6= x ∈ g satisfying ady(x) = 0 ∀ y ∈ g, i.e. ,[y, x] = 0 ∀ y ∈ g. Thus x ∈ Z(g) 6= 0 and so Z(g) 6= (0).

Thus g/Z(g) is clearly ad-nilpotent, and has smaller dimension. By theinduction hypothesis, g/Z(g) is nilpotent. So by proposition 4.1, g is nilpo-tent. �

Corollary. If g is nilpotent, then any nonzero ideal h has nontrivial inter-section with Z(g). In particular, Z(g) 6= 0.

Proof. g acts via ad on h, so by theorem 4.1, there is 0 6= x ∈ h killed by g,i.e. , [g, x] = 0, so x ∈ h ∩ Z(g). �

Definition. A flag in V is a chain (Vi) of subspaces

(4.1.7) 0 = V0 ⊆ V1 ⊆ · · · ⊆ Vn = V with dimVi = i.

We say x ∈ End(V ) stablizes (Vi) if x · Vi ⊆ Vi ∀i.

Corollary. If g ⊆ g`(V ) (V 6= 0 finite dimensional) consists of nilpotentendomorphisms, then there exists a g-stable flag (Vi).

Proof. By theorem 4.1 there is v 6= 0 in V satisfying g · v = 0. Set V1 = Fv.Let π : W → V/V1; the induced action of g on W is also by nilpotentendomorphisms. By induction on dimV , W has a flag stabilized by g,0 = W0 ⊆W1 ⊆ · · · ⊆Wn−1 = W . Then

(4.1.8) 0 = V0 ⊆ V1 = π−1(W0) ⊆ π−1(W1) ⊆ · · · ⊆ π−1(Wn−1)

is a flag stabilized by g. �

Theorem. Let (π, V ) be a representation of a solvable algebra g. Then Vhas a joint eigenvector for g.

Proof. We do an induction on the dimension of g. Let h ⊆ g be an ideal ofcodimension 1 (section 4), and let z be such that g = Cz + h. By inductionthere is λ : h→ C a linear functional, and 0 6= v ∈ V such that

π(x)v = λ(x)v ∀ x ∈ h.

Let Wλ be the sum of all h invariant subspaces of Vλ. It is nonzero becausev ∈Wλ and is again a representation of h.Claim: π(z) stabilizes Wλ.

First observe that π(z)Wλ ⊂ Vλ. Indeed in general if a, b ∈ End(V ), then

bm · a = abm +∑

k+`=m−1

bk · [b, a] · b`.

Page 32: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

32

Let b = π(y) − λ(y), a = π(z). Then [b, a] ∈ h so [b, a] leaves Wλ stable. Ifm is large enough either bk or b` annihilates any w ∈Wλ. Then

π(h)π(z)w = π(z)π(h)w + π([h, z])w ∈ π(z)Wλ +Wλ,

so Wλ + π(z)Wλ ⊂ Vλ is invariant under h. By the definition of Wλ, we getπ(z)Wλ ⊂Wλ.

Note that for any a, b ∈ End(V ), tr[b, a] = 0. If we let b = π(y), a = π(z)and V = Wλ, then tr π([y, z]) = 0 and also

tr([b, a]) == λ([y, z]) · dimWλ.

So λ([y, z]) = 0. If

W 0λ := {w | π(y)w = λ(y)w ∀y ∈ h},

then π(z)W 0λ ⊆W 0

λ , because

π(y)π(z)w = π(z)π(y)w + π([y, z])w = λ(y)π(z)w.

Let w0 6= 0 be an eigenvector for π(z) in W 0λ . This satisfies the claim of the

theorem. �

Corollary. Let (π, V ) be any finite dimensional representation of a solvablealgebra g. Then there is a basis of V such that all π(x) with x ∈ g are uppertriangular.

Proposition. If g is nilpotent,

V =∑λ∈g∗

whereVλ := {v ∈ V | (π(x)− λ(x))nv = 0 for some n}

Note that this fails for the algebra

g = {r(a, x) :=

[a x0 −a

]},

and the standard two-dimensional represenation. The generalized eigenvalues are

λ±(r(a, x)) = ±a,but there is no direct sum decomposition.

The statement applies to the adjoint action of g on itself. In this case we

(4.1.9) [gα, gβ] ⊂ gα+β.

More generally, suppose that y ∈ gα and v ∈ Vλ. We claim that π(x)v ∈Vλ+α. Indeed g⊗ V is a representation via π(x)[y ⊗ v] := ad(x)y ⊗ v + y ⊗π(x)v and the map m : g ⊗ V → V given by m(y ⊗ v) := π(y)v is a Liehomomorphism. Then

(4.1.10) (x− λ− α)n(y ⊗ v) =∑(

n

i

)(ad(x)− α)iy ⊗ (π(x)− λ)n−iv,

Page 33: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

33

so the claim follows. This says that Vλ is not necessarily a representation ofg.

4.2. Example. If 0 → h ⊆ g → k → 0 and h, k are nilpotent, it is notnecessarily true that g is nilpotent.

g =

(∗ ∗0 ∗

), h =

(0 ∗0 0

), k =

(∗ 00 ∗

).

4.3. Proof of proposition 4.1. We do an induction on dim g, and dimV.Suppose a, b ∈ End(W ), are such that there is n so that

(ada)n(b) = 0.

This implies that b preserves the generalized eigenspaces of b, because

(a− λ)mb =∑

k+l=m

(m

k

)ad(a− λ)k(b)(a− λ)l.

Now use the decomposition g = Cz+h, where h is an ideal of codimension1. Then

V =∑λ∈h∗

Vλ,

and it follows that z preserves the eigenspaces. Thus each Vλ is a represen-tation of g. By induction we can assume that V = Vλ, for a single λ ∈ h∗.Decompose V into generalized eigenspaces for z:

V =∑

Vρ.

By the same argument as before, with a = z and b ∈ h, we see that theVρ are also representations of g. By induction we may assume that V is ageneralized eigenspace for h corresponding to λ ∈ h∗, and for z correspondingto ρ. Let

µ(rz + x) := rρ+ λ(x).

We claim that V is the generalized eigenspace corresponding to µ. Indeed,let v ∈ V be such that π(rz + x) = µv, for all r ∈ R, x ∈ h. Then also,

π(x)v = λ(x)v, π(rz) = rρv.

In other words,µ(z) = ρ, µ

∣∣h

= λ.

The claim follows. �

Page 34: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

34

5. Cartan sublagebras

References:Humphreys II.4, II.5 , Jacobson III.1-III.6 , Helgason III.3.

5.1. Cartan subalgebras.

Definition. A subalgebra h ⊆ g is called a Cartan subalgebra (CSA) if

(1) h is nilpotent(2) h is its own normalizer.

If h ⊆ g is a subspace, let the normalizer of h be defined to be

N(h) := {X ∈ g : ad X(h) ⊆ h}.

Lemma. N(h) is a subalgebra.

Proof. ad[x, y](h) = adxady(h)− adyadx(h)). �

For α : h→ C, define

gα := {x ∈ g | (ad h− α(h))mx = 0 for some n, any h ∈ h}.Then there is a decomposition

g =∑α∈∆

gα.

An α such that gα 6= 0 is called a root.

Proposition. [gα, gβ] ⊆ gα+β.

Proof. Recall that D ∈ End(g) is called a derivation if D([x, y]) = [Dx, y] +[x,Dy]. For a derivation,

(5.1.1) (D − α− β)n[x, y] =∑k+`=n

(n

`

)[(D − α)kx, (D − β)`y].

If x and y are generalized eigenvectors with eigenvalues α, β, then for n largeenough, one of the terms in each bracket of the sum in (5.1.1) is zero. Thisimplies that [x, y] is in the generalized eigenspace of D for the eigenvalueα + β. Apply this to D = adh for h ∈ h to conclude the claim of theproposition. �

Corollary. g0 ⊇ h and any x ∈ gα, α 6= 0 is ad-nilpotent.

Proposition. A nilpotent subalgebra h ⊆ g is a CSA if and only if h = g0.

Proof. Since h is nilpotent, there is a decomposition

g = g0 +∑α 6=0

gα.

The normalizer N(h) is contained in g0. So if h = g0, then h = N(h). Onthe other hand, suppose h = N(h), and h ⊂ g0, but V := g0/h 6= (0). Then

Page 35: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

35

V is a representation of h where all elements act nilpotently. By Engel’stheorem, V has a nonzero joint eigenvector. Its inverse image in g0 is in thenormalizer of h, but not in h, a contradiction. �

We now show that every Lie algebra has a CSA. For an element h ∈ g,let

g0(h) : {x ∈ g | (ad h)mx = 0 for some m},and let

(5.1.2) p(t, h) := det(tI − adh) = tn + pn−1(h)tn−1 + · · ·+ p`(h)t`.

Note that p(0, h) = 0 because every adh has nontrivial kernel. So let ` > 0be the lowest integer so that p`(h) 6= 0 for some h. If p`(h) 6= 0, then in fact` = dim g0(h).

Definition. An element h ∈ g is called regular if dim g0(h) is minimal.Equivalently p`(h) 6= 0.

Proposition. Assume K is algebraically closed. If h0 is regular, then h =g0(h0) is a CSA. Conversely, every CSA is the generalized null space of anelement.

Proof. We need to show that if h ∈ h, then adh|h is nilpotent. Suppose not.Consider λh + µh0 for (λ, µ) ∈ K2 (K = C if you’re uncomfortable withother fields). There is a nonempty open set {(λ, µ) so that g0(λh+µh0) hasstrictly smaller dimension than h. Indeed, decompose g =

∑gα according

to generalized eigenvalues of h0. The subspaces gα are stabilized by λh+µh0.The set

{(λ, µ) : λh+ µh0 has no generalized eigenvalue 0 on any gα, α 6= 0}is Zariski open, so nonempty. For a pair in this set, g0(λh+ µh0) ⊆ g0(h0).If h is not nilpotent the generalized 0-eigenspace of a(λ, µ) = ad(λh+ µh0)is strictly smaller than h, which contradicts the fact that h0 was assumedregular. To prove this fact, consider the characteristic polynomial

p(t, λ, µ) = det[tId− a(λ, µ)|h].If adh|h is not nilpotent, then p(t, λ, 0) is only divisible by a power smaller

than tdim h. Thus there is a Zariski open set so that p(t, λ, µ) is not divisibleby tdim h.

The fact that g0(h0) is its own normalizer is easy.Conversely, let h be a CSA, and write

g = h⊕∑α∈∆

where ∆ are the nonzero roots. Since there are only finitely many, let h0 ∈ h,be such that α(h0) 6= 0 for any α ∈ ∆. Then g0(h0) = h. On the one hand,g0(h0) ⊂ h because the nonzero roots don’t vanish on h0. On the other hand,adh0 acts nilpotently on h because h is nilpotent. So h ⊂ g0(h0). �

Page 36: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

36

5.2. Remark. It is not clear that the element h0 is regular, as there maybe an element whose generalized eigenspace has strictly smaller dimensionwhich is not in h. A sketch of why this does not happen is at the end of thesection.

6. Cartan’s criteria

6.1. Cartan’s criterion for solvability.

Theorem. Suppose that g has a finite dimensional representation (π, V )such that

(1) kerπ is solvable.(2) tr(π(x)2) = 0 for all x ∈ D1g.

Then g is solvable.

Corollary. g solvable ⇔ tr((ad x)2) = 0 ∀ x ∈ D1g.

Proof. ad : g→ End(g), ker ad = Z(g) := {x ∈ g | [x, y] = 0 ∀y ∈ g}. �

6.2. Cartan’s criterion for semisimplicity.

Definition. For any Lie algebra g, define

By(x, y) := tr(ad x · ad y).

Theorem. g is semisimple ⇔ Bg is nondengerate.

Remark: Recall that g is semisimple means g has no nonzero solvableideal. Also Z(g) = 0.

Definition. Bg(x, y) := tr(adz ◦ ady) is called the Cartan-Killing form.

Proposition. B(x, y) = B(y, x), B(ad x(y), z) +B(y, ad x(z)) = 0.

6.3. Proof of Theorem 6.1. It is enough to show g 6= D1g, because theassumptions also hold for D1g, . . ..

Suppose not, i.e. g = D1g. Then g = h +∑α 6=0

gα and so

g = D1g =∑

[gλ, gµ].

But [gλ, gµ] ⊆ gλ+µ so

h = [h, h] +∑

[gα, g−α].

Recall that V =∑Vλ, where Vλ are generialzed eigenspaces for h. We will

show that π(g) is nilpotent by using Engel’s theorem. We start by showingthat π(h) is nilpotent. Since h is nilpotent, it is solvable and Lie’s theoremapplies. Thus it is enough to show that λ(h) = 0 for all h ∈ h. Sincetrπ(h)|Vλ = λ(h) dimVλ, we get that λ([h1, h2]) = 0. Let now eα ∈ gα ande−α ∈ g−α Write hα := [eα, e−α]. We compute tr(π(hα)). Let ρ be such that

Page 37: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

37

Vρ 6= (0). Let W :=∑i∈Z

Vρ+iα. Then eα, e−α and hα stabilize W . Therefore,

tr(π(hα)) = tr([eα, e−α]) = 0. On the other hand,

trW (π(hα)) =∑

(ρ+ iα)(hα) · dimVρ+iα,

so0 =

(∑dimVρ+iα

)ρ(hα) +

(∑idimVρ+iα)α(hα)

Solving for ρ(hα) we get

(6.3.1) ρ(hα) = rρ · α(hα), rρ ∈ QThen

(6.3.2) 0 = tr(π(hα)2) =( ∑λ(hα)6=0

r2λ · dimVλ

)α(hα)2.

If α(hα) = 0, then (6.3.1) shows that all λ(hα) = 0. If not, (6.3.2) shows thatr2λ dimVλ = 0. In either case, λ(hα) = 0 for all λ. It follows that the only

generalized eigenspace of h is V0. But then since π(eα) maps V0 to Vα andthe latter is (0), we get that π(eα) acts by zero. Thus π(g) is a nilpotentalgebra and therefore also solvable. Since kerπ is assumed solvable, g issolvable, contradicting g = D1g. �

6.4. Proof of Theorem 6.2. (⇒) Suppose g is semisimple. Let g⊥ be theradical of B, i.e.

g⊥ := {x ∈ g | B(x, y) = 0 ∀y ∈ g}.We need to show g⊥ = (0). First, g⊥ is an ideal because B(ad z(x), y) =−B(x, ad z(y)) = 0. Next, B(x, x) = tr((ad x)2) = 0, for x ∈ g⊥. Thus g⊥

is solvable by theorem 6.1, so must be zero.

(⇐) Suppose g is not semisimple. Let I ⊆ g be a nonzero abelian ideal.Such an ideal exists since there is a nontrivial solvable ideal S ⊆ g. Thenlet n be such that D(n)S 6= 0 and D(n+1)S = 0. Then take I to be DnS.D(n)S is an ideal since by induction ad x([y, z]) = [ad x(y), z] + [y, ad x(z)].Choose a basis of g so that the first k vectors are in I. Then for a ∈ g, andb ∈ I,

ada =

(∗ ∗0 ∗

)adb =

(0 ∗0 0

)We can then check that tr(adb · ada) = 0 So 0 6= I ⊆ radB.

6.5. Derivations.

Corollary. If g is semisimple then g = [g, g].

Exercise 16.

(1) Is the converse true?(2) Compute Bg for sl(n) and show that

tr(ad x · ad y) = ∗tr(x · y).

Page 38: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

38

6.6. Proof of corollary 6.5. Suppose h ⊆ g is any ideal. Then let

h⊥ := {x ∈ g | B(x, y) = 0 ∀ y ∈ h}.Now h ∩ h⊥ = (0). This is because h⊥ is an ideal, and tr[(adx)2] = 0. So byCartan’s criterion, it is solvable. Since g is semisimple, this ideal must be(0). h ∩ h⊥ = (0). Thus since dim g = dim h + dim h⊥ − dim(h ∩ h⊥) we get

g = h⊕ h⊥.

In particular, apply this to h = [g, g]. Then h⊥ ⊆ radB. Indeed,

B(x, [y, z]) = 0 ∀y, z ⇔ B([x, y], z) = 0 ∀y, zThis is equivalent to

[x, y] = 0 ∀y ∈ g⇔ x ∈ Z(g) = (0).

So g = [g, g]. �

Corollary. For any ideal h in a semisimple Lie algebra g, we have

g = h⊕ h⊥.

Definition. Let g be an (arbitrary) Lie algebra. Define

Der(g) := {D ∈ End g | D([x, y]) = [Dx, y] + [x,Dy]}.

Lemma. Der(g) forms a subalgebra. There is a natural map

ad : g→ Der(g)

with kernel Z(g). For a semisimple algebra, the map ad : g −→ Der(g) isan inclusion.

Note:

(1) ad g ⊆ Der(g) = D is an ideal because [D, ad x] = ad D(x). Indeed,

D · ad x− ad x ·D = ad D(x).

(2) BD(x, y) = Bg(x, y) for x, y ∈ g. (For any ideal I ⊆ g, BI(x, y) =Bg(x, y), x, y ∈ I.)

Proposition. If g is semisimple, g ∼= Der(g).

Proof. Let g⊥ be the orthogonal of g in D (with respect to BD). Then

g ∩ g⊥ = (0).

Indeed, if x ∈ g ∩ g⊥, 0 = BD(x, y) = Bg(x, y) ∀y ∈ g; so x = 0.. As beforedimD = dim g + dim g⊥ − dim(g ∩ g⊥), so

D = g + g⊥.

Furthermore if D ∈ g⊥, adD(x) = [D, ad x] ∈ g∩g⊥ = (0). But if ad D(x) =0, D(x) = 0 ∀x. So D = 0. Thus g⊥ = (0), and g = Derg. �

6.7. Cartan subalgebras of semisimple Lie algebras. This lecture wasgiven by E. Klebanov.

Page 39: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

39

6.8. Jordan decomposition. We recall the following basic results. LetV be a (not necessarily finite dimensional) vector space over a field K ofcharacteristic zero.

Definition. An element x ∈ End(V ) is called nilpotent if xn = 0 for somen. It is called locally nilpotent if for any v ∈ V, there is n such that xnv = 0.The element is called semisimple if any x-invariant subspace W, has an x-invariant complement.

If V = g is a Lie algebra, an element x ∈ g is called nilpotent (semisimpleif adx is nilpotent (semisimple).

If K is algebraically closed, semisimple is the same as diagonalizable.

Proposition. Let x ∈ End(V ), where V is a finite dimensional vector spaceover a field K. Then x = xs + xn uniquely, where xs is semisimple and xnis nilpotent and [xs, xn] = 0. Furthermore there are polynomials ps and pnso that xs = ps(x) and xn = pn(x). In particular if W ⊂ V is x-invariant,then it is xs and xn invariant as well.

For a proof, consult any standard linear algebra text or Humphreys, sec-tion 4.2.

6.9. We assume that the field is algebraically closed. This is not neces-sary, but makes the exposition simpler. Let V = g be an arbitrary finitedimensional Lie algebra and D ∈ Der(g).

Proposition. Let D = Ds+Dn be the Jordan decomposition. Then Ds, Dn

are derivations.

Proof. It is enough to show that Ds is a derivation, since Dn = D−Ds andderivations form a vector space. Let

(6.9.1) g =∑α

be the generalized weight decomposition. We know that [gα, gβ] ⊂ gα+β andDsv = γv for any v ∈ gγ . It is now easy to check that if v ∈ gα and w ∈ gβ,then

(6.9.2) Ds[v, w] = [Dsv, w] + [v,Dsw]

Thus Ds is a derivation. �

Corollary. If g is semisimple, then x = xs + xn so that xs is semisimple,xn is nilpotent and [xs, xn] = 0. The decomposition is unique.

Proof. The result follows from the previous proposition and the fact thatfor a semisimple algebra ad : g −→ Der(g) is an isomorphism. �

Page 40: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

40

6.10. The next theorem is the main result for CSA’a of finite dimensionalsemisimple Lie algebras.

Theorem. A subalgebra h ⊂ g is a CSA if and only if

(1) h is maximal abelian,(2) each h ∈ h is semisimple.

Proof. ⇐= . If an algebra h is abelian, it is certainly nilpotent. Let

(6.10.1) g = g0 +∑α 6=0

be the root decomposition corresponding to the adjoint action of h on g.Since h acts by generalized eigenvalue 0 on g0, and is formed of semisimpleelements, it actually commutes with all of g0. Let h0 ⊂ g0 be a CSA ofg0. Since it commutes with h and is its own normalizer, h ⊂ h0. Since[g0, gα] ⊂ gα, the normalizer of h0 in g is h0, so h0 is a CSA of g. If x ∈ g0,then xs, xn ∈ g0 as well because x normalizes h. Suppose there is a nilpotentelement x ∈ g0. Then tr(adx◦ady) = 0 for any y ∈ g (because this is true forany y ∈ gα as well as for y ∈ g0). Thus x = 0, so g0 is formed of semisimpleelements only. But then h0 is a nilpotent algebra formed of semisimpleelements only. Thus h0 is abelian, so by the maximality assumption of h, weconclude h = h0 is a CSA. Finally g0 also commutes with h, so is containedin its normalizer, i.e. h = g0.

=⇒ . If x ∈ h and h is a CSA, then xs, xn ∈ h as well because h isits own normalizer. If h contains any nilpotent element x then the aboveargument shows that tr(adx ◦ ady) = 0 for any y ∈ g, so x = 0. As before,since h is formed of semisimple elements only, it is abelian. Finally, anyabelian algebra containing h must be in N(h) so equals h. �

Page 41: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

41

6.11. Conjugacy classes of CSA’s.

References. For basic facts about the relations between Lie groups andtheir Lie algerbas consult F. Warner or Helgason. We are using the bareminimum for closed subgroups of GL(n,R). For this you can also consultthe course notes of Godement published by the University of Paris VII. Alsoconsult Jacobson IX.1 and IX.2.

Let g be an arbitrary Lie algebra over an algebraically closed field ofcharacteristic zero. Define the following groups:

Aut(g) := {g ∈ GL(g) : g([x, y]) = [gx, gy] ∀x, y ∈ g}(6.11.1)

Int(g) := the smallest closed subgroup of Aut(g) containing eadx ∀x ∈ g}(6.11.2)

Lemma. The Lie algebra of Aut(g) is Der(g). The Lie algebra of Int(g) isadg.

Proof. Suppose D ∈ gl(g) is such that etA ∈ Aut(g) for all t ∈ R. Then

0 =d

dt|t=0[etDx.etDy] = [Dx, y] + [x,Dy].

Thus D is a derivation. For the corresponding statement about Int(g), oneobserves first that since the exponential map

exp : gl(V ) −→ GL(V ), exp(X) := eX

has differential Id at 0, it is an isomorphism between a small open set con-taining 0 in adg and a small open set in Int(g) containing Id. The claimabout the Lie algebra follows. �

Theorem. Every CSA contains a regular element.

Proof. First note that the set of regular elements in g is open and dense,because it is the set where the polynomial p` of (5.1.2) does not vanish. Onthe other hand, there is a map

Ψ : Int(g)× h −→ g, Ψ(g,H) := g(H).

We show that the image of this map contains an open set. By the implicitfunction theorem, it is enough to show that the differential at a (Id,H0) isonto. The tangent space at this point is (adg, h). To compute DΨ(X,H) =DΨ(X, 0) +DΨ(0, H). For this we compute

d

dt|t=0Ψ(etadX , H0) = [X,H0],

d

dt|t=0Ψ(Id,H0 + tH) = H.

We conclude

(6.11.3) DΨ(X,H) = −adH0(X) +H.

Recall g = h+∑

α∈∆ gα and choose H0 so that none of the roots in ∆ vanishon it. The it is easy to see that the first part of (6.11.3) maps any gα onto

Page 42: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

42

itself and the second part shows that the image of DΨ contains h. The claimfollows. �

Exercise 17. Compute Aut(g) and Int(g) for the Heisenberg algebra Hn.

Page 43: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

43

7. Root structure of a semisimple algebra

7.1. Root space structure. Recall, g be a semisimple Lie algebra, h ⊆ ga CSA. The root decomposition is

(7.1.1) g = h +∑α∈∆

gα ∆ ⊆ h∗ the roots

and since h is ad-semisimple, the root spaces satisfy

(7.1.2) gα = {x ∈ g | [H,X] = α(H)X for all H ∈ h}

Theorem. (1) dim gα = 1 for all α ∈ ∆(2) If α, β ∈ ∆ are such that α+β 6= 0 then gα ⊥ gβ (with respect to B)(3) B|h is nondegenerate. For each α ∈ ∆ there is a unique Hα ∈ h such

that

B(H,Hα) = α(H).

(4) If α ∈ ∆ then −α ∈ ∆, [gα, g−α] = CHα and α(Hα) 6= 0.

Proof. (2) is easy. For (3), if H ∈ radB|h, note that

B(H, gα) = 0 for all α ∈ ∆

so H ∈ radB|g. Thus H = 0. For (1) and (4), note that

h =∑α∈∆

[gα, g−α].

Let Xα ∈ gα. X−α ∈ g−α. Then

(7.1.3) B([Xα, X−α], H) = B(Xα, [X−α, H]) = α(H)B(Xα, X−α).

Thus [Xα, X−α] = B(Xα, X−α)Hα. If B(Xα, X−α) = 1, then because of (3),[Xα, X−α] = Hα. Note from a previous proof that for any root β we have

β(Hα) = rβα(Hα) rβ ∈ Q

Then since

B(H1, H2) =∑γ∈∆

γ(H1)γ(H2)

we get

B(Hα, H) = α(Hα)∑β∈∆

rββ(H).

If α(Hα) = 0, B(Hα, H) = 0 for all H ∈ h, so Hα = 0. But B(Hα, H) =α(H) which is not identically zero. So α(Hα) 6= 0.

Note that{Xα,

2α(Hα)Hα, X−α

}forms an sl(2,C). Write

{Xα, Hα, X−α} = sl(2)α.

Let β ∈ ∆ and consider the space

(7.1.4)∑n∈Z

gβ+nα.

Page 44: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

44

This is an sl(2)α module. It follows that Hα has to act by integers on gβ+nα:

(7.1.5) (β + nα)(Hα) ∈ Z.

So we get2B(Hβ, Hα)

B(Hα, Hα)∈ Z. Now let q ≤ n ≤ p be the integers so that

gβ+(p+1)α = 0, gβ+(q−1)α = 0. Then

(7.1.6) (β + qα)(Hα) = −(β + pα)(Hα).

(7.1.7) (β, α) + 2q = −(β, α)− 2p.

So

(7.1.8) p+ q = −2B(Hα, Hβ)

B(Hα, Hα).

Suppose D−α ∈ g−α is linearly independent of X−α. We assume as we may,D−α 6= 0 and B(D−α, X+α) = 0. Then [D−α, X+α] = 0 by the reason givenright after(7.1.3) So D−α is a null vector for adXα and [Hα, D−α] = −2D−α.This cannot happen in an sl(2)–module. Thus dim gα = 1 for all α ∈ ∆. Inparticular

∑gβ+nα is an irreducible sl(2)α-module.

7.2.

Theorem. (1) The only roots proportional to α are ±α and 0.(2) [gα, gβ] = gα+β if α+ β ∈ ∆.

Proof. (1) Suppose r · α with r 6= 0 is a root. Then Hrα = rHα and we find

2B(Hα, Hα) · rB(Hα, Hα)

∈ Z,2B(Hα, Hα)r

r2B(Hα, Hα)∈ Z.

The only nonzero choices for r are ±12 , ±1, ±2. So suppose α is a root so

that 2α is a root as well. Let β = α and look at the string through 2α. Thensince ±3α is not a root, p = −1 and q = 1. Then

(7.2.1) 0 =2(α, 2α)

(2α, 2α= 1

a contradiction.

(2) follows from the fact that∑

gα+nβ is an irreducible sl(2)β module.�

7.3.

Theorem. Let hR :=∑α∈∆

RHα. Then

(1) B|hR is real positive definite,(2) h = hR + ihR.

Page 45: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

45

Proof. For (1), we compute α(Hα):

α(Hα) = B(Hα, Hα) =∑β∈∆

β(Hα) · β(Hα)

(7.3.1)

=∑β∈∆

1

2α(Hα)β(Hα) · 1

2α(Hα)β(Hα) =

α(Hα)2

4

∑β∈∆

β(Hα)2(7.3.2)

so α(Hα) is nonnegative, in fact since it is nonzero it is positive. (2) followseasily. �

7.4.

Theorem. Let h ⊆ g be a CSA. There is a basis Xα ∈ gα, α ∈ ∆ such that

[Xα, X−α] = Hα, [H,Xα] = α(H)Xα, [Xα, Xβ] = Nα,β

satisfying

Nα,β = 0 if α+ β /∈ ∆

Nα,β = −N−α,−β.Furthermore,

N2α,β =

q(1− p)2

α(Hα)

where β + nα, p ≤ n ≤ q is the α-string through β.

This is called Weyl’s normal form. A proof can be found in Helgason,Jacobson or Samelson, Lie algebras.

7.5. Real forms. A real vector space V is said to have a complex structureif there is an R-linear J : V −→ V such that J2 = −Id. Then V is a complexvector space with scalar multiplication

(a+ ib)v = av + bJv.

Conversely if E is a complex vector space, it is also a real vector space withcomplex structure via J = i Id.

Definition. A real algebra g is said to have a complex structure if there isa complex structure J such that adX ◦ J = J ◦ adX.

For a complex algebra, a real form is a subalgebra gR ⊂ g satisfying gR ∩igR = (0) and g = gR + igR.

Recall also that if g is a real Lie algebra, its complexification is definedas gc := g⊗RC with the obvious bracket structure. Then g is a real form ofgc.

Exercise 18. Show that a real Lie algebra g is semisimple, solvable, nilpo-tent if and only if gc is semisimple, solvable, nilpotent.

Page 46: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

46

7.6. Compact forms.

Definition. A real Lie algebra g is called compact if the Cartan-Killing formB is negative definite.

In such a case, the groups Aut(g) and Int(g) are compact because theyare closed subgroups of the (compact) unitary group U(B), linear transfor-mations that are unitary with respect to B.

Let g be complex semisimple and ∆ be the roots. A subset ∆+ is calleda positive system if

(1) α, β ∈ ∆+, then α+ β ∈ ∆+ or else is not a root.(2) ∆+ ∪ (−∆+) = ∆,

Such systems always exist; choose an H0 ∈ hR such that α(H0) 6= 0 for anyα ∈ ∆. Then

∆+ := {α : α(H0) > 0}.Conversely any positive system is given by this procedure, but this will beproved later.

Theorem. Every semisimple Lie algebra g has a a compact real form.

Proof. Choose a positive system ∆+ and consider the subspace

gk = ihR +∑α∈∆+

R(Xα −X−α) +∑α∈∆+

R(Xα + iX−α)

where the root vectors are in normal form as in 7.4. It is not hard to seethat B is negative definite when restricted to this subspace; the vectors Xα

have to satisfy B(Xα, X−α) = 1 since [Xα, X−α] = Hα. The fact that it is aa subalgebra also follows from theorem 7.4. �

Exercise 19. Show that g is nilpotent, solvable semisimple iff and only ifgR is.

Recall

(7.6.1) ad : g→ Der(g).

For g semisimple this is an isomorphism. Recall

(7.6.2) Aut(g) := {A ∈ GL(g) | A([x, y]) = [Ax,Ay]}.Int(g) := the closure of the subgroup of Aut(g) generated by eadx withx ∈ g. If g is semisimple, the Lie algebras of these groups are the same,Der(g) ' g.

Suppose now g is real and compact. Then

(7.6.3) B(Ax,Ay) = tr(adAx ◦ adAy) = tr(adx ◦ ady)

because

(7.6.4) ad(Ax)(g) = [Ax, y] = A([x,A−1y]] = A ◦ adx ◦A−1.

Since B is negative definite, Aut(g) and Int(g) are closed subgroups of theunitary group, therefore compact.

Page 47: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

47

Theorem. Suppose G is a compact Hausdorff topological group. Then Ghas a unique biinvariant (Borel) measure.

(7.6.5) f ∈ Cc(G) 7→∫Gf(x)dµ(x)

such that

(7.6.6)

∫Gf(gx)dµ(x) =

∫Gf(xg)dµ(x) =

∫Gf(x)dµ(x)

for all g ∈ G and f ∈ Cc(G).

Proposition. Suppose (π, V ) is a representation of a compact group G. Va complex vector space. Then V has a G–invariant inner product.

Proof. Let 〈 , 〉 be any inner product. Define

(7.6.7) (v, w) =

∫G〈π(x)v, π(x)w〉dµ(x).

Exercise 20. Complete the proof.

Corollary. Any finite dimensional representation of G is completely re-ducible.

Proof. Let 〈 , 〉 be a G–invariant inner product. If W ⊆ V is an invariantsubspace, then W⊥ is also G–invariant. �

Theorem. Let (π, V ) be a finite dimensional representation of a complexsemisimple Lie algebra. Then (π, V ) is completely reducible.

Proof. Let gR be a compact real form. A nontrivial result asserts that thereis a simply connected Lie group GR with Lie algebra gR. Standard propertiesof Lie groups imply that (π, V ) exponentiates to a representation of GR. LetW be a g–invariant subspace. It is GR–invariant, so it has a GR–invariantcomplement, W ′. Then W ′ is gR invariant. Since g = gR ⊗R C, and W ′ iscomplex, W ′ is g invariant. �

References: F. Warner, Hausner-Schwartz

Page 48: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

48

7.7. An algebraic proof. We would like to give an algebraic proof thatworks for other fields as well. Let us consider the case of sl(2) first. Recallthat finite dimensional irreducible modules F (n) are parametrized by n ∈ N.F (n) has the following properties:

(1) h acts semisimply, the eigenvalues are n, n − 2, . . . ,−n with multi-plicity 1.

(2) e and f act nilpotently.

Denote

(7.7.1) h := Kh, n := Ke, b := Kh+ Ke, n := Kh+ Kf, b := Kh+ Kf.Let λ ∈ K. Define the Verma module

(7.7.2) M(λ) := U(g)⊗U(b) Kλ

where Kλ is the 1–dimensional module of b

(7.7.3) π(h)11λ = λ11λ, π(e)11λ = 0.

Then M(λ) is a representation of g:

(7.7.4) π(x)(y ⊗ 11λ) := xy ⊗ 11λ.

Proposition. (1) M(λ) is semisimple as an h module. The eigenvaluesare λ− 2n, n ∈ N occurring with multiplicity 1.

(2) e acts locally nilpotently and f acts freely on M(λ).(3) M(λ) is irreducible except when λ ∈ N.

In this case, the Jordan–Holder series is

(7.7.5) 0→M(−λ− 2)→M(λ)→ F (λ)→ 0.

M(−λ − 2) is the largest proper submodule. F (λ) is the unique irreduciblequotient.

Proof. M(λ) has a basis {fn ⊗ 11λ}.(7.7.6) f : fn ⊗ 11λ → fn+1 ⊗ 11λ

(7.7.7) h · fn11λ = fn(h− 2n)⊗ 11λ = (λ− 2n)fn ⊗ 11λ

efn = ef · fn−1 = (h+ fe)fn−1

= (λ− 2n+ 2)fn−1 + (λ− 2n+ 4)fn−1 + · · ·+ [nλ− n(n− 1)]fn−1

= n(λ− n+ 1)fn−1.

Lemma. Suppose (π, V ) is a module generated by a vector v such that e·vλ =0, h · vλ = λvλ. Then there is a unique g–homomorphism

(7.7.8) M(λ)→ V → 0

that sends 11λ to vλ.

Page 49: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

49

Proof. The map fn ⊗ 11λ 7→ π(f)nvλ is well defined. �

Proposition. Assume h acts semisimply on V . Then V is completely re-ducible.

Proof. Write

(7.7.9) V = ⊕Ni=−NVi

(7.7.10) Homg(F (N), V ) ' Homg(M(N), V )

and has dimension m = dimVN . In fact there is a nontrivial map

(7.7.11) 0→ ⊕mF (N)→ V.

We want to show this is a direct summand.

(7.7.12) Homg(V, F (N)) ' Homg(F (N)∗, V ∗) ' Homg(F (N), V ∗).

But

(7.7.13) V ∗ ' ⊕Ni=−NV ∗iand

(7.7.14) dimVi = dimVi = dimV ∗i .

So there are as many quotients as there are submodules of type F (N). �

Even in this case it is not so easy to show that h always acts semisimply.Assume (π, V ) is a representation of a semisimple Lie algebra, and letW ⊆ Vbe an invariant subspace.

A complement W ′ is associated with a projection e ∈ End(V ) such thate2 = e, and V is the 1-eigenspace of e. Two such projections e, e′ have theproperty that a = e− e′ is zero on W and maps V to W . Let

(7.7.15) X := {a ∈ End(V ) | a(V ) ⊆W,a|W = 0}.

We note also that

(7.7.16) π(x)W ⊆W ∀x ∈ g⇔ π(x)e− eπ(x) ∈ X ∀x.

In fact in this case X is a representation of g,

(7.7.17) x · a := π(x)a− a · π(x).

Proposition. W has a g-invariant complement iff for any e, there is a ∈ Xsuch that

(7.7.18) π(x)e− eπ(x) = x · a.

Exercise 21. Show that e − a is the projection which gives the invariantcomplement.

Page 50: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

50

We can phrase this as follows. e defines a map

(7.7.19) f : g→ X .This map satisfies

(7.7.20) f([x, y]) = x · f(y)− y · f(x) (Jacobi identity).

We would like to show that for such a map, there is a ∈ X satisfying

(7.7.21) f(x) = x · a.This fits into a more general framework. Let (π, V ) be a representation ofg. Define

(7.7.22) Ci(g, V ) := Hom(Λig∗, V ).

Then there is a map

(7.7.23) d : Cn → Cn+1

given by

dω(x0, . . . , xn) =∑

(−1)iπ(xi) · ω(x0 · · · xi · · ·xn)(7.7.24)

−∑

(−1)i+jω([xi, xj ], x0 · · · xi · · · xj · · ·xn).

Exercise 22. Show that d2 = 0.

Define H i(g, V ) := ker dn/imdn−1. Then

(7.7.25) C0(g, V ) = Hom(C, V ) ' V, C1(g, V ) ' Hom(g, V ),

(7.7.26) ϕa ∈ C0, dϕa(x) = π(x)a,

(7.7.27) f ∈ C1, df(x, y) = −f([x, y]) + x · f(y) + y · f(x).

We are trying to show that H1(g, V ) = 0 for any V .Note also that

H0(g, V ) = V g = {v ∈ V | π(x)v = 0 ∀x}.

Page 51: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

51

8. Lie algebra cohomology

8.1. Define

(8.1.1) Cn(g, V ) := Λng′ ⊗ V ' HomC(Λng, V )

where (π, V ) is a representation of g. There is a differential

(8.1.2) d : Cn → Cn+1

given by

dω(x0 · · ·xn) =∑

(−1)iπ(xi) · ω(x0 · · · xi · · ·xn)(8.1.3)

+∑

(−1)i+jω([xi, xj ], x0 · · · xi · · · xj · · ·xn).

8.2. In general, suppose V is a (finite dimensional) vector space, V ′ its dual.We define

(8.2.1) Λ∗V := T (V)/{x⊗ y + y ⊗ x} x, y ∈ V.There is a pairing between Λ∗V and Λ∗V ′

(8.2.2) 〈f1 ∧ · · · ∧ fk, v1 ∧ · · · ∧ vk〉 := det[(fi, vj)].

Note that (Λ∗V)′ ' Λ∗V ′. ΛkV ′ is identified with alternating functions. Theexterior algebra has an algebra structure given by ∧. If ω ∈ ΛkV ′, ν ∈ Λ`V ′then define

(8.2.3)(ω ∧ ν)(x1, · · · , xk+`) :=

∑σ∈Sk+`

det(σ)1

k!`!ω(xσ(1), · · · , xσ(k))

· ω(xσ(k+1), · · · , xσ(k+`)).

We say α ∈ (End Λ∗V)i is of degree |α| = i if α(ΛjV) ⊆ Λj+iV. It is calleda derivation if in addition

(8.2.4) α(uv) = α(u)v + (−1)|u|·|α|uα(v).

If f ∈ V ′, there is a map

(8.2.5) ⊗kV → Λk−1V

(8.2.6) x1 ⊗ · · · ⊗ xk 7→∑

(−1)i+1〈f, xi〉x1 ∧ · · · xi · · ·xk.

This gives an ι(f) ∈ (End ΛV)−1,

(8.2.7) i(f)(x1 ∧ · · · ∧ xk) :=∑

(−1)i+1〈f, xi〉x1 ∧ · · · ∧ xi ∧ · · · ∧ xk.

Similarly, if u ∈ ΛV ′, let

(8.2.8) ε(u)ω := u ∧ ω.

Proposition. Given any linear map

(8.2.9) β : V → Λk+1Vthere is a unique derivation α ∈ (DerΛV)k such that α|V = β. (α |K= 0).

Page 52: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

52

Proof. Exercise. Compare with the definition of the differential in differen-tial geometry, e.g. in Helgason. �

Now suppose V = g is a Lie algebra. There is a linear map

(8.2.10) ∂ : Λ2g→ Λ1g ' g ∂(x ∧ y) := [x, y]

Then −∂t : g′ → Λ2g′ gives a derivation

(8.2.11) d ∈ (DerΛg)1.

We also define ∂ to be −dt.For each x ∈ g, θ∗(x) ∈ (DerΛg)0, is defined to be the adjoint action,

and θ(x) = −θ∗(x)t.

Proposition.

d =1

2

∑ε(fi)θ(xi)

∂ =1

2

∑θ∗(xi)i(fi)

where f1 · · · fn, x1 · · ·xn are dual bases;

Note: ∂ is not a derivation, but d is.

Proposition. d2 = 0.

Proof. Exercise. By the previous proposition you only need to check d2f = 0for f ∈ g′. �

8.3. Properties.

(1) d ◦ θ(x) = θ(x) ◦ d, ∂ ◦ θ∗(x) = θ∗(x) ◦ ∂(2) d ◦ ix + ix ◦ d = θ(x), ∂ ◦ εx + εx ◦ ∂ = θ∗(x)(3) θ(x) ◦ iy − iy ◦ θ(x) = i([x, y]), θ∗(x) ◦ εy − εy ◦ θ∗(x) = ε([x, y])

(4) d(u ∧ v) = du ∧ v + (−1)|u|u ∧ dv.

In particular recall H i(g) := ker di/ imdi. We get a cup product (wedge)

(8.3.1) H i(g)⊗Hj(g)∧→ H i+j(g).

8.4. If (π, V ) is a representation of g, we define

(8.4.1) d = d0 + dV , ∂ = ∂0 + ∂V

where d0 and ∂0 are as before, and

(8.4.2) dV =∑

ε(fi)⊗ π(xi) ∂V =∑

π(xi)⊗ ι(fi).

8.5. Now suppose g is semisimple (actually all the results hold for reductivealgebras). Then g and g′ are canonically identified via B. We can assumex1 · · ·xn, f1 · · · fn are all in g and θ∗(x) = ad x and so θ(x) = −ad x.

Page 53: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

53

8.6. The Casimir element. A semisimple Lie algebra has a distinguishedinvariant in U(g) corresponding to an invariant in S2(g). This is the analogof the Laplace operator in Riemannian geometry.

Suppose b is a symmetric nondegenerate form on g which is invariant,i.e. it satisfies

(8.6.1) b(adx y, z) + b(y, adx z) = 0.

Let x1, . . . , xn be a basis and f1, . . . , fn be the dual basis with respect to b.

Proposition. x1f1 + · · ·+ xnfn ∈ S2(g) and C = x1f1 + · · ·+ xnfn ∈ U(g)are invariant.

Proof.

(8.6.2) [x, x1f1 + · · ·+ xnfn] =∑

[x, xi]fi +∑

xj [x, fj ].

But

[x, xi] =∑

b([x, xi], fj)xj =∑

b(x, [xi, fj ])xj(8.6.3)

[x, fj ] =∑

b([x, fj ], xi)fi =∑

b(x, [fj , xi])fi

and we get

(8.6.4)∑i,j

b(x, [xi, fj ])xjfi + b(x, [fj , xj ])xjfi = 0.

Recall that if g is semisimple,

(8.6.5) g = a1 ⊕ · · · ⊕ ak

where ai are simple ideals.

Proposition. If g is simple, there is a unique symmetric bilinear form b( , )(up to a scalar) satisfying

(8.6.6) b(ad x · y, z) + b(y, ad x · z) = 0.

Proof. Recall that g ' g′ even as representations. Then

(8.6.7) Homg[C, g∗ ⊗ g] = Homg[g, g].

Because g is simple, ad is an irreducible representation. Schur’s lemmaimplies that the dimension of Homg[g, g] equals 1. Now

(8.6.8) g∗ ⊗ g ≈ g⊗ g = S2(g)⊕ Λ2g.

But S2(g) identifies with symmetric bilinear forms and the Cartan-Killingform is invariant. �

In general, there are as many invariants as there are simple factors in g.

Page 54: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

54

Lemma. Let (π, V ) be an irreducible representation of g such that kerπ =(0). Then

(8.6.9) π(C) =dim g

dimVId.

Proof. π(C) is a scalar by Schur’s lemma. So π(C) = λId. Then

(8.6.10) tr π(C) = λ · dimV

On the other hand consider b(x, y) := tr(π(x)π(y)). This is a nondegenerateform on g. We can view g ↪→ End(V ) and consider a := rad b. It is an idealin g that satisfies

(8.6.11) b(x, x) = tr(π(x)2) = 0 ∀ x ∈ a.

By Cartan’s criterion, a is solvable, therefore 0. Choose a basis x1, . . . , xnand let f1, . . . fn be the dual basis. Then

(8.6.12) trπ(C) =∑

trπ(xi)π(fi) =∑

b(xi, fi) = dim g.

The claim follows. �

Proposition. (Kuga’s lemma)

(8.6.13) d∂ + ∂d = 1⊗ π(C).

Every general representation (π, V ) decomposes

(8.6.14) V = V0 ⊕ V+

where π(C) acts nilpotently on V0 and is an isomorphism on V+.

Corollary. H i(g, V ) = H i(g, V0). The irreducible constituents of V0 areall isomorphic to the trivial representation. Furthermore, H1(g, V0) = 0 aswell.

Proof. Kuga’s lemma proves the first part; note that 1 ⊗ π(C) commuteswith d, ∂ and θ.

If f : g → K, such that f([x, y]) = 0 for all x, y ∈ g is identically zerobecause g = [g, g]. Thus H1(g,K) = (0).

To see that g acts trivially on all constituents of V0, suppose W is an irre-ducible submodule. Then π(C) acts by zero because its kernel is g invariant.Because g is semisimple, it is a direct sum of simple algebras. Let a be anysimple subalgebra of g. Then if π |a is nonzero, it has to be injective. Theabove lemma says that π(C) is nonzero, a contradiction.

If 0→ X → Y → Z → 0 is an exact sequence, there is a long exact

0→ H0(X )→ H0(Y)→ H0(Z)→ H1(X )→ H1(Y)(8.6.15)

→ H1(Z)→ · · · .If H1(X ) = H1(Z) = 0, then H1(Y) = 0. So if dimV0 > 1, do an induction;take

(8.6.16) 0→ V ′ → V0 → V ′′ → 0

Page 55: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

55

so that dimV ′, dimV ′′ < dimV0. �

Corollary. H i(g, V ) ' H i(g,K)⊗ V g.

8.7. Lie algebra cohomology with trivial coefficients. We considerthe case of a general Lie algebra first. The formulas simplify to

(8.7.1) d =1

2

∑ε(fi)θ(xi) ∂ =

1

2

∑θ∗(xi)ι(fi).

If π : h→ g, is a Lie homomorphism, then it induces π∗ : Λh→ Λg so we getπ∗ : H(g) → H(h). Since Λg′ is paired with Λg and d and ∂ are transposeof each other,

(8.7.2) H∗(g) ' H∗(g)′.

There is a distinguished element f0 ∈ g′

(8.7.3) f0(x) := tr ad x.

Definition. g is unimodular if tr ad x = 0 for all x ∈ g.

Proposition. Hn(g) = K if and only if g is unimodular.

Proof.

θ(x)(f1 ∧ · · · ∧ fn) = (tr ad∗ x)(f1 ∧ · · · ∧ fn),

θ∗(x)(x1 ∧ · · · ∧ xn) = (tr ad x)(x1 ∧ · · · ∧ xn).

If g is not unimodular, there is x ∈ g such that tr ad x = c 6= 0. Then

(8.7.4) (dι(x) + ι(x)d)(f1 ∧ · · · ∧ fn) = cf1 ∧ · · · ∧ fn 6= 0.

(8.7.5)1

cd[ι(x)(f1 ∧ · · · ∧ fn)] = f1 ∧ · · · ∧ fn

so Hn(g) = 0.Conversely, if f1 ∧ · · · ∧ fn = dω, then ∂(x1 ∧ · · · ∧ xn) 6= 0 because

(8.7.6) 1 = 〈f1 ∧ · · · ∧ fn, x1 ∧ · · · ∧ xn〉 = 〈ω, ∂(x1 ∧ · · · ∧ xn)〉.Suppose θ(x)f1 ∧ · · · ∧ fn = 0 for all x ∈ g. Then dι(x)f1 ∧ · · · ∧ fn = 0so ε(x)∂(x1 ∧ · · · ∧ xn) = 0 for all x. This implies ∂(x1 ∧ · · · ∧ xn) = 0 acontradiction. Thus there is x such that

(8.7.7) θ(x)f1 ∧ · · · ∧ fn = tr ad x(f1 ∧ · · · ∧ fn) 6= 0.

Corollary. g is unimodular if and only if ∂(x1 ∧ · · · ∧ xn) = 0.

Now let u ∈ Λkg′, v ∈ Λn−kg′ be in ker d. Set

(8.7.8) 〈[u], [v]〉 := 〈u ∧ v, ω〉.

Proposition. 〈 , 〉 is nonsingular if g is unimodular. In this case, Hn−k(g)is dual to Hk(g).

Page 56: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

56

Proof. Fix 0 6= ω ∈ Λng. Define a map

(8.7.9) Ψ : Λ∗g′ → Λ∗g

ψ(u) := ι(u)ω. ψ is a bijection.

Claim: If g is unimodular, then ψ carries coboundaries to boundaries andcocycles to cycles. Indeed,

(8.7.10) dε(u)− (1)|u|ε(u)d = ε(du)

(8.7.11) (−1)|u|∂ι(u)− ι(u)∂ = ι(du).

If g is unimodular, evaluate on ω to get (−1)|u|∂ι(u)ω = ι(du)ω for allu. So du = 0 implies ∂(ι(u)ω) = 0. It also says that coboundaries go toboundaries.

We claim ∂(ι(u)ω) = 0 if and only if du = 0. Let v be arbitrary.∂(ι(u)ω) = ±(du). Then

(8.7.12) 〈du ∧ v, ω〉 = 〈v, ι(du)ω〉 = 0.

But also

(8.7.13) 0 = 〈du ∧ v, ω〉 = ±〈du, ι(v)ω〉 for all v ∈ Λn−kg′.

Since v 7→ ι(v)ω is an isomorphism, the claim follows. �

8.8. Now use complete reducibility. Assume g is semisimple. Then g actson Λg via the adjoint action and

(8.8.1) d ◦ θ(x) = θ(x) ◦ d.Thus

(8.8.2) Λkg = (Λkg)g ⊕ g · Λkg = J (g)⊕ gΛ·g.

Note that there is a map J (g)→ H ·(g) because of the formula (8.7.1).

Lemma. If x ∈ g and ω ∈ ker d, then θ(x)ω ∈ Im d.

Proof.

(8.8.3) θ(x)ω = (dι(x) + ι(x)d)ω = d(ι(x)ω).

Theorem. If g is semisimple, then the map

(8.8.4) J (g)→ H∗(g)

is a ring isomorphism.

Proof. The map is surjective because ker d is invariant under the action ofg and so also decomposes into (ker d)g ⊕ g · kerd. Then apply the previouslemma and observe that J (g) ∩ ker d = (ker d)g. Remains to show thatJ (g) ∩ Im d = (0). Write

(8.8.5) d =1

2

∑ε(fi)θ(xi) =

1

2

∑θ(xi)ε(fi)−

1

2

∑[θ(xi), ε(fi)].

Page 57: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

57

But

(8.8.6) [θ(xi), ε(fi)] = ε(θ(xi)fi).

Write

(8.8.7) f0 :=∑

θ(xi)fi.

Then

(8.8.8) 〈f0, y〉 = −∑〈fi, [xi, y]〉 = tr ad y = 0

for all y. Thus the second term is zero and the first term shows that Im d ⊆gΛ·g. �

Exercise 23. (1) Compute the cohomology of sl(2).(2) Write bi = dimH i(g). Show that b1 = b2 = 0 but b3 6= 0.(3) Show that H(g⊕ h) = H(g)⊗H(h).

In general, (g still semisimple)

(8.8.9) Λg = Im ∂ ⊕ J(g)⊕ Im d.

If ∆ = d∂ + ∂d, then ker ∆ = J(g).

Exercise 24. Read the section on bi-invariant forms in Helgason’s neweredition of the text on Lie groups, differential geometry and symmetric spacesas well as the section on Hodge’s theorem in F. Warner.

8.9. Abstract homology theory. In an abelian category the A the ho-mology of a covariant functor F is defined as follows. Let V be an object inA. Take a resolution formed of projective objects

(8.9.1) Pn → Pn−1 → · · · → P0 → V → 0,

truncate and apply F :

(8.9.2) → F (Pn)→ · · · → F (P0)→ 0

Then RF∗(V ) is the homology of the above complex. We assume that thefunctor F is right exact. This has the effect that RF0(V ) = F (V ). Forcohomology assume that the functor is contravariant and right exact.

For Lie algebra cohomology we consider the functor

(8.9.3) V 7→ V g.

For Lie algebra homology consider the functor

(8.9.4) V 7→ V/(gV ).

We can construct a resolution by free objects as follows.If (π, V ) is a representation, let

(8.9.5) P(V ) := U(g)⊗C V

This is free as a module for the left action of g. There is a g-equivariantmap

(8.9.6) P(V )→ V → 0

Page 58: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

58

(8.9.7) u⊗ v 7→ π(u)v.

Dually one can take

(8.9.8) I(V ) := HomC(U(g), V ).

Then

(8.9.9) 0→ V → I(V )

via

(8.9.10) v 7→ fv(u) := π(u)v.

8.10. Koszul Complex. One of the ways to “compute” or identify theabstract theory with the specific theories is to construct an acyclic resolutionof C.

(8.10.1) Kn := U(g)⊗ Λng, κ(x)(u⊗ ω) = xu⊗ ω + u⊗ adx ω.

The differential is

(8.10.2) ∂(u⊗ x1 ∧ · · · ∧ xk) =∑

(−1)i+1uxi ⊗ x1 ∧ · · · ∧ xi ∧ · · · ∧ xk.

This induces a map

(8.10.3)

Homg(U(g)⊗ Λng, V )d−−−−→ Homg(U(g)⊗ Λn+1g, V )∥∥∥ ∥∥∥

HomC(Λng, V )d−−−−→ HomC(An+1g, V ).

(df)(1⊗ x0 ∧ · · · ∧ xn) = f(d(1⊗ x0 ∧ · · · ∧ xn))

= f(∑

(−1)ixi ⊗ x0 ∧ · · · ∧ xi ∧ · · · ∧ xn)

=∑

(−1)iπ(xi)f(x0 ∧ · · · ∧ xi ∧ · · · ∧ xn)

+∑

(−1)if(ad xi(x0 ∧ · · · ∧ xi ∧ · · · ∧ xn)).

To see that d is exact, recall the filtration on U(g). We get

(8.10.4) ∂ : U(g)k ⊗ Λng→ U(g)k+1 ⊗ Λn−1g,

which induces a map

(8.10.5) ∂ : S(g)⊗ Λng→ S(g)⊗ Λn−1g.

Replace S(g) by P (g). It is enough to show ∂ is exact.

(8.10.6) ∂ : K[x1 · · ·xn]⊗Λn(x1, . . . , xn)→ K[x1, . . . , xn]⊗Λn−1(x1 · · ·xn)

(8.10.7) ∂(p⊗ ω) =n∑i=1

xip⊗ ι(xi)ω.

Page 59: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

59

Then define

(8.10.8) d(p⊗ ω) =

n∑i=1

∂ip⊗ ε(xi)ω.

Exercise 25. Compute d ∂+∂ d and show it is an isomorphism commutingwith d and ∂.

We conclude that (Kn, ∂n) is an acyclic resolution and therefore the ab-stract definition coincides with the specific one.

8.11. Levi’s Theorem.

Theorem. Let g be an arbitrary Lie algebra and a an ideal such that thequotient s is semisimple. Then a has a complement which is a Lie algebra.

We can rephrase this as follows. Suppose

(8.11.1) 0 −→ a −→ gπ−→ s −→ 0

is an exact sequence of Lie algebras such that s is semisimple. Then thereis a Lie algebra homomorphism ε : s −→ g such that π ◦ ε = Id.

Proof. Suppose a has a nontrivial proper ideal a1. Then there is an exactsequence

(8.11.2) 0 −→ a/a1 −→ g/a1π−→ s −→ 0

By induction on the dimension of a, we can assume that a/a1 has a comple-ment s1. Then there is an exact sequence

(8.11.3) 0 −→ a1 −→ g1π−→ s −→ 0

where g1 is the inverse image of s1. By induction on the dimension of g,there is a Lie algebra complement V to a1 in g1. Then a dimension countshows that this is a complement to a in g as well.

Thus we may assume that a has no proper ideals. Let r be the solvableradical. Then its image under π is a solvable ideal in s, therefore 0. In otherwords r ⊂ a. If r = (0), the whole algebra g is semisimple and we’re done.If r = a, then a is solvable, so [a, a] is a proper ideal therefore zero. Thus ais abelian. Then the adjoint action of g factors through a and a is a modulefor s. The fact that a has no proper ideals means that this module is simple.If this module were trivial, then a would be contained in the center of g.Then g/a = s acts on all of g via the adjoint action. Since s is semisimple,the representation is completely reducible and the claim again follows.

Thus we may assume we are in the case when a is abelian and an irre-ducible nontrivial module for s. Let V be any vector space complement ofa, and write σ : s −→ V for an isomorphism. Then consider the map

(8.11.4) Ψ(x1, x2) = [σ(x1), σ(x2)]− σ([x1, x2]).

Composing with π we get zero, i.e. Ψ : s × s −→ a. Clearly this mapis alternating, and a straightforward calculation using the Jacobi identity

Page 60: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

60

shows that dΨ = 0. It is also easy to see that V is a subalgebra if and onlyif Ψ = 0. Two complementry spaces V, V ′ differ by a map

(8.11.5) α := σ − σ′ : s −→ a.

To say that σ′ = σ − α has Ψ′ = 0 comes down to the equation Ψ = dα.Thus s has a complement if and only H2(s, V ) = 0. But we know that thesegroups are zero for all i if V is nontrivial. �

Another proof appears in Bourbaki and Serre’s book. This is the subjectof Yan Zhang’s talk.

8.12. Semidirect products. Suppose (π, a) is a representation of a Liealgebra s onto another Lie algebra a. Then we can form a new Lie algebrag := sn a as follows. The space of g is s× a. The bracket is

(8.12.1) [(x1, a1) , (x2, a2)] = ([x1, x2] , [a1, a2]− π(x2)(a1)).

This algebra is called the semidirect product of s and a.

Corollary. Every Lie algebra is the semidirect product of its solvable radicalwith a semisimple algebra.

The semisimple algebra is called the Levi component of g. The corollaryfollows from section 8.11.

The Levi component is unique up to conjugation by inner automorphisms(elements of the connected Lie group in Aut(g) whose Lie algebra is thealgebra of derivations of g.) We omit the second part, but note that theproof is very similar to the analogous statement about the conjugacy of allCartan subalgebras.

Page 61: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

61

9. Root Systems

Fernando Schwartz’s lecture goes here. The material is in Chapter 6 ofBourbaki.

Page 62: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

62

10. Direct sums of root systems

10.1. Assume that the field K = R and R ⊂ V is a root system. Recall VQfrom the previous lecture.

Proposition. Define

(10.1.1) (x | y) :=∑α∈R〈α, x〉 · 〈α, y〉 x, y ∈ VQ.

Then (· | ·) is symmetric nondegenerate and invariant under W (R), takingvalues in Q. Its extension to V is positive definite.

Proof. Exercise. �

Suppose V = ⊕ri=1Vi. Then V ∗ identifies with ⊕ri=1V∗i .

Lemma.

(1) If Ri ⊂ Vi are root systems for Vi then R = ∪Ri is a root system forV . Its dual (inverse) is R = ∪Ri.

(2) Under the conditions in (1),

(10.1.2) W (R) =∏

W (Ri).

Proof. (1) is clear. For (2) note that if α ∈ Ri and j 6= i, the kernel of αcontains Vj . �

Definition. We say R is irreducible if R 6= ∅ and is not a direct sum ofproper root systems.

10.2. Let V = ⊕ri=1Vi and R ⊂ V a root system. Set Ri = Vi ∩ R. This isa root system in V ′i := spanα∈RiKα.

Proposition. The following are equivalent:

(1) Vi are W (R)-stable.(2) R ⊂ V1 ∪ · · · ∪ Vr.(3) For all i, Ri is a root system for Vi and R is the direct sum of the

Ri.

Proof. (3) ⇒ (1) is obvious.(1) ⇒ (2) Recall that W (R) is the group generated by the sα. If α ∈ R,

sα stabilizes each Vi. Since the −1 eigenspace is 1-dimensional, it must beId on all but one Vi0 . It follows that α ⊂ Vi0 .

(2) ⇒ (3). Clearly each Ri generates Vi. The other conditions are satis-fied. �

10.3.

Definition. A representation (π, V ) where V is a vector space over a fieldK is called irreducible if it has no nontrivial invariant subspaces. it is calledabsolutely irreducible if for any K ⊂ K′ the extension (π, V ⊗K K′) is alsoirreducible.

Page 63: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

63

Example. Let V = R2, W (R) = {±1}. Then π(1) =

(0 1−1 0

)is a repre-

sentation. It is irreducible; there is no proper nonzero invariant R-subspace.Vc ' V ⊗R C is not irreducible. Such a representation is called irreduciblebut not absolutely irreducible.

Proposition. The following are equivalent:

(1) R is irreducible.(2) The W (R) module V is simple.(3) The W (R) module V is absolutely simple.

Proof. (1) ⇔ (2) is the previous proposition plus the following fact.Suppose G is a finite group and (π, V ) a representation. Then every

G-stable subspace

(10.3.1) W ⊆ Vhas an invariant complement.

Indeed, let e : V → W be the projection, i.e., W = {w ∈ V | e(w) = w}and e2 = e; or e2 = e and e(V ) ⊆W . Define

(10.3.2) e(x) :=1

|G|∑g∈G

π(g) ◦ e ◦ π(g−1)(x).

Then e(V ) ⊆W and e2 = e. Furthermore the kernel of e is the G–invariantcomplement. �

Exercise 26. Show that (2) ⇒ (3).

Hint: Say K = R, V = VR, then R is a root system for VC . If VC isreducible, R is a direct sum of root systems. Show that VR is reducible.

10.4. Assume R is reduced, i.e. the only multiples of α ∈ R that are rootsare ±α.

Let α, β ∈ R. The α-string through β is the set

(10.4.1) I := {β + jα, j ∈ Z | β + jα ∈ R}.Let p be the largest number so that β + pα ∈ R and q the largest so thatβ − qα ∈ R. Both p, q ≥ 0 because we have assumed that β is a root.

Proposition. β + jα is a root for all −q ≤ j ≤ p. Furthermore,

(10.4.2) 〈β, α〉 = q − p.

Proof. If β + pα is a root, so is

(10.4.3) sα(β + pα) = β − (〈β, α〉+ p)α.

So

(10.4.4) 〈β, α〉+ p ≤ q, 〈β, α〉 ≤ q − p.Similarly

(10.4.5) sα(β − qα) = β + (−〈β, α〉+ q)α.

Page 64: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

64

So −〈β, α〉+ q ≤ p. We conclude

(10.4.6) 〈β, α〉 = q − p.Suppose 0 ≤ s is such that β+sα ∈ R, β+(s+1)α /∈ R but β+(s+2)α ∈ R.

Consider the α–string through β+sα. The p equals 0, so 〈β+sα, α〉 ≥ 0.On the other hand the α–string through β + (s+ 2)α has q = 0. So

(10.4.7) 〈β + (s+ 2)α, α〉 ≤ 0.

This is a contradiction. �

Corollary. If 〈α, β〉 > 0 then α− β ∈ R. If 〈α, β〉 < 0 then α+ β ∈ R.

Proof. Exercise. �

Lemma. The numbers 〈α, β〉 equal 0,±1,±2,±3,±4 only. If the root sys-tem is reduced, ±4 does not occur.

Proof. Use the Cauchy-Schwartz inequality:

(10.4.8) 0 ≤ (α, β) · (α, β) =4(α, β)2

(α, α)(β, β)≤ 4.

References: Chapter III of Humphreys.

Page 65: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

65

11. Simple roots and the Weyl group

11.1.

Definition. A subset Π ⊂ R is called a base if

(I): Π is a basis of V(II): Any root β can be written

β =∑

nαα nα ∈ Z

either all positive or all negative. The roots in Π are called simple.

Properties:

(1) A root is called positive if all the nα ≥ 0, negative if all the nα ≤ 0.Then

(11.1.1) R = R+ ∪R−, R+ ∩R− = ∅

where R± are the positive (negative) roots.(2) If α, β ∈ R+, then α+ β ∈ R+ or is is not a root.(3) We say α ≤ β if β − α ∈ R+. α, β ∈ Π α 6= β then (α, β) ≤ 0 and

α− β is not a root.

Proof of (3). If α− β is a root either γ ∈ R+ in which case

(11.1.2) α = β + γ violates (II)

or −γ ∈ R+ in which case

(11.1.3) β = α+ (−γ) violates (II).

So α− β is not a root and (α, β) ≤ 0. �

Theorem. Every root system has a base.

Proof. Let Hα be the hyperplane where α is zero. Let 0 6= h0 ∈ V \ ∪Hα.Then

(11.1.4) α(h0) 6= 0 for all α ∈ R.

Let R+ := {α | α(h0) > 0}. Then R = R+∐R− where R− = −R+. We

say α ∈ R+ is indecomposable if α cannot be written as

(11.1.5) α = β + γ β, γ ∈ R+.

Claim: The set of indecomposable elements is a base.

Proof. Call this set Π.(1) Each α ∈ R+ is an N-linear combination of elements in Π. (2) α, β ∈ Π

distinct, then α − β is not a root and (α, β) ≤ 0. Otherwise β = α + γ orα = β + γ with γ ∈ R+.

Page 66: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

66

(3) Π forms a linear independent set. Suppose

(11.1.6)∑α∈Π

nαα = 0.

Then we get a relation

(11.1.7) r =∑

nαα =∑

mββ nα,mβ > 0

and the two sets are disjoint. But then

(11.1.8) (r, r) =∑

nαmβ(α, β) ≤ 0.

So r = 0. But then

(11.1.9) 0 = (r, h0) =∑

nα〈α, h0〉 =∑

mβ〈β, h0〉

so all nα,mβ = 0. �

Proposition. Each base is obtained in this fashion.

Proof. Choose h0 such that 〈α, h0〉 > 0 for all α ∈ Π. Let R± be the positiveand negative systems corresponding to h0. Clearly

(11.1.10) R+ ={β ∈ R | β =

∑α∈Π

nαα, nα ∈ N}.

11.2. Weyl Chambers. A Weyl chamber is a connected components C ofV −

⋃α∈RHα. Recall that a regular element h0 ⊂ V is an element so that

(11.2.1) 〈α, h0〉 6= 0 for any α ∈ R.Each h0 regular determines a Weyl chamber C(h0)

(11.2.2) {Weyl chambers} ←→ {Bases}.

Lemma. The Weyl group W (R) permutes the Weyl chambers.

Proof. Clear. �

Lemma. If γ ∈ R+ is not simple, there is α ∈ Π such that γ − α ∈ R+.

Proof. There is α ∈ R+ such that (γ, α) > 0. This implies the claim of thelemma. If (γ, α) ≤ 0 for all α ∈ Π, then since

(11.2.3) γ =∑β∈Π

nββ, nβ ≥ 0,

we get

(11.2.4) (γ, γ) =∑

nβ(γ, β) ≤ 0 so γ = 0.

Corollary. Every root γ ∈ R+ can be written as

(11.2.5) γ = α1 + · · ·+ αk αj ∈ Π

such that α1 + · · ·+ αi is a root for each i ≤ k.

Page 67: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

67

Proof. Exercise. �

Lemma. Let α ∈ Π. Then

(11.2.6) sα(R+) = (R+ \ {α}) ∪ {−α}.

Proof. Clearly sα(α) = −α. Suppose γ 6= α is in R+. Then

(11.2.7) γ =∑β 6=α

nβ · β + nα · α

sα(β) = β+m ·α with m ≥ 0. There is at least one nβ 6= 0 in the first sum.It follows that the coefficient of β in sα(γ) is > 0. So all coefficients are ≥ 0.Thus sα(γ) ∈ R+. �

Corollary. If ρ := 12

∑α∈R+ α, then

(11.2.8) sβ(ρ) = ρ− β for any β ∈ Π.

Proof. Exercise. �

Theorem (Main Theorem). Let Π be a base of R.

(1) If h0 ∈ V is regular, there exists w ∈W such that

〈w(h0), α〉 > 0 for all α ∈ R.

(2) If Π′ ⊂ R is another base, there is a w ∈W such that w(Π′) = Π.(3) If α ∈ R is any root, there is w ∈W such that

(11.2.9) wα ∈ Π.

(4) W is generated by sα, α ∈ Π.(5) If w(Π) = Π, then w = 1.

Proof. Let W ′ be the subgroup geneated by sα, α ∈ Π. We prove (1)-(3) forW ′ and then (4).

(1) Let ρ = 12

∑α∈R+ α where R+ is defined by Π. Let w ∈ W ′ be such

that

(11.2.10) 〈w(h0), ρ〉 is maximal.

Let α ∈ Π. Look at

(11.2.11) (sαw(h0), ρ) = (w(h0), ρ− α) ≤ (w(h0), ρ).

Thus (w(h0), α) ≥ 0 as claimed; in fact > 0 because (w(h0), α) 6= 0 for anyα.

(2) Exercise.(3) Because of (2), it is enough to see that for any α ∈ R there is at least

one base it belongs to. Look at V \Hα. Since R is reduced, all other Hβ,β 6= ±α are distinct from Hα. Choose h0 so that

(11.2.12) (h0, α) = ε > 0 |(h0, β)| > ε β 6= ±α.

Then α ∈ Π(h0).

Page 68: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

68

(4) Let α ∈ R. Then there is w ∈W ′ such that w(α) = α1 ∈ Π. We get

(11.2.13) sα = sw(α1) = w ◦ sα1 ◦ w−1

so sα is a product of simple root reflections. (w = sβ1 ◦ sβ2 · · · )(5) Suppose w(Π) = Π. Write a minimal expression

(11.2.14) w = s1 ◦ · · · ◦ sk, si = sαi αi ∈ Π

not necessarily distinct. Then write

(11.2.15) wi = s1 · · · si, (in particular wk = w).

(11.2.16) w(αk) = wk−1 · sk(αk) = wk−1(−αk) = −wk−1(αk).

So wk−1 maps αk to a negative root. In such a case we can show that thereis t < i such that

(11.2.17) wk−1 = s1 · · · st−1 st+1 · · · sk−1

(in other words there is a shorter expression for w).Let t be the smallest so that

(11.2.18) γ = st+1 · · · sk−1(αk) > 0.

Then since st(γ) is negative, γ = αt. So

(11.2.19) st = (st+1 · · · sk−1)sk(sk−1 · · · st+1).

Plug this into the reduced expression for w. �

Page 69: Lie Algebras, 649, lectures from 2002 - Cornell Universitypi.math.cornell.edu/~barbasch/courses/649-13/liealgebras-old.pdfThe set of vector elds Vec(U) is a vector space =C and a module

69

12. Irreducible systems

We assume that the root system R is irreducible. Recall that α < βmeans that β − α is a sum of positive roots.

Lemma. There is a unique maximal root.

Proof. Let β be a maximal root. Then

(12.0.20) β = Σ_{α∈Π} mα α,  mα ≥ 0.

Observe that

(12.0.21) (γ, β) ≥ 0 for all γ ∈ Π.

Otherwise (γ, β) < 0 implies γ + β is a root, which is strictly bigger. Define

(12.0.22) Π1 := {α ∈ Π | mα > 0},  Π2 := {α ∈ Π | mα = 0}.

If γ ∈ Π2, then

(12.0.23) (γ, β) = Σ_{α∈Π1} mα(γ, α) ≤ 0.

So (γ, β) = 0, and therefore also (γ, α) = 0 for all α ∈ Π1. Then R+ = R+_1 ∪ R+_2 (an orthogonal decomposition), contradicting irreducibility. Thus all mα > 0 and

(12.0.24) (β, α) ≥ 0 ∀ α ∈ Π, and (β, α) > 0 for at least one simple root.

If β′ is another maximal root, then

(12.0.25) (β, β′) > 0,

so β − β′ is a root unless β = β′. But if β − β′ is a root, one of β, β′ cannot be maximal. Hence β = β′. □
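For example, in type A_{n−1} (the root system of sl(n)), the maximal root is θ = α1 + α2 + · · · + α_{n−1} = e1 − en, and indeed (θ, αi) ≥ 0 for every i, with strict inequality exactly for i = 1 and i = n − 1.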

Lemma. The W orbit of any root spans V .

Proof. Follows from previous facts about irreducibility of V. □

Lemma. R has at most 2 root lengths.

Proof. Let α1, α2 be two roots. We can conjugate one of them by the Weyl group so that they have nonzero inner product. The classification of root systems of rank 2 shows that the ratio of the squared lengths of such roots is 1, 2 or 3. If, for example, ‖α2‖² = 2‖α1‖² and ‖α3‖² = 3‖α1‖², then ‖α3‖² = (3/2)‖α2‖², a contradiction. □

Lemma. The maximal root is long.

Proof. Let β be the maximal root and suppose ‖β‖² < ‖α‖² for some root α. We may as well assume (by conjugating α by W) that

(12.0.26) (α, γ) ≥ 0 ∀ γ ∈ Π.

Then β − α is a root in R+ (note (α, β) > 0, since both are dominant and nonzero). So

(12.0.27) (α, β − α) ≥ 0


and therefore

(12.0.28) (β, β) = (β − α, β − α) + 2(α, β − α) + (α, α).

Since the first two terms on the right are ≥ 0, this gives (β, β) ≥ (α, α), a contradiction. □


13. Classification of irreducible root systems

The irreducible (or simple) root systems are characterized by their corresponding Cartan matrices.

Definition (Cartan Matrix). An n × n matrix A = [aij] is called a generalized Cartan matrix if

(1) aii = 2,
(2) aij ≤ 0 for i ≠ j,
(3) aij = 0 if and only if aji = 0.

It is called a Cartan matrix if in addition det A ≠ 0.

If R is an irreducible root system, then the matrix A = [〈αi, αj〉] is a Cartan matrix.

Proposition. Two root systems with the same Cartan matrix are isomorphic.

A Cartan matrix gives rise to a Coxeter–Dynkin graph. This has one vertex for each simple root α1, . . . , α`. Two vertices are joined, when 〈αi, αj〉 < 0, by 〈αi, αj〉〈αj, αi〉 edges (this is the ratio of the squared lengths when the lengths differ), and the arrow points to the shorter root.

The classification of the Dynkin diagrams is described in Humphreys, Chapter III.
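As a small computational illustration (assuming numpy and the convention aij = 2(αi, αj)/(αj, αj), which differs from some references by a transpose), one can compute the Cartan matrix of type B2 and the number of edges in its diagram:

    import numpy as np

    # simple roots of type B2 in R^2 (alpha1 long, alpha2 short) -- an assumed choice
    simple_roots = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]

    def cartan_matrix(roots):
        n = len(roots)
        A = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(n):
                A[i, j] = int(round(2 * roots[i] @ roots[j] / (roots[j] @ roots[j])))
        return A

    A = cartan_matrix(simple_roots)
    print(A)                  # [[ 2 -2], [-1  2]]
    print(A[0, 1] * A[1, 0])  # 2: number of edges joining the two vertices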


14. Lie algebra associated to a matrix

Note: For the root system of a Lie algebra, V = h∗, so the roots satisfy α ∈ h∗ while the coroots satisfy α∨ ∈ h. □

14.1. We start with a complex n × n matrix A of rank `. A realization of A is a triple (h, Π, Π∨) so that h is a finite dimensional vector space,

(14.1.1) Π = {α1, . . . , αn} ⊆ h∗ is linearly independent,

(14.1.2) Π∨ = {α∨1, . . . , α∨n} ⊆ h is linearly independent,

(14.1.3) αj(α∨i) = aij.

Proposition. If (h, Π, Π∨) is a realization of A, then dim h ≥ 2n − `. There exists a realization with dim h = 2n − `; it is unique up to isomorphism. Furthermore the isomorphism is unique if det A ≠ 0.

Proof. If dim h = m, complete the αi to a basis of h∗ and the α∨j to a basis of h. The matrix of pairings

(14.1.4) [αi(α∨j)] =
    [ A  ∗ ]
    [ ∗  ∗ ]

(blocks of sizes n and m − n) is nondegenerate. Therefore the first n rows are linearly independent. Since A has rank `, the upper right block has rank ≥ n − `. Thus m − n ≥ n − `, i.e. m = dim h ≥ 2n − `.
Write

(14.1.5) A =
    [ A11  A12 ]
    [ A21  A22 ]

where A11 is ` × ` nonsingular. Let

(14.1.6) C =
    [ A11      A12      0       ]
    [ A21      A22      I_{n−`} ]
    [ 0        I_{n−`}  0       ]

Then C is nonsingular (det C = ± det A11). Let h = C^{2n−`}, let α1, . . . , αn be the first n coordinate functions, and let α∨1, . . . , α∨n be the first n rows of C. This is a minimal realization of A.

Let now (h, Π, Π∨) and (h1, Π1, Π∨1) be two such realizations, and denote by βi, β∨j the roots and coroots of the second one. We say they are isomorphic if there is an isomorphism ϕ : h → h1 so that

(14.1.7) βi ◦ ϕ = αi,  ϕ(α∨i) = β∨i.

Complete Π∨1 to a basis of h1 and choose βn+1, . . . , β2n−` ∈ h∗1 so that

(14.1.8) [βi(β∨j)] =
    [ A11  A12  0 ]
    [ A21  A22  I ]
    [ B1   B2   0 ]


Since the matrix (14.1.8) is nonsingular, β1, . . . , β2n−` form a basis of h∗1. By using row reduction we can change the basis so that B1 = 0. Then B2 is nonsingular. Multiplying by

(14.1.9)
    [ I  0  0        ]
    [ 0  I  0        ]
    [ 0  0  B2^{−1}  ]

we get C. Note that we only changed βn+1, . . . and β∨n+1, . . . , not the realization data. □

Note: If (h, Π, Π∨) is a minimal realization of A, then (h∗, Π∨, Π) is a minimal realization of tA. □
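For illustration (a minimal sketch assuming numpy; the affine A1 matrix below is just a sample generalized Cartan matrix of rank ` = 1), the matrix C of (14.1.6) can be built and checked to be nonsingular:

    import numpy as np

    A = np.array([[2.0, -2.0],
                  [-2.0, 2.0]])        # sample GCM with n = 2, rank l = 1
    n, l = 2, 1
    C = np.zeros((2 * n - l, 2 * n - l))
    C[:n, :n] = A                      # the blocks A11, A12, A21, A22
    C[l:n, n:] = np.eye(n - l)         # I_{n-l} in the middle right block
    C[n:, l:n] = np.eye(n - l)         # I_{n-l} in the bottom middle block
    print(C)
    print(np.linalg.det(C))            # nonzero (here = -det A11), so C is nonsingular
    # The first n coordinate functions give Pi, the first n rows of C give the coroots.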

14.2.

Definition. The algebra g̃(A) is the Lie algebra with generators {ei, fi}1≤i≤n and a basis of h, and relations

(1) [h, h′] = 0 for all h, h′ ∈ h,
(2) [ei, fj] = δij α∨i,
(3) [h, ei] = αi(h) ei, [h, fj] = −αj(h) fj.

Let

(14.2.1) n+ := 〈e1, . . . , en〉,  n− := 〈f1, . . . , fn〉

be the Lie subalgebras generated by the ei and the fj respectively.

Theorem.

(1) The map θ : ei 7→ −fi, fi 7→ −ei, h 7→ −h extends to an involutive automorphism of g̃(A).
(2) n+ is a free Lie algebra generated by the ei.
(3) n− is a free Lie algebra generated by the fi.
(4) g̃(A) = n− ⊕ h ⊕ n+.

Proof. The first part is clear; the relations are stable under the map θ.
Let V be a vector space with basis v1, . . . , vn and let T(V) be the tensor algebra. For each λ ∈ h∗ define a representation ϕλ of g̃(A) on T(V) as follows:

(14.2.2)
h · 1 = λ(h) 1,  h · (vi1 ⊗ · · · ⊗ vis) = (λ − αi1 − · · · − αis)(h) vi1 ⊗ · · · ⊗ vis,
fj · 1 = vj,  fj · (vi1 ⊗ · · · ⊗ vis) = vj ⊗ vi1 ⊗ · · · ⊗ vis,
ej · 1 = 0,
ej · (vi1 ⊗ · · · ⊗ vis) = vi1 ⊗ (ej · (vi2 ⊗ · · · ⊗ vis)) + δ_{j,i1} (λ − αi2 − · · · − αis)(α∨j) vi2 ⊗ · · · ⊗ vis.

Exercise 27. Show that (14.2.2) defines a representation ϕλ of g̃(A).

We define a map (for each λ)

(14.2.3) n− → T(V),  y 7→ ϕλ(y) · 1.


This map takes fi to vi and maps n− onto the free Lie algebra inside T(V) with generators v1, . . . , vn. It is easy to see this is a Lie algebra homomorphism and also onto. So n− is free as well. For n+ apply θ.

Now recall that the universal enveloping algebra of n− is T(V). Let

(14.2.4) a = n− + h + n+.

Then a is stable under ad ei, ad fi and ad h. So a is an ideal containing all the generators. Thus a = g̃(A).

Now suppose x− + x0 + x+ = 0 with x± ∈ n± and x0 ∈ h. Applying ϕλ to 1 ∈ T(V), and using x+ · 1 = 0, we get ϕλ(x−) · 1 + λ(x0) 1 = 0. Note ϕλ(x−) · 1 ∈ T+(V) and λ(x0) 1 ∈ T0(V), so both terms vanish. Thus x− = 0 (the map (14.2.3) is injective on n−) and, since λ(x0) = 0 for all λ, also x0 = 0; hence x+ = 0 and the sum in (4) is direct. □
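For example, if n = 1 and A = (2), then ` = 1, dim h = 2n − ` = 1, n± are one dimensional, and g̃(A) is the three dimensional algebra spanned by f, α∨, e with [e, f] = α∨, [α∨, e] = 2e, [α∨, f] = −2f, i.e. g̃(A) ≅ sl(2).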

14.3. Let

(14.3.1) Q = Σ Zαi,  Q+ = Σ Nαi.

Q is called the root lattice. Define ht(Σ miαi) := Σ mi. We say λ ≥ µ if λ − µ ∈ Q+, for λ, µ ∈ h∗.

If M is an abelian group, an M–grading of g is

(14.3.2) g = Σ_{m∈M} gm,  [gm, gm′] ⊆ gm+m′.

Theorem.

(1) g̃(A) has a Q-grading, where

g̃(A)α = {v ∈ g̃(A) | [h, v] = α(h)v for all h ∈ h}.

Moreover, for α > 0, g̃(A)α is spanned by the iterated brackets [ei1, . . . , eis] with Σ αij = α (and by the analogous brackets of the fi for α < 0).

(2) There is a unique maximal ideal r so that r ∩ h = (0). Furthermore r ∩ n± are ideals in g̃(A).

Proof. The first part is omitted. For the second part we prove a lemma.

Lemma. Suppose h is an abelian Lie algebra and (π, V) a representation such that

(14.3.3) V = ⊕ Vλ,  Vλ := {v ∈ V | π(h)v = λ(h)v for all h ∈ h},

i.e. h acts semisimply. Then h acts semisimply on any submodule. Precisely, if U ⊆ V is a submodule, then

(14.3.4) U = ⊕ (U ∩ Vλ).

Proof. Say u = Σ uλ ∈ U. We want to show that all uλ ∈ U. Since there are finitely many λ with uλ ≠ 0 and they are distinct, we can find h ∈ h so that the values λ(h) are distinct. Let p(t) be a polynomial which is zero at all but one of the values λ(h), say λ0(h). Then

(14.3.5) p(π(h)) u = Σ p(λ(h)) uλ = p(λ0(h)) uλ0 ∈ U,

and since p(λ0(h)) ≠ 0, we get uλ0 ∈ U. □


Apply this to an ideal r ⊆ g̃(A) with r ∩ h = (0):

(14.3.6) r = ⊕_{α≠0} (r ∩ g̃(A)α).

So a sum of such ideals has the same property. Now use Zorn's lemma to get the maximal ideal. Finally, [fi, r ∩ n±] ⊆ (n± + h) ∩ r = r ∩ n±, so [fi, r ∩ n±] ⊆ n±. The last assertion follows. □
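In coordinates, the projection argument of the lemma looks as follows (a small sketch with an assumed diagonal action and arbitrary sample eigenvalues):

    import numpy as np

    eigs = np.array([2.0, 0.0, -2.0])   # the distinct values lambda_i(h) (assumed)
    H = np.diag(eigs)                   # h acting diagonally
    u = np.ones(3)                      # u = sum of one vector from each weight space

    lam0, rest = eigs[0], eigs[1:]
    P = np.eye(3)
    for mu in rest:                     # p(t) = prod (t - mu)/(lam0 - mu)
        P = P @ (H - mu * np.eye(3)) / (lam0 - mu)
    print(P @ u)                        # [1. 0. 0.]: the lambda_0-component of u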

14.4. Recall that a generalized Cartan matrix (GCM) is an n × n matrix A so that

(1) aii = 2, i = 1, . . . , n,
(2) aij ∈ −N for i ≠ j,
(3) aij = 0 implies aji = 0.

Definition. The algebra g(A) := g̃(A)/r is called the Kac–Moody algebra associated to A.


15. Construction of finite dimensional simple algebras

15.1. Let g′(A) := [g(A), g(A)].

Proposition. g′(A) is the subalgebra of g(A) generated by {ei, fj, α∨k}. Furthermore

(15.1.1) g′(A) = n− ⊕ h′ ⊕ n+,  where h′ = span{α∨k}.

Proof. Let b := n− + h′ + n+ and a := 〈ei, fj, α∨k〉.
Claim: b ⊆ a ⊆ g′(A) ⊆ b.

(1) n−, n+ ⊆ a is clear. Furthermore, α∨i = [ei, fi] ∈ a, so b ⊆ a.
(2) Choose h ∈ h so that αi(h) ≠ 0 for all i. Since

[h, ei] = αi(h) ei and [h, fi] = −αi(h) fi,

we get ei, fi ∈ g′(A), so n− + n+ ⊆ g′(A) and α∨i = [ei, fi] ∈ g′(A). So a ⊆ g′(A).
(3) b is stable under ad ei, ad fj and ad h. So b is an ideal and g(A)/b ≅ h/h′ is abelian. So g′(A) ⊆ b. □

Lemma. If a ∈ n+ is such that [a, fi] = 0 for all i, then a = 0. Similarly for a ∈ n−.

Proof. Write a = Σ aα. Then [fi, aα] = 0 for all i and all α with aα ≠ 0, so we may assume a ∈ gα. Let a be the ideal generated by a. Since n− · a = 0, this ideal is contained in n+, so it meets h trivially; but g(A) = g̃(A)/r has no nonzero ideal meeting h trivially. Hence a = 0. □

Proposition. Let

(15.1.2) h0 := {h ∈ h : αi(h) = 0 ∀ i}.

Then h0 is the center of g(A) as well as of g′(A).

Proof. Clearly h0 ⊆ Z(g(A)). Let c ∈ Z(g(A)) and write

(15.1.3) c = Σ cα + h,  h ∈ h.

If α > 0, then [fi, cα] = 0 for all i; if α < 0, then [ei, cα] = 0 for all i; so cα = 0 by the previous lemma. So c ∈ h, and then [c, fi] = −αi(c) fi = 0 implies c ∈ h0. So Z(g(A)) = h0. Note that dim h0 = (2n − `) − n = n − `. But dim(h0 ∩ h′) is also n − `, so h0 ⊆ h′ ⊆ g′(A). □

Proposition. Let A be an n × n matrix and suppose that for each pair i, j there is a chain i1 = i, i2, . . . , is = j so that

(15.1.4) a_{i1 i2} a_{i2 i3} · · · a_{i_{s−1} i_s} ≠ 0.

Then every ideal of g(A) either contains g′(A) or is contained in the center.

Proof. Let I ⊆ g(A) be an ideal. Then I is Q–graded, so

(15.1.5) I = (I ∩ h) ⊕ (⊕_{α∈∆} (I ∩ gα)).

(1) Suppose I ∩ h ⊄ Z. Let h ∈ I ∩ h be such that αi(h) ≠ 0 for some i. Then [h, ei] = αi(h) ei ∈ I, so ei ∈ I. Similarly fi ∈ I and α∨i = [ei, fi] ∈ I. Furthermore, for any j look at the chain i1 = i, i2, . . . , is = j: [α∨_{i1}, e_{i2}] = a_{i1 i2} e_{i2} with a_{i1 i2} ≠ 0, so e_{i2} ∈ I, f_{i2} ∈ I, α∨_{i2} ∈ I, and so on along the chain. We get α∨j, ej, fj ∈ I, so I ⊇ g′(A).


(2) Suppose I ∩ h ⊆ Z. Then I ∩ gα = (0) for all α ∈ ∆. Otherwise let α, say α > 0, be of minimal height with I ∩ gα ≠ (0). Then there is an fi with [fi, I ∩ gα] ≠ (0) (because of the previous lemma), and by minimality of the height, α = αi. Then ei ∈ I and α∨i = [ei, fi] ∈ I. But α∨i ∉ Z, a contradiction. Hence I = I ∩ h ⊆ Z. □

Corollary. A satisfies (15.1.4) and det A ≠ 0 if and only if g(A) is simple.

Proof. (⇒) is clear from the previous propositions.
(⇐) If det A = 0, then the center of g(A) is nonzero, so g(A) is not simple. Suppose (15.1.4) holds for the pairs (1, i), i = 1, . . . , s, but fails for i = s + 1, . . . , n. Then, after reordering,

(15.1.6) A =
    [ A11  0   ]
    [ A21  A22 ].

Then

(15.1.7) a = Σ_{i=1}^{s} Cα∨i + Σ C[ei1, . . . , eir] + Σ C[fi1, . . . , fiv],

where the last two sums run over iterated brackets with at least one index ij ≤ s, is a proper ideal. □
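For example, A = diag(2, 2) (type A1 × A1) has det A ≠ 0 but does not satisfy (15.1.4), and indeed g(A) ≅ sl(2) ⊕ sl(2) is semisimple but not simple.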

Theorem. Assume A is the Cartan matrix of a simple root system R. Then g(A) is simple with root system R. Furthermore the ideal r ⊆ g̃(A) is generated by

(15.1.8) θij := (ad ei)^{1−aij}(ej)  (i ≠ j),

(15.1.9) θ−ij := (ad fi)^{1−aij}(fj).

Proof. Since det A ≠ 0 and (15.1.4) is satisfied (just inspect the classification), g(A) is simple. Note that θij ∈ n+ and θ−ij ∈ n−. So

(15.1.10) [h, θij] ∈ n+,  [h, θ−ij] ∈ n−,

(15.1.11) [ek, θij] ∈ n+,  [fk, θ−ij] ∈ n−.

Now if k ≠ i, j, then [fk, θij] = 0. If k = j, then [fj, ei] = 0, so [fj, θij] = (ad ei)^{1−aij}([fj, ej]) = −(ad ei)^{1−aij}(α∨j) = 0, because 1 − aij > 0 (and aij = 0 forces aji = 0). If k = i, then [fi, θij] ∈ n+ by an sl(2) calculation (in fact it is 0). Thus 〈θij, θ−ij〉 ⊆ r. Let s be the ideal 〈θij, θ−ij〉. We show that g̃(A)/s equals g(A), i.e. s = r.

Note that g := g̃(A)/s is generated by α∨1, . . . , α∨n, e1, . . . , en, f1, . . . , fn and

(15.1.12) [ei, fj] = δij α∨i,  [h, ei] = αi(h) ei,  [h, fj] = −αj(h) fj.

We prove the claim of the theorem in several parts.

(1) ad ei and ad fi are locally nilpotent. Let ai ⊆ g be

(15.1.13) ai := {v ∈ g | (ad ei)^k v = 0 for some k}.

Then ai is a subalgebra which contains all ej, fj, α∨j, so ai = g.


(2) The Weyl group of R acts on h and on g:

(15.1.14) si = e^{ad ei} ◦ e^{−ad fi} ◦ e^{ad ei}

makes sense because of (1).
(3) dim gαi = 1 and dim g_{mαi} = 0 unless m = 0, ±1. This follows from the same statement in g̃(A), since s does not contain the ei, fi.
(4) If α ∈ R, then dim gα = 1 (because R is finite, every root is W–conjugate to a simple root, and the si from (2) permute the root spaces).
(5) If λ = Σ ciαi with ci ∈ R and λ ≠ mα for any root α, there is w ∈ W so that wλ has both a coefficient < 0 and a coefficient > 0. Indeed, choose a regular h ∈ hR with λ(h) = 0 (possible since λ is not proportional to any root) and w ∈ W with αi(w(h)) > 0 for all i. Writing wλ = Σ c′iαi, we get 0 = (wλ)(w(h)) = Σ c′i αi(w(h)), so one c′i > 0 and another c′i < 0.

(6) If 0 ≠ λ ∉ R, then gλ = (0): write λ = Σ mαα, mα ∈ Z; by (3) and (5) we get the conclusion.
(7) g is finite dimensional. This follows from (4) and (6).
(8) g is semisimple; any abelian ideal is zero. Finally, s = r: the image of r in g is an ideal which does not intersect h. Recall that if I ⊆ g is an ideal satisfying I ∩ h = (0), then

(15.1.15) I = (I ∩ n+) ⊕ (I ∩ n−)

and both I ∩ n± are ideals. If the image of r were nonzero, this would contradict the semisimplicity of g. Hence r ⊆ s, and since s ⊆ r we get s = r. □

For further material, consult Kac, Infinite dimensional Lie algebras, Chapter 4, or Wan, Introduction to Kac–Moody Algebras.


16. Quantum Groups

References:

(1) Lectures on Quantum Groups, J. C. Jantzen, Graduate Studies in Math., vol. 6, AMS.

(2) Chari-Pressley, A Guide to Quantum Groups.

We consider the case of A a Cartan matrix. Let R be the root system and Π be a choice of simple roots.

Recall from the previous section that g̃(A) is the algebra generated by {hα, eα, fα}α∈Π subject to

(16.0.16) [hα, hβ] = 0,  [eα, fβ] = δαβ hα,

(16.0.17) [hα, eβ] = aαβ eβ,  [hα, fβ] = −aαβ fβ.

For g(A) we add the relations

(16.0.18) (ad eα)^{1−aαβ}(eβ) = 0,  (ad fα)^{1−aαβ}(fβ) = 0  (α ≠ β).

If R is simple, g(A) is the simple Lie algebra with root system R. We can change the construction slightly to get U(g(A)). Note that, in U(g(A)),

(16.0.19) ad eα = L_{eα} − R_{eα},  ad fα = L_{fα} − R_{fα}

(left and right multiplication). Then, since Lx Rx = Rx Lx, the relations become

(16.0.20) Σ_i (−1)^i (1−aαβ choose i) eα^{1−aαβ−i} eβ eα^i = 0,

and similarly for the fα:

(16.0.21) Σ_i (−1)^i (1−aαβ choose i) fα^{1−aαβ−i} fβ fα^i = 0.

Note that it is not completely clear why this is U(g(A)).
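As a quick sanity check of (16.0.20) (in a single representation only), take g = sl(3) with aαβ = −1, where the relation reads eα²eβ − 2 eαeβeα + eβeα² = 0; the matrices below are an assumed choice in the defining representation.

    import numpy as np

    # e_alpha = E_12, e_beta = E_23 in the defining representation of sl(3) (assumed)
    e_a = np.zeros((3, 3)); e_a[0, 1] = 1.0
    e_b = np.zeros((3, 3)); e_b[1, 2] = 1.0

    # (16.0.20) with 1 - a_{alpha beta} = 2:  e_a^2 e_b - 2 e_a e_b e_a + e_b e_a^2 = 0
    lhs = e_a @ e_a @ e_b - 2 * (e_a @ e_b @ e_a) + e_b @ e_a @ e_a
    print(np.allclose(lhs, np.zeros((3, 3))))   # True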

16.1. Fix a reduced root system R, not necessarily simple. On VR, fix a W–invariant inner product so that (α, α) = 2, 4 or 6. Let dα = (α, α)/2, and write D for the diagonal matrix with entries dα. Then DA is symmetric and corresponds to the inner product.

The quantum group associated to A, called Uq(g(A)), is a deformation of U(g(A)).

Let K be a field with char K ≠ 2, and also char K ≠ 3 if R has any G2 factors. Let 0 ≠ q ∈ K and define

(16.1.1) qα := q^{dα}.

If a ∈ Z, write

(16.1.2) [a]α := (qα^a − qα^{−a}) / (qα − qα^{−1}) = (q^{a dα} − q^{−a dα}) / (q^{dα} − q^{−dα}).

Write [n]α! and [a choose n]α for the obvious expressions.


We can define q to be an indeterminate and use the field of fractions K(q) for coefficients. The expression [a]α is in fact a Laurent polynomial in q, so we can specialize to q = 1.
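For instance (a small sketch assuming sympy), [3]α = qα² + 1 + qα^{−2}, which specializes to 3 at q = 1:

    import sympy as sp

    q = sp.symbols('q')                                 # q stands for q_alpha here
    qint3 = (q**3 - q**(-3)) / (q - 1/q)                # [3]_alpha as in (16.1.2)
    print(sp.simplify(qint3 - (q**2 + 1 + q**(-2))))    # 0: a Laurent polynomial in q
    print(sp.limit(qint3, q, 1))                        # 3: the specialization at q = 1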

The quantized enveloping algebra Uq(g) is defined as the K–algebra with generators

(16.1.3) {eα, fα, kα, kα^{−1}}

subject to

(1) kα · kα^{−1} = kα^{−1} · kα = 1,  kα · kβ = kβ · kα,
(2) kα eβ kα^{−1} = q^{(α,β)} eβ,  kα fβ kα^{−1} = q^{−(α,β)} fβ  (here (α, β) is the inner product fixed above, and aαβ = 2(α, β)/(α, α)),
(3) eα fβ − fβ eα = δαβ (kα − kα^{−1})/(qα − qα^{−1}),
(4) Σ_i (−1)^i [1−aαβ choose i]α eα^{1−aαβ−i} eβ eα^i = 0  (α ≠ β),
(5) Σ_i (−1)^i [1−aαβ choose i]α fα^{1−aαβ−i} fβ fα^i = 0  (α ≠ β).

As before, Ũq(g) is the algebra with the first 3 relations, Uq(g) the one with all 5 relations.

As before we have Q = Σ_{α∈Π} Zα and the subalgebras Ũq^±, Ũq^0 and Uq^±, Uq^0. The theorems analogous to the ones proved for Kac–Moody algebras hold.

The Q-grading is given by deg eα = α, deg fα = −α and deg kα = deg kα^{−1} = 0.

If λ ∈ Q, let kλ be the corresponding element,

(16.1.4) kλ = Π_α kα^{mα},  λ = Σ mα α.

Then if u ∈ Uq(g) has degree µ,

(16.1.5) kλ u kλ^{−1} = q^{(λ,µ)} u.

Note that {eα, fα, kα, kα^{−1}} gives rise to maps Ũqα(sl(2)) → Ũq(g) and Uqα(sl(2)) → Uq(g). For sl(2) there are no relations (4) and (5). These maps turn out to be injective.

16.2. One of the main features of U(g) that we want to generalize is the Hopf algebra structure.

If M1, M2 are H–modules, then so is M1 ⊗ M2: if ∆(h) = Σ_i h_i^{(1)} ⊗ h_i^{(2)}, then

(16.2.1) h · (m1 ⊗ m2) := Σ_i h_i^{(1)} m1 ⊗ h_i^{(2)} m2.

For example if H = U(g),

(16.2.2) ∆(h) := h⊗ 1 + 1⊗ h h ∈ g

(16.2.3) ε(h) = 0 if deg h > 0, ε(1) = 1

(16.2.4) S(h) = −h h ∈ g

(16.2.5) h · (m1 ⊗m2) = h ·m1 ⊗m2 +m1 ⊗ hm2.
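A tiny numerical illustration of (16.2.5) (with an assumed basis for the defining representation of sl(2)): the action of ∆(h) on C² ⊗ C² is h ⊗ id + id ⊗ h, realized with Kronecker products, and the weights add.

    import numpy as np

    h = np.diag([1.0, -1.0])                  # h in the defining rep of sl(2) (assumed)
    I = np.eye(2)
    Delta_h = np.kron(h, I) + np.kron(I, h)   # Delta(h) acting on C^2 (x) C^2
    print(np.diag(Delta_h))                   # [ 2.  0.  0. -2.]: the weights add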


If H = F(G) where G is a group,

(16.2.6) ∆(h) = h ⊗ h,  h ∈ G,

(16.2.7) ε(h) = 0 if h ≠ 1,  ε(h) = 1 if h = 1,

(16.2.8) S(h) = h^{−1},

(16.2.9) h · (m1 ⊗ m2) = hm1 ⊗ hm2.

If M is an H–module, then so is M∗ via

(16.2.10) (h · f)(m) := f(S(h)m).

There is a natural map

(16.2.11) P : M1 ⊗ M2 → M2 ⊗ M1,  P(m1 ⊗ m2) := m2 ⊗ m1.

But this is not always an H-homomorphism. Denote by τ : H ⊗ H → H ⊗ H the corresponding flip, τ(h1 ⊗ h2) = h2 ⊗ h1.

We say H is co-commutative if

(16.2.12) τ ◦ ∆ = ∆,

i.e. the diagram formed by ∆ : H → H ⊗ H and the flip τ on H ⊗ H commutes. U(g) and F(G) have this property, but Uq(g) does not.

Proposition. The formulas

(16.2.13) ∆(eα) = eα ⊗ 1 + kα ⊗ eα,  ∆(fα) = fα ⊗ kα^{−1} + 1 ⊗ fα,

(16.2.14) ∆(kα) = kα ⊗ kα,

(16.2.15) ε(eα) = ε(fα) = 0,  ε(kα) = 1,

(16.2.16) S(eα) = −kα^{−1} eα,  S(fα) = −fα kα,  S(kα) = kα^{−1}

make Uq(g) into a Hopf algebra.

Corollary. There is a unique Hopf algebra structure on Uq(g) satisfying the above formulas.

Example. H = Uq(sl(2)) generated by {E,F,K,K−1}

(16.2.17) KK−1 = K−1K = 1

(16.2.18) KEK^{−1} = q²E,  KFK^{−1} = q^{−2}F

(16.2.19) EF − FE = (K − K^{−1})/(q − q^{−1})

(16.2.20) ε(E) = ε(F) = 0,  ε(K) = ε(K^{−1}) = 1


(16.2.21) S(E) = −K−1E, S(F ) = −FK, S(K) = K−1

(16.2.22) ∆(E) = E ⊗ 1 +K ⊗ E, ∆(F ) = F ⊗K−1 + 1⊗ F

(16.2.23) ∆(K) = K ⊗K, ∆(K−1) = K−1 ⊗K−1.

Then τ ◦∆(E) = 1⊗ E + E ⊗K 6= E ⊗ 1 +K ⊗ E.
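A quick check of (16.2.18) and (16.2.19) in the 2-dimensional representation of Uq(sl(2)) (the matrices below are an assumed, standard choice; sympy is used for symbolic q):

    import sympy as sp

    q = sp.symbols('q')
    E = sp.Matrix([[0, 1], [0, 0]])
    F = sp.Matrix([[0, 0], [1, 0]])
    K = sp.diag(q, 1/q)

    print(sp.simplify(K * E * K.inv() - q**2 * E))                  # zero matrix: (16.2.18)
    print(sp.simplify(E * F - F * E - (K - K.inv()) / (q - 1/q)))   # zero matrix: (16.2.19)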

The fact that the map

(16.2.24) P : M ′ ⊗M →M ⊗M ′, P (m′ ⊗m) = m⊗m′

is not a Uq(g)-module homomorphism is one reason why these objects were invented.

A weaker property than co-commutativity is the following. We want functorial isomorphisms

(16.2.25) R_{M,M′} : M ⊗ M′ → M′ ⊗ M

so that

(16.2.26) R_{M,M′}^{−1} ◦ ∆(u) ◦ R_{M,M′} = P ◦ ∆(u).

In particular we can take M = M′ = U; this gives an isomorphism

(16.2.27) R_{U,U} : U ⊗ U → U ⊗ U.

If R = R_{U,U}(1 ⊗ 1), then

(16.2.28) R_{U,U}(u) = R_{U,U}(1 ⊗ 1) u.

Suppose there is an invertible R ∈ H ⊗ H so that τ ◦ ∆(u) = R · ∆(u) · R^{−1}. Then P ◦ (π ⊗ π′)(R) intertwines π ⊗ π′ and π′ ⊗ π:

(16.2.29)
[P ◦ (π ⊗ π′)(R)] ◦ (π ⊗ π′)(∆(a)) = P ◦ (π ⊗ π′)(R · ∆(a))
  = P ◦ (π ⊗ π′)(τ ◦ ∆(a) · R) = P ◦ (π ⊗ π′)(τ ◦ ∆(a)) ◦ (π ⊗ π′)(R)
  = [P ◦ (π ⊗ π′)(τ ◦ ∆(a)) ◦ P] ◦ [P ◦ (π ⊗ π′)(R)]
  = (π′ ⊗ π)(∆(a)) ◦ [P ◦ (π ⊗ π′)(R)].

The fact that Uq(g) satisfies such an "almost commutative" property is basic.

Now consider M ⊗M ′ ⊗M ′′. There is a natural isomorphism

(16.2.30) M ⊗ (M′ ⊗ M′′) → (M ⊗ M′) ⊗ M′′.

We will consider the case when M′ = M′′ = M. If A : M ⊗ M → M ⊗ M, write

(16.2.31) A12 := A ⊗ id : M ⊗ M ⊗ M → M ⊗ M ⊗ M.

H is called quasitriangular if

(16.2.32) (∆ ⊗ id)(R) = R13 ◦ R23 and (id ⊗ ∆)(R) = R13 ◦ R12.

In this case we find that

(16.2.33) R12 R13 R23 = R23 R13 R12,


(16.2.34) (ε⊗ id)(R) = 1 = (id⊗ ε)(R)

(16.2.35) (S ⊗ id)(R) = R−1 = (id⊗ S−1)(R)

(16.2.36) S ⊗ S(R) = R.

Theorem. Uq(g) is quasitriangular.

Example: g = sl(2), with basis {E, H, F}:

(16.2.37) R = Σ_{n=0}^{∞} Rn(h) e^{(1/2) H⊗H} E^n ⊗ F^n,

where

(16.2.38) Rn(h) = q^{n(n+1)/2} (1 − q^{−2})^n / [n]q!,  q = e^{−h/2},  K = e^{−hH/2}.

Note that R ∉ U ⊗ U, but rather lies in some kind of completion. But (π ⊗ π′)(R) makes sense, because one assumes π, π′ are finite dimensional, so E and F act nilpotently.

Let σij be the map that interchanges the i-th and j-th factors; σ12(m1 ⊗ m2 ⊗ m3) = m2 ⊗ m1 ⊗ m3. Replace Rij by σij Rij. Then

(16.2.39) R12 R23 R12 = R23 R12 R23.

Indeed, making the replacement and moving the σ's to the left,

(16.2.40) σ12R12 σ13R13 σ23R23 = σ12σ13 R32 σ23 R12 R23 = σ12σ13σ23 R23 R12 R23,

and similarly for the other side of (16.2.33), so the Yang–Baxter equation becomes

(16.2.41) R23 R12 R23 = R12 R23 R12 (the braid relation).

The braid group Bn is the group generated by T1, . . . , Tn−1 subject to the relations

(16.2.42) Ti Tj = Tj Ti for |i − j| > 1,

(16.2.43) Ti Ti+1 Ti = Ti+1 Ti Ti+1.

Two braids are called Markov equivalent if one can be obtained from the other by a series of operations of the form

(I): β = γ β′ γ^{−1}, or
(II): β = β′ T_n^{±1}, where Bn ⊂ Bn+1 in the obvious way.


So if we have a QYBE matrix R, we have a representation ρn of Bn. We are looking for an "invariant" of a braid, i.e. a quantity which is unchanged if we apply the equivalences (I) or (II). The trace tr ρn(β) takes care of (I); for (II) we need something more complicated.

Consider a quadruple (R, f, λ, µ), where f ∈ End(V) is invertible, so that

(1) f ⊗ f commutes with R,
(2) λ, µ ∈ K are such that

(16.2.44) tr2(σ ◦ R ◦ (f ⊗ f)) = λ µ f,  tr2(R^{−1} ◦ σ ◦ (f ⊗ f)) = λ^{−1} µ f.

The notation is as follows: if F ∈ End(V^{⊗m}) ≅ End(V^{⊗(j−1)}) ⊗ End(V) ⊗ End(V^{⊗(m−j)}), then trj(F) ∈ End(V^{⊗(m−1)}) is the element obtained by taking the trace in the j-th factor.

Lemma. Suppose F ∈ End(V^{⊗(m+1)}) and G ∈ End(V^{⊗m}). Then

(16.2.45) tr(trm+1(F)) = tr(F),

(16.2.46) trm+1((G ⊗ id) ◦ F) = G ◦ trm+1(F).

The invariant that satisfies (I) and (II) is

P(β) = λ^{−α(β)} µ^{−m+1} tr(ρm(β) ◦ f^{⊗m}),  α(Ti^{±1}) = ±1,

with

R = e^h Σ_{i=0}^{n} Eii ⊗ Eii + Σ_{i≠j} Eii ⊗ Ejj + (e^h − e^{−h}) Σ_{i<j} Eij ⊗ Eji,

f = diag(e^{−nh}, e^{(−n+2)h}, . . . , e^{nh}),  λ = e^{−(n+1)h},  µ = −1.
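As a final sanity check (a sketch assuming numpy; the numerical value used for e^h is an arbitrary test value), the R above satisfies the quantum Yang–Baxter equation (16.2.33) in the smallest case n = 1, i.e. V = C²:

    import numpy as np

    def E(i, j):
        m = np.zeros((2, 2)); m[i, j] = 1.0; return m     # elementary matrix E_ij

    qh = 1.7                                              # a numerical stand-in for e^h
    R = (qh * (np.kron(E(0, 0), E(0, 0)) + np.kron(E(1, 1), E(1, 1)))
         + np.kron(E(0, 0), E(1, 1)) + np.kron(E(1, 1), E(0, 0))
         + (qh - 1/qh) * np.kron(E(0, 1), E(1, 0)))       # the R above for n = 1

    I2 = np.eye(2)
    swap = np.zeros((4, 4))
    for a in range(2):
        for b in range(2):
            swap[2*b + a, 2*a + b] = 1.0                  # flip of C^2 (x) C^2

    R12 = np.kron(R, I2)
    R23 = np.kron(I2, R)
    P23 = np.kron(I2, swap)
    R13 = P23 @ R12 @ P23                                 # move the action to factors 1, 3
    print(np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12))  # True: (16.2.33) holds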