§ Projective spaces and Grassmannians in AG
§ Projective space
- Projective space is the space of all lines through the origin in Rn.
- Algebraically constructed as (V−{0})/R×.
- We exclude the origin to remove the "degenerate line": the subspace spanned by {0}, when acted on by R×, is just {0}, which is zero dimensional.
§ Grassmannian
- G(m,V): m dimensional subspaces of V.
- G(m,n): m dimensional subspaces of V=kn.
- G(m+1,n+1) is the space of m-planes Pm in Pn. We projectivize kn+1 by sending (x0,x1,…,xn)∈kn+1−{0} to [x0:x1:⋯:xn]∈Pn; an (m+1)-dimensional subspace of kn+1 then becomes a copy of Pm inside Pn.
- Duality: G(m,V)≃G(dim(V)−m,V⋆). We map the subspace W⊆V to the annihilator of W in V⋆: That is, we map W to the set of all linear functionals that vanish on W [ie, whose kernel contains W].
- The above implies G(1,V)≃G(n−1,V⋆), where n=dim(V). G(1,V) is just the projective space P(V)≃Pn−1. The (n−1)-dimensional subspaces are hyperplanes through the origin, each cut out by a single linear equation c0x0+⋯+cn−1xn−1=0.
§ G(2, 4)
- These are lines in P3. A line is spanned by a pair of points with homogeneous coordinates (x0,y0,z0,w0) and (x1,y1,z1,w1). That is, we're considering "lines" between "points" (or "vectors") in 3-space. Exactly what we need to solve stabbing line problems for computer graphics :)
- Start by taking a 2D plane. The line will pass through a point in the 2D plane. This gives us two degrees of freedom.
- Then take a direction in ordinary Euclidean R3 (or S2 to be precise). This gives us two degrees of freedom.
- Can also be said to be a 2-dim. subspace of a 4-dim. vector space.
- In total, G(2,4) should therefore have four degrees of freedom.
- Take W⊆V where V≃k4, and W is 2-dimensional subspace.
- W is spanned by two vectors v1,v2. So I can record it as a 2×4 matrix: [a11 a12 a13 a14; a21 a22 a23 a24]. Vector vi has coordinates (ai1,ai2,ai3,ai4).
- If I had taken another basis (v1′,v2′), there would be an invertible matrix B∈K2×2 (det(B)≠0) that sends (v1,v2) to (v1′,v2′). Vice versa, any invertible matrix B gives us a new basis.
- So the redundancy in our choice of parametrization of subspaces (via basis vectors) is captured entirely by the space of Bs.
- Key idea: compute 2×2 minors of the 2×4 matrix (v1,v2).
- This is going to be (a11a22−a12a21,…,a13a24−a14a23)∈K6.
- Note here that we are computing the 2×2 minors of a rectangular matrix: we take all possible 2×2 submatrices and calculate their determinants.
- In this case, we must pick both rows, and we have C(4,2)=6 choices of columns; thus we land in K6.
- We represent this map as m:K2×4→K6 which sends m((aij))≡(a11a22−a12a21,…,a13a24−a14a23)which maps a matrix to its vector of minors.
- The great advantage of this is that we have m(B⋅(aij))=det(B)⋅m((aij)), since the minor by definition takes a determinant of submatrices, and determinant is multiplicative.
- Thus, we have converted a matrix redundancy of B in aij into a scalar redundancy (of det(B)) in m(aij) .
- We know how to handle scalar redundancies: Live in projective space!
- Therefore, we have a well defined map G(2,4)→P5. Given a subspace W∈G(2,4), compute a basis v1,v2∈K4 for W, then compute the vector of minors m((v1,v2))∈K6, and send this to its class in P5.
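The construction above can be sketched in a few lines of plain Python (the names m and matmul are mine, not from the lecture):

```python
# A minimal sketch of the minor map m : K2x4 -> K6 and the redundancy
# identity m(B·A) = det(B)·m(A), over the integers for exactness.
from itertools import combinations

def m(A):
    """Vector of all six 2x2 minors of a 2x4 matrix, columns in lex order."""
    return [A[0][i] * A[1][j] - A[0][j] * A[1][i]
            for i, j in combinations(range(4), 2)]

def matmul(B, A):
    """Product of a 2x2 change-of-basis matrix B with a 2x4 matrix A."""
    return [[sum(B[r][k] * A[k][c] for k in range(2)) for c in range(4)]
            for r in range(2)]

A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
B = [[2, 0],
     [1, 1]]                                  # invertible: det(B) = 2
det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]

lhs = m(matmul(B, A))
rhs = [det_B * x for x in m(A)]
assert lhs == rhs  # matrix redundancy has become a scalar redundancy
```

Since lhs and rhs differ only by the scalar det(B), both bases give the same point of P5, which is exactly why the map G(2,4)→P5 is well defined.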
§ G(2,4), projectively
- These are lines in P3.
- So take two points in P3, call these [a0:a1:a2:a3] and [b0:b1:b2:b3]. Again, this gives us a matrix:
[a0 a1 a2 a3; b0 b1 b2 b3]
- We define Sij≡aibj−ajbi which is the minor with columns (i,j).
- Then we compress the above matrix into (S01:S02:S03:S12:S13:S23)∈P5. See that Sii=0 and Sji=−Sij, so we only keep the six Sij with i<j.
- See that if we scale a or b by a constant, then all the Sij scale by the same constant, and thus the point itself in P5 does not change.
- We can also change b by adding some scaled version of a. This is like adding a multiple of one row to another when taking determinants. But this does not change determinants!
- Thus, the Plücker coordinates (up to scale) are independent of which two points a,b we choose to parametrize the line in P3.
- This gives us a well defined map from lines in P3 to points in P5.
- This is not an onto map: lines in P3 form a 4-dimensional family (2 degrees of freedom for a base point plus 2 for a direction, as counted above), while P5 has dimension 5.
- So heuristically, we are missing "one equation" to cut P5 with to get the image of lines in P3 in P5.
- This is the famous Plücker relation:
S02S13=S01S23+S03S12
- It suffices to prove the relationship for the "standard matrix":
[1 0 a b; 0 1 c d]
- In this case, the relation reads c·(−b) = 1·(ad−bc) + d·(−a), ie −bc = ad−bc−ad, which holds.
- In general, we get the Plücker relations:
Si1…ik Sj1…jk = ∑ Si1′…ik′ Sj1′…jk′,
where the sum runs over suitable exchanges of indices between the two factors.
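The relation for k=2, and the invariance under change of spanning points, can be checked mechanically; a hedged sketch in Python (function names are mine):

```python
# Check that the minors S_ij of any 2x4 matrix satisfy the Plücker relation
# S02·S13 = S01·S23 + S03·S12, and that the resulting point of P5 is
# unchanged if we rescale a or replace b by b + t·a.
from itertools import combinations

def S(a, b):
    """The six Plücker coordinates (S01, S02, S03, S12, S13, S23)."""
    return [a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(4), 2)]

def relation_holds(s):
    s01, s02, s03, s12, s13, s23 = s
    return s02 * s13 == s01 * s23 + s03 * s12

a, b = [1, 2, 0, 3], [0, 1, 4, 1]
s = S(a, b)
assert relation_holds(s)
assert relation_holds(S([1, 0, 2, 3], [0, 1, 5, 7]))  # "standard matrix" form

# Invariance: the row operation b -> b + 5a leaves every minor unchanged,
# and scaling a by 3 scales every minor by 3 -- the same point of P5.
assert S(a, [bi + 5 * ai for ai, bi in zip(a, b)]) == s
assert S([3 * ai for ai in a], b) == [3 * si for si in s]
```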
§ Observations of G(2,4)
- Suppose the coordinate matrix (aij) of (v1,v2) has non-vanishing first minor, ie the block [a11 a12; a21 a22] is invertible. Let B be the inverse of this block, and set (v1′,v2′)≡B(v1,v2).
- Then (v1′,v2′) has coordinate matrix [1 0 y11 y12; 0 1 y21 y22].
- So the first 2×2 block is identity. Further, the yij are unique.
- As we vary the yij, we get different 2-dimensional subspaces of V. Thus, locally, the Grassmannian looks like A4. This gives us an affine chart!
- We can recover the Grassmannian from the P5 embedding. Let p0,…,p5 be the coordinate functions on P5 (p for Plücker).
- The equation p0p5−p1p4+p2p3=0 holds on the Grassmannian. One can show that the zero set of this equation is exactly the Grassmannian.
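Both observations lend themselves to a quick computation. A minimal sketch, assuming the ordering (p0,…,p5)=(S01,S02,S03,S12,S13,S23) used above (function names are mine):

```python
# Row-reduce a basis into the chart [1 0 y11 y12; 0 1 y21 y22], and check
# that its Plücker point satisfies p0·p5 - p1·p4 + p2·p3 = 0.
from fractions import Fraction
from itertools import combinations

def to_chart(A):
    """Normalize a 2x4 matrix to [I | Y]; assumes the first 2x2 minor is nonzero."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = Fraction(a * d - b * c)
    B = [[d / det, -b / det], [-c / det, a / det]]  # inverse of the first block
    return [[sum(B[r][k] * A[k][col] for k in range(2)) for col in range(4)]
            for r in range(2)]

def pluecker_point(A):
    """(p0, ..., p5) = the six 2x2 minors of A, columns in lex order."""
    return [A[0][i] * A[1][j] - A[0][j] * A[1][i]
            for i, j in combinations(range(4), 2)]

A = [[2, 1, 3, 4],
     [1, 1, 5, 6]]
C = to_chart(A)
assert [row[:2] for row in C] == [[1, 0], [0, 1]]    # the affine chart A4

p = pluecker_point(A)
assert p[0] * p[5] - p[1] * p[4] + p[2] * p[3] == 0  # the Plücker quadric
```

Fractions keep the row reduction exact, so the yij come out as honest rational numbers rather than floats.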
§ Computing cohomology of G(2,4)
- Take all points of the following form:
[1 0 ∗ ∗; 0 1 ∗ ∗]
- Let's look at the first column: it is (1;0). Why not (1;1)? Well, I can always clear the second row entry by subtracting a scaled version of the first row (this doesn't change the determinants!). Thus, if we have a pivot 1 somewhere, its "complement" in the other row must be a 0.
- Next, we can have something like:
[1 ∗ 0 ∗; 0 0 1 ∗]
- Here, in the second column (∗;0), if we didn't have the 0, we could have standardized the column into the form (0;1), which brings us back to the first case! Thus, we must have a 0 to get a case different from the previous one.
[1 ∗ ∗ 0; 0 0 0 1]
[0 1 0 ∗; 0 0 1 ∗]
[0 1 ∗ 0; 0 0 0 1]
[0 0 1 0; 0 0 0 1]
- If we count the number of ∗s, which is the number of degrees of freedom, we see that one of them (the last one) has zero stars (A0), one has 1 star (A1), two have 2 stars (A2), one has 3 stars (A3), and one has 4 stars (A4).
- This lets us read off the cohomology of the Grassmannian: we know the cellular decomposition, ie, the number of n-cells in each dimension.
- Alternatively, we can see that over a finite field k with q elements, the Grassmannian has q0+q1+2q2+q3+q4 points. On the other hand, P4 has q0+q1+q2+q3+q4 points. Thus the Grassmannian is different from projective space!
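As a sanity check (my own brute-force sketch, not from the lecture), one can enumerate the 2-dimensional subspaces of k4 over k=F2 directly and compare with the cell count:

```python
# Brute-force count of 2-dimensional subspaces of F_2^4, compared with the
# cell decomposition q0 + q1 + 2q2 + q3 + q4 (= 35 when q = 2).
from itertools import product

q = 2
nonzero = [v for v in product(range(q), repeat=4) if any(v)]

def span(u, v):
    """All F_q-linear combinations s*u + t*v, as a hashable set."""
    return frozenset(tuple((s * a + t * b) % q for a, b in zip(u, v))
                     for s in range(q) for t in range(q))

# span(u, u) is just the line through u, so the condition below keeps exactly
# the pairs (u, v) with v outside that line, ie the genuine 2-dim spans.
subspaces = {span(u, v) for u in nonzero for v in nonzero
             if span(u, v) != span(u, u)}
grassmannian_count = len(subspaces)

assert grassmannian_count == 1 + q + 2 * q**2 + q**3 + q**4   # 35 for q = 2
assert grassmannian_count != sum(q**i for i in range(5))      # P4 has 31 points
```

The same loop works for any small prime q, since span ranges over all s,t in F_q.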
- Borcherds