The third post is the September 25, 2010 version here. This fortnight’s reading is 4.2-5.2. At the end of it, you will be ready to know what a scheme is in general (so I’ve included 5.3 and 5.4 in case you want to read ahead, and see a few examples).
Corrections promised after the last post are included. There are some comments I still haven’t had a chance to include and digest (including from Kamal Khuri-Makdisi).
General comments.
Affine schemes are the local models for schemes in general. We use them both to prove theorems (many proofs begin with “we reduce to the affine case”) and to do explicit calculations. Learners should try to get comfortable both with dealing with quite general affine schemes, and also explicit examples.
For learners.
As always, doing many exercises is essential. Which ones are right for you depend on who you are, so in this posting I’ll divide them up a little. Here are some of the more important ones.
Understanding specific examples quite explicitly will be helpful.
- You should do some of the affine space examples A^1_Q (4.2.C), A^2_C (4.2.D), A^2_Q (4.2.E), A^n_Z (4.2.M).
- 4.2.K or 4.2.L will give you practice with maps of these objects.
- 4.2.P is fun and surprising, and will give you a sense of why the dual numbers might have something to do with differentials.
- 4.6.F deals with a generic point.
- Try one of 4.6.O, 4.6.Q, and 4.7.A, which deal with reducible schemes.
There are points of theory that it is important to understand well in order to move forward. I’ve flagged them in the text.
- 4.2.F and 4.2.G tell you how primes behave under quotients and localization. Know both, and do at least one.
- Nilpotents and the nilradical are important, 4.2.N.
- The radical of an ideal is closely related, and is discussed in 4.4.D; 4.4.I and 4.5.E will be important for later, and 4.4.F is a simpler exercise if you have less time.
- 4.2.I and 4.4.G deal with maps.
- 4.7.E on how V and I are “opposites” is essential.
- 4.5.A (the distinguished base of the Zariski topology) is good practice with bases for topologies, and we’ll use it. And 5.1.A (sections of the structure sheaf over distinguished open sets) is useful, and will give you excellent practice in thinking in this new way.
Things I’d like to ask you:
- If there are examples that look like they are supposed to be easy but aren’t, please let me know. (In particular: Is 4.7.B on the equations cutting out the polynomial axes too hard given what you now know? If it is, I should remove it; or if a hint could make it easy, I’d like to add one.)
- There are two sections on visualizing schemes. These are things that are much better said in person, in conversation. How (in)comprehensible are they when reading them? More helpful to me: are there things that could be said that would help? (I realize that these sections will be more helpful to some, and less helpful to others, and I want them to be as helpful as possible to the first group, while signaling clearly to the second group that they should pass by these sections and not worry about it.)
For experts.
- Throughout the notes I rely very much on affine covers, more than some people prefer (coming up for example in the affine covering lemma, the treatment of quasicoherent sheaves, the development of cohomology). The resulting descriptions tend to be scheme-specific, and more thought is needed to extend them to ringed spaces in general. But that’s fine by me — I would prefer to do things easily, and to later see that they generalize with a little thought, than to do things more generally than is needed, and then to specialize.
- I use the phrase local ringed space rather than the standard locally ringed space. I currently feel no guilt, because “locally ringed space” can be misleading (“locally” isn’t being used in the same way we use it elsewhere; I similarly use factorial instead of locally factorial), there is no possibility for confusion, and I tell readers how the rest of the world speaks. If this really bothers you, please speak up! (And if it really doesn’t bother you, please let me know too, so you can outvote the others…)
- We’re approaching the example of projective schemes, which I must admit end up being more confusing than I had hoped.
September 26, 2010 at 4:01 pm
Hi Ravi,
Firstly, in the warning before 2.3.C, I think you want 0 not in S.
Secondly, can I ask what surjectivity means in 3.4.E? Later on in 3.4 we see that surjectivity of a sheaf map is not equivalent to surjectivity on each open set, so I’m not sure what the question is asking.
[Thanks Amy! I’ve fixed the warning before 2.3.C. And about 3.4.E, the goal is to show that the morphism of sheaves is an isomorphism, so in this case on every open set you are expecting injectivity and surjectivity. You’re right that in general “surjectivity” of a sheaf map isn’t open set by open set, but in this case where you’re hoping for an isomorphism, it is. If an edit might help, just let me know. (I’ve just changed “sheaves” to “sheaves of sets” in this problem to pre-empt other issues. Hopefully the reader will realize that the proof also works for sheaves of abelian groups, etc., and that I needn’t give a laundry list of “set-like categories”. But I’m happy to hear otherwise.) — R.]
September 26, 2010 at 5:55 pm
A scheme is locally factorial if all of its local rings are UFDs, i.e., factorial rings. That Spec(A) is locally factorial does not imply that A is a domain, and even if A is a domain, it does not imply that A is factorial.
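(A standard example, in case it helps to have one spelled out: a Dedekind domain with nontrivial class group, such as $\mathbb{Z}[\sqrt{-5}]$, has every local ring a DVR or a field, hence a UFD, so its Spec is locally factorial; but the ring itself is not factorial, since $6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$ gives two genuinely different factorizations into irreducibles. And the Spec of a product of two factorial domains, such as $k \times k$, is locally factorial without the ring being a domain.)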
A ring R is a local ring if and only if for every f in R either f or 1 – f is invertible in R. A locally ringed space is a ringed space (X, O) all of whose stalks are local rings. It turns out that this is equivalent to: (*) for any open U and any f in O_X(U) there exists a covering U = \bigcup U_i such that for each i either f|_{U_i} or (1 – f)|_{U_i} is invertible. (Nice exercise.) A local scheme is the spectrum of a local ring.
I hope you see where I am going with this…
September 28, 2010 at 5:11 am
Hi Johan,
I see where you are going, but I’m not yet convinced. (As you know, this means I still have an open mind, and might be convinced!)
We are close to having the following convention, which is easy to remember: “locally X” means “you can check some condition related to X on any affine cover” (what I’ve called “affine-local”), and “X” then means “locally X and quasicompact”. Following this convention immediately reduces the amount of terminology a learner needs to remember, and clunky terminology is a cross that algebraic geometers have to bear.
One exception to this convention as you note is “locally factorial”, and I’m glad you brought it up; I was going to raise it in the next posting, where factoriality is introduced in the notes. This is again a “stalk-local” condition, and the phrase “locally factorial” can be misleading; if you’re not paying attention, you will assume it means the wrong thing. (Even if you are paying attention, you are using up some of your attention on something that would be best used for something else.) Instead, I prefer “factorial”, and here I can hide behind tradition: Mumford uses this, undoubtedly for the same reason. Plus I think no one will be confused.
Now about locally ringed spaces: I hadn’t pondered that nice exercise before, and I see your point that in the sense of this exercise “local ring is to ring as locally ringed space is to ringed space”. But that seems to once again suggest “local” not “locally”. I don’t see how grammatically “locally” makes sense there — it is not “how” the space is ringed. It is meant to modify “ring”, not in any way “space”.
And there is a danger of misinterpretation here, because the notion of locality usually *does* have something to do with the space. When you also take into account the fact that “locally X” already has a (sensible) meaning that is different from this one, the case seems strong for me for dropping the adverbial construction.
Update written later: I can see a case for adding a hyphen: “local-ringed space”. So I’ve now done this, but it can of course be reversed if/when you convince me to stop tilting against this windmill.
September 28, 2010 at 8:55 am
There are no easy rules for figuring out what some given terminology or jargon means. Each time you have to look up the definition. For example “finite presentation” does not mean “locally of finite presentation” and “quasi-compact”. “Locally connected” does not mean every point has a connected neighborhood. A “normal scheme” is one whose local rings are normal domains (and not one whose affine rings are all normal domains).
It is likely the case that in many situations the terminology was introduced by non-native English speakers, and so it may not sound like it means what it means. But I think this is usually the least of the students’ problems.
As Bob Friedman pointed out, Mumford introduced in GIT the notions of “stable points”, “semi-stable points” and “unstable points”. But it is not the case that “not stable” = “unstable”. Clearly this is insane, but everybody uses it. So I don’t care if Mumford used “factorial scheme” somewhere. I think what we should do in algebraic geometry (when we write a paper, book, etc.) is figure out what the standard terminology is and use it. In EGA IV, 21.6.9, Grothendieck defines the notion of a locally factorial scheme; Hartshorne (AG, II, Proposition 6.11) introduces the same terminology, and I say we use it.
Please, please, please stick with standard terminology. I think you are doing readers a disfavor if you do not.
PS: If you find some nonstandard terminology in the stacks project then please point it out!
September 28, 2010 at 11:56 am
Dear Ravi,
I’m seconding Johan again. It has to be “locally ringed space”; this is the standard terminology, and I think you’re forced to go with it. It also sounds nicer than “local ringed space”.
Cheers,
Matt
September 30, 2010 at 5:01 am
Thanks Johan and Matt! I’m still willing to consider trying to tweak established conventions in ways that are helpful that won’t confuse any reader (but may annoy some). Language can and does evolve (in mathematics as elsewhere) — for example, I’m glad that the phrase “prescheme” is now gone.
At this point, I’m intending to go with convention here (when I get the time to make the changes), and to parenthetically warn readers and even to state how I secretly think of the notion (just as I described how I secretly think of “D(f)” as the “doesn’t vanish” set, which would make Grothendieck vomit). What has tipped the balance in my mind is less your arguments, because I had thought of them too, and knew you would make them forcefully and well (translation: I’d already factored them into my thinking), and more the fact that no one else (especially people learning for the first or second or third time, or people nearer the beginning) wrote, even privately to me in an email, to say that this was indeed something that confused them and that they think this is a good change. (Maybe someone will now…) So if there are powerful arguments in favor of the status quo, and not one person who thinks it would be better to change, then that is pretty convincing.
Johan, “finite presentation” is I think the other key example I know of that I didn’t mention (hence I guessed you were going to mention it). (I wonder if there are others I’m forgetting? If so, I’d like to flag them in the text.)
Also, I’m still not sold on factorial vs. locally factorial, as the former is firmly established in the literature, even if it is a minority view. But perhaps you and others will convince me otherwise.
I’m unconvinced about the “semistable” example: that’s bad terminology, but changing it would cause confusion. But there may be ways of changing some terminology where the meaning is transparent; and indeed the best terminology is of this sort.
October 10, 2010 at 4:39 pm
A comment from a reader received by email: As a topologist who has put a lot of effort into learning algebraic geometry over the years, I can report that I was confused for a long time about the terminology “locally ringed space”, for exactly the reason you describe. (Especially before I had enough intuition/understanding about affine schemes to know why asking for your stalks to be local rings is a natural condition.) I think “local-ringed space” would be much less confusing — and after all, shouldn’t we strive to make all our terminology self-documenting?
October 12, 2010 at 5:08 am
Whoops, I see that you did not change the wording as you promised above. One more attempt to convince you: In Hartshorne the definition of a locally ringed space comes before the definition of a scheme. There is no possible confusion with coverings by affine opens when you just consider a ringed space. Please, pretty please! Also, the fact that somebody who was confused by the terminology speaks up is not good evidence that the majority of readers of Hartshorne (or other texts) were confused, as after all the ones that were not confused are less likely to speak up. (Include me in that group for example.)
Ceterum censeo Carthaginem esse delendam! (Furthermore, I maintain that Carthage must be destroyed!)
[Finally done! — R.]
October 1, 2010 at 7:46 pm
Two small comments: (i) is the 0 ring local? (ii) Is Spec(0) a local scheme? Just as fields are required to be nonzero by definition (to have a good theory of polynomial rings over them, and so on), it seems the right convention that a local ring should have a canonically associated residue field (and a local scheme a unique closed point).
So in other words, a ringed space defines a locally ringed topos (in the sense of the SGA4 “exercise”, which is where (*) comes from) if and only if its stalks are *either* 0 or local. “Spreading out” the condition “1 = 0”, it’s the same to say it is a locally ringed space in the usual sense along a closed set, on whose complement the structure sheaf is 0.
I think that the definition of “locally ringed space” suggested above is slightly “wrong” insofar as it permits a non-empty open set over which the structure sheaf vanishes. It’s true that the above more expansive definition has no effect on the subcategory of schemes, but a locally ringed space in which points don’t have associated fields (over the open set where the structure sheaf vanishes) seems contrary to the purpose of introducing the concept…
October 4, 2010 at 4:07 am
Brian: Oops, yes you are right! I screwed that up. Sorry! With the additional requirement that the global section 1 be nowhere zero, it does work as you say. I was trying to make the point that you do not have to look at stalks in order to define what it means to be locally ringed. Thanks for the correction, Brian!
September 26, 2010 at 9:30 pm
I think most of the typos are fixed already, but let me tell you what I think the remaining ones are in this third version.
2.3.3. Warning (again) : I pointed out last time it would be weird if S contains 0, so S should not contain 0 for that inclusion.
3.3.A. last line : presheaves form a functor -> presheaves form a category
4.2.G. last paragraph : A=C[x,y]/(xy) -> A=k[x,y]/(xy) (not consistent)
p.89 line -4 : discuss discuss -> discuss
4.2.M. line -2 : think of A^n_Z is -> think of A^n_Z as
Thank you for your clear explanation!!
[Thanks, fixed! I hope I’ve fixed everything around 4.2.G; I changed k to C rather than the other way around to deliberately give a “concrete” example, even though the reader should realize that C isn’t special here. — R.]
September 28, 2010 at 2:59 am
I found two typos during the first reading of 4.3.-5.2.
page 106, first line – there should be D(f_{i}) instead of D(A_{f_{i}})
page 108, second ‘sandwich’ sequence of rings – the two rings on the right are quotients of \mathbb{C}[x,y], not of \mathbb{C}[x].
As to your questions – I was able to do most of the exercises (in particular 4.7.B), but skipped 4.6.G (in (a) my first idea was to use the isomorphism theorem, but showing that our ideal is exactly the kernel of the epimorphism from C[w,x,y,z] to C[a^{3},a^{2}b,ab^{2},b^{3}] seems a little problematic, so I have to think more about it) and 4.6.J.
However, I’ve already seen most of the facts from commutative algebra that you used in these sections (including those about the Zariski topology on the prime spectrum), so probably I’m not a good person to judge the level of difficulty here.
I think that the sections about visualising schemes were OK, and I’m waiting, intrigued, for the formalizations of the concepts given in the second one.
[Hi max,
Thanks for the corrections! Your first idea in 4.6.G(a) was precisely what I was hoping people would try. It’s a bit tricky, but do-able. I just took a look to see if I could steer people in that direction, but there is already something there, and given that you already had the idea I was hoping you would, I’ll leave it as is.
I realize most readers will have some commutative algebra background (whether they realize it or not). Hopefully someone with less background than max might write and say what things threw them for a loop.
About visualizations: I think if I could make better pictures in latex, I would draw them in a different way (so “shreds of space” would look the same whether the space in question were one-dimensional or two-dimensional). At some point I’ll learn more about this. — R.]
February 28, 2011 at 9:09 pm
Not to soapbox, but since you mention pictures here, you might consider inkscape for the more challenging pictures. It has super nice output to eps, and even supports latex in your figures! I think it has potential to save you a lot of time.
Sorry if this is off-topic.
March 14, 2011 at 10:54 am
Bryan, that’s a worthy soapbox. Odds are good that I’ll deal with figures en masse much later, once it’s clearer what figures I want and need. So the ones currently included are intended to be sufficient but not great. I’d heard about inkscape before (when googling for precisely this reason), and will certainly try it! I hope it is self-explanatory, and that I can use it off the shelf. (I like self-explanatory tools in much the same way that I like self-explanatory terminology for mathematical ideas.)
September 28, 2010 at 5:35 am
I found the following email from Harry Lakser illuminating, so I am posting it (with his permission). I’ve put his pdf here (click).
Dear Professor Vakil,
I suppose I am at the opposite chronological extreme from most of the people reading through your notes “Maths 216: Foundations of Algebraic Geometry”—I am a retired professor of mathematics at the University of Manitoba. My area of research is Universal Algebra and Lattice Theory, but I have developed an amateur interest in learning modern Algebraic Geometry.
I am really enjoying your notes. It is very important to get an insight into how the practitioner in the field thinks, something that cannot always be gotten from books and papers, where the formalism tends to obscure the thought processes and insights.
However, I should like to make some comments on your approach to “sheaves on a base” applied to Spec A. When I first saw this approach in, e.g., Eisenbud-Harris, it bothered me that for a particular open subset $U$ in the base, there were many (isomorphic) $F(U)$, one for each $g$ with $U = D(g)$. I think that this feeling of unease is best articulated in Iitaka, page 39, in the first paragraph of Section 1.11 (just before Lemma 1.20). Although Iitaka has a rather elegant solution to this difficulty, it is somewhat artificial, and, in any event, I can never reproduce his details—I always have to look them up again.
One way around this difficulty is to arbitrarily choose one particular $f$ for each distinguished open set $U$. But this is somewhat of a (not too serious) pain—for example $F(D(f) \cap D(g))$ is not necessarily $A_{fg}$, although it is isomorphic to it. This messiness is also evident in Eisenbud-Harris, page 17, in the proof of Corollary I-14 (on gluing sheaves)—see the statement “…we choose arbitrarily a set $U$…”.
However, I think the most elegant, and the simplest, approach is to realize that, rather than a base of the topology, we are working with an indexed family of basic open sets, that is, $U_i$ and $U_j$ may be the same open set with $i$ distinct from $j$. Then, what you would call $F(U_i)$ really does not depend on the basis element $U_i$ but on the index $i$, that is, should be written $F_i$. Of course, from the definition of a sheaf on an indexed base, it follows that if $U_i = U_j$, then $F_i$ is isomorphic to $F_j$.
I have taken the liberty to write up this idea in the attached pdf file “sheaves.pdf”. I did not bother writing up a proof of Theorem 1—a proof would read virtually the same as that you presented in your notes from previous years, or, alternatively, as the elegant, but somewhat longer, approach through (inverse) limits on page 17 of Eisenbud-Harris. Of course, this approach would entail using the directed structure determined on the indexing set by the containment relation on the open sets.
I have not seen this approach using an indexed family of basic open subsets in any source I have managed to look at. I find that very surprising—it is such a simple idea. Perhaps only I and Iitaka were bothered by the usual approach.
I should appreciate it if you would take a look at the enclosed pdf—hopefully I am not off-base.
Sincerely yours, and much appreciative,
Harry Lakser
[His final pun of not being off-base was particularly appreciated. — R.]
September 28, 2010 at 10:33 am
“I would prefer to do things easily, and to later see that they generalize with a little thought, then [sic] to do things more generally than is needed, and then to specialize.”
In view of this comment, I have to ask why you introduce the inverse image sheaf at all. When you are working with schemes (as opposed to ringed spaces), there is a much nicer construction of the pullback of a quasicoherent sheaf, given locally as follows: When f: Spec A -> Spec B, and M is a B-module, we let $f^*(\tilde{M})$ be the quasicoherent sheaf with global sections $M \otimes_A B$. Although I am certainly not an expert, I can’t think of any occasions to use the inverse image sheaf in algebraic geometry when some other simpler construction (e.g., restriction to an open set, or pullback of a quasicoherent sheaf) would not work. Am I missing something important?
P.S. Since it is not obvious from my message above, I thought I should probably mention that I have a very high opinion of your notes so far.
September 28, 2010 at 10:47 am
That should be Spec B -> Spec A, and M an A-module.
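In other words (restating the corrected version, if I have it right): for $f : \mathrm{Spec}\, B \to \mathrm{Spec}\, A$ coming from a ring map $A \to B$, and $M$ an $A$-module, $f^*(\tilde{M})$ is the quasicoherent sheaf on $\mathrm{Spec}\, B$ whose global sections are $M \otimes_A B$. For instance, if $A = k[x]$, $B = k[x,y]$, and $M = A/(x)$, then $M \otimes_A B = B/(x)$.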
October 14, 2010 at 8:32 am
Hi Charles,
I will respond at greater length when I begin to catch up, but the short answer is that you raise a very good point. If memory serves me, I initially did what you propose, and then had this forced upon me by something later in the notes. But I’ll have to go back and check, and even if that is true, that’s a good reason to flag the hard construction as something that readers should not think much about until they need to later (if they ever do). And the grammatical error is now fixed, thanks!
October 2, 2011 at 2:54 pm
Hi Charles,
I’ve now returned to this (after a very long delay from your original comment). As you know, I strongly agree with the philosophy you express in your comment. But I’ve currently decided to keep this, because (i) it is only two pages, (ii) it is so fundamental that there is a good chance that I will use some intuition about it later, and (iii) I think 3.6.G will get used (with the caveat that the way it is currently used is bogus — this still has to get fixed). I may still return to this… and you may also convince me still…
I’ll reach this around the end of the second week of this fall’s class, so I’ll get some feedback about how horrible this topic is (and how horrible my exposition is).
September 29, 2010 at 4:04 pm
In exercise 4.2.A(b), you refer to the localization k[x]_(x), using the notation for localization at a prime ideal, which I don’t think you’ve defined yet. (If I am mistaken and you have, it might be good to add a reminder since you’re recalling notation that hasn’t been used recently.)
Speaking just from my own experience, I found the notation to be somewhat confusing when dealing with the two different flavors of localization, especially in the case of localizing at a principal ideal, as then the only difference notationally is a pair of parentheses: Z_2 is very different from Z_(2). I think it might be good if you added a little bit of a warning about this in your discussion following exercise 4.2.G since it will probably save at least a few people some grief later.
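To spell the two notations out concretely (my phrasing, not the notes’): $\mathbb{Z}_2 = \mathbb{Z}[1/2] = \{a/2^n : a \in \mathbb{Z}, n \geq 0\}$, where the element 2 has been inverted, while $\mathbb{Z}_{(2)} = \{a/b : a, b \in \mathbb{Z},\ b \text{ odd}\}$, where everything outside the prime ideal (2) has been inverted.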
Also, as someone who has had these topics presented in a class on varieties before, I’m mildly curious as to why you separated the discussion of V and I.
—
Some miscellaneous typos / comments:
– Example 1 in 4.2: “the quotient by this ideal is field” => “is a field”
- The A^1_{k-bar} in the second line of Example 3 in 4.2 looks kind of strange: the bar makes it look like a fraction. You also wrote “it’s value” twice when you meant “its value”.
– Exercise 4.2.E is misworded: I think you meant either something like “containing” instead of “corresponding”, or you meant “corresponding to {(sqrt 2, sqrt 2), (-sqrt 2, -sqrt 2)}”.
– In 4.2.P, if one wishes to be pedantic, there are still epsilons.
– In 4.4, in the definition of the vanishing set, the “V” isn’t bolded. (I assume you used the script V, but it looks funny that way.)
– I assume you will at least mention the phrase “equivalence of categories” in conjunction with the result of theorem 4.7.1, at some point. By the way, theorem 4.7.1 is something I have also seen called Hilbert’s Nullstellensatz; e.g., in Dummit/Foote.
November 15, 2010 at 9:43 am
Sorry for the long-delayed reply! Many thanks.
4.2.A(b): I now refer back to where localization is introduced. There the reader is cautioned about the confusing notation you describe.
I separated the discussion of V and I because I wanted the reader to focus on the Zariski topology before being brought back to the algebra. (I’m not wedded to this though.)
4.2 Example 1 and 4.2.E: fixed.
4.2 Example 3 and 4.4: most latex-related and formatting issues I’m deferring for (much) later, as it isn’t clear if the notes might migrate in some way.
4.2.P: that’s actually kind of funny, so I’ve altered the text a little.
4.7.1: I’ve decided not to mention equivalence of categories right here, but I now mention that some call this the Nullstellensatz.
September 30, 2010 at 9:11 am
I’ve been making a number of improvements suggested by Bjorn Poonen, and there is one that I’ll do later when I’ve had a chance to think through how best to word it. I’m copying it here both as a reminder to myself, and in case it is helpful to people before the change is actually made.
“In the proof of base gluability in 5.1 (i.e., 5.1.2.1),
one possibility for wording this is to have a
Case 1. I is finite
and to finish this case completely before doing
Case 2. I is arbitrary
using your argument on p.107.
(The content of the argument is the same. For some reason,
I find it slightly easier to digest this way.)”
Update May 10, 2011: change now made.
October 1, 2010 at 10:56 am
Exercise 4.2.L has a lot of undefined variables, and so it took a while for me to divine what was meant by the problem (although having to think hard about this was perhaps not a bad thing). I think you should explicitly say that the f_i are elements of C[x_1, … , x_n], that I and J are ideals in C[x_1, … , x_n] and C[y_1, … , y_m] respectively, and then say that \Phi is the k-algebra homomorphism / ring map from C[y_1, … , y_m] to C[x_1, … , x_n] whose action on generators is given by the polynomials f_i.
When explicitly stated this way, part (b) makes a great deal more sense, since I had initially thought you meant for Phi to be a map from C^n to C^n (which of course it isn’t), since you’ve used the x_i both to denote variables in a polynomial ring and coordinates in C^n. It might be better to use z_i to denote the complex coordinates in part (b) to underscore the difference.
October 14, 2010 at 12:44 pm
Hi Evan,
Thanks, you’re absolutely right; this (and the many other comments currently unanswered) will be dealt with before long. And when I deal with this, I’ll also deal with a suggestion of Jordan Ellenberg’s (transcribed here so I won’t forget): `in problem 4.2.L I think the students here were confused by precisely what was meant by “the converse to (a).” ‘
October 2, 2011 at 3:30 pm
A long-delayed response: I’ve reworded this problem (largely following a suggestion from Jason Ferguson), and removed the statement that Jordan pointed out was confusing.
October 6, 2010 at 11:39 am
Small question:
Example 4 in 4.2:
Since we allow the zero ring, shouldn’t we allow the entire ring to be prime? I guess I usually think P in R is prime if and only if R/P is a domain, and the zero ring is a domain. In that case Spec 0 is nonempty. Am I missing something here about the zero ring having no prime ideals?
October 7, 2010 at 11:11 am
Actually – I remembered a domain requires 1 \neq 0.
[Great! — R.]
October 6, 2010 at 6:08 pm
I read it on a plane this time so I didn’t prepare detailed comments, but I have a couple of small ones.
4.6. I think you should give an exercise right after the definition of “irreducible” saying that if A is a domain, then Spec(A) is irreducible. This is used without comment several times (e.g., in 4.6.G).
4.6.5 : It seems a little historically deceptive to say that this is what Gordan called theology — wasn’t Gordan really talking about Hilbert’s solution to the “fundamental problem of invariant theory”, of which the basis theorem was only a small piece?
4.7.D, 4.7.1 : Don’t these only work for f.g. algebras over an algebraically closed field? The proofs I know go through the Nullstellensatz.
October 7, 2010 at 10:01 am
Whoops, I now see the light on 4.7.D and 4.7.1. I now only request that you choose a different letter for your ideal in 4.7.D.
[Thanks Andy! All fixed. I added an exercise after 4.6 with a hint (as I want readers to have no problem with it); removed the Gordan quote; and changed that confusing letter. — R]
October 7, 2010 at 5:03 am
[…] Conrad complained here that the statement above is not true because the zero ring is not a local ring. I agree with him. […]
October 7, 2010 at 12:31 pm
Question about 4.2.A a)
May we assume k = \bar{k} and epsilon is transcendental over k?
[I’m not sure how k=\bar{k} can help, so normally I’d say fine, but I’d rather not, because I’d fear it would lead you in the wrong direction. Also, this is a matter of how you choose to use terminology, but I’d say that epsilon is algebraic over k, because it satisfies an algebraic relation: epsilon^2=0. But I’d see a better case for not saying it is algebraic, as it doesn’t sit in any field extension of k, as in no field extension can you have something whose square is zero that is itself not zero! I hope that helps; it’s hard to discuss things in text not at a blackboard. — R.]
October 13, 2010 at 6:03 pm
For 4.2.O or 4.2.10 – I don’t think this is true for the zero ring unless we say zero is not nilpotent.
[I see what you are saying, but it is still fine if the “empty intersection” (intersection of no sets) of a bunch of subsets of a set X is taken to be everything (i.e. X). So by making appropriate (and appropriately confusing) conventions around the empty set, these contradictions can be avoided. But feel free to make an exception for the 0-ring — I often do! — R.]
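[Spelled out, assuming 4.2.10 is the statement that the nilradical is the intersection of all prime ideals: for the 0-ring, Spec is empty, the empty intersection of subsets of A is (by that convention) all of A = {0}, and the nilradical is also {0}, so the two sides agree after all. — R.]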
December 27, 2010 at 3:22 am
Dear Prof. Vakil,
I think you forgot something in exercise 4.4.H.: “Show that Spec B/I is naturally a closed subset of Spec B” should probably be “Show that $Spec B/I$ ($Spec S^{-1}B$) is naturally a closed (open) subset of $Spec B$”.
Your notes are really great, it’s so much fun learning with them.
Thank you!
January 4, 2011 at 1:48 pm
Actually, it isn’t true that Spec S^{-1} B is always an open set — it’s true for example if S is {1,f,f^2, …}, but isn’t true if B = k[x] and S = B \setminus (x).
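To spell out why that last example fails to be open: $\mathrm{Spec}\, k[x]_{(x)}$ has just two points, the generic point $(0)$ and the closed point $(x)$, and it maps onto the subset $\{(0), (x)\}$ of $\mathbb{A}^1_k$. Any nonempty open subset of $\mathbb{A}^1_k$ is the complement of finitely many closed points, and $k[x]$ has infinitely many maximal ideals, so a subset containing only one closed point cannot be open.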
February 28, 2011 at 9:03 pm
I have a question about (5.1.2.1):
You say we are showing that the first and second map compose to be zero, but I am having trouble seeing this. Even after understanding your proof, I am having trouble producing a “diagrammatic” proof.
Here is the issue. If we consider the maps from A into each $A_{f_i}$ and then take their product, we have a unique map from A to the product. Now take the map from each $A_{f_i}$ to $A_{f_if_j}$ (we can see that this is just the further localization at f_j); taking the product of these double localizations, the compositions of these morphisms give maps from our first product to all of the double localizations, and thus, by universality, a unique map to the product of double localizations. Now we have canonically defined the maps of (5.1.2.1). From this, it is not obvious why the composition is zero. In fact, this seems like it should not be the case.
At this point I realize the flaw in my logic: my diagram has not taken into account that the localizations agree on overlaps. Even if I make cocartesian squares out of the pairs of localizations, I can’t seem to make this work.
So, how can I fix up my argument to see this fact? I ask mostly because I think the current proof is unnecessarily tedious. Thank you very much!
March 14, 2011 at 11:03 am
Hi Bryan,
Thanks for your comment; I realize what was missing from the exposition that would cause confusion. The issue is what that last arrow actually is: there is a sign involved. In particular, if you get from A to A_{f_i f_j} through A_{f_i} or A_{f_j}, you will get the same thing. So in order to make the contribution zero, you want to have one of those “routes” contribute the negative of what you might first expect. I’ve fixed it in the next version (to come out in just under 3 weeks), and have included the latex here so no one has to wait.
[begin cut]
(Aside: experts will realize that we are trying to show exactness
of
\begin{equation}
0 \rightarrow A \rightarrow \prod_i A_{f_i} \rightarrow \prod_{i \neq j} A_{f_i f_j}.
\end{equation}
Be careful interpreting the right-hand map — signs are involved!
The map $A_{f_i} \rightarrow A_{f_i f_j}$ should be taken to be the “obvious one” if $i < j$, and the negative of the “obvious one” if $i > j$.
Base identity corresponds to injectivity at $A$.
The composition of the right two morphisms is trivially zero, and gluability
is exactness at $\prod_i A_{f_i}$.)
[end cut]
March 14, 2011 at 11:06 am
And a little more that I wanted to have in a separate comment: the fact that the composition is 0 isn’t too hard. The hardest part is showing that anything in the kernel of the last map indeed comes from A. I may indeed try something else (see for example Kamal Khuri-Makdisi’s comments here, which at the time I’m writing this has yet to be dealt with).
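For concreteness, here is the smallest instance I can think of (an illustration only, with the index set {1, 2} and just the $i < j$ factor kept): take $A = \mathbb{Z}$, $f_1 = 2$, $f_2 = 3$, so that $D(2) \cup D(3) = \mathrm{Spec}\, \mathbb{Z}$. The sequence becomes $0 \rightarrow \mathbb{Z} \rightarrow \mathbb{Z}[1/2] \times \mathbb{Z}[1/3] \rightarrow \mathbb{Z}[1/6]$, with the last map $(a_1, a_2) \mapsto a_1 - a_2$ (the minus sign on the second factor is the sign discussed above). The composition from $\mathbb{Z}$ is $a \mapsto a - a = 0$, and exactness in the middle is the statement that if $a_1 \in \mathbb{Z}[1/2]$ and $a_2 \in \mathbb{Z}[1/3]$ become equal in $\mathbb{Z}[1/6]$, then they are the same rational number, whose denominator is both a power of 2 and a power of 3, hence an integer; that is the “hardest part” in miniature.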
September 24, 2011 at 4:55 pm
A small comment from an “actual” beginner, i.e. someone who has barely even seen varieties. Section 4.2 (the underlying set of affine schemes) has a lot of geometric intuition, but I think it might benefit from an explicit (not necessarily formal) description of what you mean by points corresponding to prime ideals, i.e., that you consider the prime ideal to correspond to the set of points at which all functions in that ideal vanish. This is implied in your “translations” section, but never really explicit. Because the first example is C[x], where you associate (x-a) to a, the reader might be misled into thinking that the association is somehow more superficial (oh, there’s an a on both sides). Again, anyone who has heard of varieties will probably be happy with this section, but I found myself struggling to understand/gain intuition without relying on “background knowledge” about varieties.
September 25, 2011 at 12:49 pm
One note on this correspondence: a prime ideal corresponds to a single (possibly non-geometric) point. It is the _closure_ of this point that is the set of points at which all functions in that ideal vanish.
There is, actually, a worthwhile note here: moderately experienced algebraic geometers can move fluidly between talking about a “point of a scheme” and talking about the corresponding closed subset (i.e., the closure of the point). But it is important (and sometimes confusing) to understand the difference.
[March 25 2012: I’ve attempted to incorporate some of this discussion into the notes. — R.]
September 5, 2012 at 7:17 pm
Hey, I know this is a bit far back from what you’re thinking about now, but I think I’ve found a counterexample to Remark 2.6.3 (Spec A is not connected iff A is isomorphic to the product of two rings).
Consider A = ( (F_2 x F_2)[epsilon] ) / (epsilon^2). (F_2 being the field with two elements).
A has eight elements and Spec A is homeomorphic to Spec(A/(epsilon)) = Spec(F_2 x F_2) = {F_2 x (0), (0) x F_2}, which is a discrete space with two elements, so in particular Spec A is not connected.
Suppose A is isomorphic to A_1 x A_2 for some nonzero rings A_1, A_2. Then we have wlog that |A_1| = 2, |A_2| = 4.
Note that the nilradical of A has 4 elements, (F_2xF_2)epsilon.
A_1 must be F_2, so the only nilpotent elements of A_1xA_2 are contained in {0}xA_2, but (0,1) is not nilpotent so A_1xA_2 has at most three nilpotent elements. So A cannot be isomorphic to A_1xA_2.
September 5, 2012 at 7:18 pm
Sorry that was meant to be Remark 4.6.3
September 5, 2012 at 7:38 pm
I have, of course, just realized my mistake.
September 5, 2012 at 8:02 pm
Cool!