Here we have a wine critic who’s puzzled about how to approach a wine that got a 90 from Parker (this critic was also put off by the fact that the wine was rated by one of Parker’s employees rather than Parker himself) and an 84 from the Wine Spectator.
Every critic has his methods, but this one strikes me as odd, and adds to my long list of concerns about the 100-point ratings scale.
#1: Why did he seek out the opinions of others before tasting the wine himself? Sometimes you can’t avoid hearing things about high-profile brands, but it seems to me the critic should approach the evaluation without preconceived notions whenever possible. His write-ups include scores from other publications, which is a good, democratic idea, but he himself should start out by tasting the wine blind.
#2: His conclusion that “Someone is right and someone is wrong” regarding the 90 vs. the 84. And this, on a 6-point spread. Is anyone ever right or wrong when it comes to evaluating something that we eat or drink? Or read? Or take in at the theater? Life would be a lot simpler if this weren’t subjective, but it is. And I don’t see a big discrepancy in that spread. Just a different take.
An 84 vs. 90 communicates to me, right or wrong, that the wine must be at least a clean, well-crafted representative of the type. Beyond that, it seems that the style suited Mr. Parker’s guy better than it did the Spectator’s. They each have a right to their opinions. We may or may not agree.
Recently, I was researching wines for a staff tasting and discovered that a highly regarded, fairly pricey Merlot we wanted to taste got a 94 from Robert Parker and, hang on for this – 78 from the Wine Spectator – pretty much a slap in its $65.00 face. Once we tasted it, we began to understand why. The “herbal note” that you expect from Merlot wasn’t subtle – it was something akin to vegetal. It was big and luscious, very clean and well made, so we respected it, but most of us weren’t crazy about it. How do you fairly score a wine like that?
Something like that definitely needs descriptors so that those of us who aren’t vegetally inclined can make another selection. That’s the fatal flaw in the phoenix rising from the Wine X ashes. Justwinepoints.com goes to the ridiculous extreme that “nothing else matters”. Just the points. But numbers don’t tell you what a wine tastes like. I appreciate their wish to be unpretentious and concise, but it actually seems rather egotistical to suggest that we will like it simply because they say “98”. For instance, in the category of sparkling wine, the 2003 Schramsberg Cremant received 98 points. Period. No comment. So, some unknowing enthusiast might run out and buy the Cremant to go with his oysters on the half shell, completely unaware that this particular wine is sweet. Yuk! I don’t care how “good” it is – no sweet wine with my oysters, please! It “matters”.
I’ve already ragged sufficiently on some of the other problems with numerical ratings, such as the producers of the world being at the mercy of a small handful of powerful critics, and the question of how anyone meaningfully differentiates between an 89 and a 90.
So, the debate rages on. I know that some of the flowery, over-the-top descriptors are more laughable than informative. That sort of self-indulgent writing can send you running and screaming to the numerical scores. But don’t you think most reviewers are genuinely trying to be helpful? From the vast sea of wine publications, there’s no doubt you can find a writer or two whose tastes and sensibilities are somewhat aligned with yours, whether or not they use points. And if it tastes like a 94 to them and a 78 to you, who is right? Not that words and numbers are mutually exclusive, but couldn’t some well-chosen words give you a better idea of what to expect in terms of aroma and flavor, and whether there are any characteristics that may be controversial? Or whether the critic views the wine as an outstanding or poor example of the type? Numerical ratings? I guess I’d give them a 71.