The reviewer’s fallacy: when critics aren’t critical enough. – Slate

The Reviewer’s Fallacy is a different sort of phenomenon, less premeditated than baked into today’s critical enterprise. One of the root causes stems from Sturgeon’s law, named after its originator, science-fiction author Theodore Sturgeon, who once observed, “It can be argued that 90% of film, literature, consumer goods, etc. is crap.” The “It can be argued” part usually isn’t quoted, and the figure is very ballpark. But it’s inarguable that the majority of what comes down the pike, in any medium, is mediocre or worse.


It would be tiresome for critics to constantly be counting the ways that the work under review is crap, nor would their editors and the owners of the publications they write for be happy with a consistently downbeat arts section. The result is an unconscious inclination to grade on a curve. That is, if something isn’t very good, but is better than two-thirds of other entries in the genre—superhero epics, quirky or sensitive indie films, detective novels, literary fiction, cable cringe comedies—give it a B or B-plus.

I’ve been reviewing stuff for more than two decades: music, books, software, hardware, theater, and more. If something is really crap, I generally don’t bother writing a review. But I’ve still spent my time trying it out, time that is unpaid. Occasionally – just very occasionally – I’ve published bad reviews, and they are intended as warnings to readers who might be interested in purchasing the item in question.

One such example is this 2006 review of Sting’s album of songs by John Dowland, one of my favorite Renaissance composers. I said:

Next is Flow My Tears, based on the melody from Dowland’s “hit” instrumental, Lachrimae. Sting sounds like a mediocre singer in a shower. His voice is terrible, his tone is slightly off, and it makes me want to howl at the moon. He basically massacres this song – though you don’t hear him gasping any more – and his braying is a sorry sound indeed.

But, I also concluded the review saying this:

Now, I have to admit that it is entirely possible that Sting’s performance is closer to actual Elizabethan performance than we in the 21st century can imagine. Shakespeare scholars, for example, have shown that no Shakespeare play was met with the same awe and respect that we modern theatre-goers show; it is very possible that this performance accurately reflects the majority of Elizabethan minstrels. Well, all but the part with the processed voices.

Here’s another record I panned, which earned me an angry email from the musician in question. My criticism was his questionable choice of tempo:

The liner notes mention something curious that motivated the guitarist’s playing. He “tried to keep an ideal ‘tactus’, that of my heartbeat at rest (53 on the metronome) throughout the Variations.” This is one of the most questionable reasons to record at a given tempo that I can imagine. Whatever motivated this odd decision certainly ruined this recording.

I ended that review saying:

It’s rare that I hear a recording that is this lugubrious and disappointing. I strongly suggest avoiding this disc.

Source: The reviewer’s fallacy: when critics aren’t critical enough.

5 thoughts on “The reviewer’s fallacy: when critics aren’t critical enough. – Slate”

  1. I value your reviews, Kirk, for their honesty, and because you usually explain how your experience and criteria might not mirror those of some of your readers. I agree that there is a widespread problem of fawning reviews. This forms an unbalanced, dysfunctional pair with the many reviews whose only goal is snark. I suppose that snark, too, sells papers. The two styles often share the trait that a major theme of the review is how much more the reviewer knows than anyone else.

    I’m thankful for the quality, utility, and sometimes even modesty of your reviews.

  2. Re: “If something is really crap, I generally don’t bother writing a review.”

    I understand the rationale here…and it may be almost necessary at some level. But it reminds me of a complaint I’ve had that goes back decades and has been repeated many times since.

    Back in my teens and twenty-something years, I subscribed to Stereo Review. Among other things, I relied on it for advice about what audio equipment to buy. However, I noticed a problem: Almost every review that appeared in the magazine was at least mildly positive. “Are there no truly bad products?” I queried. “It doesn’t seem possible.” I was not alone in my concern. In one issue, a Letter to the Editor appeared with precisely this question. Their answer was (paraphrasing): “If a product is truly bad, we don’t deem it worthy of review — except occasionally when it is such a popular item that we are certain our readers would want to know about it. The result is that almost all of our reviews are positive.”

    I was skeptical. I figured at least a good part of the true rationale was that they didn’t want to risk losing money by dissing products made by their advertisers — resulting in the companies pulling their ads.

    Regardless, it didn’t work for me. If I went to a store and narrowed my choice for new speakers down to 3 brands — all major brands — and then checked Stereo Review and found only 1 item reviewed — what was I supposed to make of that? That the other two speakers were crap and that’s why they were not reviewed? Or that the magazine simply hadn’t gotten around to reviewing them yet? Or what?

    While I understand that one need not review every piece of crap out there, especially products that no one is likely to know about or buy, I do feel there is an obligation to post negative reviews of significant products even if that means posting many negative reviews.

    Movie reviewers have more of the right approach here. Almost every movie that gets released gets a review. And some of my favorite reviews have been of truly horrible movies. Roger Ebert published a popular book made up entirely of such reviews: Your Movie Sucks.

    • With something like stereo equipment, I would expect the bad products to be reviewed. But then you run into the problem that a manufacturer of a product that is panned might not want to advertise. So the self-censorship is such that only the decent or good products get reviewed. Fortunately, we now have easy access to non-reviewer opinions, so we can find out about such products.

      For the reviews I write on this website, I don’t want to waste time telling people about things that aren’t good. And writing for Macworld, no editor ever told me not to review bad apps or products; at times, I did, but I don’t really enjoy it.

      • This website is not intended as an exhaustive/encyclopedic reference. So your approach makes sense. I would be less sympathetic to a source like Macworld if they took the same approach.
        But as you say, Macworld did not. When I did reviews for them, they never once pressured me to upgrade a bad review.
        I did not mean my prior comment to be critical of you specifically. Rather, it was just a comment on the general issue being raised.

        • Yes, I understand.

          Macworld only once queried my mouse rating, for an app where the rating was a bit higher than the editor thought was appropriate for my review. But they never tried to influence my ratings in any way.

          It is difficult, because, like you, I want to know what is crap. Audio equipment is especially problematic, because of the incestuous relationship between publications and manufacturers regarding advertising. I’m sure there are other types of products like that too, but none that I follow.
