How Amazon’s Algorithms Curated a Dystopian Bookstore


From Wired:

Among the best-selling books in Amazon’s Epidemiology category are several anti-vaccine tomes. One has a confident-looking doctor on the cover, but the author doesn’t have an MD—a quick Google search reveals that he’s a medical journalist with the “ThinkTwice Global Vaccine Institute.” Scrolling through a simple keyword search for “vaccine” in Amazon’s top-level Books section reveals anti-vax literature prominently marked as “#1 Best Seller” in categories ranging from Emergency Pediatrics to History of Medicine to Chemistry. The first pro-vaccine book appears 12th in the list. Bluntly named “Vaccines Did Not Cause Rachel’s Autism,” it’s the only pro-vaccine book on the first page of search results. Its author, the pediatrician Peter Hotez, a professor in the Departments of Pediatrics and Molecular Virology & Microbiology at the Baylor College of Medicine, has tweeted numerous times about the amount of abuse and Amazon review brigading that he’s had to fight since it was released.

Over in Amazon’s Oncology category, a book with a Best Seller label suggests juice as an alternative to chemotherapy. For the term “cancer” overall, coordinated review brigading appears to have ensured that “The Truth About Cancer,” a hodgepodge of claims about, among other things, government conspiracies, enjoys 1,684 reviews and front-page placement. A whopping 96 percent of the reviews are 5 stars—a measure that many Amazon customers use as a proxy for quality. However, a glance at ReviewMeta, a site that aims to help customers assess whether reviews are legitimate, suggests that over 1,000 may be suspicious in terms of time frame, language, and reviewer behavior.

Once relegated to tabloids and web forums, health misinformation and conspiracies have found a new megaphone in the curation engines that power massive platforms like Amazon, Facebook, and Google. Search, trending, and recommendation algorithms can be gamed to make fringe ideas appear mainstream. This is compounded by an asymmetry of passion that leads truther communities to create prolific amounts of content, resulting in a greater amount available for algorithms to serve up … and, it seems, resulting in real-world consequences.

. . . .

Over the past decade or so, we’ve become increasingly reliant on algorithmic curation. In an era of content glut, search results and ranked feeds shape everything from the articles we read and products we buy to the doctors or restaurants we choose. Recommendation engines influence new interests and social group formation. Trending algorithms show us what other people are paying attention to; they have the power to drive social conversations and, occasionally, social movements.

Curation algorithms are largely amoral. They’re engineered to show us things we are statistically likely to want to see, content that people similar to us have found engaging—even if it’s stuff that’s factually unreliable or potentially harmful. On social networks, these algorithms are optimized primarily to drive engagement. On Amazon, they’re intended to drive purchases. Amazon has several varieties of recommendation engine on each product page: “Customers also shopped for” suggestions are distinct from “customers who bought this item also bought”. There are “sponsored” products, which are essentially ads. And there’s “frequently bought together,” a feature that links products across categories (often very useful, occasionally somewhat disturbing). If you manage to leave the platform without purchasing anything, an email may follow a day later suggesting even more products.
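
For a rough sense of how a “customers who bought this item also bought” list can be generated, here is a minimal co-purchase counting sketch in Python. It is purely illustrative: the order data and product IDs are invented, and Amazon’s actual recommendation systems are proprietary and far more sophisticated than anything shown here.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Invented order history: each order is the set of product IDs bought together.
orders = [
    {"book-a", "book-b"},
    {"book-a", "book-b", "book-c"},
    {"book-c", "gadget-x"},
]

# Count how often each pair of products appears in the same order.
co_counts = defaultdict(Counter)
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(product_id, n=5):
    """Return the n products most often bought alongside product_id."""
    return [p for p, _ in co_counts[product_id].most_common(n)]

print(also_bought("book-a"))  # ['book-b', 'book-c'] for this toy data
```

The point is simply that nothing in a loop like this knows or cares whether a co-purchased title is accurate; it only reflects what other shoppers did.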

Amazon shapes many of our consumption habits. It influences what millions of people buy, watch, read, and listen to each day. It’s the internet’s de facto product search engine—and because of the hundreds of millions of dollars that flow through the site daily, the incentive to game that search engine is high. Making it to the first page of results for a given product can be incredibly lucrative.

Unfortunately, many curation algorithms can be gamed in predictable ways, particularly when popularity is a key input. On Amazon, this often takes the form of dubious accounts coordinating to leave convincing positive (or negative) reviews. Sometimes sellers outright buy or otherwise incentivize review fraud; that’s a violation of Amazon’s terms of service, but enforcement is lax. Sometimes, as with the anti-vax movement and some alternative-health communities, large groups of true believers coordinate to catapult their preferred content onto the first page of search results. (Amazon disputes this characterization and says it carefully polices reviews.)

Amazon reviews appear to figure prominently in the company’s ranking algorithms. (The company will not confirm this.) Customers consider the number of stars and volume of reviews when deciding which products to buy; they’re seen as a proxy for quality. High ratings can lead to inadvertent free promotion: Amazon’s Prime streaming video platform launched with a splash page that prominently featured Vaxxed, Andrew Wakefield’s movie devoted to the conspiracy theory that vaccines cause autism.
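
As a rough illustration of why review counts are such an attractive target for manipulation, here is a toy ranking score that weights average stars by the logarithm of review volume. This is not Amazon’s formula (which the company does not disclose); the weighting and numbers are invented, with the 1,684-review figure borrowed from the example above.

```python
import math

def toy_rank_score(avg_stars, review_count):
    # A naive popularity-weighted score: star rating damped by log review volume.
    return avg_stars * math.log1p(review_count)

organic = toy_rank_score(avg_stars=4.2, review_count=150)     # a typical title
brigaded = toy_rank_score(avg_stars=4.9, review_count=1684)   # after coordinated reviewing

print(f"organic:  {organic:.1f}")   # ~21.1
print(f"brigaded: {brigaded:.1f}")  # ~36.4
```

Under any scheme of this general shape, a coordinated burst of five-star reviews moves a title up the list without anyone ever assessing its contents.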

Perhaps compounding the problem, Amazon allows content creators to select their own categories and keywords.

. . . .

With a product base as large as Amazon’s, it’s probably a challenge for the company to undertake any kind of review process. This is likely why quackery shows up classified as “Oncology” or “Chemistry.” It’s a small reminder that Amazon isn’t exactly a bookstore or library.

Link to the rest at Wired

Perhaps PG is mistaken, but his take on contemporary society in the US is that there are a great many people who are anxious to silence those with whom they disagree.

It appears to him that, other than a few types of content that are clearly illegal or that “violate laws or copyright, trademark, privacy, publicity, or other rights,” plus pornography, Amazon is content to allow its audience the right to decide what it would and would not like to read.

54 thoughts on “How Amazon’s Algorithms Curated a Dystopian Bookstore”

  1. It boggles the mind that people can claim to be “pro-science” then use phrases like “the science is settled” or just assume that there are no questions we could be asking. Vaccines are a great idea. One we virtually stopped researching and just assume they work as they always have. Anyone who disagrees is labeled and ignored. That’s not how science works.

    • Jeff,

      We most certainly are not assuming that vaccines work as they have in the past. The CDC and WHO are taking data on vaccination and disease outbreaks. That is the definition of research being conducted on vaccine effectiveness.

      • And on side effects.
        That is why we know there is no link between vaccines and autism: by extensive research.

        http://www.newser.com/story/272121/with-new-vaccine-study-a-truth-has-emerged-on-autism.html
        —-

        Per CNN, the whole autism-vaccines connection began with a since-discredited 1998 study by Andrew Wakefield, flawed research that was latched onto by concerned parents, including celebrities and politicians.

        —-

        What was notable about this research was the inclusion of children said to be at risk for autism, says Dr. Paul Offit, a Children’s Hospital of Philadelphia vaccination expert who wasn’t involved with the study. “At this point, you’ve had 17 previous studies done in seven countries, three different continents, involving hundreds of thousands of children,” Offit tells CNN. “I think it’s fair to say a truth has emerged.”

        —-

        That is hardly “assuming they just work”.

        One bad report vs seventeen extensive ones.
        That sounds pretty settled.

          • Well, there you go. You trust the government is doing what’s good for the people, and you trust the media is telling the truth. Meanwhile, doctors don’t give their children more than one vaccine at a time, and if you (a patient) try to do that, you get labeled as NON-COMPLIANT. Meanwhile, there have been hundreds of deaths of healthy teenage girls following the HPV vaccine. They say it is just a coincidence. Of course, you’re going to tell me that the cause of death was ruled something else… because why would the government have an interest in vaccines not being the cause?

          Seriously, in the entire history of governments is it so inconceivable that they are lying to everyone to “protect” the public good? That the same people who tell us the vaccines are safe and have few side effects (really?) have zero interest in even questioning the effectiveness and use of them?

          Vaccines cause deaths, that is a FACT.

          How do I know this? Because someone, somewhere is allergic to something in every medicine ever made. People have died after using Tylenol, but I’m supposed to believe the vaccines are 100% harmless and no deaths have ever resulted in their use?

          That’s not science. That’s government (maybe maliciously, maybe not) ignoring the needs of the few, for the many.

          It’s a fun note to point out, the only people who fund studies are the government and the people who make the vaccines. Any other studies are crucified. Like the one you mention. You should read the ACTUAL study. It doesn’t say what you think it says.

          This government of ours has lied and lied and lied to us yet we keep going back like Oliver Twist, “please sir, may I have another?”

          I just don’t get it.

          • Vaccines cause deaths, that is a FACT.

            Yeah, Jeff-boyo, you’re right. A handful die every year from vaccines.

            You know what else causes deaths? Diphtheria, tetanus, whooping cough, measles, and polio. By the thousands and the millions.

            You pays your money, you takes your choices.

            Ain’t freedom grand?

              • Thank you for making my point. Now, please tell me about the studies the government is doing to reduce these deaths. Surely a government that funds such studies as “Do you need sleep?” can spare a few million to do this.

                Don’t worry, I’ll wait. I’m sure there are at least a dozen on how to reduce vaccine-related deaths.

              Of course, first they would have to admit a vaccine killed someone, and they will NEVER do that.

          • Because it’s not “the government”.

            It’s dozens of independent scientific organizations in different nations working independently. Unless you think it’s all part of the Global Lizard people conspiracy to kill off mammalians, there is no rational reason to expect those people, who don’t even know each other, to be coordinating and spending years of their lives and tons of money just to hide an insignificant “truth” that isn’t true. Rather, they do it to determine why a handful die.

            Yes, vaccines are “only” 99.9999% safe.
            But rather than trying to hide it, researchers are working to add a few more nines to their safety by figuring out why the handful of casualties catch the disease each year.

            That is hardly evil government at work.

            But it might be the lizard people, right?

              • Did I say you were crazy?
                I merely asked…

                Other answers are valid, but it would be nice to see something a bit more substantial than evil government conspiracies.

                I’m of a generally libertarian bent myself and no friend of big government but I also have seen how the government works from the inside and they just aren’t competent enough to run a conspiracy that spans decades and a dozen foreign governments.

          • Vaccines cause deaths, that is a FACT.

            Exactly. And the same government tells us peanuts are safe.

      • One interesting view on the vaccine issue is that when they were first developed, they were administered individually, so a kid got them over a significant period of time.

        Now they are all administered at one time.

        It’s very possible that the load on the immune system from having to deal with all of them at once has an effect on a small percentage of individuals that does not happen when the vaccines are spread out over more time.

      • I don’t have enough of a medical background to have an opinion on whether vaccines work, are safe, are dangerous, whatever. Hardly anybody in this country does. I just have to laugh at the medical part of the whole anti-vax movement.

        What I do have enough of a background in to have strong opinions is philosophy, religion, political science and ethics.

        I will never, ever support the government making it mandatory for me to inject chemicals into my body, or that of my child, when I’m neither sick, nor contagious. Giving up that level of freedom — the right to choose what you put into your own body, when you are not a direct danger to anyone, and you only represent the POSSIBILITY of a FUTURE threat — is complete and total madness. Stop and think about it for a second.

        • I would agree if I lived in China.
          But mandatory vaccination isn’t coming from the government; rather it comes from doctors. It is our government that allowed opt-outs for no reason so an extinct disease is now back and killing more than vaccines ever did.

          Paranoia is a survival trait but there’s paranoia and then there’s paranoia. That is, context matters.

          • The people who lived in China didn’t have the same government 100 years ago. It’s not paranoia, it’s simply an understanding of history.

            And philosophical exemptions are already gone in several states. They will eventually be gone everywhere. That is inevitable at this point. Religious exemptions will follow, leaving only medical exemptions. Which you, the individual, don’t get to decide upon.

  2. I recall some pretty sketchy, unverified medical claims on the shelves of brick-and-mortar bookstores (like the entire diet and weight loss section…), so why isn’t the OP going after bookstores too?

  3. I recall “Eating for your blood type” being a #1 NY Times bestseller despite being full of bad science.

  4. Ah, ‘Wired’, your ADS is strong today.

    “… many curation algorithms can be gamed in predictable ways …”

    Yes, and even paid for – which is what those tables you walked past on the way into most bookstores were. Why wasn’t ‘Wired’ up in arms over that? Oh, because it wasn’t Amazon doing it, gotcha.

    It’s okay if Oprah touts some book, or if there’s a Dr. Oz talking up bad diets, but Amazon can’t sell books ‘Wired’ doesn’t think are good? ADS for the win …

    • You mean like all those erotica novels Amazon placed into the dungeon after they received complaints from consumers?
      When that was happening, I remember commenters on this site saying that it wasn’t censorship because people could still buy the books they wanted, that the only thing that changed was that Amazon was no longer displaying those books in the search results, and that Amazon was a private platform and could ban or sell what it liked.

      • Which they can.
        But they shouldn’t and they rarely do.
        ‘Tis a puritanical land, forsooth.

  5. An interesting article that recognizes the complexity of the problem, and that censorship is not the solution:

    It’s a complex and thorny problem, and Amazon is suffering from it as well. There are concerns that tackling this problem could lead to censorship. But there’s a big gulf between outright removing the content, or refusing to sell books, and rethinking amplification and categorization. Amazon can start by doing better when it comes to recommending and categorizing pseudoscience that may have a significant impact on a person’s life (or on public health).

    I don’t know about the USA, but in the rest of the world there are plenty of laws and regulations regarding defamation, the press, and advertising. For instance, there are restrictions on cigarette advertising. This certainly hurts advertising and cigarette companies, but many societies have decided that it is for the common good. I don’t see why Amazon, or other tech giants, should be excluded from this in principle.

    There is one comment by PG that bugs me a little:

    Amazon is content to allow its audience the right to decide what it would and would not like to read.

    I might be misunderstanding what PG is saying here, but it seems to me that it paints Amazon as a mere conduit that just wants to let people choose for themselves: Amazon is simply offering the truth of what people want, as created by an objective technology. This is also a common argument when people talk about regulating tech giants.

    Well, that is not true: just like a bookstore, the company decides what to show people and how to show it. The people might then choose what to buy among the goods offered. Even in free searches, there are cues for what a user should look for. The algorithms are chosen to maximize profits; there is nothing necessarily wrong with that, but it is not a free world, objectively presented.

    For instance, Amazon could have tweaked its algorithm to nudge people into buying a vaccine conspiracy book because it has noticed that, once you buy one book from a tight group of related books, there is a high chance that you will buy a whole lot of them. Just as when you buy the first book in a series, you are probably going to buy more of them. Or when you take up a hobby, you buy many things related to the hobby.
    This does not even take into account the categorization, which could appear misleading to some people. For example, if a book appears in a category called “Medicine,” people might reasonably assume that it is written by a medical expert. There is a whole world of regulations about false advertising and misleading claims that could reasonably be applied to this situation.
    There is no malice in this by Amazon, but it is not a value-neutral process that presents the best books for an argument that people are searching for. It presents content in a specific way geared to make more money.

    What I am saying is that Amazon is not evil, but it is also not good. It does not just offer people what they want; it creates a whole presentation to nudge people into wanting what Amazon wants. There should be no moral or legal qualms about regulating this process.

    • There is a difference between front table payola and algorithms.

      Payola is the supplier telling the retailer what to highlight. Driven by human bias and self interest.

      Whereas algorithms observe aggregate consumer behavior, abstract it and use the result to highlight compatible products. Humans aren’t directly choosing. Instead, humans choose the figures of merit the algorithms use to weigh the variety of alternatives against. The result is that the algorithm might weigh cost a bit more heavily than some would prefer (say, publishers) or weight narrative accessibility more than others (the literati) would prefer. And since popularity and velocity of sales are factors, the snowball effect is a common outcome.

      But in the end the algorithms don’t force anybody to buy anything. All they do is present a list of options based on other people’s behavior. It’s not 100% neutral, but it is a lot closer to neutral than having humans make the options list.
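
      To make the “figures of merit” point concrete, here is a toy scoring function of the sort described above. The features, weights, and numbers are all invented for illustration; real retail ranking systems are proprietary and far more elaborate.

```python
# Humans pick the figures of merit and their weights; the algorithm just
# applies them to observed data. Everything here is invented for illustration.
WEIGHTS = {
    "popularity": 0.4,      # total units sold (normalized 0..1)
    "sales_velocity": 0.4,  # recent sales per day (normalized 0..1)
    "low_price": 0.2,       # cheaper items score higher
}

def score(p):
    return (
        WEIGHTS["popularity"] * p["units_sold_norm"]
        + WEIGHTS["sales_velocity"] * p["recent_sales_norm"]
        + WEIGHTS["low_price"] * (1.0 - p["price_norm"])
    )

# Two hypothetical products with pre-normalized features.
a = {"units_sold_norm": 0.9, "recent_sales_norm": 0.8, "price_norm": 0.3}
b = {"units_sold_norm": 0.4, "recent_sales_norm": 0.2, "price_norm": 0.7}

ranked = sorted([("A", a), ("B", b)], key=lambda kv: score(kv[1]), reverse=True)
print([name for name, _ in ranked])  # ['A', 'B']: popular, fast-selling, cheap wins
```

      Because popularity and sales velocity feed back into the score, whatever rises tends to keep rising, which is the snowball effect mentioned above.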

      • I have to disagree somewhat. I wrote a few sales prompting algorithms myself back in the 1980s as part of a point-of-sale system. The algorithms were great at selling booze to alcoholics. Was that fair or ethical?

        To a certain extent, that is what Amazon is doing. They identify customers who have a certain buying pattern: people who are driven, for whatever reason, to make decisions without regard to their better judgement when you offer to scratch some itch.

        I know how to identify products that certain identifiable classes of consumers can’t resist. I know how to write algorithms that will spot people on chemotherapy who are driven to try anything to save themselves. Is it ethical to recommend a worthless high profit item over an effective low profit item? I can easily tune an algorithm to do that. Should I?

        My ethics stop at the point of shoving merchandise in front of a customer because the algorithm says they can’t resist without regard to the best interests of the consumer. Doesn’t Bezos always put the customer first?

        Of course, there are fine distinctions to be drawn. Bartenders make them all the time. Algorithm writers should too. The fact that it is an algorithm doesn’t take you off the hook.

        • “My ethics stop at the point of shoving merchandise in front of a customer because the algorithm says they can’t resist without regard to the best interests of the consumer.”

          This is why my local Walmart has the milk/eggs all the way in the back of the store; their algorithm told them to have customers walk past all that other food/supplies/junk in the hopes I’d buy something I didn’t need (or something I did need but forgot to put on the list!)

          “Doesn’t Bezos always put the customer first?”

          Yes, he does his best to give them what they want. If he doesn’t they’ll shop at some place that does.

          “Of course, there are fine distinctions to be drawn. Bartenders make them all the time. Algorithm writers should too. The fact that it is an algorithm doesn’t take you off the hook.”

          Hate to tell you (and if you wrote them you already know), but the algorithm writer doesn’t then control what all the company will use it for. As you said, your “algorithms were great at selling booze to alcoholics”, but they also sell those diaper pail buckets and sleeves to new parents (and those taking care of the elderly as I am.)

          Amazon has been showing other businesses how things could be run/done – are you suggesting they’re also showing how stupid some people can be at buying things? Sadly that isn’t new either, though Jeff and company might be taking it to a new level. (Which, come to think about it, I don’t mind. It might force some of the current walking dummies to start thinking instead of reacting to everything. 😉 )

          • I am not as wealthy as a lot of other algorithm writers and you are free to call me a failure, but I quit writing that stuff. I’m saying that I count ethics above wealth. You can crap on that if you want.

            • I never called you a failure, I merely pointed out that you had nowhere near the control to dictate how your algorithms were used.

              And I’m sure you would walk rather than help some company ‘do evil’, but that doesn’t stop the next guy/gal that needs money to put food on the table.

              Heck, the US nuking Japan is seen as a great evil by many, but few seem to consider the millions of lives on both sides that were saved. So, are nukes good or evil? 😉

              Algorithms are not good or evil, they are only tools.

              • I agree. Algorithms are not good or evil. But the humans writing and using them do have the capacity for doing good or evil under my definitions of good and evil.

                And I did have control of how my algorithms were used. When I realized that the telemarketing code I was writing would make people’s lives miserable, I convinced a product committee to drop the project while it was still early. Eventually, I got a bonus for that because it led us into predictive maintenance instead, and that was quite successful.

                I’ve never had to make a decision that I considered unethical to keep food on the table. I guess I am just lucky there. Maybe I won’t be so lucky in the future.

                The ethics of dropping a nuke are not clear to me and I’m glad I didn’t make that decision.

                • We are at war. We have a bomb the other side won’t believe possible without proving we not only do have it but are also willing to use it.

                  If we don’t use the bomb we have to do it the hard way – by invasion, with millions dying on both sides and their country ground into the dirt.

                  If we use the bomb none of our people need die, and only tens of thousands of theirs.

                  A hard choice, yes, but one that most agree changed the course of that war for the better.

                  An algorithm that had gone too far ‘over the top’ might be a good thing if it gets people to actually think – and may help make them more immune to other bad algorithms when they come across them.

                  Rather than ‘childproofing the algorithms’ I think we should be ‘algorithm-proofing the children’. 😉

        • Again: who makes the distinctions?
          Who decides the “special” criteria?
          Are consumers mindless lemmings unable to exercise judgement, that must be protected from the consequences of their own choices? Even alcoholics are taught they have a choice.

          Adults have a right to make their own choices whether you think they are right or wrong. And then they get to live with the consequences.

          • As the author of the algorithm, I make a decision. Do I write an algorithm to serve a drunk as many drinks as I can shove down his throat? The drunk can and will decide for himself, but am I innocent when I push a drink in front of him? He has a choice, but I have one also.

            • How does someone writing code push a drink in front of a drunk? Does that mean the programmer wrote code that advertises products to people based on previous purchases?

            • How did you know a customer was an alcoholic?
              Were they required to show their AA token?
              Sign up as one?
              Or merely be a heavy buyer of booze?

              For that matter, did anybody complain or were you preemptively protecting them from temptation on your own authority?

              Should we perhaps be banning booze ads from mass media while we’re in the business of protecting people from their own choices?

              That said:

              Is selling “dangerous thoughts” to all comers really equivalent to targeting alcoholics with booze ads?

              If tweaking your code was something you felt your ethics compelled you to do, that’s fine and dandy, but what makes your personal values universal?

              Different folks have different ethical compasses. Shouldn’t one allow others the courtesy of living by their own standards?

              There’s the whole slippery slope question of where do you stop. Me, I think it is safest not to start. In this case if people don’t like how Amazon’s algorithms work, either ignore them or if they’re really offended, shop elsewhere. Like WalMart.

              • To answer your first question, I didn’t know. My algorithm, which was dirt simple, basically read something like “if a customer buys items from a class of merchandise at greater than X rate, push an item from the class.” I had a cheesy self-learning kicker that fiddled with the rate X when the push succeeded. Watching the system over a few months, it worked best when the product class was alcohol.
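
                A minimal sketch of that kind of rule, with invented field names, thresholds, and data, might look like this in Python:

```python
# Toy version of the rule described above: if a customer buys from a product
# class at greater than rate X, push an item from that class. The "kicker"
# lowers X for a class whenever a push there converts into a sale.
# All names and numbers are invented for illustration.
push_rate = {"alcohol": 0.30, "snacks": 0.30}  # per-class threshold X

def maybe_push(purchase_history, catalog, cls):
    """Return an item to push if the customer's buy rate for cls exceeds X."""
    rate = purchase_history.count(cls) / max(len(purchase_history), 1)
    if rate > push_rate[cls]:
        return catalog[cls][0]  # e.g., the highest-margin item in the class
    return None

def record_outcome(cls, push_succeeded, step=0.02):
    """Cheap self-adjusting kicker: make pushes easier where they convert."""
    if push_succeeded:
        push_rate[cls] = max(0.05, push_rate[cls] - step)

history = ["alcohol", "snacks", "alcohol", "alcohol"]  # classes of past purchases
catalog = {"alcohol": ["house-whiskey"], "snacks": ["chips"]}
print(maybe_push(history, catalog, "alcohol"))  # 'house-whiskey' (rate 0.75 > 0.30)
```

                Nothing in a rule like that distinguishes a hobbyist stocking a pantry from an alcoholic, which is exactly the ethical gap being described.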

                From that observation, I decided the slope was too slippery for me and got out of writing code for merchandising. I did the same thing a decade later when I found myself writing code to prompt telemarketers. So I agree. It’s safest not to start.

                My ethics are as simple as my cheezy algorithms: I avoid doing things I wouldn’t want the whole world to do. (Thanks Mr. Kant.)

                I’m surprised that Amazon’s merchandising algorithms aren’t much better than the cheese I used to slice. They push stuff the customer was interested in yesterday. I’m glad I didn’t write them.

                • I buy econ books at a rate I am sure is quite a bit higher than normal. Amazon responds by pushing more econ books at me. I complete the circle by buying more econ books.

                  I was interested yesterday, I’m interested today, and I suspect I will be interested tomorrow. My interests tend to last for a while. I think it’s great.

                • So, Terrence. Do you stick tighter to your interests because Amazon prompts you to? Or because you like what you like?

                  If you started buying books on, say, statistics because Amazon inferred that you like statistics based on your taste for economics, I’d be impressed.

                • I like what I like. It’s what I liked yesterday, and Amazon caters to that like by promoting what I liked yesterday. Each time I buy an econ book, it signals them today that I still like what I liked yesterday.

                  It’s also reasonable to observe the venue through which I satisfy my likes also serves to sustain them. Libraries, bookstores, and journal reviews did that before the internet. They still do, and have been joined by blogs and Amazon.

                  They do the same thing with some specialized tools and camping and hiking equipment. All based on yesterday’s interests.

                • I like SF&F. (Shocking, I know.)
                  My mother likes romance plus SF&F, and a few crime procedural mysteries.
                  Since her Kindle runs off my account, 90% of the ads Amazon shows me are those. No litfic, tough-guy action, or children’s books. Not even non-fiction, which I do buy with some regularity.

                  I buy other merchandise with them but they hardly ever offer up anything else. To them I’m an escapist genre indie author shopper first and foremost.

                  Suits me just fine.
                  The algorithms don’t need to stray very far to keep customers coming back. If anything, showing off how much they know might be creepy and counterproductive, as Target discovered.

          • “Are consumers mindless lemmings unable to exercise judgement, that must be protected from the consequences of their own choices?”

            Childproofing the world instead of world-proofing the child …

    • > mere conduit

      Possibly the most likely scenario.

      Back when bookstores still existed in my area, they had entire sections, not just shelves, dedicated to “foot reflexology”, “aromatherapy”, “naturopathy”, and crackpot diets. I don’t remember ever seeing an actual medical book in any of them.

      It might have been a self-defining selection; almost all medical books are written for specialists, either as textbooks or as reference books, cast in Greater Indirect Medicalese, possibly more to show the erudition of the authors than to inform any readers.

      Even if you did find a “popularized medicine” book, it was just as likely to be written by a crackpot as by an MD (not that an MD is a guarantee against crackpottery). The publisher’s only concern would have been “how many of these can we sell?”, possibly balanced by “likelihood of lawsuit.”

    • There is no malice in this by Amazon, but it is not a value-neutral process that presents the best books for an argument that people are searching for.

      You can’t have a value neutral process that selects the best.

      • You can’t have a value-neutral process that selects the best.

        I checked and found that value-neutral is actually a technical term already in use to mean judgement independent from a value system. So you are right in the common meaning of the term.
        But I actually meant something else: something that does not try to alter the original values. What I meant by “a value-neutral process” is that the algorithm is not choosing the best according to the values of some person. It is not trying to choose what the person would choose if they had all the information. It is not an objective machine. It is trying to increase sales, so it might suggest something that you would never have tried on your own but that statistical analysis says you might like.
        Amazon is a publisher rather than a platform: it is not simply presenting what people want, but rather presenting some items in a certain way. For example, they have a system in place that behaves differently with adult content. It is not blocked, but it is compartmentalized. They could very well do the same if, as a society, we agree that this is potentially harmful content in the wrong hands.

        • I meant something that does not try to alter the original values.

          What values? Whose? Derived how? Discerned how? Math is value neutral. Applying it to sell books is not.

          One could make a case that a neural net is value neutral, but that has to get over the problem of how the net is trained.

          In the Cold War, NATO trained an early net to identify Soviet tanks from aerial photos. It worked like a charm until it was deployed, and then it was a miserable failure. It seems the training data had pics of Soviet tanks on cloudy days and NATO tanks on sunny days. The net learned that NATO tanks had shadows and Soviet tanks did not.

  6. It’s a brave new world out there – in this case, brought to us by automated algorithms… They save Amazon lots of money, you know.

    • You don’t have to actually pay attention to the algorithms, though. I don’t. When I go to Amazon I know what I’m looking for.
      Like it’s always been since Hammurabi’s time: buyer beware.

      • All this reminds me of the authors a few years back telling us consumers were being trained to pay less for books.

  7. Yet another veiled call for Amazon to begin censoring or censuring its offerings. Let people read what they want to read without any interference from an algorithm. What is wrong with people these days? That dystopian future in the movies isn’t coming from the right. It’s coming from leftists who want to make sure people are only exposed to ideas which have the approval of the leftist thought police.

    • If you let people read what they want, they might not read what I want, and worse, they might read what I don’t want.

  8. I’d be content if Amazon would tweak its algorithms not to show me books by Laurie LaRue when I enter the search term “Diana Gabaldon.” Is that too much to ask?

    • There are many sites I wish had an ‘except’ along with their ‘and’ buttons – but that would then limit what you might buy from them … 😉

  9. Amazon recently delisted a book about political Islam by Tommy Robinson after selling it for 18 months. He ran afoul of the BBC. Protocols of The Learned Elders of Zion and Mein Kampf are available.

  10. Couple things here.

    1) It’s not really surprising that ANTI-vax books are the top sellers. People who aren’t anti-vax aren’t looking for books to buy on the subject. So of course the top 11 sellers are anti-vax books. That’s not an Amazon algorithm problem, that’s just human nature.

    2) So, if a “medical journalist” isn’t qualified to research and write a book on this topic, does that mean that a “war correspondent” isn’t qualified to give news reports regarding military actions, and sports journalists who have never played pro football aren’t qualified to give analysis regarding a team’s strategies? This isn’t a book on “how to make a vaccine.”
