From The New Publishing Standard:
Over at Book Riot, Arvyn Cerezo takes us through the process and then explains why they will still recommend a book you have absolutely no interest in.
Machine learning systems called recommender systems, or recommendation systems, use data to assist users in finding new products and services … These algorithms, however, need a decent amount of data to choose a recommendation strategy in order to produce meaningful and personalized recommendations. This data may include past purchase histories, contextual data, business-related data, user profile-based information about products, or content-based information. Then, all of these are combined and analyzed using artificial intelligence models so that the recommender system can predict what similar users will do in the future.
All very clever, but…
The limitations of content-based filtering include its inability to comprehend user interests beyond simple preferences. It knows some basic stuff about me, but that’s as far as it can get. What if it recommends a racist book? What if it recommends a book that might trigger readers without some heads-up? What if it recommends a book that is problematic? The keyword is nuance, and algorithms can’t tell the difference between two books that have similar stories.
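The "similar stories, no nuance" limitation described above is easy to see in a minimal content-based filtering sketch. The book titles, keyword tags, and function names below are invented for illustration; real systems use far richer features, but the blind spot is the same: the algorithm only sees the features it is given, so tone, quality, or problematic content are invisible unless someone encodes them.

```python
# Minimal content-based filtering sketch (hypothetical books and keywords).
# The recommender ranks books purely by keyword overlap -- it has no notion
# of nuance beyond the features it is fed.
from collections import Counter
import math

# Toy catalog: each book is just a bag of keywords.
books = {
    "Book A": ["wizard", "school", "friendship", "magic"],
    "Book B": ["wizard", "school", "magic", "dark"],
    "Book C": ["spaceship", "war", "politics"],
}

def cosine_similarity(words_a, words_b):
    """Cosine similarity between two keyword bags (0.0 to 1.0)."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(liked_title):
    """Rank every other book by keyword similarity to the liked one."""
    liked = books[liked_title]
    scores = {title: cosine_similarity(liked, keywords)
              for title, keywords in books.items() if title != liked_title}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("Book A"))  # Book B ranks first purely on shared keywords
```

If "Book B" happens to be badly written, triggering, or worse, the score does not change: two books sharing the keywords "wizard, school, magic" look nearly identical to this kind of system, which is exactly the point the Book Riot piece is making.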
And don’t we know it? Fifteen or more years of buying books on Amazon, and it will still recommend books I would rather eat shards of glass than read.
I always figured that was just Uncle Jeff getting revenge for one of my less complimentary posts about Amazon, but it seems in fact it’s just that the recommendations system is as useless today as it was fifteen years ago.
“With all the pitfalls of algorithms — and AI in general — it seems like nothing beats book recommendations done by an actual human being. They are more accurate and more personal. Most of all, you can also find hidden gems that you really like rather than the bestsellers (and what everyone’s reading) that these machine learning systems always spit out.”
Two points arise.
First, “rather than the bestsellers (and what everyone’s reading) that these machine learning systems always spit out” is fundamental to the problem. Algorithms – especially for a commercial operation like Amazon – have the sole purpose of selling more books. They and the company do not give a flying fig about our personal preferences.
Link to the rest at The New Publishing Standard
Boo-hoo. Amazon presents books it thinks the person who signed in will want to buy, based on their past buying, browsing, and searching habits.
As far as “personal preferences” are concerned, PG supposes that some people have “personal preferences” in books that they don’t want to buy, read, or otherwise act upon. But is Amazon somehow required, or even expected, to understand personal preferences that have never been reflected in a customer’s previous and current activity on Amazon?
If PG were as concerned about Amazon and his personal preferences as the OP, he would open a new Amazon account and be careful not to let anyone else use it. Within a few weeks, Amazon would understand PG’s personal preferences from what he did on the site with the new login ID.
As far as “book recommendations done by an actual human being,” without being a snob about it, PG has never met a person working in a bookstore who would have been likely to give him a good and precise suggestion for a book that PG would like to read. The most PG has ever received is something like, “Our twentieth-century history books are over there,” or “Fantasy and Science Fiction is on aisle three.”
To be fair, if PG in his current instantiation ran into PG at age thirty working in a bookstore, current PG doubts his thirty-year-old self would understand much about PG the elder’s preferences in books.
If PG were good friends with a bookstore employee and had spent hours talking about books with that person, the results might be better, provided PG showed up when the bookstore was open and that employee happened to be working at the time.