Discovery, User Experience and the Long Tail


There has recently been some debate as to whether the long tail for ebooks exists:

– “Dispelling the Ebook Long Tail Myth” by Marcello Vena, head of the digital trade book business at Italian publisher RCS Libri
– “New Data on the Long Tail Impact” by Mike Shatzkin, industry consultant and Digital Book World Conference chairman

The Long Tail for ebooks does exist. It is even getting longer and more stretched out than ever before, but life for those authors and publishers out in the long tail isn’t improving, at least not yet.

Chris Anderson’s article “The Long Tail” first appeared in Wired magazine in October 2004 (almost 10 years ago) and was later (2006) published by Hyperion in extended form as a book. Despite its impact, it may be among the most misunderstood business books ever written.

Chris Anderson’s central thesis is that the Internet has changed retailing, not production:

– The Internet has reduced the cost of holding inventory, especially content-based inventory, to near zero. There is no real benefit in restricting the size of the catalog, as a physical book shop must. On the contrary, a larger catalog creates a stronger value perception of what the retailer can offer (also true for physical bookshops, but it comes at considerable cost).

– There is increased profit potential for retailers: margins on long-tail items are superior because consumers are less price sensitive when it comes to niche content.

– But in order to exploit the long tail, retailers have to build “filters” that actively guide users into it. This part of the book is often overlooked.

Chris Anderson’s book “The Long Tail” has been misunderstood in many ways, one presumes especially by readers who did not take the time to read the second part of the book in depth:

– Chris does not argue that a retailer can offer only long-tail content without carrying the best-sellers — the “head” of the tail. On the contrary, the head is essential to attract consumers in the first place (familiarity), and it behooves the retailer to then guide them towards long-tail content, where retailer margins are higher. Implicit in this message is that availability by itself, greatly improved by the Internet though it is, does not favor the long tail. It is availability PLUS discovery filters (those “algorithms” so much disliked by many in publishing) that drive purchases into the long tail; a rough numerical sketch of this head-versus-tail arithmetic follows this list.

– Nor does Chris argue that producing long-tail content is more profitable than producing blockbuster content, unless your overhead is near zero (as it is for user-generated content, but certainly not for professionally produced content).
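To make the head-versus-tail arithmetic concrete, here is a minimal sketch that assumes per-title sales follow a Zipf-like distribution over sales rank. The catalogue size, exponent and head cut-off are illustrative assumptions, not figures from this article or from Anderson’s book.

```python
# Hypothetical head-vs-tail arithmetic under a Zipf-like sales distribution.
# All numbers here are illustrative assumptions, not publishing data.

def zipf_sales(n_titles, exponent=1.0):
    """Model relative per-title sales as 1 / rank**exponent."""
    return [1.0 / (rank ** exponent) for rank in range(1, n_titles + 1)]

catalogue = zipf_sales(n_titles=1_000_000)
total = sum(catalogue)

head = sum(catalogue[:1_000])    # the top 1,000 "bestsellers"
tail = sum(catalogue[1_000:])    # the other 999,000 titles: the long tail

print(f"head share of sales: {head / total:.1%}")   # roughly half
print(f"tail share of sales: {tail / total:.1%}")   # the other half
```

Under these assumptions no individual tail title sells meaningfully, yet nearly half of the total sales volume sits beyond rank 1,000 — and that is exactly the share that goes uncollected unless filters actively route readers past the bestsellers.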

Some of the key exhibits Chris presents are subscription services like Rhapsody (Spotify would be a better contemporary example) and Netflix. These services require users to continuously discover and consume new content, or the user will unsubscribe, which represents a loss of income for the service operator. Thus subscription services have a strong interest in users continuously discovering new content. Spotify understands this very well, as its purchase of The Echo Nest (the foremost “discovery algorithm” company in the music industry) and several playlist-curation start-ups shows. Improved discovery beyond a honeymoon period of a few weeks to a month is not an automatic result of all-you-can-eat access to content; it is a business imperative for subscription providers who want to avoid losing subscribers, and it requires additional effort from the service operator.

As an aside: many ebook subscription services, like Oyster, argue they are “aspirational services” akin to gym memberships. Gym users remain members even if they don’t actively use the service, out of guilt or for reasons of optionality (cancel your gym membership and you have to pay a sign-up fee to join a new gym). Book readers, however, have alternatives in the form of simply going to a bookshop or borrowing from their local library. A book subscription service that is not actively used may therefore be dropped in a heartbeat (nothing is more annoying than money leaving your account every month).

Returning to one of my earlier points: the Internet per se does not improve content discovery, which is what the study published earlier this week on the DBW blog reaffirms. The Internet makes content more readily available, for example via Google, if you already know it exists (awareness), have developed an interest in the content (interest) and are sufficiently motivated (desire) to search for it (action): the four steps of the traditional AIDA marketing funnel.

We know from mobile operator portals, especially in the WAP era [WAP = Wireless Application Protocol, or how we accessed content on mobile phones before there was the iPhone], that cumbersome user interfaces (and few things are more tedious than pressing tiny buttons on a phone to navigate from one screen to the next) REDUCE discoverability even in the presence of very long tails.

Amazon offers a superb shopping experience if you already know what you are looking for (awareness) and want to buy it (desire/action). Through low prices, ease of use, trust and reliability, Amazon has built a platform that is not a discovery portal but a destination where you mostly buy content discovered elsewhere (the much-loathed showrooming effect).

Amazon does make recommendations, but these are optimized to up-sell or cross-sell (increase your basket value) or are based on re-targeting (reminding you of products you previously clicked on). Up-selling for Amazon means guiding a user to higher-margin products, which are typically not books. It is also worth bearing in mind that Amazon’s recommendation engines are optimized to be extremely fast, and this comes at the cost of recommendation quality: any delay would lead to consumers abandoning their shopping baskets and hence to reduced rather than increased revenue.
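The speed-versus-quality trade-off can be illustrated with a sketch of the generic pattern behind “customers who bought this also bought” lists. This is not a description of Amazon’s actual system, and the baskets and item names are invented; the point is simply that co-purchase counts are aggregated offline, so serving a recommendation at page-load time is a single table lookup with no per-user modelling in the request path.

```python
# A minimal sketch of the "fast but generic" recommendation pattern.
# NOT Amazon's actual system; the baskets and item names are invented.
from collections import defaultdict
from itertools import combinations

# Offline step: count how often two items appear in the same basket.
baskets = [
    {"thriller_A", "phone_case", "thriller_B"},
    {"thriller_A", "thriller_B"},
    {"thriller_A", "hdmi_cable", "phone_case"},
]

co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

# Precompute the top "also bought" items per product so that the online
# step is a plain lookup -- fast, but identical for every shopper.
also_bought = {
    item: [other for other, _ in
           sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:3]]
    for item, counts in co_counts.items()
}

# Online step: O(1) lookup at page-load time, no personalisation involved.
print(also_bought["thriller_A"])   # ['phone_case', 'thriller_B', 'hdmi_cable']
```

In this invented example a phone case surfaces right next to a thriller, which mirrors the cross-selling behaviour described above rather than guiding the reader deeper into the book catalogue.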

In a nutshell, improved availability alone does not lead to improved discoverability. The critical components of better online discovery experiences are great user interfaces and recommendation algorithms, and we have yet to see the power of personalisation, the Internet’s biggest strength, fully deployed.
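As a purely hypothetical illustration of what fuller personalisation might look like (not any real retailer’s system; the blending weight, genres and scores below are all invented), a ranking could blend a title’s overall popularity with the individual reader’s demonstrated tastes, so that the list differs from user to user.

```python
# Hypothetical personalised re-ranking; all weights and scores are invented.
from collections import Counter

def personalised_rank(candidates, reading_history, alpha=0.7):
    """Blend per-user genre affinity with overall popularity.

    candidates: list of (title, genre, popularity) tuples, popularity in [0, 1]
    reading_history: genres of the books this user actually finished
    alpha: assumed weight given to personal taste versus raw popularity
    """
    taste = Counter(reading_history)
    n = max(len(reading_history), 1)
    scored = []
    for title, genre, popularity in candidates:
        affinity = taste[genre] / n        # share of this user's history in the genre
        scored.append((alpha * affinity + (1 - alpha) * popularity, title))
    return [title for _, title in sorted(scored, reverse=True)]

catalogue = [("Obscure Nordic Noir", "crime", 0.10),
             ("Celebrity Memoir", "memoir", 0.95),
             ("Mid-list Space Opera", "sci-fi", 0.30)]

history = ["crime", "crime", "sci-fi"]     # this reader's last three books
print(personalised_rank(catalogue, history))
# ['Obscure Nordic Noir', 'Mid-list Space Opera', 'Celebrity Memoir']
```

With these invented numbers the obscure crime title outranks the bestselling memoir for this particular reader, which is exactly the kind of routing into the long tail that a purely popularity-driven list never produces.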

Bookshops have been optimizing the discovery experience for hundreds of years; online book shops (and Amazon in particular) have not.

The future is still ahead of us.

Andrew Rhomberg

About Andrew Rhomberg

Andrew is the founder of Jellybooks, a start-up focused on exploring, sampling and sharing ebooks. He previously worked at txtr (whitelabel ebook retail platform), Skype (internet telephony), Reciva (internet radio), gate5 (now Nokia Maps), and Shell (oil). He holds a science Ph.D. from MIT. Follow him on Twitter at @arhomberg.


8 thoughts on “Discovery, User Experience and the Long Tail”

  1. Andrew, well put—it’s always worth remembering that navigation, curation and filtering are key to creating an intersection where the purchaser meets a product they are likely to buy. Because inventory is so much more readily available for digital products, the potential is that ANY title can become a “frontlist” title at any time, through smart marketing and promotion. The danger of course is that a glut of products creates too much supply and too much noise for the consumer—in your example above, this is why Amazon’s recommendation engine works fast and not smart—because they win no matter what is purchased.

    Despite my background as a marketer, there always seems to be a little magic in how a thoughtful, coordinated marketing campaign produces measurable results.

    • We live in an age of abundance and in the case of ebooks that abundance has increased significantly:

      - self-published books (some so-so, but many sparkling gems, too)
      - nothing goes “out of print” any longer, so the number of ebooks we can choose from increases relentlessly

      Abundance is one of the things that is very, very unlikely to change in the next 10 or 20 years in book publishing.

      With abundance comes an extended long tail and an ever increasing challenge for *any* book to be discovered.

  2. One of the premises of Chris Anderson’s theory is that the cost of maintaining a quasi-unlimited inventory of digital books is almost nil: i.e. there’s almost no overhead.
    Unfortunately, this is not true.
    As Andrew stresses, availability without discoverability does not sell (or rent) books. And discoverability has a cost. It involves resources; they might be IT resources, better and richer metadata, social networking activity… you name it!
    Moreover, this cost is almost proportional to the number of items. If you have an inventory of 10,000 books and you want them to be discovered, it’ll cost you more than if you just have 100. Maybe not 100 times more but, certainly, much more… And if you stop, the books will quickly sink back into oblivion.
    But, in my opinion, the biggest problem is that, alas, digital products go stale.
    A printed book might spend five years in a warehouse and almost look (and “work”) like new. But how many five-year-old e-books could be used now? Will the epub3 format have many adherents in 2019?
    I guess that the tail might be “longish”, but not as long as Chris Anderson describes it.

    • Well yes, “but”

      today’s best seller is tomorrow’s long-tail item. In the case of fiction it might be as relevant to new readers in 5 years as it is today (without the buzz or water cooler chat, though).

      Some non-fiction ages rapidly, though. That is correct. Some Porter books are still a great read 20 years later, and Charles Darwin’s “Origin of Species” is still a good read, too, more than 100 years after it was published. Of course a lot of non-fiction from the 19th century hasn’t stood the test of time quite as well.

      As regards the cost of a longer tail: hosting costs are so low that the incremental cost of an extra 10,000 or 1 million books is negligible. I should know, because we run this as a free service at Jellybooks. What costs money is the human effort of creating and maintaining the code base, the discovery algorithms, etc. This costs real money, but that human effort is almost the same whether it’s 10,000 books or 1 million books. While the number goes up 100-fold, the effort may only increase 10-20% (more exceptions are generated, but nowhere near linearly with the number of books).

      Digital businesses like Twitter, Facebook and Google are in fact scale businesses. They become more profitable as numbers go up, and the same is true for Amazon. Many digital businesses are in fact “too cheap to meter”.

      Five-year-old ebooks (that would be from 2009) work just fine, as the big change-over was 2007 (launch of the Kindle). ePub is a reasonably “open” standard, so there is in fact far less “Betamax” risk than we have seen in music, movies and the like. The same cannot necessarily be said for some proprietary software like the ACS encryption protocol that many publishers insist on.

  3. Excellent corrections to Vena’s and Shatzkin’s misrepresentation of The Long Tail. Just what I thought: Did they read the book? Do they understand it’s not an either/or dynamic?

    It’s difficult to get statistics on Long Tail sales in publishing. We certainly aren’t going to get them from traditional publishers still investing in bestseller tactics, and neither Amazon nor wholesalers release hard data for public analysis. While we do know that ebook sales are a growth trend, small press and indie sales are not fully tabulated either, and that’s where the bulk of Long Tail activity occurs.

    We do know that the number of titles is increasing according to Bowker’s ISBN statistics, so the Long Tail of publishing is growing, whether it be limp, fat, or lumpy. It’s funny that traditional publishers think that it should magically serve up profits for their back catalogs… DUH, the sales are in the niches, and those in the trenches are following authorearnings.com to learn how to leverage their positions down the Tail.

  4. Thanks for this interesting piece, Andrew.

    I’m a little skeptical of your conclusion, largely because I’m just not sure recommendation algorithms will ever be as good as you imply. I think you might be overstating bricks-and-mortar’s ability to make personalised recommendations a little: I’m a bookseller, and even when I know my customer well I can struggle to recommend the right book for them. I’m also wary of the tendency to treat ‘book recommendation’ as a problem with a determinate answer which can be solved when enough computing power is thrown at it – a necessary assumption when designing the algorithms, sure – because humans, and human tastes in particular, are so much messier and more fickle than that. I’ve rarely encountered a customer who wouldn’t accept a good recommendation in a genre they like which is ‘close enough’ – pretty much exactly what Amazon is doing now.

    On the other hand, Amazon has Goodreads, and maybe once they start to use that data in earnest I’ll be proven completely wrong.

    Either way, I don’t feel that recommendation algorithms/marketing is necessarily an either/or question. I’m drawn back to this paragraph from Shatzkin’s piece:

    “…the boosts publishers can give a book — even their catalogs provide more marketing lift than most self-published books start with — will become increasingly important as the market becomes increasingly flooded. If the data Vena has presented turns out to be the future trend, the increase in self-published titles will drive more and more sales to a smaller number of winners, and my hunch would be that the winners will most likely be from publishers. That would indeed be a paradox and a totally unintended consequence.”

    I feel like maybe you’re talking past each other a little. I don’t think Shatzkin ever says recommendation algorithms can’t or won’t play an important role in increasing long tail discoverability – just that marketing power is also important. And that’s undeniably true.

    • I never assumed that brick-and-mortar bookstores could make superior personalised recommendations, as in “hand selling”. I generally assume the opposite.

      What I meant is that the physical book form, and how it is displayed in a bookstore, makes for a browsing and “stumble upon” experience that has been perfected over a very long time.

      There are innate physical experiences that digital doesn’t replicate well, because how we use and experience digital services is different.

      And yes, I am of the opinion that in the long run (5-10 years) algorithms will be far, far more powerful in tackling some issues than we can imagine today. They will not be perfect, I don’t claim that, but they will be much more powerful and they have a huge innate advantage: computers can deal with large data sets much better than humans can. They are also superb at filtering human opinion and recommendations.

      Now there are interesting questions that haven’t been fully addressed:

      - are the BIG 5 best placed to maximize the “head”?
      - will the BIG 5 be substantially the same companies in 5 or 10 years’ time as today (though increased in size by swallowing smaller publishers)?

  5. The long tail is less of a problem if one knows how to guide users to view other books.
    This can easily be done in an online book store as suggested by the great article.
    With Helicon Books’ technology this can also be done inside the book itself, by adding a page at the end that suggests other books by the same author, etc.
