Expert publishing blog opinions are solely those of the blogger and not necessarily endorsed by DBW.
Today’s publishers have it pretty rough compared to previous generations: they’ve seen the successive rises of the Internet and the ebook upend their traditional business models, and in many cases they have risen to the challenge admirably. Having weathered that shift, though, the last thing they want to hear is that more change is coming.
Data management technologies have been maturing over the past few years and are now at the stage where they are ready to be used widely in the publishing world. However, there’s still a widespread lack of understanding of what those technologies are, and what their implications are for publishers’ business models.
Textbooks: A Matter of Localization
The educational publishing sector has much to gain by using technology to innovate business models. Let’s think about what companies selling textbooks actually do: they have a bank of educational content, and sell that content in the medium of textbooks to various territories. The problem is that each country has a different educational syllabus, so each textbook must be created in a bespoke fashion to match those requirements.
Naturally, this localization process takes considerable time and effort, which is frustrating given that the underlying content is essentially the same—the rules of calculus don’t change depending on whether you’re in South Africa or Canada. There is therefore a growing recognition among educational publishers that there needs to be an alternative to labor-intensive localization.
Specialist service providers offer a potent mix of linked data technologies and data analysis that can accomplish this. What these technologies bring is the capability to understand text-based data in a much deeper way by isolating and labelling the entities within it. Sentences are broken down into subject, predicate and object, and these data points and the relationships between them are stored in specialized graph databases known as triple stores. The technology may sound complex, but it’s already mature and available on the market, and publishers that take advantage of it will gain a competitive advantage: a more nuanced understanding of their existing assets with fewer person-hours, and the ability to repackage their content in new ways.
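To make the subject-predicate-object idea concrete, here is a minimal sketch in plain Python standing in for a real triple store. The entities and relations are invented for illustration; a production system would use an RDF database and NLP-extracted entities rather than hand-written tuples.

```python
# Minimal sketch of subject-predicate-object triples, with plain Python
# standing in for a real triple store (e.g. an RDF graph database).
# All entities and relations below are invented for illustration.

triples = [
    ("calculus", "is_branch_of", "mathematics"),
    ("derivative", "is_concept_in", "calculus"),
    ("chain_rule", "applies_to", "derivative"),
]

def related_to(subject):
    """Return every (predicate, object) pair recorded for a subject."""
    return [(p, o) for s, p, o in triples if s == subject]

print(related_to("calculus"))  # [('is_branch_of', 'mathematics')]
```

Because each fact is stored as a uniform triple rather than buried in prose, queries like "everything we know about calculus" become one-line lookups, which is the property the paragraph above is describing.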
An example may help to show how this can benefit educational publishers. Suppose you’re a well-known textbook provider and you’ve been commissioned to produce a math textbook for the Canadian government. Most countries’ education departments publish specifications for what knowledge needs to be taught and at what stage, usually called “learning objectives.” Under current processes, you would have to spend a great deal of manual time and effort matching Canada’s learning objectives against your own content bank of math topics.
However, by storing your content in a linked database, you could automate this content matching. What’s more, by applying natural language processing, you could automate the process regardless of which set of learning objectives you’re working toward. By training a computer system to make these content-matching connections automatically, you significantly reduce the time your employees spend on dull, repetitive tasks, freeing them up for higher-value work.
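As a rough sketch of that matching step, assuming the content bank and the objectives have already been tagged with topic labels (in practice those labels would come from NLP entity extraction; the chapter names, objective codes and topics below are invented):

```python
# Hypothetical sketch: matching a territory's learning objectives
# against a publisher's content bank by shared topic labels.
# All chapter names, objective codes and topics are invented.

content_bank = {
    "Chapter 3: Differentiation": {"derivative", "chain rule", "limits"},
    "Chapter 7: Integration": {"integral", "area under a curve"},
}

learning_objectives = {
    "CA-MATH-12.4": {"derivative", "chain rule"},
    "CA-MATH-12.9": {"integral"},
}

def match_objectives(objectives, bank):
    """For each objective, pick the chapter with the largest topic overlap."""
    matches = {}
    for obj_id, topics in objectives.items():
        scored = [(len(topics & chapter_topics), chapter)
                  for chapter, chapter_topics in bank.items()]
        scored.sort(reverse=True)
        best_score, best_chapter = scored[0]
        if best_score:  # ignore objectives with no overlapping content
            matches[obj_id] = best_chapter
    return matches

print(match_objectives(learning_objectives, content_bank))
# {'CA-MATH-12.4': 'Chapter 3: Differentiation',
#  'CA-MATH-12.9': 'Chapter 7: Integration'}
```

The point is not this particular scoring rule but the shape of the workflow: once content and objectives share a machine-readable vocabulary, matching becomes a query rather than a manual editorial pass, and the same code works for any country’s objective list.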
Non-Fiction Publishing: Unlock Your Content
As you may have gathered, this technology is just as useful for the broader publishing market of non-fiction and reference books. If there’s one thing ebooks have taught publishers, it’s that they no longer have complete control over the platform on which their media is consumed. To be successful in this new era, publishers need to stop seeing themselves as sellers of a product and start seeing themselves as providers of a service—the service of imparting knowledge.
Once publishers can make this leap, all sorts of new avenues become available. It no longer matters if your content is delivered via a printed book, an ebook or some exciting new technology yet to be invented: it’s still your content and you’re still generating revenue from it.
If non-fiction publishers link up their data, they will be able to free their content from the bounds of book covers and distribute it to whoever pays the best price. Storing linked data in a specialized database makes it orders of magnitude easier to isolate relevant existing content for reuse.
For example, imagine you are the content manager for a botanical encyclopedia. If the Chelsea Flower Show approaches you about a content partnership, you’ll want to cut a good deal selling them your content while minimizing the cost and effort required on your end to actually retrieve that content for your new customer. As it stands, your content is likely trapped in print form and would be expensive to repackage for the Chelsea Flower Show. If it were digitized and stored as linked data instead, the task would be much easier: the facts would already be organized around metadata categories such as flower genus, region or preferred climate. A great example of this in action is the BBC nature website, where you can view content on animals by region, behavior or genus.
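A toy version of that retrieval, assuming each encyclopedia entry carries the metadata fields mentioned above (the entries and field names are invented for illustration):

```python
# Illustrative sketch: once encyclopedia entries carry structured
# metadata, pulling content for a partner is a simple query.
# The entries and field names below are invented for illustration.

entries = [
    {"name": "Rosa canina", "genus": "Rosa",
     "region": "Europe", "climate": "temperate"},
    {"name": "Protea cynaroides", "genus": "Protea",
     "region": "South Africa", "climate": "mediterranean"},
    {"name": "Rosa rugosa", "genus": "Rosa",
     "region": "East Asia", "climate": "temperate"},
]

def select(**criteria):
    """Return every entry matching all of the given metadata fields."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in criteria.items())]

# All temperate-climate roses, ready to hand to a partner:
print([e["name"] for e in select(genus="Rosa", climate="temperate")])
# ['Rosa canina', 'Rosa rugosa']
```

This is the same browse-by-facet pattern the BBC nature site uses: the content is written once, and each metadata field becomes a new axis along which it can be sliced and sold.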
Publishers have likely heard this all before: “new technologies will allow you to break free of the book and control your content more effectively!” What I would emphasize here is that the technology is now mature and ready for market. The point has been reached where the overhead cost and associated risk of taking on a new technology is now lower than the cost of inaction.