Ten Reasons You Can’t Trust Everything You Read About the Author Earnings Report
Last week I contested some of the key conclusions Hugh Howey drew from the data in his Author Earnings report. io9.com covered that discussion, and Howey took to that site’s comments section to register a rebuttal.
In doing so he put forth a number of mistaken claims about his Author Earnings report as well as the 2014 Digital Book World and Writer’s Digest Author Survey, which I coauthored.
Here are ten of them.
“DBW released a survey that ignored the 99% of people who query along the traditional route and never get published at all. That is, they looked at the 1% who get published, and compared this to the 100% of self-published authors, all of whom get published.”
The Digital Book World and Writer’s Digest Author Survey was open to a range of authors and did not ignore anyone, deliberately or otherwise (more on this below).
In terms of our analysis, I have never presented on this website a straight comparison of the incomes of self-published vs. traditionally published authors based on the findings of that survey.
Rather, my examination of author income used entirely different categories. The charts I presented compare four types of authors: aspiring (not yet published), traditionally published only, indie only and hybrid (both traditionally published and indie published).
Howey has repeatedly objected to comparisons of self-publishing vs. traditional publishing that were simply never done on this site using the Author Survey. He claims we unfairly compared the income of two types of authors—indie vs. traditional. Yet we compared four types of authors, including the unpublished ones he says we ignored.
Howey mistakenly faulted the Author Survey for not surveying authors waiting in the slush pile when in fact we had. He contended that by ignoring these aspiring authors (which we did not), we misrepresented the success of indie publishing, making it appear smaller than it actually is. In his telling, our analysis took the successful authors from traditional publishing and compared them side by side with the unwashed self-published masses. His suggested remedy was to include slush-pile authors in the counts of traditionally published authors, even though they are not published, so that we would have a “fair” comparison with income from indie publishing.
What he suggests is bad research practice. Howey’s preferred method ignores the activities of hybrid authors and where they fall in a traditional vs. indie tally. Moreover, as I explain in my post on authors’ publishing decisions, his approach would require that we guess at what people will do (e.g. self-publish, traditionally publish, continue to wait and try to make it through the slush pile or give up altogether) and that we assume they are earning no money whether or not they eventually will, an approach tantamount to making up data.
The decision in the Author Survey to separate out the aspiring authors and report on what respondents told us they did and earned has been criticized as an example of the study’s “flawed” or “biased” analysis. Yet the analysis is based on sound research methodology and an accurate reflection of the facts the authors themselves presented.
“Focusing on the top 7K or 50K bestsellers is the best way to avoid this oranges/apples comparison. Here, we’re looking at the top 1.5% of both routes. It’s a fair comparison. Their survey wasn’t.”
A fair comparison of what—the top indie earners to the top traditionally published earners? This was never a question our research was trying to answer.
Moreover, Howey’s data deliberately ignore the 98.5% of authors published on Amazon who haven’t made it into the elite and entirely neglect the unpublished in the slush pile or with books in their file drawers: the very people he supposedly champions by disparaging our methods.
“Keep in mind that this rebuttal was written by someone who defends a survey that polled the readers of Writer’s Digest, and 40% of their respondents HAD NOT YET WRITTEN A NOVEL. 30% of those who had only had A SINGLE NOVEL under their belt. They have not been open and honest about their methodologies or their sampling bias.”
If you examine any of the posts I have written for DBW, you will see that we are very “open and honest” that the survey we conducted was non-scientific and that it was drawn from a voluntary sample. The survey was distributed well beyond the readers of Writer’s Digest: invitations went to the Writer’s Digest mailing list and were also sent to the Romance Writers of America and the Science Fiction and Fantasy Writers of America, and the survey was announced on Twitter.
The goal of the survey was to understand what authors of all kinds, not merely the elite, want. The purpose was to provide useful information to publishers, self-publishing service providers and authors alike. The data we gathered and our analysis of them forthrightly bear out that purpose.
“They sell the results for $300 a pop (pressure from indie authors has resulted in a recent price drop). They use the result to lure people to a conference in NYC.”
The results I have been posting on this blog have all been entirely free to the public. Phil Sexton, the publisher of Writer’s Digest, and I also presented a free webinar on the findings of the Author Survey, and we appeared on the Self-Publishing Roundtable to discuss the survey and its findings with the broader community. My comments on the report and on the survey have been posted on Slideshare and on my own website.
The report carries a high price and is targeted at publishers. It also includes only a fraction of the information we collected in the survey, much of which will be disseminated on this blog and in subsequent reports aimed at authors and others (as a great deal of it already has been, for free).
“It’s a racket, which is why they are coming after me.”
I regret if Howey felt victimized by the re-analysis of his data I undertook using widely accepted methods, following the invitation implied by his decision to make his numbers available. Far from “coming after” him, the post took his data collection exercise seriously and attempted to show its contribution to the limited knowledge we have at hand. (Mike Shatzkin has written thoughtfully on the issues confronting researchers seeking to compare traditional and indie publishing.) While I endorse neither Howey’s data nor his interpretation, my examination offers conclusions that may safely be drawn from the numbers he provided, should others choose to do so.
“I, on the other hand, state every limitation and bias in my survey.”
Perhaps Howey believes he has covered all possible considerations, but that is simply not the case. Mike Shatzkin and I both point to non-overlapping concerns about the limits of the data and potential bias in the sample (which from a research standpoint is not technically a “survey”). Others have thoughtfully drawn attention to further issues with the data and the unsupported conclusions Howey draws from the charts and figures he presents in his report.
“I am up-front in saying that publishing is not a gold rush, that success comes to very few and requires a lot of luck. I make the full data set available for free, so people can reach their own conclusions. I state in the survey that I may be wrong about all of this, that we need better data, that I hope we will move forward and discover great truths together, as a community.”
Such a desire could only be furthered by embracing a scientifically grounded critique and analysis of his data and using it as a launching point for additional study.
“I do all of this at great expense to myself in time and money. Because I want authors to have more data at hand when making decisions. Better data than a 1% to 100% comparison among people who haven’t written a book or may have written one. Bad data that will set you back three benjamins.”
Howey’s data are a comparison of the top 1.5% to 4% of genre-fiction authors published on Amazon. These may be good data if that is the population you, as a writer, publisher or service provider, care about. Otherwise, they tell you very little about the experiences or earnings of the average published author and nothing at all about the average aspiring author.
As stated above, the Digital Book World and Writer’s Digest Author Survey does not compare the 1% (successfully traditionally published authors) to the 100% (all self-published authors), nor was it designed to do so. Moreover, much of the analysis of the Author Survey has been reported on this website for free.
The data are not perfect, but they certainly aren’t “bad.” The careful and professional design of the questionnaire ensures that the information we collected meets a very high standard of data integrity, while the volume of responses makes the findings worthwhile from a quantitative perspective. In other words, it is hard to dismiss as useless or “bad” 9,210 extensive and carefully crafted interviews with authors, whether or not they are the same authors of greatest interest to another researcher.
“It’s not hard to see where this is coming from, and it’s not from a desire to help people.”
I am not an employee of Digital Book World or Writer’s Digest, contract or otherwise. Like Howey, I have engaged in designing and analyzing the survey without compensation because I am passionate about studying authors’ careers (more information about my research is available on my website). Working with Digital Book World’s editorial director Jeremy Greenfield and Writer’s Digest publisher Phil Sexton on the Author Survey provided an unprecedented opportunity to learn about the experiences of a wide range of authors and provide information to them not otherwise available—something I care about greatly as a fiction writer myself.
In any case, questioning my personal motives and integrity, or those of Digital Book World or Writer’s Digest, does nothing to resolve the data and analysis issues surrounding Howey’s Author Earnings data and report.
“It’s from a desire to protect a profit-making scheme of selling to publishers the news that their ship is unsinkable, that the noise they heard in the middle of the night was a brush with an iceberg, but everything is okay.”
The results from the Author Survey have not, in fact, delivered news to publishers that “their ship is unsinkable.” Quite the contrary. Taken together, the blog posts I’ve written so far as well as the report aimed at publishers show that traditional publishing alone is not the most promising route for authors and that authors’ experiences with traditional publishing call into question publishers’ usual value propositions.
Talks I delivered at Digital Book World 2014 and in the Digital Book World webinar mentioned earlier focused precisely on the challenges and threats self-publishing poses for publishers. On both occasions I proposed that publishers rethink the services and benefits they present to authors, or risk losing those authors to the improving terms indie publishing offers.
Those recommendations were based not only on sound data, including the Author Survey, but also on an understanding of the insights that could and could not be drawn from them.