The Dunning-Kruger-Howey Effect

Standard

As a follow-up to the post I put together linking to critical analyses of Hugh Howey’s Author Earnings report, I have something brief to say: it’s clear that Howey’s data isn’t all that great, which he knows. It’s also clear that the conclusions he’s jumping to, even before he gets to analyzing B&N or whatever he’s doing next, are not supported by the data.

That’s too bad, because this could have been the data I’m looking for. The book I published before my last one was self-published, and this year I expect to self-publish five more times. As I consider small press offers to put out the books, it would be really helpful to have numbers to look at.

Sadly, despite Mr. Howey’s bold conclusions, I don’t. Yeah, okay, the guy keeps talking about the limits of the data he’s collected, but he also talks as though the data has proved him right. Actually, he’s claiming to be proved righter than ever.

As the links in that previous post demonstrate, that’s not the case. It’s pretty clear that, once Howey got the data, he didn’t really know how best to use it, nor did he know which analytical moves were absolutely not allowed. The enthusiasm and certitude behind his conclusions are textbook Dunning-Kruger Effect.

We’re all prone to confirmation bias. How many people dismissed what he said without really looking at it? How many people really looked at the report, recognized the flaws, then decided to believe it all anyway? It’s easy to believe flattery. It’s easy to stand in front of the mirror in just the right way to catch yourself at a good angle. We exert that sort of unconscious control all the time; that’s why we need smart, knowledgeable people who know the rules. Howey may know how to write a bestseller, but when it comes to data analysis he’s just another thriller writer. Also, it seems that his “Data Guy” is really just “Coder Guy.”

It’s too bad. I could have used expert advice. Unfortunately, he doesn’t have any to offer, and he doesn’t even know it.