CoderRank is a Big Idea

CoderRank is to text analytics what Google’s PageRank has been to search.


Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Stu Shulman will be speaking at IIeX North America (June 13-15 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Stu Shulman

CoderRank is a big idea. CoderRank is to text analytics what Google's PageRank has been to search. Just as Google recognized that not all web pages are created equal, so links on some pages rank higher than others, I argue that not all human coders are created equal: the accuracy of observations by some coders invariably ranks higher than that of others.

The major idea is that when training machines for text analysis, greater reliance should be placed on the specific inputs of those humans most likely to create a valid observation. I proposed a unique way to measure and rank humans on trust and knowledge vectors, and called it CoderRank. The U.S. Patent and Trademark Office agreed it was a novel approach to machine learning and issued a patent on March 1, 2016. Not bad for a political scientist.
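The post does not spell out the mechanics behind the patent, but the underlying intuition, weighting each coder's annotations by an estimated reliability before they feed a training set, can be sketched in a few lines. The sketch below is purely illustrative: the reliability measure (simple agreement with known answers) and the weighted vote are assumptions for the sake of example, not the patented CoderRank formula.

# Illustrative sketch only: the actual CoderRank scoring is not described in
# this post. Idea shown: estimate each coder's reliability, then let that
# reliability weight their labels when building training data.
from collections import defaultdict

def coder_reliability(coder_labels, gold_labels):
    """Estimate reliability as agreement with known (gold) answers."""
    checked = sum(1 for item in coder_labels if item in gold_labels)
    agreed = sum(1 for item, label in coder_labels.items()
                 if gold_labels.get(item) == label)
    return agreed / checked if checked else 0.5  # neutral prior if untested

def weighted_label(item, annotations, reliability):
    """Pick the label whose supporting coders carry the most total weight."""
    votes = defaultdict(float)
    for coder, label in annotations[item].items():
        votes[label] += reliability.get(coder, 0.5)
    return max(votes, key=votes.get)

# Example: two coders disagree on doc3; the more reliable coder's label wins.
gold = {"doc1": "positive", "doc2": "negative"}
labels_by_coder = {
    "alice": {"doc1": "positive", "doc2": "negative", "doc3": "positive"},
    "bob":   {"doc1": "negative", "doc2": "negative", "doc3": "negative"},
}
reliability = {c: coder_reliability(lbls, gold) for c, lbls in labels_by_coder.items()}

annotations = defaultdict(dict)
for coder, lbls in labels_by_coder.items():
    for item, label in lbls.items():
        annotations[item][coder] = label

print(weighted_label("doc3", annotations, reliability))  # "positive" (alice outranks bob)

In this toy example a plain majority vote would be a tie, but because one coder has agreed with every known answer and the other has not, her observation carries more weight, which is the core of the argument above.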

In 2011 I read a few very important and influential books. These books brought years of laboratory experiments into sharper focus, contributing directly to the big idea. One was What Would Google Do?: Reverse-Engineering the Fastest Growing Company in the History of the World by Jeff Jarvis. I already knew about PageRank and the history of search technology from other books; however, Jarvis introduced me to a compelling way to think about where value is created in distributed software systems. What Google does is let end users and builders of systems create value on top of its web-based infrastructure.

Another source of inspiration for CoderRank was Everything Is Miscellaneous: The Power of the New Digital Disorder by David Weinberger. Weinberger writes compellingly about the difference between organizing books in a library with the rigid Dewey Decimal system and the way we filter information in online databases: different observations, different people, different systems, each influenced by different motives. A takeaway point is that every observation matters, but some matter more depending on the context.

There is no more important book in the formation of this big idea than James Gleick's The Information: A History, A Theory, A Flood. The story of how information has been conceptualized is fascinating: in every epoch, innovators built new tools for collecting, measuring, and processing it. From Plato's deep concerns about the frustrating effects of categorization disagreements through the dawn of machine learning, Gleick surfaces fundamental problems with information management. The problems with categorization cannot easily be ignored or planned out of existence. Identifying the best tools, methods, and measurements, however, fits squarely within that long history.

The big idea of CoderRank builds on these known experimental and theoretical problems. It suggests a universal law: for every categorization problem, some humans will do better than others. How we deal with this fact is a challenge for data scientists and qualitative researchers going forward.



