Proposed: The Controversy Index
Many an index has surfaced in recent years. Indicators and scientific approaches to making sense of the world with the bounty of data now available have appeared everywhere: a plethora of financial indices; various Happiness Indices (notably as counterpoints to GDP, most famously Bhutan's GNH); sports, wine and other specialty indices; numerous health indicators, such as WHO's Children's Environmental Health Indicators; and of course we planners have an impressive slew of urban indicators, such as UN-Habitat's. In the world of transportation alone there are many: the U.S.-based BTS indicators, sustainable transport indices from VTPI, performance measures from the EPA, the various walkability indices including the online Walk Score, and for bicycling the several we developed in the BICY Project, plus a tip of the hat to the Copenhagenize Index.
One of Your Meggsy’s personal favorites is the Corruption Perceptions Index (CPI), one of the numerous commendable analyses offered by Transparency International, which aims to daylight and address corruption in the international arena.
In the realm of truth in science and media, however, there seems to be a dearth of investigative power brought to bear. Noam Chomsky famously counted lines of copy in newspapers for various issues to indicate the degree to which stories were given more or less exposure and public value, a promising experiment.
Only recently have scientific journals begun a more concerted effort simply to ask for and report the funding sources of published papers, and, more rarely still, to require that authors disclose any broader conflicts of interest.
More recently, in this Web 2.0 world, popularity indices have erupted: crowdsourced rankings collected from those motivated to provide them, usually reduced to an average represented on a 5-star scale.
But what happens if there is controversy over a given online subject, such as a book or video, and an organized opposition emerges which votes based not on the quality of the work but along lines of political disagreement?
In this case of a vote war we would expect to see a split vote: many low rankings, few average rankings, and a second spike of high rankings. Yet the typical representation by hosting websites is an average, showing something in the middle.
Why not provide a Controversy Index, to flag and quickly identify, search for, and organize those subjects where an online battle is taking place? A simple statistical test for a bimodal, U-shaped voting distribution in place of the expected bell curve should suffice in those cases; simply being able to view the histogram of votes could go a long way. (Your Meggsy has in fact repeatedly written to crowdsourced media giants such as YouTube and Amazon to suggest such features.)
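As a sketch of what such an index might compute, one could score a 5-star vote histogram by how heavily votes cluster at the extremes and how evenly the two opposing camps are split. The function name and weighting scheme below are illustrative assumptions, not an established metric:

```python
def controversy_index(counts):
    """Score a 5-star vote histogram for controversy, on a 0-to-1 scale.

    counts[i] is the number of votes at (i + 1) stars, so counts has
    five entries, from 1-star through 5-star.
    """
    total = sum(counts)
    if total == 0:
        return 0.0
    low = counts[0] + counts[1]    # votes in the 1- and 2-star camp
    high = counts[3] + counts[4]   # votes in the 4- and 5-star camp
    # How polarized the vote is: share of all votes at the two extremes.
    extremes = (low + high) / total
    # How evenly the two camps are matched: 1.0 for a dead-even split,
    # approaching 0.0 when one camp dominates.
    balance = 1 - abs(low - high) / max(low + high, 1)
    return extremes * balance
```

A vote-war histogram such as [40, 5, 5, 5, 45] scores near 1 despite averaging a middling 3.1 stars, while an uncontroversially praised item such as [1, 2, 3, 10, 84] scores near 0; the site-reported average alone cannot distinguish the two patterns.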
In the world of scientific literature, it appears there is still no established effort to measure and report controversy. It is well known that for many years the tobacco industry funded scientific studies, resulting in a split body of literature where side-by-side studies reached opposite results. There are many areas of research where large financial interests and/or strong ideological forces might influence results, and certainly more lines of research with a split literature can be found.
Indices to identify and monitor these trends in truth-swaying would be very useful, if only to increase public awareness of, and dialogue about, how trustworthy scientific findings are.
Proposed: The Controversy Indices, a suite of metrics for gauging the likelihood of bias and distortion in the public discourse, from online media to scientific literature. A summary index, The Controversy Index, could be a vital new force for the public interest.