Source – blog.oup.com
Big Data analytics have become pervasive in today’s economy. While they yield countless innovations for businesses and consumers, they have also raised growing concerns about privacy, behavioral manipulation, and even job losses.
But the handling of vast quantities of data is anything but new. Since the 1960s, efforts have been made to devise new ways to administer the increasing volumes of information stored by corporations and public organizations. Today, Big Data systems are no longer employed merely to manage large amounts of data; they are used primarily to predict individual behavior. The real potential of Big Data therefore lies in its underlying science: the ability to target individuals’ cognitive biases and home in on previously unobservable private attitudes and beliefs.
To illustrate, consider that when people are shown the last two digits of their social security number and are then asked how much they would pay for an item, say, a bottle of wine, that arbitrary number serves as an anchor and influences their valuation of the item. The valuation is then tempered by their own understanding of relative value: an ordinary bottle of wine will not be valued as highly as a house, a car, or an exclusive bottle of champagne.
Collective anchors have a significant influence on our perceptions of value, and can also set up some apparent anomalies. A digital music album is often sold at more or less the same price as a physical one, even though their manufacturing and distribution costs differ. People are regularly willing to pay the relatively high price for a digital album because their initial anchor point is the price of a physical copy. Once the anchor has been set, value is assessed in a way that is consistent with that initial estimate.
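The anchoring mechanism described above can be sketched as a toy simulation. Everything here is illustrative: the anchor weight, noise level, and dollar figures are invented assumptions, not measured data, and the linear pull toward the anchor is only a crude stand-in for the real psychological effect.

```python
import random

def anchored_valuation(true_value, anchor, anchor_weight=0.3, noise=0.1):
    """Toy model: a stated valuation is pulled toward an arbitrary anchor.

    anchor_weight and noise are hypothetical parameters chosen for
    illustration; real anchoring effects vary by person and context.
    """
    base = (1 - anchor_weight) * true_value + anchor_weight * anchor
    return base * (1 + random.uniform(-noise, noise))

random.seed(0)
# Two groups value the same $30 bottle of wine after seeing different
# two-digit "social security number" anchors (12 vs. 89).
low_anchor = [anchored_valuation(30, 12) for _ in range(1000)]
high_anchor = [anchored_valuation(30, 89) for _ in range(1000)]

avg_low = sum(low_anchor) / len(low_anchor)
avg_high = sum(high_anchor) / len(high_anchor)
print(f"avg valuation, low anchor:  ${avg_low:.2f}")
print(f"avg valuation, high anchor: ${avg_high:.2f}")
```

Even though both groups value an identical bottle, the group shown the higher number reports a systematically higher average valuation, which is the signature of the anchoring effect.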
Mainstream responses to such problems default to providing people with additional information about product attributes, such as the hidden fees of credit card transactions, the performance specifications of a cell phone, or a car’s miles per gallon. Recent research emphasizes that, to make an informed choice, people must be told how a product will be used, how often it will be used, and how it compares to the same product from a different brand, and must receive that information in accessible, attention-getting ways.
But on closer examination, the additional-information hypothesis often gets things backwards. Additional information can deepen individuals’ interpersonal disagreements and their illusions about the world around them.
In the environmental regulatory context, for example, it is conventionally presumed that individuals who are more scientifically literate and generally more proficient at processing quantitative information will be more likely to understand that climate change and its associated risks are real. But findings from cognitive psychology demonstrate that views on climate change are governed more by social and cultural vantage points than by scientific literacy and numeracy. If individuals are given quantitative information revealing the implications of manmade climate change, and if that information is presented in a way that implicates their social and cultural identities, it can intensify rather than reduce disagreement over whether human activities increase or decrease global warming, depending on whether the recipients already believe in climate change or are skeptical of it. Even scientifically learned individuals fail to gravitate toward the truth; they are simply more skilled at defending their original convictions.
These insights may help us understand how the law can assist citizens in getting the facts right when there is a tension between individual and collective self-interest. Such tensions arise in many parts of our societies: when the government tries to regulate the environment, to keep capital markets working smoothly in times of crisis, to curb the adverse implications of predictive algorithms, or to communicate scientific insights to the public.
For instance, when regulating the environment, it is undeniably in our collective self-interest to face the facts on climate change and to act accordingly. However, individuals who believe in climate change but live in a community of climate change skeptics behave rationally if they follow the skeptics in order to get along with their own social group, particularly given the seemingly unlikely possibility that their actions alone can make an appreciable difference to the environment.
Or consider the depiction in today’s political discourse of how algorithms constrict our viewpoints by serving us news that confirms what we already believe. Such information tends to entrench our thinking and fuels the stark political polarization that reinforces our propensity to cling to our prior beliefs.
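The dynamic just described can be made concrete with a toy "filter bubble" recommender. This is a minimal sketch under invented assumptions: the stance scores, the nearest-stance ranking rule, and the belief-update rate are all hypothetical, chosen only to show how confirmation-driven ranking can keep counter-attitudinal content out of view.

```python
# Toy filter bubble: rank articles by agreement with the user's current
# belief, then let the belief drift toward what was shown.
# All values and rules below are illustrative assumptions.

articles = [-0.9, -0.4, 0.1, 0.5, 0.8]  # stance scores from -1 to 1

def recommend(belief, articles, k=2):
    """Return the k articles whose stance is closest to the belief."""
    return sorted(articles, key=lambda a: abs(a - belief))[:k]

def update_belief(belief, shown, rate=0.2):
    """The belief drifts toward the average stance of what was shown."""
    avg = sum(shown) / len(shown)
    return belief + rate * (avg - belief)

belief = 0.3   # a mildly convinced user
seen = set()
for step in range(10):
    shown = recommend(belief, articles)
    seen.update(shown)
    belief = update_belief(belief, shown)

print(f"belief after 10 rounds: {belief:.2f}")
print(f"articles ever shown: {sorted(seen)}")
```

In this toy run the user’s belief stays where it started, and the two articles with opposing stances are never surfaced at all: the algorithm preserves the initial view simply by optimizing for agreement.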
In all these contexts, particular beliefs, once they have become socially or culturally entrenched, are very difficult to change. What is more, changing such beliefs is no longer simply a matter of educating people through additional information. Instead, the solution requires an answer to the question of how, how often, and to what extent we anchor upon a particular interpretation of facts.
Big Data analytics surely raise countless novel problems for lawmakers to resolve. What lawmakers understand least, however, is Big Data’s ability to exploit precisely those cognitive biases that conspire to generate a distinct mental experience. The combination of these phenomena is all it takes to instigate the dynamics just described. Attempting to sort out these contradictions should be one of the central tenets of present-day legal research, and is perhaps one of the most important issues to emerge from the behavioral sciences in the past few decades.