By Paramjeet Berwal
Do people really care when companies (mis)appropriate or steal their data? On the basis of my interactions with colleagues, friends and students, I have often concluded that people are generally not concerned with what happens to their data. In fact, people are unlikely to engage in any intellectual exercise that requires them to think about the implications and modalities of companies stealing their data. Jim Hagemann Snabe, Chairman of Siemens AG, speaking on the panel ‘Setting Rules for the AI Race’ at Davos 2019 (WEF), suggested that people or consumers neither consent to nor even understand the implications of what they are exposed to in the digital environment.
In the wake of advancements in the field of AI, it has become important to understand how human beings interact with the law and its various underlying concepts. Law is a means of shaping human behavior in society. However, certain legal tools, such as contracts, require an informed understanding by the human parties to them in order to fix liability and to achieve the goal for which the tool was used in the first place. Yet the legal sphere operates in an extremely ‘hyper-technical’ manner, for it ignores whether humans really know, understand, contextualize and thereafter make informed decisions when dealing with user consent forms, click-wrap agreements and the like. GDPR-related consent forms are one such example. The purpose of the GDPR is to ensure that internet users are in a position to control their data and protect their privacy. In practice, users mechanically click ‘I agree’ or ‘I consent’ without even reading the details. Have people ever been afforded conditions conducive to the formation of an independent and informed decision? The answer is most likely ‘no’.
Noam Chomsky and Edward S. Herman’s 1988 book, Manufacturing Consent: The Political Economy of the Mass Media, highlights how human beings can be manipulated into consenting to something they are systematically given neither the time nor the opportunity to analyse and understand. In the case of consent forms on the internet, people do not have enough time to read the terms and conditions. Negotiating those forms to suit one’s own requirements is highly implausible and, in most people’s economic reality, practically irrelevant. Moreover, declining a click-wrap agreement or consent form usually defeats the purpose, for one then does not get access to the service or product at all. Consent forms thus become merely a façade, a gimmick creating the impression of choice where there is none. Worse still, such forms coerce one into submitting to terms that have been put forth unilaterally, without mutual deliberation. A true application of the principles of freedom and liberty requires that people be afforded the tools to contextualize the content of a given consent form and thereafter be given substantial bargaining power and the opportunity to negotiate its terms. Doing this at scale may not be practical, but what prevails today is not the answer either.
Another important issue is that of privacy. According to a study conducted by Malwarebytes, the way people perceive privacy and the measures they actually adopt to protect it do not match. Moreover, privacy is mostly invoked in a negative sense: the right is asserted only when the subject matter of the right is used against the right holder. People do not seem to mind their information or data being accessed by someone else unless that information is specifically used against them.
In view of the above, paternalistic legislation, or legislation framed under the guise of some other logic or reasoning, has to be done away with. Making data available to anyone engaged in developing AI applications should be the primary concern of the authorities. At the same time, it should be ensured that the data so acquired is used only for the good of society and for the unhindered development of advanced AI.
Disclaimer: The views expressed in this post are those of the Author and do not necessarily reflect the views of the institutions he represents, the Tbilisi Tomorrow Institute or the organisations supporting the Tbilisi Tomorrow Institute.