Consumers are essentially experimental subjects and must be compensated for their data according to the formal protocols of ethical experimental research. 


Present-day consumers of products and services have their behaviour recorded with ever-increasing frequency and in ever-greater detail. Every site visit, Facebook Like, headline click, item purchased and minute spent browsing online is a potential source of training data for the machine learning systems of AI development companies the world over. The shift towards greater automation in nearly every aspect of offline commercial activity further expands the pool of data available for AI companies to tap.

User data, so obtained, is used to understand consumers' behavioural patterns, predict their future behaviour, improve the customer experience and, ultimately, maximize company profits. The question that arises is this: “Ought consumers to be compensated for their data, commensurate with the profits acquired by the companies harvesting it?”

As one ventures into this relatively unmapped territory, one may turn to historical precedent to inform the present discussion. Consider the fact that data acquisition by AI companies is not unlike experimental data collection by academic researchers in a variety of fields. The latter follows a specific protocol requiring:

  1. the express approval of an ethics committee;
  2. a statement of the manner in which the data is to be stored and used;
  3. a statement of the risks posed to the subject by the experimental procedure;
  4. a statement of the benefits of running the experimental study;
  5. a statement of consent from the subject; and
  6. most significantly, some form of remuneration made to subjects participating in these experiments.

In other words, subjects are required to be compensated for their data, in cash or in kind.

Large-scale data collection through AI devices and services is no different from data gathered from the smaller populations studied in labs, and ought to be subject to the same protocols. More specifically, subjects, that is, users, ought to be paid for their participation in what is essentially an ongoing experimental study of user responses, as this data is fed to a variety of machine learning algorithms for analysis.

Payment to users ought to be subject to a minimum amount, along the lines of a wage standard. This would be particularly meaningful because an AI-assisted society is expected to eliminate a number of jobs and reduce earnings in general; payment for data could compensate for salary losses to some extent. Indeed, paying users for their data would be the fair thing to do, since that data would, in essence, ultimately be used to reduce their earnings.

While it has been argued that automation will lead to the creation of new types of jobs, this view remains largely speculative. There is no evidence to suggest that these new jobs will be of the same or higher quality, that they will come with comparable or higher salaries, or that there will be enough of them for everyone adversely affected by automation. If anything, the rising online “clickforce” hired to tag and label data for AI systems incapable of doing so at the human level reflects a dangerous trend towards underpaid grunt work in aid of ultimately highly profitable AI products [1].

There is also the claim that automating low- and medium-skill jobs involving drudgery will “free people up” for creative pursuits in the arts and sciences. However, this promise of AI can only be fulfilled if a) people have sufficient resources to purchase the training required for high-skill jobs and b) more importantly, they have the resources to cover their basic necessities of food, shelter and mental wellbeing, with enough left over for creative pursuits. A severely low-paying data-labelling job is unlikely to provide these resources.

Additionally, compensation provided to users for their data must take the form of lifetime royalties rather than a one-time payment. The reason is as follows. Academic researchers are required to specify how long subject data will be stored and how it will eventually be safely disposed of. Thus far, no such requirement has been placed on AI companies collecting user data: user data is, for all practical purposes, placed in permanent storage and may be put to use by AI companies at any time. Further, unlike in academic contexts, data collection at tech companies is profit oriented. Given that companies may profit indefinitely and significantly from even a single use of a dataset to build a product, it seems only reasonable that users be remunerated accordingly.
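To make the contrast concrete, here is a minimal sketch in Python. Every figure in it is a hypothetical assumption chosen purely for illustration; none is an estimate of any real company's per-user revenue or of what a fair royalty rate would be.

```python
# Hypothetical comparison of a one-time payment versus a lifetime royalty.
# All figures below are illustrative assumptions, not estimates of real revenue.

ONE_TIME_PAYMENT = 50.0          # assumed one-off payment for the user's data (USD)
ANNUAL_REVENUE_PER_USER = 200.0  # assumed yearly revenue a product derives per user
ROYALTY_RATE = 0.05              # assumed share of that revenue returned to the user

def cumulative_royalty(years: int) -> float:
    """Total paid to the user after `years` of a flat royalty stream."""
    return ANNUAL_REVENUE_PER_USER * ROYALTY_RATE * years

for years in (1, 5, 10, 25):
    print(f"After {years:2d} years: royalty total = ${cumulative_royalty(years):7.2f} "
          f"vs one-time payment = ${ONE_TIME_PAYMENT:.2f}")
```

Under these assumed figures, the royalty stream matches the one-time payment by year five and keeps growing for as long as the data remains in use, which is precisely the open-ended value that a one-time payment fails to capture.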

While the present discussion may appear hostile to AI companies, compensating users for their data will in fact ultimately benefit these companies. A user with meagre resources is one with limited buying power, and hence with little ability to purchase AI products and services. How would AI companies survive in the absence of a healthy market for their products?

Another factor to take into account is that AI companies rely significantly on the research output of academic institutions, research that is funded to a large extent by citizens' taxes. However, citizens without sufficient earnings pay less in tax, and that means decreased tax-based funding for technological development and research.

Thus it seems that compensating users for their data, at the level of a salary commensurate with the profits accrued by AI companies and in the form of a lifetime royalty, would not only be the ethical thing to do as far as users are concerned; it would also help ensure the very survival of AI companies.

Historical Notes:  

  1. The establishment of an ethics protocol for research activities was a reaction to the unregulated experimentation carried out by physicians in Nazi Germany [2].
  2. The mechanization of a wide variety of skills during the Industrial Revolution boosted production and profits for industry owners, gains that were not passed on to industry workers. It was this realization that led to the shortening of the work week. However, the gap between the benefits accruing to owners and workers has remained significant and has only widened with increasing automation [3].

Questions to Consider:

  1. How do we ensure that jobs lost to automation are replaced with jobs of comparable quality?
  2. How do we encourage transparency of AI companies regarding their use of consumer data?
  3. How do we transition from a developer-consumer mindset to one in which an equal partnership exists between users and AI companies, and users are recognized as active contributors to AI research?

References

  1. Lee, David. “Why Big Tech pays poor Kenyans to teach self-driving cars.” BBC, 3 November 2018. www.bbc.com/news/technology-46055595
  2. Katz, J. “The Nuremberg Code and the Nuremberg Trial: A Reappraisal.” JAMA 276, no. 20 (1996): 1662–1666. doi:10.1001/jama.1996.03540200048030. PMID 8922453.
  3. Walsh, T. “The End of Work.” In 2062: The World That AI Made. Speaking Tiger Books, 2020.

Views expressed by the author are personal and need not reflect or represent the views of Centre for Public Policy Research.

Dr Monika Krishan's academic background includes a Master’s in Electrical Engineering from the Indian Institute of Science, Bangalore, India and a Ph.D. in Cognitive Psychology from Rutgers University, New Jersey, USA. Her research interests include image processing, psychovisual perception of textures, perception of animacy, goal-based inference, perception of uncertainty and invariance detection in visual and non-visual domains. Areas of study also include the impact of artificial intelligence devices on human cognition from the developmental stages of the human brain, through adulthood, all the way through the aging process, and the resulting impact on the socio-cognitive health of society. She has worked on several projects on the cognitive aspects of the use and misuse of technology in social and antisocial contexts at SERC, IISc, as well as on the development of interactive graphics for Magnetic Resonance Imaging systems at Siemens. She is a member of Ohio University’s Consortium for the Advancement of Cognitive Science. She has offered services at economically challenged schools and hospitals for a number of years and continues to be an active community volunteer in the field of education and mental health.
