Human-Tech Partnership Series

The article “Language Models and Training Data: A Mutual Learning Partnership” describes how Large Language Models (LLMs) and their users could work together to “induce positive transformation in human society and for human users to be active participants in the development of AI tools”. This article, the next addition to our Human-Tech Partnership series, explores the potential for LLMs to contribute to the discourse on non-sexual harassment, whether online or offline.

There has been a significant increase in the prevalence of online crimes against girls and women, which more than doubled between 2020 and 2024, as per the National Cybercrime Reporting Portal (NCRP). This rise in sexually charged harassment of women online presents a substantial challenge to the protective and prohibitive measures and models currently in place. Further, even basic logistical dependencies can forestall the successful implementation of the soundest of support programs (Online Crimes Against Women in India: Deepfakes, Doxxing, and Digital Abuse).

A facet of Violence Against Girls and Women (VAGW) that has perhaps not received as much attention is that of harassment of a non-sexual nature. The gap is understandable given the salience and staggering scale of sexual harassment online. Non-sexual abuse is particularly insidious and can be hard to address because it tends to be expressed in the language of common, socially acceptable exchanges.

LLMs, with their encyclopaedic breadth, could help create a roadmap for resolving the malaise of non-sexual harassment, online or offline, by breaking the problem down into smaller pieces, providing summaries of existing policies, and suggesting ways to expand these policies based on the outcomes of their application worldwide. LLMs could also help speed up policy-making across its various stages by reducing the time taken to acquire and vet information relevant to the discussion on non-sexual harassment.

ChatGPT was recently posed several questions on the subject of VAGW. The responses indicated a fairly thorough knowledge of the problem and even a nuanced categorization of the various types of harassment. ChatGPT was able to present, in sufficient detail, the differences between abuse that is sexual vs non-sexual, verbal vs non-verbal, and overt vs covert (Fig. 1).

Fig. 1
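
As an illustration of this kind of exercise, the sketch below shows how a similar categorization question could be posed programmatically rather than through the chat interface. It is a minimal sketch only, assuming access to OpenAI's chat completions API; the model name, prompt wording, and example incident are illustrative assumptions and not the exact queries behind Fig. 1.

from openai import OpenAI

# Expects an OPENAI_API_KEY environment variable.
client = OpenAI()

# Hypothetical incident text, used purely for illustration.
incident = ("A colleague repeatedly 'jokes' about a woman's competence "
            "in team meetings.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": ("Classify the incident along three dimensions: "
                     "sexual vs non-sexual, verbal vs non-verbal, "
                     "overt vs covert. Justify each label in one sentence.")},
        {"role": "user", "content": incident},
    ],
)

print(response.choices[0].message.content)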

Copilot (Fig. 2) was able to pen several articles on policy-making in the context of non-sexual harassment, and even a policy statement that took into account currently existing policies and avenues for their development. Many interesting observations emerged from this exercise, one of which is that the more specific the prompt or question, the better the output. The training is continuous, and responses can reflect new data that went online as recently as a day earlier.

Fig. 2
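
The prompt-specificity observation is easy to demonstrate in code. Below is a minimal sketch, again assuming OpenAI's chat completions API rather than the Copilot interface actually used for Fig. 2; the model name and both prompts are illustrative assumptions, not the prompts given to Copilot.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A deliberately vague prompt and a more specific one, for comparison.
prompts = {
    "vague": "Write about harassment policy.",
    "specific": ("Draft a 200-word policy note on non-sexual harassment of "
                 "women in online forums, covering definition, reporting "
                 "channels, and one gap in existing policy."),
}

for label, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(reply.choices[0].message.content)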

Forthcoming articles will feature LLM-authored notes and policy outputs, along with a discussion of how best to employ language models as “copilots” in our understanding and mitigation of non-sexual harassment and other such societal challenges.


Dr Monika Krishan is a Senior Fellow (Cognitive Science and Artificial Intelligence) at the Centre for Public Policy Research (CPPR), Kochi, Kerala, India.

Views expressed by the authors are personal and need not reflect or represent the views of the Centre for Public Policy Research (CPPR).


Dr Monika Krishan's academic background includes a Master’s in Electrical Engineering from the Indian Institute of Science, Bangalore, India, and a Ph.D. in Cognitive Psychology from Rutgers University, New Jersey, USA. Her research interests include image processing, psychovisual perception of textures, perception of animacy, goal-based inference, and perception of uncertainty and invariance detection in visual and non-visual domains.

Dr Krishan's areas of study also include the impact of artificial intelligence devices on human cognition from the developmental stages of the human brain, through adulthood, all the way through the aging process, and the resulting impact on the socio-cognitive health of society. She has worked on several projects on the cognitive aspects of the use and misuse of technology in social and antisocial contexts at SERC, IISc as well as the development of interactive graphics for Magnetic Resonance Imaging systems at Siemens.

