
A recent article by Anu Maria Francis, Senior Associate at CPPR, highlights a worrying rise in online crimes against girls and women in India. According to data from the National Cybercrime Reporting Portal (NCRP), these incidents more than doubled between 2020 and 2024. This surge in digitally mediated abuse—especially sexually charged harassment—poses a significant challenge to current protective and regulatory frameworks.
Her article, “Online Crimes Against Women in India: Deepfakes, Doxxing, and Digital Abuse,” examines the logistical hurdles that can hinder effective implementation of support systems for victims. Yet one aspect of violence against women and girls (VAWG) that receives less attention is non-sexual harassment. This form of abuse is particularly insidious because it often hides behind ordinary, socially acceptable language and behaviour.
With technology evolving rapidly, an important question emerges: Can we recruit AI to help address online harassment—sexual or otherwise?
Recently, ChatGPT was posed a series of questions related to VAWG. Its responses demonstrated a nuanced understanding of the issue, including clear categorisations of the forms harassment can take. Below are selected excerpts from those responses.
ChatGPT responded that harassment is not rooted in anything inherent to women, but in the social, cultural, and structural conditions that permit or excuse such behaviour. It listed nine commonly identified causes and concluded:
“Harassment exists because unequal systems and norms allow it to exist.”
When societies change these systems—through laws, education, cultural shifts, and support for victims—harassment decreases.
It is also worth remembering that, while much of the focus is on online harassment, ingrained offline social norms feed into how abuse spills over into digital spaces.
ChatGPT noted that covert harassment is difficult to address because it often appears normal from the outside and leaves no evidence. It identified several recurring patterns of such behaviour and indicated that its assistance would focus on clarity, documentation, safety, and strategy.
Covert online harassment often leaves little proof, but recognising it is crucial to providing support.
Regarding perpetrators who recognise their behaviour but refuse to change, ChatGPT stated:
“No single person can ‘fix’ someone whose behaviour is rooted in entitlement, hostility, or a desire for control.”
However, ChatGPT added that it can help by recommending ways to shift environments, limit opportunities for harm, and empower victims and institutions.
With tools like ChatGPT possessing near-instant access to a vast corpus of human knowledge, it may be time to explore how AI can be systematically integrated into policy design, public engagement, and protection frameworks.
One key question is: can AI reduce online harassment by actively participating in online platforms — for example, flagging abusive behaviour, guiding bystanders, or offering support to victims?
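To make the first of those roles concrete, below is a minimal sketch of how a platform might screen new posts for harassment before publication. It assumes access to OpenAI's hosted moderation endpoint via the official Python SDK; the routing policy and the helper names (screen_post, handle_post) are hypothetical illustrations, not a description of any deployed system.

```python
# Illustrative sketch only: a simple moderation hook a platform might run on
# new posts. The moderation endpoint is a real OpenAI API; the surrounding
# helpers and routing policy are hypothetical assumptions for this article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def screen_post(text: str) -> dict:
    """Ask the moderation endpoint whether a post looks abusive."""
    result = client.moderations.create(input=text).results[0]
    return {
        "flagged": result.flagged,
        # Category flags include harassment and harassment/threatening,
        # among others; surfacing them all helps human reviewers.
        "categories": result.categories.model_dump(),
    }

def handle_post(text: str) -> str:
    """Hypothetical routing policy: hold flagged posts for human review."""
    if screen_post(text)["flagged"]:
        # A real platform would queue the post for moderators and could
        # also offer the targeted user reporting and support resources.
        return "held_for_review"
    return "published"

if __name__ == "__main__":
    print(handle_post("Example post text goes here."))
```

Keeping a human reviewer in the loop is a deliberate choice in this sketch: as the discussion of covert harassment above suggests, automated classifiers are weakest on context-dependent abuse that looks ordinary from the outside.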
As the scale of VAWG grows, embracing technology as a constructive partner in prevention and response may no longer be optional; it may be essential.
Dr Monika Krishan is a Senior Fellow (Cognitive Science and Artificial Intelligence) at the Centre for Public Policy Research (CPPR), Kochi, Kerala, India.
Views expressed by the author are personal and do not necessarily reflect or represent the views of the Centre for Public Policy Research (CPPR).
Dr Monika Krishan's academic background includes a Master’s in Electrical Engineering from the Indian Institute of Science, Bangalore, India, and a Ph.D. in Cognitive Psychology from Rutgers University, New Jersey, USA. Her research interests include image processing, psychovisual perception of textures, perception of animacy, goal-based inference, perception of uncertainty, and invariance detection in visual and non-visual domains.
Dr Krishan's areas of study also include the impact of artificial intelligence devices on human cognition, from the developmental stages of the human brain through adulthood and the ageing process, and the resulting impact on the socio-cognitive health of society. She has worked on several projects at SERC, IISc, on the cognitive aspects of the use and misuse of technology in social and antisocial contexts, as well as on the development of interactive graphics for Magnetic Resonance Imaging systems at Siemens.