The architecture of harm in the digital age has shifted from physical proximity to algorithmic reach. Artificial intelligence now enables the creation of hyper-realistic images, voices, and videos that can be deployed to exploit, extort, and silence. Young women, particularly those in the 18 to 25 age group, occupy a precarious space in this ecosystem. They are legally adults, yet socially and psychologically vulnerable, and therefore fall outside the protective canopy of child-centric legislation such as the Protection of Children from Sexual Offences Act, 2012, even as they remain prime targets of technology-enabled abuse.
Indian law has responded to digital exploitation through a layered but fragmented framework. The POCSO Act marked a decisive intervention by criminalising sexual offences against minors, including those facilitated through digital means. Section 15 penalises storage and dissemination of child sexual abuse material, and the Supreme Court has clarified that even viewing or possessing such material attracts penal consequences. In Just Rights for Children Alliance v. S. Harish (2024), the Court held that digital possession of child sexual abuse material is not a passive act but one that perpetuates exploitation, thereby attracting criminal liability. The Act’s definitional breadth includes computer-generated imagery involving minors, indicating legislative foresight in addressing evolving technological threats.
Parallel provisions exist under the Information Technology Act, 2000. Sections 67 and 67A criminalise the publication and transmission of obscene and sexually explicit material in electronic form. Section 67B specifically addresses child sexual abuse material. Sections 66C and 66D deal with identity theft and impersonation, while Section 66E addresses violations of privacy in relation to images of private areas. These provisions, though significant, are technology-neutral. They were not designed with artificial intelligence in mind and therefore operate through judicial interpretation rather than explicit statutory design.
The Bharatiya Nyaya Sanhita, 2023, continues to criminalise the circulation of obscene material, including in digital contexts, but does not fundamentally alter the legal position regarding AI-generated exploitation. General penal provisions relating to voyeurism, stalking, intimidation, and defamation supplement this framework, yet they remain reactive and case-specific. The constitutional guarantee of dignity under Article 21, as interpreted in Justice K.S. Puttaswamy v. Union of India (2017), provides a normative foundation for protecting informational privacy and bodily autonomy in digital spaces. However, constitutional principles require statutory articulation for effective enforcement.
The central legal vacuum lies in the absence of a dedicated framework addressing AI-generated sexual content and synthetic identity manipulation. Deepfake pornography, non-consensual image morphing, and AI-driven sextortion are prosecuted by stretching existing provisions. This creates doctrinal uncertainty and inconsistent enforcement. The protective shield of POCSO ceases at the age of eighteen, creating a legal cliff for young women who remain equally susceptible to exploitation. Indian law also lacks a clear recognition of non-consensual image-based abuse as a distinct offence, leading to fragmented prosecution under obscenity, privacy, or defamation provisions.
The vulnerability of young women is not incidental but structurally embedded. The digital economy incentivises visibility. Social media platforms encourage the sharing of images, videos, and personal information, which become raw material for AI-based manipulation. Grooming techniques have evolved into sophisticated psychological operations that exploit emotional dependence, validation seeking, and fear of social stigma. The reluctance to report such crimes is amplified by reputational concerns and the slow pace of investigative processes. Law enforcement agencies, though increasingly equipped, still face challenges in digital forensics, cross-border evidence collection, and platform cooperation.
Patterns of AI-enabled crime reveal a transition from isolated offenders to organised digital networks. Synthetic identities are used to establish trust with victims before deploying extortion tactics. Real-time manipulation technologies can superimpose faces onto video streams, creating false but convincing visual evidence. Automated sextortion networks deploy bots to target thousands of individuals simultaneously. Dark web marketplaces facilitate the trade of both real and AI-generated exploitative material. These developments blur the distinction between reality and fabrication, making evidentiary standards more complex and enforcement more challenging.
Comparative jurisdictions have begun to respond with greater specificity. Several states in the United States have enacted laws criminalising the creation and distribution of deepfake pornography without consent. The United Kingdom has expanded the scope of its Online Safety framework to include harmful synthetic content. The European Union’s evolving regulatory approach to artificial intelligence emphasises transparency, accountability, and risk classification. These developments underscore the need for India to move beyond interpretative application of existing laws and adopt a more explicit legislative approach.
A coherent response requires a multi-layered reform strategy. First, there is an urgent need for a dedicated statutory framework addressing digital sexual exploitation. Such a law must define deepfakes and synthetic media, criminalise their malicious creation and dissemination, and provide civil and criminal remedies tailored to the nature of harm. Second, the protective principles underlying POCSO should be extended through a calibrated framework to cover young adults who remain vulnerable in digital environments. Third, non-consensual image-based abuse must be recognised as a standalone offence, with clear elements of consent, intent, and harm.
Platform accountability must be strengthened through enforceable obligations. Intermediaries should be required to deploy AI-based detection tools, implement robust grievance redressal mechanisms, and ensure time-bound removal of harmful content. The due diligence obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, need to be supplemented with proactive monitoring requirements for high-risk categories of content. At the same time, safeguards must be built to prevent overreach and protect freedom of expression under Article 19(1)(a), subject to reasonable restrictions under Article 19(2).
Digital forensics and investigative capacity must be enhanced. Specialised cyber units, trained in AI-related evidence, are essential. Evidence preservation protocols must be standardised to ensure admissibility under Section 63 of the Bharatiya Sakshya Adhiniyam, 2023, which replaces Section 65B of the Indian Evidence Act, 1872. Cross-border cooperation mechanisms, including Mutual Legal Assistance Treaties, must be streamlined to address jurisdictional challenges. Victim protection frameworks must prioritise confidentiality, psychological support, and legal assistance, recognising the long-term impact of digital sexual exploitation.
Regulation of artificial intelligence itself is an integral part of the solution. Mandatory watermarking of AI-generated content, traceability mechanisms, and accountability standards for developers can create deterrence without stifling innovation. Ethical guidelines must be translated into enforceable norms, ensuring that technological advancement does not come at the cost of human dignity.
The law has historically evolved in response to social change, but the pace of technological transformation demands anticipatory governance. AI-driven exploitation is not merely a legal issue; it is a challenge to the very notion of identity, consent, and autonomy. Young women, situated at the intersection of visibility and vulnerability, bear the brunt of this transformation.
POCSO remains a landmark in protecting children, but its protective logic cannot remain confined within rigid age boundaries in a borderless digital world. The task ahead is to construct a legal framework that recognises the continuum of vulnerability and responds with clarity, precision, and compassion. The legitimacy of the legal system will ultimately be measured by its ability to protect those who are most exposed to harm in spaces that are increasingly invisible yet profoundly real.