Salini T S, Advocate, High Court of Kerala
In the digital age, social media platforms have become powerful tools for communication, information dissemination, and social mobilization. They have also become vehicles for unverified accusations, defamation, and cyberbullying that can destroy lives; a single post can be enough to end an innocent one. The tragic suicide of U. Deepak, a 42-year-old man from Kerala, on January 18, 2026, exemplifies this peril. Accused of inappropriately touching a woman named Shimjitha on an overcrowded bus, Deepak faced public outrage after a video of the incident went viral. While Shimjitha alleged harassment, many bystanders argued the contact was an accidental brush caused by the bus's overcrowding. Whatever the intent, the online backlash led to Deepak's death, igniting debate over whether social media had amplified an exaggerated claim into a fatal defamation campaign. The incident underscores a glaring gap: the absence of robust regulatory and punitive mechanisms to curb social media abuse. In a country like India, where over 900 million people are online as of 2026, such a framework is not merely pressing but essential to protect individual rights, ensure justice, and balance free speech with accountability.
Social media's democratizing potential is undeniable, enabling voices from marginalized communities to challenge injustice. Yet its dark side, fuelled by anonymity, virality, and algorithmic amplification, poses severe risks. Misuse takes many forms: defamation through false narratives, doxxing (publishing personal information), cyberbullying, and orchestrated harassment campaigns. In Deepak's case, the video's rapid spread produced a "trial by media" in which public opinion supplanted legal process, with irreversible consequences. Similar incidents abound; recall the 2024 case in Maharashtra where a teacher was falsely accused of child abuse through a doctored video, suffering job loss and social ostracism before the courts exonerated him. According to a 2025 report by the Internet and Mobile Association of India (IAMAI), cyber defamation complaints rose by 45% year over year, with women and professionals among the most frequent targets. Without checks, social media becomes a weapon for personal vendettas, political agendas, and sensationalism, eroding trust in institutions and deepening mental health crises. Suicides linked to online shaming have surged, with the National Crime Records Bureau (NCRB) reporting over 1,200 such cases in 2025 alone.
To address this, India must establish a comprehensive regulatory body akin to a "Social Media Oversight Commission" (SMOC), empowered to monitor, investigate, and penalize abuses. Such an entity could operate under the Ministry of Electronics and Information Technology (MeitY), collaborating with law enforcement and with platforms like X (formerly Twitter), Facebook, and Instagram. Current laws provide a foundation but fall short in enforcement and scope. The Information Technology Act, 2000 (IT Act), amended several times up to 2025, is the cornerstone. Section 66 of the IT Act penalizes computer-related offences committed dishonestly or fraudulently, with imprisonment of up to three years, a fine, or both. In Kaushal Kishor v. State of Uttar Pradesh (2023), the Supreme Court reiterated that speech, including speech on social media, may be restricted only on the grounds enumerated in Article 19(2), such as incitement to violence or threats to public order. Section 66A, which once criminalized sending "annoying" or "grossly offensive" messages, was struck down in Shreya Singhal v. Union of India (2015) for vagueness and for infringing Article 19(1)(a) of the Constitution (freedom of speech and expression). This void allows much abuse to slip through, as platforms often evade liability under Section 79's safe-harbour provisions, which exempt intermediaries that act diligently to remove unlawful content.
Complementing the IT Act is the Indian Penal Code (IPC), 1860, now replaced by the Bharatiya Nyaya Sanhita (BNS), 2023, in force since July 2024. Defamation, defined under IPC Section 499 as publishing imputations that harm a person's reputation, was punishable under Section 500 with up to two years' imprisonment, a fine, or both. In social media contexts, this extends to posts, shares, and comments. The BNS retains similar provisions under Section 356, with "publication" understood to include digital media. In Deepak's scenario, for instance, if the video was shared with intent to defame and without verifying the facts, it could attract charges under this section. IPC Section 503 (criminal intimidation) addresses threats made through online posts, while Section 509 penalizes words or gestures intended to insult a woman's modesty, though ironically the misuse of such provisions through false harassment claims itself highlights the need for balance. The Protection of Children from Sexual Offences (POCSO) Act and the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act also intersect with online abuse, but for general misuse these statutes are insufficient.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, updated in 2023 and 2025, require platforms to appoint grievance officers, remove flagged content within 36 hours, and trace the first originators of mischievous messages. In 2025, amendments mandated AI-driven content moderation for hate speech and misinformation, with penalties of up to ₹50 crore for non-compliance. Yet enforcement remains lax; platforms often prioritize user growth over accountability, as seen in the delayed response to viral deepfakes during the 2024 elections. The Supreme Court's 2025 directive in Vineet Kumar v. Union of India emphasized proactive monitoring, but without a dedicated regulator, implementation falters. Internationally, models exist: the UK's Online Safety Act, 2023, imposes duties on platforms to prevent harm, with fines of up to 10% of global revenue, while the EU's Digital Services Act (DSA), 2022, requires transparency in algorithms and rapid content removal, offering inspiration for Indian reform.
An SMOC could bridge these gaps by integrating punitive measures. First, mandatory fact-checking protocols for viral content: posts exceeding a view threshold (e.g., 10,000) could require verification labels or temporary holds, with graduated fines for violators under an amended IT Act, say ₹1 lakh for a first offence, escalating to ₹10 lakh plus imprisonment for repeat offences. Second, victim-centric redressal: a 24/7 helpline and fast-track courts for cybercrimes, cutting the average resolution time from 18 months to 3. Third, algorithmic accountability: platforms must disclose how content is amplified, curbing the echo chambers that fuel mob justice. Fourth, education mandates: digital literacy integrated into school curricula to foster responsible usage.
Critics argue that regulation stifles free speech, since Article 19(2) permits only reasonable restrictions on grounds such as public order, decency, morality, and the sovereignty and integrity of India. Notably, however, defamation is itself an enumerated ground under Article 19(2), and the harm from unchecked misuse, spanning economic loss, mental trauma, and suicide, weighs heavily against an absolutist reading. The Supreme Court in Anuradha Bhasin v. Union of India (2020) upheld proportionality as the test for restrictions, requiring that they be narrowly tailored. An SMOC could incorporate safeguards such as independent audits and an appellate mechanism to prevent overreach, drawing on Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA), 2019, which permits correction directions without blanket censorship.
In conclusion, U. Deepak's death is a stark reminder that social media, while empowering, can be lethal when abused. India's legal framework, encompassing the IT Act, the IPC and BNS, and the intermediary rules, provides tools but lacks teeth without a dedicated regulator. Establishing an SMOC with punitive powers would deter misuse, protect the innocent, and restore faith in digital spaces. Policymakers must act swiftly; delaying reform risks more tragedies in our hyper-connected world. As digital citizens, we owe it to victims like Deepak to demand accountability: not censorship, but responsible governance.