FREE SPEECH IN THE CLICK ERA: Redefining Constitutional Limits on Digital Speech in India
INTRODUCTION
Digital platforms like X, YouTube, Threads, Reddit and LinkedIn have transformed how people communicate, express ideas and take part in public affairs. Information now travels instantly, conversations spread rapidly, and online spaces have become central to democratic engagement. This expansion reflects the spirit of Article 19(1)(a),[1] which places free expression at the heart of Indian democracy.
At the same time, the unfiltered nature of social media has brought serious concerns. Misinformation, hate campaigns, deepfakes and manipulated content circulate faster than they can be verified. These dangers bring Article 19(2)[2] into sharper focus, especially when the digital world operates without a clear, modern legal framework.
As online speech gains the power to shape public perceptions, spark unrest and affect individual dignity, India faces a crucial constitutional question: how do we protect freedom while addressing digital harm? This blog examines how that balance can be achieved.
FREE SPEECH IN THE DIGITAL ERA
A. The Expanding Reach of Article 19(1)(a)
Article 19(1)(a) promotes free and open communication, encourages accountability and supports India's pluralistic traditions. Courts have consistently widened its scope in response to social and technological changes.
In Romesh Thapar v State of Madras and Brij Bhushan v State of Delhi, the Supreme Court made it clear that free expression includes the circulation of ideas without pre-censorship.[3] R Rajagopal v State of Tamil Nadu broadened this further to include newer forms of communication.[4]
Commercial speech received protection in Tata Press Ltd v MTNL,[5] recognising that advertisements also inform the public. Electronic broadcasts were protected in Odyssey Communications v Lokvidayan Sanghatana.[6]
The right to receive information, recognised in Raj Narain, and the right to dissent through silence, upheld in Bijoe Emmanuel, show how Article 19(1)(a) continues to evolve.[7]
B. Reasonable Restrictions Under Article 19(2)
To balance free expression with the interests of society, Article 19(2) allows reasonable restrictions. These include security of the State, public order, defamation, contempt of court, decency, morality, and the sovereignty and integrity of India.
Security of the State applies only to serious threats such as rebellion or espionage, not criticism of the government. Defamation and contempt of court remain valid grounds under the Contempt of Courts Act 1971 and Articles 129 and 215.[8]
In S Rangarajan v P Jagjivan Ram, the Court stressed that restrictions must be narrow, necessary and proportionate. This principle is central as digital platforms amplify speech to unprecedented levels.[9]
C. Why Social Media Challenges Free Speech Doctrine
Social media introduces complexities the Constitution never anticipated. Algorithms decide what becomes visible, anonymity makes accountability difficult, and content spreads across borders within seconds.
Deepfakes illustrate the problem clearly. Recent AI-generated videos involving Shilpa Shetty show how easily identities can be misused. Political content is often edited or manipulated, shaping public opinion before facts emerge. The Kanhaiya Kumar episode in 2016 demonstrated how quickly online narratives fuel polarisation.
These developments raise serious questions about truth, fairness and democratic participation.
SUPREME COURT'S APPROACH TO DIGITAL SPEECH
A. Shreya Singhal and Online Expression
Widespread misuse of Section 66A of the IT Act led to the landmark decision in Shreya Singhal v Union of India.[10] The provision allowed arrests based on vague terms like "annoying" or "offensive," which resulted in arbitrary action, including the arrest of two young women in Maharashtra over a Facebook post. This violated Articles 14, 19(1)(a) and 21.[11]
Drawing from Romesh Thapar and Whitney v California,[12] the Court held that satire, criticism and political commentary lie at the heart of free expression. Section 66A was struck down, ensuring that online spaces remain open to democratic debate.[13]
B. Proportionality in Anuradha Bhasin
In Anuradha Bhasin v Union of India,[14] the Supreme Court held that restrictions on internet access must satisfy the proportionality test. Any shutdown must serve a legitimate aim, be the least restrictive option and undergo regular review. The Court recognised that the internet has become essential for exercising rights under Article 19(1)(a).
C. Private Platforms and Horizontal Rights
With private platforms dominating public discourse, the Court in Kaushal Kishor v State of Uttar Pradesh acknowledged that certain constitutional rights may operate horizontally when private actors shape the public sphere.[15] This is an important development as content moderation decisions increasingly determine who gets heard online.
DIGITAL HARMS AND THEIR DEMOCRATIC IMPACT
A. Deepfakes and Synthetic Manipulation
Deepfakes are among the most worrying digital threats today. AI-generated videos that imitate real individuals create confusion, harm reputations and interfere with democratic processes. Countries such as China and Singapore have already enacted specific legal frameworks to address this.
India now faces the challenge of distinguishing legitimate satire from malicious manipulation without suppressing creativity or political expression.
B. Algorithmic Spread of Hate and Misinformation
Algorithms prioritise engagement over accuracy. Content that provokes anger or shock often receives more visibility, creating echo chambers and narrowing democratic dialogue. Harmful posts may go viral before platforms can intervene.
This environment quietly influences public opinion, raising concerns about fairness and inclusiveness.
C. Online Harassment and Manipulated Targeting
Doxxing, impersonation, abusive trolling and deepfake pornography have become routine. Women, journalists, students and activists often reduce their online presence due to constant harassment. This “chilling effect” limits discussion, shrinks participation and impacts elections, where misinformation spreads rapidly.
REGULATORY GAPS IN INDIA
A. Weaknesses of the IT Act 2000
The IT Act was drafted long before social media emerged. It focuses on e-commerce and cyber fraud, not deepfakes, impersonation or algorithmic manipulation. Section 79 on intermediary liability and Section 69A on blocking powers remain vague, leading to inconsistent enforcement and self-censorship.
Although the Digital Personal Data Protection Act, 2023 introduces a more structured framework for handling personal data, it does not directly address misinformation, deepfakes or algorithmic manipulation. Its focus remains on consent, lawful processing and data protection obligations, which leaves a gap in regulating harmful digital speech. As a result, privacy is better protected, but the challenges of synthetic content and targeted manipulation still require separate safeguards.
B. Concerns Around the IT Rules 2021 to 2023
The IT Rules were introduced to increase accountability but raised serious concerns. Traceability requirements threaten encryption, and government fact-checking powers may chill political criticism. Courts have questioned whether broad categories like "fake" or "misleading" meet constitutional standards under the proportionality test.[16]
C. Section 69A and Lack of Transparency
Although the Supreme Court upheld Section 69A in Shreya Singhal,[17] it did so expecting transparent procedures. In reality, most blocking orders remain confidential, limiting public knowledge and judicial review.
COMPARATIVE GLOBAL MODELS
The United States follows a highly protective free speech model, allowing restrictions only when speech directly incites imminent lawless action, as held in Brandenburg v Ohio.[18] Platforms enjoy broad immunity under Section 230 of the Communications Decency Act, which supporters see as encouraging open debate but critics say enables misinformation.
The European Union adopts an accountability-based system. Under the Digital Services Act, large platforms must assess and reduce risks linked to harmful or misleading content, while the General Data Protection Regulation ensures strong data protection and greater user rights.
The United Kingdom focuses on preventing online harm. The Online Safety Act places a duty of care on platforms to address issues such as cyberbullying and grooming. While some fear over-regulation, the aim is to keep digital spaces safer without curbing legitimate speech.
STRIKING THE BALANCE FOR INDIA
A. Protecting Democratic Speech
Democracy survives only when people can openly question the government and debate public issues without fear. Article 19(1)(a) protects even uncomfortable, unpopular or sharp criticism.[19] Restrictions should apply only when speech clearly crosses into real harm, such as deepfake deception or incitement. In the digital age, the rule must stay simple: speech is free unless it creates concrete, immediate danger, not merely because it offends someone.
B. Transparency and Platform Accountability
Social media platforms quietly decide what the country sees, what trends, and which voices get buried. Users rarely know why a post was removed or why their reach suddenly drops. To keep digital spaces fair, platforms should give clear reasons for takedowns, allow proper appeals and reveal how their algorithms rank content. Safe harbour must depend on responsible behaviour, not blind immunity.
C. Independent Oversight for Digital Governance
Right now, government blocking orders often stay secret, and automated moderation removes content without giving people a chance to respond. India needs a neutral, expert-led digital oversight body that reviews takedowns, handles user appeals and ensures transparency. With judicial involvement and technical expertise, such a body can protect rights while addressing serious digital harms under Section 69A fairly, openly and without political influence.
CONCLUSION
Free speech online is strained by deepfakes, harassment and misinformation. The rule must stay simple: speech remains free unless it causes real harm under Article 19(2). India now needs stronger safeguards and transparent platforms to manage digital risks responsibly, and may soon need a dedicated AI governance framework, comparable to the EU AI Act, focused on deepfakes and automated decision-making.
Pushkar Singh (B.A., LL.B. Student at GNLU, Gandhinagar)
“Law. Insight. Perspective.”
[1] Constitution of India 1950, art 19(1)(a).
[2] Constitution of India 1950, art 19(2).
[3] Romesh Thapar v State of Madras AIR 1950 SC 124; Brij Bhushan v State of Delhi AIR 1950 SC 129.
[4] R Rajagopal v State of Tamil Nadu (1994) 6 SCC 632.
[5] Tata Press Ltd v MTNL (1995) 5 SCC 139.
[6] Odyssey Communications Pvt Ltd v Lokvidayan Sanghatana (1988) 3 SCC 410.
[7] State of UP v Raj Narain (1975) 4 SCC 428; Bijoe Emmanuel v State of Kerala (1986) 3 SCC 615.
[8] Contempt of Courts Act 1971, s 2; Constitution of India 1950, arts 129, 215.
[9] S Rangarajan v P Jagjivan Ram (1989) 2 SCC 574.
[10] Shreya Singhal v Union of India (2015) 5 SCC 1.
[11] Constitution of India 1950, arts 14, 19(1)(a), 21.
[12] Romesh Thapar v State of Madras AIR 1950 SC 124; Whitney v California 274 US 357 (1927).
[13] Shreya Singhal v Union of India (2015) 5 SCC 1.
[14] Anuradha Bhasin v Union of India (2020) 3 SCC 637.
[15] Kaushal Kishor v State of Uttar Pradesh (2023) 4 SCC 1.
[16] Anuradha Bhasin v Union of India (2020) 3 SCC 637.
[17] Shreya Singhal v Union of India (2015) 5 SCC 1.
[18] Brandenburg v Ohio 395 US 444 (1969).
[19] Constitution of India 1950, art 19(1)(a).