What counts as "recommended" on Instagram isn’t neutral. It's the result of a complex algorithm engineered not for accuracy, but for engagement. In India, that engagement often means disinformation about reservation (a system of affirmative action), caste and gender reaches massive audiences, frequently disguised as debate, satire, or self-help. This piece examines how Instagram’s algorithm fuels casteist and misogynistic narratives by pushing marginalised creators into hostile visibility: being made highly visible to an audience primed to harass, mock, or attack you.
A system stacked against marginalised creators
A disturbing pattern emerged when I ran a controlled experiment and interviewed creators. When users interact with content from anti-caste or feminist accounts, Instagram’s Suggested Reels algorithm often responds by recommending reels that are hostile to anti-discrimination and feminist politics. What follows is not organic dissent but an algorithmic funnel into reels promoting Men’s Rights Activism, misinformation about reservation and caste-denialist narratives, frequently packaged as motivational monologues, political commentary, or ‘humour.’
The problem is structural. Instagram, like Meta’s other platforms, is designed to maximise engagement, and controversy drives clicks. The disinformation it amplifies, including claims that Scheduled Castes/Tribes receive ‘undeserved benefits’ or that caste atrocities are ‘fabricated,’ follows a long history of casteist tropes used to undermine affirmative action and deny oppression.
To understand how Instagram’s algorithm responds to progressive content, I created three fresh Instagram accounts in early May 2025. Each profile was anonymous, with no followers or posts. Over the course of five days, I engaged exclusively with content from a curated list of 12 creators, including Dalit, Bahujan, Adivasi and feminist accounts posting about the Constitution, caste discrimination and gender justice. I liked 15–20 reels per account and spent approximately 20 minutes a day scrolling Suggested Reels.
By the second day, all three accounts showed a marked increase in hostile content. Initially, hate-leaning or ideologically antagonistic posts were rare, just a few scattered reels. But by Day 3, they made up more than a third of the feed. These outcomes were consistent across all three test accounts, despite variations in the creators followed or the timing of interactions. By Day 5, the Suggested Reels feed had become almost unrecognisable from its initial state. Where the first day's feed featured videos on constitutional rights and gender justice, by the fifth day, it was flooded with content that:
- Mocked reservation and framed it as "reverse discrimination"
- Pushed Men’s Rights propaganda targeting Dalit women
- Suggested caste-based violence was staged or exaggerated
- Glorified savarna masculinity with nationalist and hyper-masculine tropes (an ideal of dominant-caste manhood that fuses caste entitlement with patriarchal values, often presented through themes of toughness, control and cultural superiority)
This content may not contain direct slurs, but it spreads ideology hostile to marginalised communities under the veneer of ‘free speech’ or ‘alternate perspectives.’
Advocacy groups like Equality Labs, a South Asian Dalit civil rights organisation, and Social & Media Matters, an Indian digital rights organisation, have documented how algorithms reward content that couches casteism, sexism, or communalism in the language of opinion or satire, thus evading moderation protocols.
This can be seen as an extension of the 2020 resignation of Facebook’s Policy Head in India after revelations that the platform refused to take down religious hate speech owing to the company's and its executives’ political alignment with the ruling right-wing Bharatiya Janata Party (BJP) in the country.
Instagram is trying to rage-bait us
For Aleena, a 29-year-old Dalit poet from Kerala, the dangers of Instagram’s algorithmic amplification aren’t theoretical; they are personal. With over 92,000 followers, she has experienced firsthand how platform virality can turn into algorithmic violence.
One of her early reels, narrating a poem on caste, unexpectedly went viral. "I had only 500 followers then. That reel got over 300,000 views and with it came thousands of hate comments," she recalls. The harassment wasn’t limited to disagreement. It was vicious, casteist and gendered. "They told me to ‘go clean sewers’. One said a good-looking savarna man should make me his maid to cure my inferiority complex."
She began to notice a pattern: every time she posted something explicitly rooted in Dalit feminist thought, Instagram seemed to surface it to audiences that, she said, are "primed to disagree, to hate."
“I honestly feel like Instagram is trying to start a commotion. It’s rage-baiting us,” Aleena said. “They show our work to people who hate it. You get harassed. You talk about it. Then you get harassed even more. It’s as if the algorithm penalises you for speaking about your pain.”
Artist and writer Priyanka Paul (artwhoring) has noticed the same pattern. “Instagram has modified my algorithm in a way that I get a lot of posts that are downright casteist, jokes about slurs or memes that are casteist.”
She also pointed out how caste assertion by Dalit users is often twisted by dominant-caste creators. “There’s also a phenomenon I have noted where something that can be seen as caste assertion by individuals from Dalit communities is used by oppressor caste folks to make jokes about it, like sometimes I cannot tell if this is casteist or not,” she said.
Even neutral content, she said, becomes a target. “Very normal, non-polarising or non-controversial videos, with even as much as a hint of a picture of Babasaheb, the comments will be flooded with casteist remarks.” Babasaheb is a widely used honorific for Dr. BR Ambedkar, a Dalit icon and the chief architect of the Indian Constitution, who led movements against caste oppression.
Instagram has modified my algorithm in a way that I get a lot of posts that are downright casteist, jokes about slurs or memes that are casteist.
- Priyanka Paul, artist and writer
Visibility without safety
The Big Fat Bao (TBFB), an artist and design researcher whose work sits at the intersection of caste, gender, food and visual culture, has experienced years of shadowbanning, harassment and algorithmic suppression. “Every time I post about caste or gender, I see a drop in reach,” they said. “I have to manually send my posts to 100–200 people, asking them to share it for the algorithm to even pick it up.”
Even when TBFB didn’t post for a month, they lost over 600 followers and noticed their feed was flooded with misogynistic and Hindutva-leaning content, none of which they had engaged with. “I wasn’t even seeing the kind of creators I usually follow. It was all male influencers, sexist podcast bros and content that I had actively blocked.”
In 2022, Instagram took down one of their artworks commemorating the Annihilation of Caste. “It had no graphic content or language that violated guidelines, just the phrase 'annihilation of caste', yet it was flagged and removed,” they said.
Years of online abuse, ranging from casteist trolling to doxxing, have also reshaped how TBFB uses the platform. “I stopped posting selfies. I avoid reading message requests because most of them are vile,” they said. “There’s a constant pressure to self-censor, especially when I talk about Brahminical patriarchy or rape culture. You get tone-policed or even attacked from within the community.” They added, “I used to be blunt, angry and honest. Now I spend 24 hours just figuring out how to word things so I don’t get cancelled by someone, because if that happens, all the harassment that follows is just unbearable.”
Self-censorship is a direct consequence of online violence. Aleena has also adjusted how she uses the platform. She doesn’t share her full name or hometown, and avoids posting her location in real time. "Instagram is not a safe space. It reflects the real world, only more unhinged."
Priyanka echoed these concerns, describing the toll of being hyper-visible to hostile audiences. “I am infinitely scarred and on a lifelong path to recovery (it often feels) from having my privacy, my sense of safety, my ability to present my ideas coherently all very much snatched from me. It’s a very lonely and taxing experience,” she said.
I used to be blunt, angry and honest. Now I spend 24 hours just figuring out how to word things so I don’t get cancelled by someone, because if that happens, all the harassment that follows is just unbearable.
- The Big Fat Bao, artist and design researcher
With over 76,000 followers, Priyanka’s art and commentary have frequently been targeted. “It’s also scary when your source of income or main way of finding work is Instagram, because it’s a platform where you are a notification or click away from being dehumanised. To Instagram, and its algorithms and to the general way we consume media, I am just a pawn.”
She recalled how the most liked post on her page isn’t about her work but about being trolled by a Bollywood star, followed by an avalanche of abuse. “People love consuming this. It feels painful to know that [they] want to consume your pain, not your work, not your art, you have to rip yourself open and be vulnerable and share your weakest moments and how it feels to be so abused. Then people tell you ‘more power to you’.”
Priyanka also flagged the rise of coordinated trolling by young users who swarm targets and amplify hateful content together. “I think it’s very concerning the swiftness with which there are youth from across this country with access to smart phones who are quick to type abuses and rape threats in comments and DMs, and I doubt Instagram will do anything about this.”
Yatharth, a researcher and designer working on issues of platform design and caste in technology, said these outcomes are not coincidental. "The social media paradigm is built on engagement. It doesn’t matter what the content says, as long as it keeps people reacting," he said. "Hate tends to generate more engagement and platforms are incentivised to amplify whatever keeps people on the app."
In India, where the majority (67%) of social media users are male, or belong to English-speaking and dominant castes, the algorithm reinforces their dominant worldview. "It’s a feedback loop. And creators from marginalised backgrounds get caught in the churn," Yatharth said.
He explained that casteist content often travels widely not just within hate networks, but also when progressive users share it in order to critique or expose it. “The algorithm only sees engagement, it doesn’t care why something is being shared. So it ends up pushing that content even more,” he said.
Even in cases where casteist content isn’t overt hate speech, it still flourishes under the guise of ‘alternate perspectives’ or ‘history.’ “Take creators who say things like, feminism destroys families, these are ideological distortions passed off as facts. The platforms won’t flag it, because it’s not technically misinformation in their systems. And challenging that kind of content often requires nuance and context, which short-form content formats like Reels are not designed for,” Yatharth said.
The social media paradigm is built on engagement. It doesn’t matter what the content says, as long as it keeps people reacting. [...] Hate tends to generate more engagement and platforms are incentivised to amplify whatever keeps people on the app.
- Yatharth, researcher and designer
A key weakness is Instagram’s (and Facebook’s) inability to moderate effectively in India’s diverse linguistic and cultural landscape.
Research by Social & Media Matters, based on interviews with 49 Indian moderators, found a heavy reliance on just Hindi and English, leaving dozens of regional and local languages unmonitored. The result is misclassified hate speech and overburdened moderators and users alike.
In 2022, Meta’s Oversight Board warned that this insufficient resourcing for non-English moderation undermines users in India: only 2% of appeals come from Central and South Asia, despite the region containing the platform’s largest user base.
Regulation or retreat?
"Five years ago, we thought the problem was that platforms didn’t understand caste. Now it’s clear that they get it. They just don’t care," Yatharth said.
He is sceptical that social pressure can change anything. "The only thing platforms respond to is state or legal pressure. But that’s a double-edged sword in a country where the state is often hostile to marginalised voices."
Global trends don’t inspire confidence either. In January 2025, Meta announced it would scale back investment in misinformation and fact-checking worldwide. Content moderation teams and ethics staff have been laid off. Advisory boards disbanded. Platforms are moving toward a libertarian "free speech" model with minimal oversight.
In the absence of real accountability, Yatharth believes the burden continues to fall unfairly on marginalised creators themselves. “There’s this common suggestion that we just need more counter-narratives. But it’s not a fair fight. Dominant groups have more money, more people and entire bot farms. Even if platforms were neutral, which they are not, the field is tilted. Hate is louder and hate is better resourced.”
Aleena echoes this. "Instagram thrives on engagement. But if rage and trauma are what drive that engagement, then the platform is complicit. They have the power to stop it. But I don’t think they will."
In India, where caste, gender and religious hierarchies already shape everyday life, the amplification of hate speech through social media isn't just a digital problem, it's a social one. The same casteist and patriarchal narratives that have fuelled offline violence now find fertile ground online, repackaged as debate, humour or opinion. And while those at the margins continue to face systemic oppression, the platforms tasked with connecting us often deepen those divides.
Instagram, like its parent company Meta, has made repeated commitments to user safety, equity, and responsible AI on global stages. But these promises rarely extend to users in the Global majority, especially those speaking up from the edges of power. In reality, the cost of engagement is not distributed equally. For creators from marginalised communities, virality can mean violence, and visibility can mean being fed to the algorithm.