NO ONE HAS A COMPELLING PRODUCT STORY FOR AI, EXCEPT THOSE USING IT FOR THE WRONG REASONS


Sat 22 March 2025:

When OpenAI launched ChatGPT in November 2022, it rapidly became one of the world’s most visited websites and, upon its eventual release on the app stores, the most downloaded app. Since then, nearly every company has been racing to integrate Generative AI into its products, whether as standalone applications, embedded features, or even dedicated hardware.

The technology driving large language models (LLMs) is genuinely novel. It is transforming how we interact with computers, and we are in the midst of a paradigm shift: moving from structured, keyword-based “computer speak”, which feels primitive, almost caveman-like, to seamless natural-language interaction. The rigid syntax and norms that once defined our interaction with computers are becoming obsolete, rendering irrelevant both the skills millennials perfected and the struggles of older generations to acquire them.

However, many big ideas are yet to come to fruition. This natural way of interacting is meaningless if AI cannot effectively complete tasks. The prevailing belief, and a major push by tech companies, is that AI agents (there is no universal consensus on the definition of an “AI agent”, just as there is no agreed definition of Artificial General Intelligence) are here and on the brink of perfection, poised to simplify everything from ordering food to managing groceries with a few spoken commands. But this vision clashes with existing business realities. How will food delivery platforms push advertisements or upsell premium features if AI eliminates the need for users to browse? If autonomous agents conduct deep research on behalf of users, they disrupt the fundamental internet economy, one built on human engagement with news sites, blogs, and other ad-supported content. The assumption that a human is almost always on the other end of the screen is what sustains digital monetisation. Agentic AI threatens to upend this model entirely and break whatever unwritten social contract we have.


Agents are still a few years away from becoming a reality, and Generative AI, in its current state, is both impressively capable and frustratingly limited, constrained by technological and scientific limits, regulatory hurdles, legal battles, and the monopolistic capture of hardware and operating-system ecosystems such as mobile phones. While the grand vision has yet to materialise, the key question now is: what are the tangible, real-world applications of this technology today? Some dismiss it as nothing more than a tool for students to cheat on homework, while technologists have embraced its potential, especially in coding. More specifically, it excels at “vibe coding”, producing work on par with that of an early-career developer. Meanwhile, the rise of Generative AI has also reignited interest in the “dead internet” theory, as AI-generated content floods online spaces, permanently altering our digital feeds with low-quality algorithmic sludge.

Early on, there was widespread concern that AI would supercharge disinformation campaigns, making them faster, more efficient, and harder to detect. While AI has certainly been leveraged for such purposes, the actual impact has been far less catastrophic than initially feared.

Every product needs a story; it shapes public perception and helps critics understand how a company envisions its technology being used. More importantly, it demonstrates unique use cases to the normies and prepares the product for the mass market. As stated earlier, AI models are currently in an awkward phase: remarkably powerful yet far from perfect. This makes crafting a compelling narrative a significant challenge. Without a clear, persuasive story, companies struggle to position their AI offerings, and this is evident from the lacklustre and often uninspiring advertisements produced over the last year or two.

Last year, during the Olympics, Google aired an ad showcasing its Gemini model. In it, a father asks the AI to help his daughter write a letter to an athlete she deeply admires. The result? A disaster. The ad fell flat, not only because it lacked authenticity but also because it betrayed Google’s uncertainty about a genuinely compelling and respectable commercial use case for its AI. To the public, it simply seemed disingenuous.

Apple hasn’t fared much better. Many of its Apple Intelligence ads have been met with scepticism. Take the one starring Bella Ramsey, for example. In the ad, someone, presumably an agent (the profession is never specified), asks her if she has read a pitch. Instead of responding directly, she pulls out her phone, asks the AI to summarise it, and then recites the summary, pretending she has read the entire thing. The ad even implies that she hasn’t grasped the gist of the plot. Rather than inspiring confidence in AI, it unintentionally promotes dishonesty, nothing more, nothing less. Apple’s newer focus for Apple Intelligence is the Genmoji ads, which can be found across many cities in the United States.

There isn’t a compelling story to tell because the technology simply isn’t there yet. Beyond that, the very way these AI models have been built has polarised society. They have been trained on the hard work and copyrighted material of individuals and institutions, many of whom receive neither credit nor compensation. Understandably, this has sparked outrage, with critics viewing AI not just as an exploitative tool but as something fundamentally unethical. Others raise environmental concerns, pointing to the water-hungry data centres powering these models. Yet this criticism often comes with a dose of hypocrisy: many of the same people raising alarms conveniently overlook the environmental impact of their own digital habits, from binge-watching shows to endlessly streaming their favourite games, all of which depend on the very same water-hungry data centres.

In India, AI “use cases” are emerging in unexpectedly insidious ways. Media publishers—ranging from fringe outlets to mainstream organisations—are increasingly using AI-generated thumbnails in their articles to reinforce stereotypes. Typically, when we think of AI perpetuating stereotypes, we assume it’s an issue rooted in biased training data. For instance, an AI might generate white figures for certain professions, default to brown faces when depicting terrorists, or associate languages like Arabic with extremism—flaws often seen as mere byproducts of the datasets these models are trained on. But in this case, the bias isn’t incidental—it’s intentional.

In India, right-wing portals have leveraged this technology to actively advance their agenda of spreading hate, fuelling conspiracy theories, and deepening prejudices, particularly against minority communities, with Muslims often being the primary targets.

Figure 1: Collage of AI-generated thumbnails featured in OpIndia articles in 2024

The image above (Figure 1) is a compilation of AI-generated thumbnails featured in 2024 on OpIndia, a right-wing publication known for spreading misinformation and conspiracy theories. These thumbnails accompanied stories covering incidents such as abuse, sexual harassment of minors, and kidnapping. It is pertinent to note that all the images depict men in traditional Muslim attire, including skullcaps and kurtas, regardless of whether the individuals involved in the actual stories were religious or whether their faith was relevant to the events.

This pattern is concerning because it reflects a selective visual framing that reinforces specific stereotypes. The use of traditional attire in these AI-generated images subtly anchors the narrative to a particular religious identity, even when the stories themselves may not explicitly connect the incidents to religion. This is not merely about illustrating news—it’s about crafting a visual language that repeatedly associates certain crimes with a specific community.

Another publication, even more notorious than OpIndia for such visual framing, is Sudarshan News. Below (Figure 2) is a compilation of AI-generated thumbnails featured on the publication’s YouTube channel in February 2025. A recurring AI-generated image on Sudarshan News is a skullcap-wearing Muslim man with fangs and primate-like features. The artwork speaks for itself as to which community it is attempting to alienate and demonise.

Figure 2: Collage of AI-generated thumbnails featured on Sudarshan News’ YouTube channel in February 2025.

Last May, ahead of the general elections, India Today, one of India’s oldest English-language magazines, shared its cover art on social media. The cover features AI-generated art depicting a bearded Muslim man and a veiled woman: visibly Muslim characters. The title reads, “The Muslim factor: Fragmented in the past, the community’s vote seems to be consolidating in this general election, impacting results in 86 constituencies.”

Figure 3: India Today Magazine’s cover, May 27, 2024 issue.

To someone unfamiliar with India’s political landscape, this might appear to offer a fresh analytical perspective on the elections. However, for anyone with even a basic understanding of Indian politics, this framing is neither novel nor insightful. The Muslim vote has historically been positioned in opposition to the Sangh Parivar, even if fragmented across various parties. What’s striking here is not the data point but the deliberate visual and narrative choices.

The artwork itself is just the façade; it is the accompanying text that exposes the underlying intent. This combination of imagery and language doesn’t seek to inform; it seeks to evoke. It reinforces a reductive, stereotypical image of Muslims that aligns with the biases often harboured by the average bhakt. Rather than engaging with the political complexities of Muslim voting patterns, the cover simplifies the community into a monolithic “factor”, framed as a political anomaly to be scrutinised rather than as an integral part of the electorate.

The cover and the accompanying text are as lazy and unimaginative as the argument that certain political parties are the “B-team”, i.e., that they exist only to split votes. While this narrative may satisfy the biases of die-hard party loyalists, it also fuels a troubling idea: that some parties or individuals should simply step aside and not contest elections. How is that even acceptable? Democracy is inherently imperfect, but that very imperfection is what makes it vital. The notion that political competition should be curtailed to serve a singular agenda undermines the essence of a democratic system. Elections are meant to be contested, debated, and decided by the people, not dictated by a narrow, self-serving narrative.

Are AI models to blame for generating such images? That remains unclear. Like any tool, they, too, can be wielded for different purposes—a hammer can drive a nail or be used as a weapon; the outcome depends on the person using it. These kinds of images could just as easily be created through traditional photo editing and manipulation, but that process requires skill and effort. AI, on the other hand, is mostly free, requires no expertise, and delivers results instantly. It’s undeniably accelerating the spread of hate.

Furthermore, this author views media outlets’ use of AI-generated images to reinforce stereotypes as both a free speech and an ethical responsibility issue. That is not an excuse to give these outlets a free hand to continue demonising minorities. However, as the last decade has shown, such discourse often becomes a double-edged sword, providing authorities with a pretext to restrict press freedom. Much as concerns over “misinformation” and “tech sovereignty” have been leveraged to police speech and impose moral gatekeeping, “AI ethics” risks becoming yet another tool for control.

Beyond AI’s use in newsrooms, two other major and notorious applications of the technology cast a long shadow. The first is the rise of AI-generated Non-Consensual Intimate Imagery (NCII), colloquially known as “revenge porn”, which has left policymakers and lawmakers across the world scrambling for solutions. Global icons like Taylor Swift and former presidential candidate Kamala Harris have been targeted, as have Bollywood stars and even high school and university students from South Korea to the United States. How does one even begin to tackle this crisis? Copyright laws might offer partial relief, but they are far from sufficient and were never designed for this purpose. New legislation will almost certainly emerge, yet its enforcement must be handled with extreme caution. The legal precedents set today could shape the future for decades, potentially not only stalling technological progress but also paving the way for unchecked authoritarian control.

The second looming threat is AI-powered scams. In a country like India, where people are already falling victim to “digital arrest” scams, which sound completely absurd and have little to do with AI, the potential for large-scale fraud is alarming. Once the scam industry fully integrates AI, the consequences could be catastrophic.

While the grand promises of AI remain on the horizon, we are already treading a precarious path, grappling with questions that have gone unanswered for decades. These questions and uncertainties are being carried over into, and compounded in, this new era of technology. Moreover, the truth is a clichéd one: as this technology advances, so too must our research, our legal frameworks, and our moral considerations.

Author

Kalim Ahmed is a writer and open-source researcher who focuses on tech accountability, disinformation, and foreign information manipulation and interference.
