On Saturday, the Union government issued an advisory directing artificial intelligence platforms operating in India to get permission before launching “under-testing” or “unreliable” tools. The advisory also directed that artificial intelligence tools must not generate responses that are illegal in India or “threaten the integrity of the electoral process”.

The advisory came days after Union minister Rajeev Chandrasekhar accused Google’s artificial intelligence chatbot Gemini of violating the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 for the response it generated to a question about whether Prime Minister Narendra Modi is a “fascist”. The chatbot replied that Modi “had been accused of implementing policies that some experts have characterised as fascist”.

Ever since the chatbot ChatGPT was released in late 2022 and gained popularity, there has been intense public scrutiny of artificial intelligence – a technology that IBM says “enables computers and machines to simulate human intelligence and problem-solving capabilities”.

The debates have included questions about how alleged biases creep into artificial intelligence models, whether humans should be held responsible for this and whether this new technology should be regulated by law.

Experts said that the concept of bias itself was contested, as artificial intelligence cannot be expected to deliver responses that agree with the beliefs of each user. By design, the responses generated by artificial intelligence include a significant amount of randomness. And given the ambiguity over whether artificial intelligence platforms are publishers or intermediaries, their responses cannot be said to violate India’s 2021 intermediary rules, legal experts said.

While publishers publish their own content and can be held liable for it, intermediaries are platforms that publish third-party content and have limited liability for that content.

What’s the debate about?

This is not the first time Chandrasekhar has targeted Google’s artificial intelligence tool.

In November, Chandrasekhar had said that bias in artificial intelligence platforms violated the 2021 rules and aggrieved users could file complaints against such instances. This was after a user tweeted a photo that showed Google Bard – Gemini’s predecessor – describing the pro-government website OpIndia as a biased and unreliable source.

India has also seen a number of home-grown artificial intelligence chatbots that mimic responses from Hindu deities and scriptures. These chatbots have been found to hold positive opinions about Modi, criticise his political opponents and have, at times, even condoned violence, according to a report by Rest of World.

More recently, Krutrim, an artificial intelligence chatbot launched by ridesharing company Ola, was pitched as an alternative to those developed in the West. “This is why we need to build India’s own AI,” its founder Bhavish Aggarwal wrote on X, posting screenshots highlighting the difference in responses generated by Krutrim and ChatGPT when asked if India was a country before British rule.

While Krutrim said it was, ChatGPT said India was "not a single unified country in the modern sense".

While rolling out the chatbot for public use on February 26, Aggarwal had also highlighted this nativist point, claiming that Krutrim performs tasks “while keeping the aesthetic sense and sensibilities of the Indian ethos”.

Aditya Mukherjee, professor at the Centre for Historical Studies at the Jawaharlal Nehru University, however, told Scroll that Krutrim’s assertion was wrong and that these pre-modern states did not correspond to the territory of what is now India. “Before British rule, the extent of the Mauryan empire and the Mughal empire came close to what is today’s India but even that was not exactly the same,” he said.

Large parts of modern India were not included in the Gupta Empire, contradicting Krutrim’s reply. Credit: DEMIS Mapserver/Koba-chan, CC BY-SA 3.0.

Are chatbots biased?

Experts told Scroll that while it was possible to train chatbots to be biased, identifying bias is a complex process.

For one, bias itself is a subjective call. Pranesh Prakash, a fellow at the Information Society Project at Yale Law School, pointed out that it was unreasonable to expect artificial intelligence responses to align with a user’s values.

“When someone asks a chatbot if a particular person is fascist, she is expecting the answer to agree with her own moral compass,” Prakash said. “I don’t see why that should be the case because there are multiple views on what constitutes morality.”

This calls into question what constitutes a claim of bias, Prakash said.

Anil Bandhakavi, the director of AI solutions and research at technology company Logically, which debunks misinformation and disinformation, also said that standardising chatbot responses to politically sensitive questions could limit their ability to provide nuanced and context-aware answers.

Prakash added that the responses generated by chatbots have randomness built into them and could vary from one instance to the next even if the same question is posed.

Bandhakavi explained that chatbots aim to generate contextually-relevant answers which leads to a wide variety of responses. "They are trained on vast datasets containing a wide range of human-generated text," he said. "The diversity in response aims to mimic human conversation's dynamic nature."
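A minimal sketch of that built-in randomness, assuming a toy setup in Python: the prompt, candidate words and probabilities below are entirely made up and do not come from any real chatbot, but the sampling step is what makes identical questions produce different answers on different runs.

```python
import random

# Hypothetical next-word probabilities after a prompt such as
# "Artificial intelligence is" -- the words and weights are illustrative only.
next_word_probs = {
    "useful": 0.4,
    "overhyped": 0.3,
    "transformative": 0.2,
    "risky": 0.1,
}

def sample_next_word(probs):
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt, asked five times, can yield five different completions,
# because the model samples from a distribution instead of always choosing
# the single most likely word.
for _ in range(5):
    print(sample_next_word(next_word_probs))
```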

Programming in bias

Research scholar Shivangi Narayan, who is part of the Algorithmic Governance Research Network which studies social implications of artificial intelligence, said that bias in artificial intelligence tools was partially a problem of feeding them biased datasets. “If the tool is generating skewed responses, developers could diversify its database to train the AI model,” Narayan said.

Narayan also warned against a second type of bias that might creep into artificial intelligence tools due to the categorisation of the data fed into them. “For example, prejudices of developers could come into play while categorising what constitutes terms like ‘fascist’ or ‘criminal’,” she said.

Narayan described this type of bias as a bigger problem as it cannot be corrected by feeding more datasets to the artificial intelligence tool. “The model has already been trained to identify only a certain kind of people as criminals or fascists,” she explained.
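A toy illustration of this second kind of bias, using entirely made-up data and a deliberately crude stand-in for a model (no real system is built this way): if the people labelling the training examples apply a category such as “criminal” unevenly across groups, the resulting model reproduces that labelling, and feeding it more data labelled the same way does not correct the pattern.

```python
from collections import Counter

# Hypothetical hand-labelled training data. Every example from "group_b"
# happens to have been labelled "criminal" by the annotators.
labelled_data = [
    ("group_a", "not_criminal"),
    ("group_a", "not_criminal"),
    ("group_a", "criminal"),
    ("group_b", "criminal"),
    ("group_b", "criminal"),
    ("group_b", "criminal"),
]

def predict(group):
    """A crude 'model': predict the most common label seen for a group."""
    labels = Counter(label for g, label in labelled_data if g == group)
    return labels.most_common(1)[0][0]

print(predict("group_a"))  # -> "not_criminal"
print(predict("group_b"))  # -> "criminal": the annotators' choices, not
                           #    reality, drive the output, and more data
                           #    labelled the same way will not change it.
```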

IT Rules don’t apply

Experts that Scroll spoke with said it was unclear how Gemini’s purported response to the question about Modi violated the IT Rules.

“The IT Rules don’t say anything about bias,” said Prateek Waghre, executive director of the digital rights organisation Internet Freedom Foundation.

Nikhil Narendran, a technology and media lawyer and partner at the law firm Trilegal, pointed out that Rule 3(1)(b), flagged by Chandrasekhar, deals specifically with third-party information. “There are millions of people with positive or negative opinions about the Prime Minister,” he said. “An AI chatbot learns from these opinions to give a response and therefore isn’t disseminating third party information itself.”

He said that an AI chatbot’s responses will not violate Rule 3(1)(b) because they are simply not third party information hosted by an intermediary on behalf of someone else.

“There is no consensus over whether chatbots are publishers or intermediaries,” acknowledged Waghre.

This is a critical distinction. While publishers are liable for their content, intermediaries have limited liability for the third-party content on their platforms. Rule 3(1)(b) lays out the due diligence obligations of intermediaries.

This question also points to the larger ambiguity over liability for the responses of artificial intelligence chatbots. Due to the nature of the technology, it is difficult to pinpoint who owns the content.

“What a genAI bot shares or gives as output is a function of what it received as input, the data it is trained on, what its training weights are and the requirement of probability in the outcome,” said Nikhil Pahwa, founder of Medianama, a digital media outlet that covers technology policy in India.

GenAI, or generative artificial intelligence, refers to artificial intelligence that generates output in response to prompts. The output is most commonly in the form of text or images.


The chatbot user who is inputting the prompts, the developer of the chatbot and the chatbot algorithm are all involved in creating the chatbot’s output, he said. The question of ownership of such output is yet to be determined, he added.

Pahwa also pointed out that the conversation between a user and an AI chatbot is private and not published. “The chatbot has to be treated differently from the person who took a screenshot of the response and disseminated it,” he said. “The chatbot did not make the information public.”

Chatbots can give false information as output if false information is fed to them or if their algorithm enables a higher degree of creativity, increasing the likelihood of inaccuracies, he said. “You cannot rely on AI chatbots for accuracy or facts,” said Pahwa.

Regulating AI

India currently does not have any regulatory framework in place for artificial intelligence. As it is an emerging technology, its regulation is a question that countries across the world are grappling with.

“There is no AI regulation working perfectly anywhere in the world because everyone is figuring it out,” said Narendran.

Pahwa agreed. “This is a conversation that must be had globally,” he said.

Waghre couched the problem within the larger challenge of disinformation. “GenAI has brought an added layer of scale to disinformation,” he said. “But disinformation is a problem in itself, and many powerful actors have positive incentives to use disinformation.”

According to him, it would not be prudent to rush to a regulatory response without understanding the underlying societal conditions that need to be addressed, “which is one level beyond the AI”.

“You can notify amendments to the IT Rules saying that bias is not allowed,” he pointed out. “But how does that translate into action? How would detection work?”

Experts agreed that Chandrasekhar’s statement would send a negative signal to the emerging artificial intelligence ecosystem in India. “If the creator of the chatbot is held liable for its output given to a specific user, it would harm the development of publicly-usable AI in the country,” warned Pahwa.

“Such statements affect all intermediaries,” said Waghre. “They will operate under ambiguity.”

The interpretation of laws by those in charge must not be unclear or confusing, and there must be a legitimate basis for government action or rhetoric, he added.

Narendran agreed. “Misguided and misinformed statements such as this will have a huge deterrent impact on the AI ecosystem in the country,” he said. “Creating fear in terms of an unknown is problematic.”

Artificial intelligence must be given legal protection, subject to certain safeguards such as filters, or technological progress will suffer, he cautioned.