How artificial intelligence promotes racist rants against Indians – Firstpost

The urgent need of the hour is for the Indian government to step in and discuss with tech giants the necessity of stopping racism on the Internet

As artificial intelligence (AI) becomes more deeply enmeshed in human lives, the question emerges time and again of the extent to which AI's capabilities surpass human intelligence and skills. While AI has displayed immense potential to outperform humans in certain sectors, the fact remains that AI itself mirrors human behaviour and thinking. There is a widespread but misplaced belief that technology and AI are neutral and objective. It is precisely this misperception that allows AI to perpetuate racial discrimination.

An example of this lies in the fact that AI has increasingly been found to emulate the stereotypes and racism that humans all too often indulge in. A team of technology and linguistics researchers revealed in 2024 that large language models like OpenAI's ChatGPT and Google's Gemini hold racist stereotypes about speakers of African American Vernacular English, or AAVE, an English dialect spoken by Black Americans.

Another often stereotyped community is the Indian community, with racist diatribes rife on the Internet about everything from Indians' English accents to myriad aspects of their ways of living. If stereotypes in the English language were not enough, the Chinese language also has a particular derogatory term for Indians: a san. When automatically translating Chinese-language slurs against Indians on X (formerly known as Twitter), Google renders the racist term directly as "Indians". The term a san has a historical connotation and traces its origins to the era when British Indian soldiers were deployed in Shanghai, before the People's Republic of China existed and only the Republic of China did. Since British Indian officers often said "aye Sir", the phrase was roughly rendered into a san. There are several other anecdotal explanations of the term's origin, and this is just one of them. The fact remains, however, that the term is derogatory: in today's context, it connotes the "dirty third world".

On August 16, a post on X by a bot writing in Chinese was found spreading fake information about India and the US arguing with each other at the Olympics over a wrestler being 100 gm overweight. While the information itself is wrong and can be assumed to carry malicious motives, the more concerning part is that the term san ge, a variant of a san, was used, and Google automatically translated it as "Indians". Google Translate has multiple release versions; the latest v3 and v2 versions were both tried, and each rendered a san as "Indians". What remains unknown, however, is the version that X uses.

ChatGPT follows the same pattern and translates a san as "Indians". When given feedback, it states that it identified the word incorrectly; yet, being AI and not human, when provided the same prompt to translate the same tweet again, it produces the same derogatory rendering of a san as "Indians". WhatsApp's Meta AI, however, is more nuanced and does not translate the term as "Indians" when given the same prompt as ChatGPT. Microsoft's Copilot unfortunately does not fare as well as WhatsApp's Meta AI and indulges in the same behaviour as Google Translate and ChatGPT.

Generative AI has been changing the world and has the potential to drive increasingly tectonic societal shifts in the near future. Google Translate holds a market share of 69.60 per cent in the data science and machine learning market. India and Indians have benefited, and continue to benefit, from its AI and machine learning models. Google's Chrome browser also holds the highest share in India. More nuanced behaviour can reasonably be expected from the tech giant; greater sensitivity should be the bare minimum.

A few days ago in Taiwan, a student at Yangming Jiaotong University posted a photo on Instagram of some Indian students having dinner at a table and captioned it with the racist slur a san. The post was screenshotted, translated into English and forwarded in Indian students' circles, causing considerable discomfort and unease at being made to feel unwelcome. Such racist diatribes remain a persistent difficulty for several communities. If AI, which is seen as the future, also emulates the same stereotypes and racism that humans have indulged in, the future is only bleaker, with smarter technologies enabling greater divisiveness.

In his vision statement, "Human Rights: A Path for Solutions", UN Human Rights chief Volker Türk stated that generative AI offers previously unimagined opportunities to advance the enjoyment of human rights, but that its negative impacts are already proliferating. The urgent need of the hour is for the state, that is, the Indian government, to step in and discuss with tech giants the necessity of stopping racism. A bias from the past leads to bias in the future.

Dr Sriparna Pathak is an Associate Professor of China Studies at O.P. Jindal Global University, India. Nikhil Parashar is the Operations Head at ThinkFi, New Delhi. Views expressed in the above piece are personal and solely those of the authors. They do not necessarily reflect Firstpost's views.
