Using AI in policing may lead to targeting neighbourhoods of marginalised: CJI DY Chandrachud


Even as we gear up to use Artificial Intelligence (AI) in the criminal justice system, we must be wary of systemic biases creeping in through the data fed into these systems, Chief Justice of India (CJI) DY Chandrachud cautioned.

In this regard, he explained that when the data underlying such algorithms reflect biases or systemic inequalities in the criminal justice system, the algorithms can perpetuate those same biases and end up targeting neighbourhoods of marginalised communities.

“If historical crime data used to train these algorithms reflect biases or systemic inequalities in the criminal justice system, the algorithms may perpetuate these biases by targeting the same neighbourhoods as “high-risk” areas for future crime. This can result in disproportionate surveillance and policing of already marginalized communities, exacerbating social inequalities and perpetuating cycles of discrimination,” said the CJI.

Predictive policing algorithms often operate as black boxes, meaning their internal workings are not transparent, he added.

CJI Chandrachud was delivering the Keynote Address at the 11th Annual Conference of the Berkeley Centre for Comparative Equality and Antidiscrimination Law on the topic “Is there Hope for Equality Law?”

The conference was organised by the National Law School of India University, Bengaluru.

The CJI further said that the principle of "contextualization" becomes paramount when dealing with AI challenges in diverse contexts like India.

“India’s rich demographic patterns, characterized by linguistic diversity, regional variations, and cultural nuances, present a unique set of challenges and opportunities for AI deployment. As responsible users, we must ask ourselves these critical questions to ensure that our engagement with AI is ethical and equitable. We need to be vigilant about the origins of data and its potential biases, scrutinize the algorithms we employ for transparency and fairness, and actively seek to mitigate any unintended discriminatory effects,” said the CJI, who has been at the forefront of the technological evolution of the judiciary in the country.

The CJI added that climate change amplifies the inequities faced by marginalised and disadvantaged groups, and that women, children, disabled individuals and indigenous people face heightened risks from it, including displacement, health inequities and food scarcity.

“Inequality thus becomes both a cause and consequence of climate change,” the CJI opined.

In this regard, he pointed out how wealthier individuals often have the means to invest in protective infrastructure and cooling systems during extreme heat whereas poorer communities lack such resources, making them more vulnerable to climate-related disasters.

“Ensuring climate justice requires recognising these differential impacts and actively involving affected communities in decision-making processes,” the CJI said.
