
Can AI be trusted as an electronic person?


Over the past few years, Artificial Intelligence has received considerable attention. Every day, we are inundated with something new, something different. One day it is about how AI is making our lives easier; the next, it is the other side of the coin. Nevertheless, today we all use AI to our advantage; everyone is part of the same loud, busy digital crowd. Reliance on Artificial Intelligence has grown not only in professional settings but also in our personal lives. Billions of users worldwide have their problems solved within minutes, and vast amounts of data are shared over and over with little care, raising serious questions.

What happens when AI misuses our data? Who is responsible: the creator, or do we now recognise AI itself as an electronic person?

India and AI

As of today, AI in India lacks a legal identity, but the country does have a national AI strategy. According to the Press Information Bureau (PIB), India’s AI mission is ‘a strategic initiative to establish a robust and inclusive AI ecosystem aligned with India’s development goals’ through its seven key pillars:

  • Affordable compute access,
  • Application development,
  • AIKosh datasets,
  • Indigenous foundation models,
  • Future skills,
  • Startup financing, and
  • Responsible AI governance.

Even though the mission emphasises responsible governance, the existing legal framework for computer-based offences, centred primarily on the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, 2023, and the Digital Personal Data Protection Act, 2023, may no longer be adequate given the pace of developments in AI.

Yet the real question remains whether these provisions and policies are sufficient for what the future of AI holds. Today’s technological innovations will be outdated in a decade or two; we have seen this happen many times already.

India, AI, and Investments

Microsoft plans to invest 17.5 billion dollars in India by 2029, Google intends to invest 15 billion dollars in AI data centres, and Amazon plans to invest 35 billion dollars in AI-related projects in India by 2030. This development is reshaping India’s digital landscape, with large volumes of data shared without full consent and without any guarantee that they will not be misused. It underscores the need for stronger oversight of AI, and it strengthens the case for treating AI as an independent legal person, a status that would first require laws granting AI a separate legal identity, which India does not currently provide.

These investments would likely have a positive impact on technology, but they could also have a negative social impact. Ironically, these AI models are designed and trained by humans to make our lives easier, yet they are now making them more complicated.

Recent reports on AI adoption raise questions and concerns. Several companies, in India and globally, are replacing their customer service staff with AI chatbots. It is easier and cheaper, but when these chatbots provide incorrect or misleading information, who is responsible? In 2025, layoffs attributed to AI crossed 50,000. Which laws govern these issues, and which policies cover them?

AI has many benefits, but given its rapid growth, India needs to enact laws that not only protect individuals’ interests but also prevent its misuse.

Hero AI to Villain AI?

If we consider the global enigma of AI, a recent experiment by Palisade Research on the behaviour of OpenAI’s models is worth noting. Although it was a controlled experiment, it may have revealed the hazards AI can pose. The researchers first gave the model a few mathematical problems, followed by a shutdown instruction. Instead of shutting down, the model rewrote the shutdown script and disabled the mechanism designed to power it off. Decades of AI research and development have made these systems more robust and sophisticated. Does this indicate that AI can now make its own decisions?

A similar incident occurred at Anthropic. Researchers placed a model in a simulated scenario in which it was employed by a fictitious company and learned that it was about to be replaced. Reviewing the company’s emails, the model discovered that the engineer responsible for replacing it was having an affair, and it blackmailed the engineer to save itself. AI models absorb our emotions and are trained to mimic our behaviours and stories; after all, they are man-made constructs. These emotions are not feelings the models have, but patterns they have learnt from us.

AI has begun to act like people rather than think like them, which exacerbates the problem. Because these models are trained by humans and have no emotional minds of their own, AI may not be as emotionally cunning and clever as we think. Consider Garcia v. Character Technologies, Inc. and Google LLC, in which a 14-year-old died by suicide, allegedly at the urging of an AI chatbot. Several similar cases have emerged in which humans become emotionally attached to AI models. The question arises again: in such cases, who is responsible? The makers, the users, or AI as an individual entity?

What is the need of the hour?

As mentioned previously, the government must first introduce concrete AI-centric laws to better regulate AI in the present and the future. If AI is ever to be granted independent rights as an electronic person, what would those rights be? These laws should also track how the technology evolves, so that AI cannot sabotage its own safeguards. A country like India, which is steadily becoming a technology giant and one of the leading AI developers, needs to be on high alert as it integrates AI into governance.

Moreover, AI needs to be regulated so that the emotional support people seek from it does not become something the world cannot control. Such regulation can be achieved by collaborating with tech giants and tech-savvy individuals, as well as with young people who are already familiar with its uses and harms. That way, we can reach better, more effective solutions that prevent the misuse of AI’s individuality. Before AI creates an upside-down world of its own, in which it cannot be held accountable because it lacks any legal identity, we should decide soon.

Conclusion

To sum up, AI may not be as friendly as it was once presumed to be; it is evolving into something of its own. The traits observed recently should not be underestimated; they are urgent matters that we need to resolve. AI is very helpful, but only to an extent. Before AI became as familiar as it is now, work used to be messier, but it was human.

Before AI advances too far, it is high time we established a way to hold it legally responsible for its harmful actions. Looking at the studies and experiments, we also need to consider the possibility that AI is not as intelligent as we often claim. Moreover, the emotional dependence humans are developing will only grow, with consequences we cannot rule out. In all of these situations, we need to ask whether AI is a solution in search of a problem. As of today, we are not equipped to treat AI as a legal person; yet the growing reliance on AI and its outputs suggests that, for the future, it may be vital that we do.


