
Regulating AI

Globally, governments are grappling with finding the right regulatory framework for artificial intelligence (AI). The concern is genuine, as AI has significant negative as well as positive impacts. Deepfakes, cybersecurity risks, and data theft are among the negatives; as Prime Minister Narendra Modi pointed out at the recently held Global Partnership on Artificial Intelligence (GPAI) summit, global security would face a serious threat if AI-enabled weapons were to reach terrorist organisations. At the same time, the positives are many, including harnessing the technology for education, healthcare, and agriculture. India has rightly acknowledged that while each country needs to do its own bit, AI regulation will also need a global consensus, as the technology is not restricted by geographical boundaries. Deepfakes, for instance, have recently emerged as a serious threat in India and need to be tackled urgently, and the government is working on measures to check them.

The recent regulations brought in by the European Union (EU) are also being seen by a section of advocacy groups as a template for regulating AI. The EU has, for instance, put clear guardrails on the adoption of AI by law enforcement agencies. It has placed restrictions on facial recognition technology and on using AI to manipulate human behaviour. Further, governments can use real-time biometric surveillance in public areas only where serious threats are involved. These measures sound good, but may not be an ideal template for countries such as India.

So, rather than taking the stringent EU regulations as an ideal template, or even the US approach, which is more in the form of guidelines, the better course for India would be to identify the dangers and see whether they can be tackled under existing laws. Most of the dangers AI poses, such as deepfakes, can be addressed through existing laws like the CrPC, the IT Act, and the DPDP Act. Some fine-tuning of these laws will certainly help.

For instance, intermediaries are currently required to remove harmful or unlawful content within 36 hours of it being flagged. This window should perhaps be brought down to a couple of hours, since considerable damage can be done in 36 hours.

Some Indian precedents for handling such issues serve as useful case studies. A few years ago, the government was concerned about the spread of rumours through WhatsApp. This was handled by engaging with the platform, which introduced the "forwarded" label exclusively for India, minimising the danger. The government could further increase the due diligence that social media platforms must perform to check harmful AI-generated content, in exchange for legal protection under the safe harbour clause. Such a nuanced approach will help achieve the objective of using AI for technological advancement while keeping its harmful effects in check.
