Indian Govt to launch National AI Safety Institute soon

The Indian government is moving to set up a national AI safety institute. At Microsoft’s Building AI Companions for India event in Bangalore, MeitY secretary S Krishnan discussed the need for an AI Safety Institute in India alongside Microsoft AI CEO Mustafa Suleyman.

“I think AI Safety Institute (AISI) seems to be the flavour of the day across the world, and we are in the process of trying to establish one ourselves in order to understand this better,” said Krishnan.

The Indian government has been discussing the establishment of an AI Safety Institute for a while now, and per reports, MeitY held a meeting last month to chart out the institute’s objectives, budget, and framework, among other details.

Krishnan argued for balanced, proactive regulation. “I think, you know, we sort of waited for things to go wrong this time around. I think it’s time to be really thoughtful and deliberate and not treat that as such a taboo,” he said.

So far, existing laws have held up against AI-driven harms. “With things like misrepresentation and deepfakes, we feared so much both in the Indian election and in the other elections which have been held throughout the world in 2024 but I think existing legislation have proved reasonably effective in addressing those issues, we’ve been able to tackle them to a significant extent,” said Krishnan.

Still, AI is in its nascent stage, and much is yet to unfold in the coming years. “I think the tricky thing for us to figure out in the next five years is when a model starts to have the ability to improve itself independently, those kinds of recursive self improvement mechanics, you know, we sort of don’t really know exactly how they’re going to turn out,” said Mustafa Suleyman, underscoring why AI’s trajectory is a top priority for policymakers. He also noted that AI advancements are difficult to predict, and that some capabilities may require a more interventionist regulatory approach.

The UK was the first to launch an AI Safety Institute, created to coordinate research and develop capabilities for testing advanced models. Similar institutes have since been established across the world, with the shared goal of advancing the testing and evaluation of frontier AI systems for safety risks.

In 2023, companies like OpenAI, Meta, Google DeepMind, and Microsoft signed voluntary agreements giving the UK AISI early access to their models. “We hosted the Bletchley AI safety summit… and we got an agreement between China, the US, Europe across these risks,” said Ian Hogarth, chair of the UK Government’s AI Foundation Model Taskforce, in an interview, describing the agreement as a step towards cooperative global AI safety measures.

In the US, OpenAI and Anthropic signed MOUs with the US AI Safety Institute earlier this year. “Safety promotes trust, which promotes adoption, which drives innovation, and that’s what we are trying to promote at the US AI Safety Institute,” said Elizabeth Kelly, director of the US AI Safety Institute, in an interview highlighting the institute’s vision.
