Singapore has released a draft governance framework on generative artificial intelligence (GenAI) that it says is necessary to address emerging issues, including incident reporting and content provenance. The proposed model builds on the country’s existing AI governance framework, which was first released in 2019 and last updated in 2020.
GenAI has significant potential to be transformative “above and beyond” what traditional AI can achieve, but it also comes with risks, said the AI Verify Foundation and Infocomm Media Development Authority (IMDA) in a joint statement.
There is growing global consensus that consistent principles are necessary to create an environment in which GenAI can be used safely and confidently, the Singapore government agencies said. “The use and impact of AI is not limited to individual countries,” they said. “This proposed framework aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally.”
The draft document builds on a discussion paper IMDA released last June, which identified six risks associated with GenAI, including hallucinations, copyright challenges, and embedded biases, and proposed a framework for addressing them.
The proposed GenAI governance framework also draws insights from previous initiatives, including a catalogue on how to assess the safety of GenAI models and testing conducted via an evaluation sandbox.
The draft GenAI governance model covers nine dimensions that Singapore believes are essential to a trusted AI ecosystem. These revolve around the principles that AI-powered decisions should be explainable, transparent, and fair. The framework also offers practical suggestions that AI model developers and policymakers can apply as initial steps, IMDA and AI Verify said.