Leading figures in artificial intelligence (AI) safety research convened in Singapore on 26 April 2025 for the inaugural Singapore Conference on AI: International Scientific Exchange on AI Safety (SCAI: ISE). The event, part of Singapore AI Research Week, was held alongside the International Conference on Learning Representations (ICLR) 2025, which was taking place in Singapore for the first time. The conference brought together over 100 participants, including academics, industry leaders, and policymakers from 11 countries, to discuss AI safety and establish global research priorities.
The conference produced the Singapore Consensus on Global AI Safety Research Priorities, which highlights three key areas: risk assessment, development, and control. Risk assessment focuses on understanding the potential harms of AI systems and developing methods for precise measurement and third-party audits. Development aims to create AI systems that are trustworthy and secure by design, following established safety engineering frameworks. Control concerns managing the behaviour of AI systems to achieve desired outcomes, even under uncertainty.
Minister for Digital Development and Information Josephine Teo emphasised the importance of bridging research and policy to ensure effective AI governance. The Singapore Consensus will be presented at the Asia Tech x Singapore Summit on 28–29 May 2025, with the aim of informing policymaking and striking a balance between safety and innovation.
The conference is part of Singapore’s ongoing efforts to build a trusted AI ecosystem. Previous initiatives include the AI Verify Foundation’s toolkits for testing AI models and the Model AI Governance Framework for Generative AI, which addresses risks whilst promoting innovation. Singapore also published the AI Safety Red-Teaming Challenge Evaluation Report following a multicultural and multilingual red-teaming exercise conducted in partnership with Humane Intelligence.