The Seoul AI Summit, co-hosted by South Korea and the United Kingdom, brought together governments and international organizations to discuss the global development of Artificial Intelligence. Participants included government representatives from 20 countries, the European Commission, and the United Nations, as well as members of leading academic institutes and civil society groups. Major AI players such as OpenAI, Amazon, Microsoft, Meta, and Google DeepMind also took part.
During the conference, held on May 21 and 22, 2024, the participating nations made commitments on AI safety, AI safety institutes, research grants, and risk thresholds. One of the primary goals was to develop a global set of AI safety standards and regulations.
Several key steps were undertaken:
- Tech giants committed to publishing safety frameworks for AI models;
- Nations agreed to establish an international network of AI Safety Institutes;
- There was a consensus on collaborating regarding the risk thresholds for AI models that could potentially be used in the creation of biological and chemical weapons;
- The UK government pledged up to 8.5 million pounds in grants for research into protecting society from AI risks.
Michelle Donelan, the UK's Secretary of State for Science, Innovation and Technology, stated, "The agreements reached in Seoul mark the beginning of the second phase of our AI safety agenda, where leaders worldwide are taking concrete steps to protect our world against AI risks. This will also initiate a deeper understanding of the science that will support a common approach to AI safety in the future."
Key Initiatives:
- AI Safety Frameworks from Tech Giants: Sixteen global AI companies, including Amazon (USA), Anthropic (USA), Cohere (Canada), Google (USA), G42 (UAE), IBM (USA), Inflection AI (USA), Meta (USA), Microsoft (USA), Mistral AI (France), Naver (South Korea), OpenAI (USA), Samsung Electronics (South Korea), the Technology Innovation Institute (UAE), xAI (USA), and Zhipu.ai (China), voluntarily committed to implementing best practices for AI safety. The commitments cover "frontier AI", defined as highly capable general-purpose AI systems that can perform a wide variety of tasks and match or exceed the capabilities of the most advanced models; the companies pledged to manage the safety measures for such systems transparently and to take responsibility for their safe development and deployment.
- International Network of AI Safety Institutes: Australia, Canada, the EU, France, Germany, Italy, Japan, South Korea, Singapore, the UK, and the USA agreed to form a network of AI Safety Institutes. They signed the Seoul Statement of Intent toward International Cooperation on AI Safety Science, pledging to enhance "international cooperation and dialogue on artificial intelligence in light of its unprecedented progress and impact on our economies and societies."
- Collaboration on AI Risk Thresholds: The EU and 27 nations agreed to work together on defining risk thresholds for AI models that could assist in the development of biological and chemical weapons. These thresholds will identify when model capabilities could pose "severe risks", including cases where a model could help malicious actors obtain biological or chemical weapons, or could evade human oversight. Proposals for these thresholds will be developed in consultation with AI companies, civil society, and academia, and will be discussed at the upcoming AI Action Summit in Paris.
- UK Government Research Grants: The UK government will provide up to 8.5 million pounds in research grants to study how to mitigate AI risks such as deepfakes and cyber-attacks. The grants will focus on "systemic AI safety", which aims to understand and intervene at the level of the societal systems in which AI operates. This could include limiting the spread of misinformation, preventing AI-driven cyber-attacks on critical infrastructure, and mitigating harmful side effects of autonomous AI systems on digital platforms.
In Romania, the AI Expo Europe event will decode the impact of Artificial Intelligence on the present and the future. Scheduled for October 6-7 at the Radisson Blu hotel in Bucharest, this major interactive conference for Central and Eastern Europe will bring together professionals, leaders, and innovators around a technology that is already changing the world. Under the slogan "Transforming tomorrow, today," the event will feature industry leaders discussing cutting-edge applications and the future of AI, interactive exhibitions showcasing the latest practical services and innovations enabled by AI, networking sessions with industry leaders and partners in the VIP lounge, and a gala ceremony.
For more information, to purchase tickets, or to explore promotional packages, visit www.aiexpoeurope.com. Discussion topics will be extensive, covering machine learning, natural language processing, AI in healthcare, robotics, AI for business analytics, AI in cybersecurity, ethical considerations in AI development, AI's role in environmental sustainability, AI in financial services, the future of education through AI, smart cities, the intersection of AI and IoT, advances in consumer AI technology, and the impact of AI on creative industries, among others.