What can make AI safer?

The AI Safety Summit sets out its aims for the two-day conference: to consider the risks of AI and to discuss how they can be mitigated through internationally coordinated action.

Credit: Getty

In 5 seconds

UdeM's Yoshua Bengio and Catherine Régis are set to participate in the AI Safety Summit in England.

Yoshua Bengio and Catherine Régis

Credit: Amélie Philibert, Université de Montréal

On Wednesday and Thursday, world experts on artificial intelligence will gather at Bletchley Park, Buckinghamshire, England for the AI Safety Summit 2023. Among the participants will be two professors from Université de Montréal: AI pioneer Yoshua Bengio and health law expert Catherine Régis.

Both members of Mila, they will join an exclusive cross-section of national government representatives, leading AI companies, civil society groups and research experts to consider the safety risks of AI as it increasingly powers economic growth, drives scientific progress and extends into the public sphere in areas as varied as health care and cybersecurity.

Professor Bengio, already widely heard on these issues, reiterated his concerns last week in an open letter he co-signed with 23 international colleagues, detailed in The Guardian newspaper. Professor Régis, named co-chair in early 2022 of the Global Partnership on Artificial Intelligence's working group on responsible AI, has also been vocal on AI as a legal scholar.

"This summit represents an important step in setting new benchmarks for global AI collaboration and governance," Régis said on the eve of her departure for the United Kingdom, where participants will be hosted by British prime minister Rishi Sunak.

"While legislative, interdisciplinary research and other actions are essential at the state level, it is those taken at the international level that will, in my view, have the greatest impact, given how AI erodes national boundaries."

The issues at stake include the protection of democratic institutions, large-scale human rights violations, cyber attacks and global health initiatives, she said. "And it's all the more important because I don't foresee a future in which AI stops developing at lightning speed."

On its website, the AI Safety Summit sets out its aims for the two-day conference: to consider the risks of AI, especially at the frontier of development, and to discuss how they can be mitigated through internationally coordinated action.

"Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly," the website says.

Bletchley Park, it notes, is "a significant location in the history of computer science development and once the home of British Enigma codebreaking," and participants there will draft "a set of rapid, targeted measures for furthering safety in global AI use."

Five objectives will be discussed at the summit:

  • a shared understanding of the risks posed by frontier AI and the need for action
  • a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
  • appropriate measures which individual organisations should take to increase frontier AI safety
  • areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
  • a showcase of how ensuring the safe development of AI will enable it to be used for good globally

To learn more

Read the summit's 45-page working document, "Capabilities and risks from frontier AI: a discussion paper on the need for further research into AI risk."
