Doing the responsible thing

In 5 seconds

Advances in artificial intelligence will have an enormous impact on our lives, so it’s important that people be able to weigh in on the issues that AI will raise.

What if, a few years from now, it were up to robots to make medical diagnoses or to decide who gets a loan or a job? On what basis and by what right will we assign the power to make decisions that affect our future?

These are the types of questions that will be addressed by the Forum on the Socially Responsible Development of Artificial Intelligence, organized by Université de Montréal in partnership with the Fonds de recherche du Québec.

Taking place November 2 and 3 at the Palais des congrès de Montréal, the event will bring together leading Canadian and U.S. experts in fields ranging from pure science to the humanities and social sciences.

They will explore issues raised in several key areas affected by rapid developments in AI:

- cybersecurity;
- legal and social responsibility in AI development;
- the moral psychology of AI;
- reactions to AI in the labour market, educational services, healthcare and the legal system, as well as the effect on the organization of "smart cities."

Guidelines for innovation

This is not the first time society has faced technological innovations that force people to address fundamental ethical issues. One need only think of nuclear energy and genetic and genomic research, whose social and ethical implications have been the subject of much debate.

But according to UdeM philosophy professor Marc-Antoine Dilhac, a member of the Forum’s organizing committee, AI is something else altogether.

“What’s different about artificial intelligence is that we are dealing with machines and algorithms that are increasingly capable of simulating human intelligence and making decisions beyond human control,” said Dilhac, holder of the Canada Research Chair in Public Ethics.

The goal of the Forum is to propose guidelines to help ensure the responsible development of AI and to make it an engine for social progress and a guarantor of equality and justice. One of the main issues examined by the Forum will be the notion of responsibility.

As Dilhac put it: “Who will be responsible when a machine makes a good or bad decision? This is a key question to which we don’t yet have the answer, and we need to engage everyone in the debates emerging from this Forum so that the different stakeholders can agree on the best way forward.”

To that end, experts and ethicists taking part in the Forum will propose ethical guidelines aimed at avoiding the pitfalls of a development approach that fails to take social and ethical considerations into account.

“We need to launch a broad discussion on the ethical questions raised by AI’s growth because, to the extent that it will impact all our lives, we have to make sure that its development is aligned with democratic imperatives and that the public has the right to weigh in on this new world being built for them," said Dilhac.

Towards a declaration

To help foster dialogue with the public in the coming months, the Forum will draft a document called the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Chaired by Christine Tappolet, a philosophy professor and director of UdeM's Centre de recherche en éthique, an interdisciplinary committee including professors Yoshua Bengio and Pascale Lehoux will propose the main principles for the Montreal Declaration.

These principles will serve as the basis for an ongoing process of reflection on the issues.

“The declaration will be a starting point and it will remain open-ended,” explained Dilhac. “Our aim is to launch a medium- and long-term dialogue that will evolve in accordance with new innovations and the ongoing debate. It will be supported by a platform and by research groups that will help carry the discussion forward.”

The organizers believe this open approach will lend greater legitimacy to the Montreal Declaration, as it will be based on a process of democratic drafting and be informed by experts who share and promote common values: research transparency, a responsible approach to technological innovations, universality, social progress and knowledge-sharing.

“We have all the ingredients we need for a truly democratic and ethically framed enterprise,” Dilhac concluded. “And, in this sense, the Declaration’s consultative and collective nature will help guide public and political decision-makers.

"We want the Declaration to initiate a broad dialogue that will have legislative, regulatory and legal impacts – as was the case, for example, with genetic research and the issue of euthanasia.”

(Interview by Martin LaSalle)
