Major funding to research the positive long-term impact of AI

In 5 seconds

The Open Philanthropy Project provides $2.4 million U.S. to MILA.

The Montreal Institute for Learning Algorithms has just been awarded a $2.4 million U.S. research grant to try to make artificial intelligence safer for society.

The funding comes from the Open Philanthropy Project, a U.S. organization that identifies outstanding giving opportunities, makes grants, follows the results, and publishes its findings. The Open Philanthropy Project’s main funders are Cari Tuna and Dustin Moskovitz, a co-founder of Facebook and Asana, who are looking to give their fortune away in their lifetimes, and as effectively as possible, in order to help humanity thrive.

The $2.4 million U.S. will be spread over four years. Of the total, $1.6 million will go to Professor Yoshua Bengio and his team at Université de Montréal's computer science department, while $800,000 will support their MILA colleagues at McGill University, computer science professors Joelle Pineau and Doina Precup.

“We see Professor Bengio’s research group as one of the world’s preeminent deep learning labs and are excited to provide support for it to undertake AI safety research,” said Daniel Dewey, the Open Philanthropy Project’s program officer for potential risks from advanced artificial intelligence. “Many of the most talented machine learning researchers spend some time in Professor Bengio’s lab before joining other universities or industry groups.”

The grant to MILA is intended to support research to improve the positive long-term impact of AI on society, and is renewable when it expires in 2020. “Given that AI safety research is a relatively new area, we think it is particularly valuable to keep potential research options flexible,” Dewey said.

For Bengio, the process has just begun. He’s looking down the road to the next decade.

“This grant looks at several important long-term questions about building safer AIs, from allowing computers to better understand the consequences of their actions to helping them learn about human moral judgements,” he said. “That’s something that remains hard to formalize even for philosophers and psychologists, so it will probably take more than four years to get there. But it’s important to start now.”

Media contact