Renowned U.S. cryptographer coming to UdeM

Affiliated with Harvard University and the tech company co-founded by the inventor of the World Wide Web, Bruce Schneier will give a free public lecture this Thursday at the MIL campus.

Bruce Schneier, an American cryptographer, academic and blogger renowned for his global expertise in artificial intelligence and computer security technology, is coming to Université de Montréal. 

The 63-year-old author of 14 books, including the bestseller A Hacker's Mind, will give a free public lecture this Thursday in the auditorium of the MIL Campus, also livestreamed on Zoom. (Pre-registration is mandatory.) 

His subject: "the coming AI hackers." 

Schneier lives in Cambridge, Mass., where he is a fellow at Harvard University's Berkman Klein Center for Internet & Society, and works for the Boston tech company Inrupt Inc., co-founded by World Wide Web inventor Sir Timothy John Berners-Lee. 

His lecture – given in English with simultaneous interpretation in French – is organized by UdeM's International Centre for Comparative Criminology, in collaboration with the Centre for Public Law Research and the L. R. Wilson Chair. 

On the eve of coming to speak here, Schneier gave a glimpse of what to expect. 

Questions & Answers

As a self-described "public-interest technologist," you've warned that artificial intelligence systems have begun to exploit vulnerabilities in our social, economic and political systems. How so? 

AI systems are natural hackers, in that they don't have our human intuitions as to what is acceptable or not. If you give these systems a problem, they will naturally think "out of the box" because they don't have a natural conception of what the "box" is. This makes them creative and effective problem solvers, which is important in many respects. But it does mean they will naturally find loopholes in our societal systems: social, economic and political. My fear is that our systems have a lot of loopholes, mostly because we humans are sloppy with our language and can't think of everything, and that AI systems will find them. And then humans will exploit them, just as we exploit loopholes in tax laws, corporate regulations and all sorts of other systems of rules. 

Are the dangers more pronounced now with Donald Trump once again in the White House – and do they extend beyond the United States? 

The technologies of AI are inherently power enhancing; they give the humans who wield them more power. I just published a book on AI and democracy: Rewiring Democracy. In it, I give examples from all over the world of people using AI to make democracy more equitable, more responsive, more participatory, more democratic. In the hands of people who want more democracy, AI can help. But, equally, in the hands of people who want less democracy, AI can help as well. The dangers are the humans, not the technology. But the technology makes the humans more dangerous. 

What role do you see yourself continuing to play in this new reality? Can you imagine a day when a new Cassandra – this time, an AI one – takes over? 

I don't feel like a Cassandra. If you know the story, Cassandra foretells doom but no one believes her. And she gets killed at the end of the story! In some groups, I am the most positive person about the benefits of AI. In other groups, I am one of the most negative. This is a technology that will change many aspects of society. And while we can predict the near-term direct effects, it's very hard to predict the further-out effects, when many different changes in many different aspects of society are all interacting with each other. What we need to be is neither cheerleaders nor Cassandras, but realists – and be willing to regulate this industry in order to steer AI towards safer, better and more democratic outcomes. 