From Greece to Montreal, the amazing journey of Ioannis Mitliagkas

  • Forum
  • 04/25/2018

  • Catherine Mathys

In 5 seconds

Raised in western Macedonia, he came to UdeM (via Texas and Stanford) to study and teach deep-learning models at MILA under Yoshua Bengio.

Ioannis Mitliagkas has a mission. As a newly minted professor of computer science and operations research at UdeM's Montreal Institute for Learning Algorithms, he wants to demystify how artificial neurons function.

"We know how to do certain things but we still don't know exactly why or when something will work or stop working," says Mitliagkas, who started here last September.

"Much more theory is required to understand what’s happening."

Mitliagkas has always been interested in how things work. As a child growing up in Greece, he took apart old TVs so he could use the parts in his electricity experiments.

"My parents always supported my curiosity and they let me play with wires and batteries," he recalls.

At UdeM, he has barely had a free moment since arriving from Stanford University, where he did postdoctoral studies – the sparseness of his office testifies to his lack of free time.

The brainchild of Yoshua Bengio, MILA represented an obvious next step for Mitliagkas. "That’s where everything in my field is happening these days," he says. 

He didn't always see himself headed for academia. In his hometown of Kozani, in western Macedonia, Mitliagkas never imagined he'd one day become a university professor – even less so in Montreal.

"I always thought doing a doctorate or teaching abroad was reserved for exceptional people," he says modestly.

It was an undergraduate professor in Crete who first planted the seed of higher learning that eventually took Mitliagkas to the University of Texas at Austin for his doctoral work, and then on to prestigious Stanford University in California.

Serving the public good

As a student, Mitliagkas ruled out working in the private sector after he graduated. "I knew that the role, mission and responsibility of public organizations were to serve the common good. And that’s what I liked."

In a field where industry demand is high and qualified candidates are scarce, choosing an academic career can be a difficult decision. But it wasn't for him. Computer science, he believes, allows researchers to find technical explanations for phenomena that have long remained elusive.

In his research, Mitliagkas uses applied statistics and information theory to better understand why some deep-learning models work surprisingly well on data they've never been exposed to before. Information theory, developed by American mathematician Claude Shannon in the 1940s, is a mathematical framework that describes the transmission of information in terms of probability. For Mitliagkas, the theory provides a good tool for measuring the information content of random variables.
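
To give a rough sense of that idea, here is a minimal Python sketch (an illustration added for readers, not Mitliagkas's own code) that computes the Shannon entropy – the information content, in bits – of a simple random variable:

    # Illustrative sketch: Shannon entropy as a measure of the
    # information content of a discrete random variable.
    import numpy as np

    def entropy(probs):
        """Shannon entropy in bits of a discrete distribution."""
        probs = np.asarray(probs, dtype=float)
        probs = probs[probs > 0]          # ignore zero-probability outcomes
        return -np.sum(probs * np.log2(probs))

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
    print(entropy([0.9, 0.1]))   # biased coin: ~0.47 bits

A fair coin flip carries exactly one bit of information; a heavily biased coin carries less, because its outcome is more predictable.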

In deep learning, artificial neurons become transmission channels, and "we’re looking at how information spreads across the different layers of the neural network," the researcher says. Although that approach is not new, it is now being applied to better understand system behaviour.
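
As a hypothetical illustration of that approach – a sketch under simple assumptions, not the researcher's actual method – one crude way to gauge how much information a layer's activations carry about the network's input is a histogram-based estimate of mutual information:

    # Illustrative sketch: histogram-based mutual information between an
    # input signal and a hypothetical hidden-layer activation (in bits).
    import numpy as np

    def binned_mutual_information(x, h, bins=10):
        """Estimate MI between two 1-D signals by discretizing into bins."""
        joint, _, _ = np.histogram2d(x, h, bins=bins)
        p_xy = joint / joint.sum()
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_h = p_xy.sum(axis=0, keepdims=True)
        nonzero = p_xy > 0
        return np.sum(p_xy[nonzero] * np.log2(p_xy[nonzero] / (p_x @ p_h)[nonzero]))

    # Toy example: a "layer" that is a noisy, squashed copy of its input.
    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    h = np.tanh(x) + 0.1 * rng.normal(size=5000)   # hypothetical activation
    print(binned_mutual_information(x, h))

The estimate shrinks as the noise grows, capturing in miniature the question of how much of the input's information survives passage through each layer.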

Sometimes it's a good idea to use traditional theories to explore modern issues. That’s one of the things Mitliagkas is doing with his graduate class this winter. As part of his course on topics in artificial intelligence, he helps students decode the results of recent studies using traditional theories drawn from optimization, statistics and information theory.

The class work inspires him more than anything else. Besides looking for solutions to problems in deep learning, what he likes most is working closely with students. "What I want to do is convey that same sense of excitement and curiosity I feel, and to share my knowledge with my students," he says.

And there's one last challenge, too: language. Although he's teaching in English for now, he has also started to learn French – and hopes to be able to teach his francophone students in their own language soon.