Ryan Lowe

Ph.D. Student, Reasoning & Learning Lab, McGill University

I am a Ph.D. student in Computer Science in the Reasoning & Learning Lab at McGill University, supervised by Joelle Pineau, and unofficially by Aaron Courville and Yoshua Bengio at MILA, University of Montreal. Most of my previous research involved building deep generative models for natural language, particularly in the context of dialogue systems. I'm generally interested in deep learning, reinforcement learning, natural language understanding, causal models, and multi-agent communication.

I sometimes like to think about the societal impacts of AI. I co-organize the Montreal AI Ethics Group with its founder, David Krueger, where we discuss these issues with AI researchers at McGill and the University of Montreal. I've also started writing an AI series for Graphite Publications to communicate machine learning basics to a wider audience. If you are interested in writing for the series, feel free to contact me.

Before coming to McGill, I did my undergraduate degree in Mathematics & Engineering at Queen's University. I have previously worked at the Institute for Quantum Computing, the Max Planck Institute for the Dynamics of Complex Technical Systems, and the National Research Council.

My CV can be found here.


News

  • I am currently interning at OpenAI, working with Igor Mordatch and Pieter Abbeel on multi-agent communication with reinforcement learning.
  • I gave a talk at the Stanford NLP Group and Google Brain on "The Problems with Neural Chatbots". You can find the slides here.
  • An OpenAI blog post detailing some of our ongoing work on multi-agent communication is now live, and can be found here.
  • I will be giving a talk at the RE-WORK Machine Intelligence Summit in San Francisco this Friday, presenting recent multi-agent communication work at OpenAI.

Publications


    Under Review

    Ryan Lowe, Michael Noseworthy, Iulian Serban, Nicolas Angelard-Gontier, Yoshua Bengio, Joelle Pineau.
    "Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses."

    Peter Benner, Ryan Lowe, Matthias Voigt.
    "L∞-Norm Computation for Large-Scale Descriptor Systems Using Structured Iterative Eigensolvers."
    Numerical Algorithms, under review.

    Iulian Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau.
    "A Survey of Available Corpora for Building Data-Driven Dialogue Systems."
    Dialogue & Discourse, under review.


    2017

    Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio.
    "An Actor-Critic Algorithm for Sequence Prediction."
    In International Conference on Learning Representations (ICLR), 2017.
    [paper] [code]

    Ryan Lowe, Nissan Pow, Iulian Serban, Laurent Charlin, Chia-Wei Liu, Joelle Pineau.
    "Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus."
    In Dialogue & Discourse, 2017.

    Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, Yoshua Bengio.
    "A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues."
    In Association for the Advancement of Artificial Intelligence (AAAI), 2017.
    [paper] [code]


    2016

    Iulian Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau.
    "Generative Deep Neural Networks for Dialogue: A Short Review."
    In NIPS Workshop on Learning Methods for Dialogue, 2016.

    Chia-Wei Liu*, Ryan Lowe*, Iulian Serban*, Mike Noseworthy*, Laurent Charlin, Joelle Pineau. [* equal contribution]
    "How NOT to Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation."
    In Empirical Methods in Natural Language Processing (EMNLP), 2016. [Oral]

    Teng Long, Ryan Lowe, Jackie Cheung, Doina Precup.
    "Leveraging Lexical Resources for Learning Entity Embeddings in Multi-Relational Data."
    In Association for Computational Linguistics (ACL, short paper), 2016.

    Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, Joelle Pineau.
    "On the Evaluation of Dialogue Systems with Next Utterance Classification."
    In Proceedings of SIGDIAL (short paper), 2016.

    Emmanuel Bengio, Pierre-Luc Bacon, Ryan Lowe, Joelle Pineau, Doina Precup.
    "Reinforcement Learning of Conditional Computation Policies for Neural Networks."
    In ICML Workshop on Abstractions in Reinforcement Learning, 2016. [Oral]


    2015

    Ryan Lowe*, Nissan Pow*, Iulian Serban, Joelle Pineau. [* equal contribution]
    "The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems."
    In Proceedings of SIGDIAL, 2015. [Oral]
    [paper] [code] [dataset] [slides]

    Ryan Lowe, Nissan Pow, Iulian Serban, Laurent Charlin, Joelle Pineau.
    "Incorporating Unstructured Textual Knowledge into Neural Dialogue Systems."
    In NIPS Workshop on Machine Learning for Spoken Language Understanding, 2015.

    < 2015

    Peter Benner, Ryan Lowe, Matthias Voigt.
    "Computation of the H∞-Norm for Large-Scale Systems."
    Numerical Solution of PDE Eigenvalue Problems Workshop, Oberwolfach, Germany, pp. 3289-3291, 2013.
    [paper] [slides]

    Peter Benner, Ryan Lowe, Matthias Voigt.
    "Numerical Methods for Computing the H∞-Norm of Large-Scale Descriptor Systems."
    Householder Symposium XIX, Spa, Belgium, pp. 248-249, 2013.


    Datasets

    The Ubuntu Dialogue Corpus v2

    The Ubuntu Dialogue Corpus v2 is an updated version of the original Ubuntu Dialogue Corpus. It was created in conjunction with Rudolf Kadlec and Martin Schmid at IBM Watson in Prague. The updated version has the training, validation, and test sets split disjointly by time, which more closely models real-world applications. It has a new context sampling scheme to favour longer contexts, a more reproducible entity replacement procedure, and some bug fixes.

    You can download the Ubuntu Dialogue Corpus v2 here.
    Code to replicate the results from the paper is available here.
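As a rough illustration of how the corpus is typically consumed (this sketch assumes a CSV layout with `Context`, `Utterance`, and `Label` columns and `__eot__` as the end-of-turn separator; check the download's README for the exact format), the data can be parsed with Python's standard `csv` module:

```python
import csv
import io

# Hypothetical in-memory sample mimicking the assumed v2 layout: each row
# holds a multi-turn context (turns separated by __eot__), a candidate
# response, and a binary label (1 = true next utterance, 0 = distractor).
sample = io.StringIO(
    "Context,Utterance,Label\n"
    "anyone know how to mount a usb drive ? __eot__ which ubuntu version ? __eot__,16.04 __eot__,1\n"
    "anyone know how to mount a usb drive ? __eot__ which ubuntu version ? __eot__,try rebooting __eot__,0\n"
)

pairs = []
for row in csv.DictReader(sample):
    # Split the flat context string back into a list of dialogue turns.
    turns = [t.strip() for t in row["Context"].split("__eot__") if t.strip()]
    pairs.append((turns, row["Utterance"], int(row["Label"])))

print(len(pairs))                 # 2
print(pairs[0][2], pairs[1][2])   # 1 0
```

For the real dataset, replace the in-memory sample with an open file handle for `train.csv`; positive and negative examples share the same context, differing only in the candidate response and label, which is what makes the corpus suitable for next-utterance classification.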

    The Ubuntu Dialogue Corpus v1

    The Ubuntu Dialogue Corpus v1 is a dataset consisting of almost 1 million dialogues extracted from the Ubuntu IRC chat logs. This dataset has several desirable properties: it is very large, each conversation has multiple turns (a minimum of 3), and it is formed from chat-style messages (as opposed to tweets). There is also a very natural application to technical support. The size of this dataset makes it a great resource for training dialogue models, particularly neural architectures.

    Note that this dataset is outdated; please use the Ubuntu Dialogue Corpus v2.

    Talks & Media

    Technical Talks

    The Problem with Neural Chatbots [slides]
    Sequence Generation and Dialogue Evaluation [slides]
    • OpenAI — San Francisco, United States, December 2016
    Modern Challenges in Learning End-to-End Dialogue Systems [slides]
    An Actor-Critic Algorithm for Sequence Prediction [slides]
    How Not To Evaluate Your Dialogue System [slides]
    The Ubuntu Dialogue Corpus [slides]

    Non-Technical Talks

    McGill Reasoning & Learning Lab: Research Overview [slides]
    Humanity in the Age of the Machines [slides]

    In the Media

    "Learning to Communicate."
    Igor Mordatch, Pieter Abbeel, Ryan Lowe, Jon Gauthier, Jack Clark.
    OpenAI Blog, 2017.

    "Artificial Intelligence Has Already Taken Over."
    Ryan Lowe.
    Graphite Publications, 2016.

    "How Machines Learn."
    Ryan Lowe.
    Graphite Publications, 2016.

    Website design replicated with permission from Dustin Tran.