Machine Learning (COMP-652 and ECSE-608)
Winter 2017

Syllabus

General Information

Location: Trottier, room 0070
Times: Tuesday and Thursday, 1:05-2:25pm
Instructors: Doina Precup and Guillaume Rabusseau, School of Computer Science
Offices: McConnell Engineering building, rooms 111N and 104N (left from the elevators)
Phone: (514) 398-6443 (Doina)
Email: dprecup@cs.mcgill.ca and guillaume.rabusseau@mail.mcgill.ca
Office hours:
 
TBA (see course home page)
Meetings at other times by appointment only!
Class web page:
 
http://www.cs.mcgill.ca/~dprecup/courses/ml.html
IMPORTANT: This is where class notes, announcements and homeworks are posted!


Course Description

The goal of this class is to provide an overview of state-of-the-art algorithms used in machine learning. The field of machine learning is concerned with the question of how to construct computer programs that improve automatically with experience. In recent years, many successful applications of machine learning have been developed, ranging from data-mining programs that learn to detect fraudulent credit card transactions, to autonomous vehicles that learn to drive on public highways, to computer vision programs that can recognize thousands of different object types. At the same time, there have been important advances in the theory and algorithms that form the foundation of this field. During this course, we will study both the theoretical properties of machine learning algorithms and their practical applications.


Prerequisites

Basic knowledge of a programming language is required, as is knowledge of probability/statistics, calculus and linear algebra. Example courses at McGill providing sufficient background in probability are MATH-323 and ECSE-305. Some AI background, as provided for instance by COMP-424 or ECSE-526, is recommended but not required. If you have doubts regarding your background, please contact Doina to discuss it.


Reference Materials

There is no required textbook. However, there are several good machine learning textbooks describing parts of the material that we will cover. The schedule will include recommended reading, either from these books or from research papers, as appropriate. Lecture notes and other relevant materials are linked from the lectures web page. Assignments are linked from the assignments web page.


Class Requirements

The class grade will be based on the following components:

  1. Four assignments - 40%. The assignments will contain a mix of theoretical and experimental questions.
  2. A midterm in-class examination - 30%.
  3. A final project - 30%.
  4. Participation in class discussions - up to 1% extra credit.

Minor changes to the evaluation scheme (if any) will be announced in class by Thursday, January 12 (pending in-class discussion and the estimated total enrollment).


Homework Policy

Assignments should be submitted electronically through MyCourses on the day they are due. For late assignments, 10 points will be deducted from the grade for each late day, for the first 5 days. No credit is given for assignments submitted more than 5 days late, unless you have a documented problem.

All assignments are INDIVIDUAL! You may discuss the problems with your colleagues, but you must submit individual homeworks. Please acknowledge all sources you use in the homeworks (papers, code or ideas from someone else).

McGill University values academic integrity. Therefore all students must understand the meaning and consequences of cheating, plagiarism and other academic offenses under the Code of Student Conduct and Disciplinary Procedures (see www.mcgill.ca/students/srr/honest for more information).

In accord with McGill University's Charter of Students' Rights, students in this course have the right to submit in English or in French any written work that is to be graded.

In the event of extraordinary circumstances beyond the University's control, the content and/or evaluation scheme in this course is subject to change.


Tentative Course Content

  1. Introduction, linear models, regularization, Bayesian interpretation (3 lectures)
  2. Non-linear models and ensemble methods (3 lectures)
  3. Large-margin methods, kernel methods (2 lectures)
  4. Structured data (6 lectures): graphical models, deep belief networks
  5. Latent variables, unsupervised learning (2 lectures)
  6. Computational Learning Theory (2 lectures)
  7. Spectral methods (2 lectures)
  8. Midterm examination (1 lecture)
  9. Analyzing temporal and sequence data (3 lectures)
  10. Reinforcement learning (2 lectures)

IMPORTANT: This outline is subject to change. Up-to-date information about the course content and assigned readings will be posted on the schedule web page.