Course Description

CS595D is a graduate computer science seminar exploring topics in AI safety and bias in machine learning. Both are fundamental problems in AI research with far more open questions than answers. Machine learning systems are now deployed worldwide, classifying data that affects real people every day. This year the EU passed a "right to explanation" law that takes effect in 2018 and will apply to all companies operating in Europe (including Google, Facebook, and others). Topics will include distributional shift, scalable supervision, interpretable machine learning, and more.

Course Organizers

Class Time and Location

Fall quarter (September - December 2016).
Time: Thursdays 4:00-5:00
Location: HFH 1132

Office Hours

Just email me to meet!

Contact Info



Event Type | Date | Description | Course Materials
Seminar | September 22, 2016 | Logistics |
Seminar | September 29, 2016 | Concrete Safety | [Concrete Problems in AI Safety]
Seminar | October 6, 2016 | TBD | [Concrete Problems in AI Safety] [EU Right to Explanation]
Seminar | October 13, 2016 | ML Systems | [Hidden Technical Debt in Machine Learning Systems]
Seminar | October 20, 2016 | Overfitting Datasets | [Unbiased Look at Dataset Bias]
Seminar | October 27, 2016 | ML Security | [Stealing Machine Learning Models via Prediction APIs]
Seminar | November 3, 2016 | TBD | [Intriguing Properties of Neural Networks]
Seminar | November 10, 2016 | Unbiasing ML | [Equality of Opportunity in Supervised Learning]
Seminar | November 17, 2016 | Cyber-Physical ML | [DeepMPC: Learning Deep Latent Features for Model Predictive Control]
Cancelled | November 24, 2016 | Thanksgiving! |
Seminar | December 1, 2016 | Adversarial Deep Learning | [Quick Survey] [Explaining and Harnessing Adversarial Examples]