SecureML: A System for Scalable Privacy-Preserving Machine Learning
Yupeng Zhang - UMD
Friday, December 9, 2016, 11:00 am-12:00 pm
Abstract

Machine learning is widely used in practice to produce predictive models for applications such as image processing and speech and text recognition. These models are more accurate when trained on large amounts of data collected from different sources, but such massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression, and neural network training using the stochastic gradient descent method. Our protocols fall within the two-server model, where data owners distribute their private data among two non-colluding servers, which train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to nonlinear functions such as sigmoid and softmax that are superior to prior constructions.
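Two of the abstract's building blocks, additive secret sharing of fixed-point "decimal" values and local truncation of shared products, are concrete enough to sketch. Below is a minimal single-process C++ sketch, not the authors' code: it simulates both servers and the offline dealer in one program, works over the ring Z_{2^64} (unsigned 64-bit wraparound), and the names (FRAC, Shares, mul, truncate) and the choice of 13 fractional bits are illustrative assumptions. The piecewise-linear sigmoid replacement at the end follows the shape proposed in the paper, shown in the clear for readability.

```cpp
#include <cmath>
#include <cstdint>
#include <iostream>
#include <random>

// Fixed-point encoding: reals scaled by 2^FRAC, stored in the ring Z_{2^64}.
constexpr int FRAC = 13;  // illustrative choice of fractional bits

uint64_t encode(double x) {
    return static_cast<uint64_t>(static_cast<int64_t>(std::llround(x * (1ULL << FRAC))));
}
double decode(uint64_t v) {
    return static_cast<double>(static_cast<int64_t>(v)) / static_cast<double>(1ULL << FRAC);
}

// Additive secret sharing: secret == p0 + p1 (mod 2^64); each share alone
// is uniformly random and reveals nothing about the secret.
struct Shares { uint64_t p0, p1; };

Shares share(uint64_t secret, std::mt19937_64& rng) {
    uint64_t s0 = rng();
    return {s0, secret - s0};  // unsigned wraparound implements mod 2^64
}
uint64_t reconstruct(Shares s) { return s.p0 + s.p1; }

// Multiplication of shared values via a Beaver triple. The dealer phase is
// simulated locally here; in a real deployment it runs offline.
Shares mul(Shares x, Shares y, std::mt19937_64& rng) {
    // Dealer: random triple with c = a * b over the ring.
    uint64_t a = rng(), b = rng(), c = a * b;
    Shares as = share(a, rng), bs = share(b, rng), cs = share(c, rng);

    // Online: the servers open the masked values e = x - a and f = y - b.
    uint64_t e = (x.p0 - as.p0) + (x.p1 - as.p1);
    uint64_t f = (y.p0 - bs.p0) + (y.p1 - bs.p1);

    // Local shares of z = x*y = c + e*b + f*a + e*f (e*f added by one server).
    return {cs.p0 + e * bs.p0 + f * as.p0 + e * f,
            cs.p1 + e * bs.p1 + f * as.p1};
}

// Local truncation in the style of SecureML: a fixed-point product carries
// 2*FRAC fractional bits, and each server shifts its own share (server 1
// via negation). With overwhelming probability the reconstruction is the
// truncated product, off by at most one unit in the last place.
Shares truncate(Shares z) {
    return {z.p0 >> FRAC,
            static_cast<uint64_t>(0) - ((static_cast<uint64_t>(0) - z.p1) >> FRAC)};
}

// The paper's MPC-friendly stand-in for the logistic sigmoid: piecewise
// linear, so inside the protocol it needs only comparisons and additions.
double mpc_friendly_sigmoid(double x) {
    if (x < -0.5) return 0.0;
    if (x >  0.5) return 1.0;
    return x + 0.5;
}

int main() {
    std::mt19937_64 rng(42);
    double xr = 1.25, yr = -0.75;

    Shares x = share(encode(xr), rng);
    Shares y = share(encode(yr), rng);

    Shares z = truncate(mul(x, y, rng));  // shared fixed-point product
    std::cout << "x*y  = " << decode(reconstruct(z))
              << "  (expected " << xr * yr << ")\n";
    std::cout << "f(x) = " << mpc_friendly_sigmoid(xr) << "\n";
}
```

Compiled with `g++ -std=c++17`, this prints a product within one fixed-point unit of 1.25 x -0.75. The real protocol differs in that the two shares live on different servers and the triples come from an offline precomputation phase.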

We implement our system in C++ and show that its efficiency is comparable to that of existing software for machine learning on plaintext data (e.g., TensorFlow) when the two servers are hosted in the same local area network. Our experiments show that our protocols are several orders of magnitude faster than state-of-the-art implementations of privacy-preserving linear and logistic regression, and that they scale to millions of data samples with thousands of features. We also implement the first privacy-preserving system for training neural networks.
This talk is organized by Mukul.