Parallel secure computation framework
Kartik Nayak
Abstract
Many machine learning algorithms operate on sensitive data and can reveal more than just the required model (age, sex, political affiliation, etc.). Secure computation can address this privacy problem, but in practice it is too slow to crunch big data. Moreover, most algorithms make data-dependent memory accesses, and making them oblivious naively requires ORAM, which is not practical. We introduce a technique that efficiently transforms a large class of graph-based algorithms (which includes many machine learning algorithms) into distributed oblivious versions with minimal communication overhead, thanks to parallelism. Because the algorithms are distributed, we can easily scale execution by adding a large number of machines to mine large amounts of data.
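Graph-parallel frameworks in this space typically express algorithms as per-edge "scatter" and per-vertex "gather" steps, which is what makes them amenable to oblivious, distributed execution. The abstract does not spell out this model, so the following is only an illustrative plain (non-secure, non-oblivious) sketch of the pattern; all names are hypothetical.

```python
def scatter_gather(edges, values):
    """One round of graph-parallel computation: each edge scatters its
    source vertex's value; each vertex gathers (sums) what it receives."""
    gathered = {v: 0 for v in values}
    for src, dst in edges:       # scatter: send value along each edge
        gathered[dst] += values[src]
    return gathered              # gather: per-vertex sum of contributions

# Tiny example graph: 0 -> 1, 0 -> 2, 1 -> 2, all vertex values 1
edges = [(0, 1), (0, 2), (1, 2)]
values = {0: 1, 1: 1, 2: 1}
print(scatter_gather(edges, values))  # {0: 0, 1: 1, 2: 2}
```

In an oblivious variant of this pattern, the scatter and gather steps would be replaced by data-independent operations (e.g., oblivious sorts over the combined vertex/edge list) so that memory accesses leak nothing about the graph structure.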
This talk is organized by Chang Liu.