Unconditional UC-Secure Computation with (Stronger-Malicious) PUFs
Saikrishna Badrinarayanan - UCLA
Friday, November 17, 2017, 12:00-1:00 pm

In this talk, we explore the feasibility of UC-secure computation using trusted hardware as setup - specifically, we focus on physically unclonable functions (PUFs). Brzuska et al. (Crypto 2011) proved that unconditional UC-secure computation is possible if parties have access to honestly generated PUFs. Dachman-Soled et al. (Crypto 2014) then showed how to obtain unconditional UC-secure computation based on malicious PUFs, assuming such PUFs are stateless. They also showed that unconditional oblivious transfer is impossible against an adversary that creates malicious stateful PUFs.


In this talk, we show how to go beyond this seemingly tight result by allowing any adversary to create stateful PUFs with a-priori bounded state. This relaxes the restriction on the power of the adversary (limited to stateless PUFs in previous feasibility results), thereby achieving improved security guarantees. This is also motivated by practical scenarios, where the size of a physical object may be used to compute an upper bound on the size of its memory.
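As a toy illustration only (not the paper's construction, and with all class and variable names hypothetical), one might model an honestly generated PUF as a fixed keyed random function, and a bounded-state malicious PUF as a device that may keep adversarial memory between queries, but never more than an a-priori bound:

```python
import hashlib
import os


class HonestPUF:
    """Idealized stateless PUF: a fixed random function (toy model)."""

    def __init__(self):
        # Stands in for the unclonable physical randomness of the device.
        self._key = os.urandom(32)

    def eval(self, challenge: bytes) -> bytes:
        # Stateless: the same challenge always yields the same response.
        return hashlib.sha256(self._key + challenge).digest()


class BoundedStateMaliciousPUF:
    """Malicious PUF whose adversarial memory is a-priori bounded to `bound` bytes."""

    def __init__(self, bound: int):
        self.bound = bound
        self._state = b""

    def eval(self, challenge: bytes) -> bytes:
        # The adversary may update its state on each query (here: remembering
        # a suffix of the challenges seen), but the state can never exceed
        # the a-priori bound, unlike an unboundedly stateful PUF.
        self._state = (self._state + challenge)[-self.bound:]
        assert len(self._state) <= self.bound
        return hashlib.sha256(self._state + challenge).digest()
```

The point of the bound is visible in the last two lines of `eval`: no matter how many queries the device answers, its memory stays within the fixed limit derived from its physical size.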


We then introduce a new security model where any adversary is allowed to generate a malicious PUF that may encapsulate other (honestly generated) PUFs within it, such that the outer PUF has oracle access to all the inner PUFs. This is again a natural scenario, and in fact similar adversaries have been studied in the tamper-proof hardware-token model (e.g., Chandran et al. (Eurocrypt 2008)), but no such notion had previously been considered with respect to PUFs. All previous constructions of UC-secure protocols suffer from explicit attacks in this stronger model.
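The encapsulation model can also be sketched in the same toy style (again hypothetical names, not the paper's formalism): the malicious outer PUF holds the honest inner PUFs and may query them as black-box oracles when computing its own responses, but cannot open them or read their internal randomness.

```python
import hashlib
import os


class InnerPUF:
    """Honestly generated PUF, modeled as a keyed random function (toy model)."""

    def __init__(self):
        self._key = os.urandom(32)

    def eval(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._key + challenge).digest()


class EncapsulatingPUF:
    """Malicious outer PUF that encapsulates honest PUFs inside it.

    It may only *query* the inner PUFs (oracle access); it never reads
    their internal keys.
    """

    def __init__(self, inner_pufs):
        self._inner = inner_pufs

    def eval(self, challenge: bytes) -> bytes:
        # One adversarial strategy: chain the challenge through every
        # encapsulated PUF and return the final inner response.
        response = challenge
        for puf in self._inner:
            response = puf.eval(response)  # oracle query to an inner PUF
        return response
```

Because the outer device's answers can depend on honestly generated inner PUFs in this way, protocols proven secure only against stand-alone malicious PUFs can fail here, which is the kind of explicit attack the talk refers to.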


The talk is based on joint work with Dakshita Khurana, Rafail Ostrovsky, and Ivan Visconti, and on the following paper: https://eprint.iacr.org/2016/636.

This talk is organized by Octavian Suciu