Build It Break It: Measuring and Comparing Development Security
Friday, October 23, 2015, 11:00 am-12:00 pm
Abstract

There is currently little evidence about which tools, methods, processes, and languages lead to secure software. To address this problem, we designed the Build-it, Break-it, Fix-it secure programming contest. Contest participants first build software they intend to be secure; other participants then attempt to break that software, testing whether it really is secure. The software is ultimately scored on quality metrics such as performance, correctness, and security, and teams are also scored on their ability to find bugs. We hope that data collected during the contest (e.g., development repository snapshots, surveys, outcomes) will help us better understand what works and what doesn't in secure development. We also hope that the contest experience provides educational value to participants, particularly through the immediate feedback of the adversarial setting. We present preliminary results from runs of the contest demonstrating that it works as designed and provides the desired data.
This is joint work with Andrew Ruef, James Parker, and Piotr Mardziel (current and former graduate students), and Dave Levin, Atif Memon, and Jan Plane. More on the contest can be found at https://builditbreakit.org, and a description of the contest design is at http://www.cs.umd.edu/~mwh/papers/ruef15bibifi.html.

This talk is organized by Jeff Foster.