Contesting Secure Software Development
Thursday, June 11, 2020, 12:00-1:00 pm
Abstract

With the ongoing, frequent disclosure of the existence and exploitation of security vulnerabilities, one might wonder: How can we build software that is more secure? In an attempt to focus educational attention on this question, and to gather empirical evidence at the same time, we developed the Build it, Break it, Fix it (BIBIFI) security-oriented programming contest. In BIBIFI, teams aim to build specified software that should be correct, efficient, and secure. These goals mimic those of the real world. Security is tested when teams attempt to break other teams' submissions. Winners are chosen from among the best builders and the best breakers. BIBIFI was designed to be open-ended — teams can use any language, tool, process, etc. that they like.

We ran three 6-week contests involving a total of 156 teams from across the world and three different programming problems. Most participants had prior development experience and security education. Quantitative analysis of these contests found several interesting trends. For example, the most efficient build-it submissions used C/C++, but submissions coded in a statically type-safe language were 11× less likely to have a security flaw than C/C++ submissions. A manual, in-depth qualitative analysis (using iterative open coding) of the vulnerabilities in 76 of these projects also revealed interesting trends. For example, the analysis found that simple mistakes were least common: only 26% of projects introduced such an error. Conversely, vulnerabilities arising from a misunderstanding of security concepts were significantly more common: 84% of projects introduced at least one such error. Overall, our results have implications for improving secure-programming language choices, API designs, API documentation, vulnerability-finding tools, and security education.

This is joint work with James Parker, Andrew Ruef, Dan Votipka, Kelsey Fulton, Matthew Hou, Michelle Mazurek, and Dave Levin, all at the University of Maryland.

Zoom: https://umd.zoom.us/my/mhicks2

Bio

Michael Hicks is a professor at the University of Maryland. He had a mustache for a while, but he has since shaved it.

This talk is organized by Mike Hicks.