Combining Crowdsourcing and Computer Vision for Street-level Accessibility
Friday, October 17, 2014, 11:00 am-12:00 pm
Abstract

Roughly 30.6 million individuals in the US have physical disabilities that affect their ambulatory activities; nearly half of those individuals report using an assistive aid such as a wheelchair, cane, crutches, or walker. Despite comprehensive civil rights legislation for Americans with disabilities, many city streets, sidewalks, and businesses remain inaccessible. The problem is not just that street-level accessibility affects where and how people travel in cities but also that there are few, if any, mechanisms to determine accessible areas of a city a priori.

In my talk, I will describe our two-pronged approach to this problem: first, developing scalable data collection methods for acquiring sidewalk accessibility information using a combination of crowdsourcing, computer vision, and online map imagery; and second, using this new data to design, develop, and evaluate a novel set of navigation and map tools for accessibility. Our overarching goal is to transform the ways in which accessibility information is collected and visualized for every sidewalk, street, and building façade in the world. This work is in collaboration with Professor David Jacobs and graduate students Kotaro Hara and Jin Sun.

In the last portion of the talk, I will also describe other active projects in my group, focused primarily on interactive physical computing (e.g., wearables, e-textiles), as well as potential collaboration opportunities.

This talk is organized by Jeff Foster.