Human-AI Integration
Jun Rekimoto - The University of Tokyo, Sony Computer Science Laboratories https://lab.rekimoto.org/
IRB 4105 or https://umd.zoom.us/j/94009626389?pwd=ISwA4IU3ylBVrp5dWaENxDG7BEcIYb.1
Monday, June 23, 2025, 2:00-3:00 pm
Abstract

While traditional HCI (Human-Computer Interaction) focuses on the interface between humans and machines, I advocate for Human Augmentation: humans enhanced by technology. The scope of augmentation extends beyond intellectual enhancement to encompass sensory, cognitive, physical, and existential dimensions. It also extends beyond individual humans toward a future society called the IoA (Internet of Abilities), in which people and technology merge over networks and complementarily enhance each other's capabilities, leading to a world of Human-AI Integration where AI becomes integrated with human abilities. I will introduce examples such as silent speech, which enables communication without vocalization, and skill transmission systems in which human vision merges with real-world agents, and discuss the future relationship between humans and technology.

Bio

Jun Rekimoto, Ph.D.

Professor, Interfaculty Initiative in Information Studies at The University of Tokyo

Fellow and Chief Science Officer, Sony Computer Science Laboratories, Inc.

ACM SIGCHI Academy

Rekimoto’s research interests include Human-Computer Interaction (HCI), Human Augmentation, the Internet of Abilities (IoA), and Human-AI Integration. He has pioneered several innovative projects, such as NaviCam, the world’s first hand-held AR system, and CyberCode, the first marker-based AR system. His SmartSkin established multitouch interaction, a method now indispensable in today’s smartphones, tablets, and other interactive devices.

This talk is organized by Samuel Malede Zewdu