An experiment in puppeteering a UR5 robot through the browser
I was invited to participate in a residency at The Studio for Creative Inquiry at Carnegie Mellon University during the Spring of 2019, where I received a grant to support the development of Handsfree.js. What you’re seeing in the video above is me mirroring my head movements on a Universal Robots UR5 arm.
While working on NextCheckbox.com in 2018, I befriended an individual at a homeless shelter who was recovering from a severe stroke. The stroke had left him unable to communicate effectively with friends and family: he couldn’t speak clearly on the phone, nor could he type or text. This inspired me to begin working on a “Face Pointer”.
Using computer vision, I started a library that estimates a user’s head pose through the browser’s webcam and uses it to place a pointer on the screen. The user can then smile, raise an eyebrow, or perform any other custom gesture to trigger clicks. The project received numerous awards, including a grant from Google PAIR, a residency at The Studio for Creative Inquiry at Carnegie Mellon University, a School of AI Fellowship, a grant from Glitch.com, and support through GoFundMe.
Later versions of the library included support for body pose estimation and hand gesture recognition: