All posts in: Software

Inspiring Auteurism in 360° and VR Film

Recently whipped up a new talk, based on my work at Facebook designing tools for immersive media creators. I delivered the talk, titled Auteurism in 360°, at FITC's Spotlight AR/VR 2017 conference in Toronto.

"Film cannot be art, for it does nothing but reproduce reality mechanically." — Rudolf Arnheim in 1933, opening his book Film as Art by quoting criticism from his contemporaries.

By the early-to-mid 20th century, the artistic merit of film had yet to be fully recognized. This mirrors the present state of 360 media and VR filmmaking: we have yet to find artistic equivalents to classical cinematic techniques (framing, montage, transition, zoom, synchresis, etc.) in immersive filmmaking. "Auteurism in 360 Degrees" unpacks methodologies for translating the styles, standards, and practices of 2D filmmaking into 360 media and VR film, all from the perspective of a Facebook Product Designer actively working on 360 media editing and sharing tools.

Speaking about VR Interfaces at FITC

I brought my FutureType talk (Using the History of Typography to Inform the Future of UI) to the FITC Conference 2017 in Toronto.

With the rising popularity of immersive media, design thinkers need to rethink UI standards around a new modality. As communication becomes more gestural, more conversational, and less typographic, we will need to ensure it does not become abstract. In a world threatened by fake news, ensuring a future of clear, nuanced, and truthful communication will be incredibly important. This talk unpacks a condensed history of typography and argues that this history can serve as a jumping-off point for designing the future of user interfaces. We will discuss how to reconsider typographic history—and the nuances of phonetic letterforms—when inventing new user interfaces for a future of modular, invisible, and alternate-reality devices.

Digitizing Facial Movement During Singing

Original concept: a machine that uses flex sensors to detect movement and change in facial muscles/mechanics during phonation, specifically singing. The sensor data can then be reverse-engineered for output.

Why? A fascination with the physics of sound, the origin of the phonetic alphabet (a Phoenician/Greek war tool later adapted by the Romans), and the mechanics of the voice (much facial movement/recognition research at the moment leans toward expression rather than sound generation).

Found two exceptional pieces covering not just the muscles of the face but the muscular and structural mechanics of speech, plus two solid journal articles about digitizing facial tracking. After reading the better part of The Mechanics of the Human Voice, and being inspired by its Physiology of Phonation chapter, we decided to develop a system of sensors to log the muscle position and contraction of singers during different pitches […]
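The logging side of that sensor system could be sketched roughly like this — a minimal, hypothetical structure (the sensor names, pitch labels, and raw ADC-style values below are illustrative, not from the actual build) that accumulates flex-sensor readings per pitch and reports a per-sensor average:

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class SensorLog:
    """Accumulates flex-sensor readings, grouped by sung pitch.

    readings[pitch][sensor] -> list of raw flex values
    (arbitrary ADC units from a hypothetical microcontroller).
    """
    readings: dict = field(default_factory=dict)

    def record(self, pitch: str, sensor: str, value: float) -> None:
        # Append one raw reading for this pitch/sensor pair.
        self.readings.setdefault(pitch, {}).setdefault(sensor, []).append(value)

    def average(self, pitch: str, sensor: str) -> float:
        # Mean flex for one sensor while the singer held one pitch.
        return mean(self.readings[pitch][sensor])


# Illustrative use: three readings from a sensor placed over the
# zygomaticus while the singer holds A4.
log = SensorLog()
for v in (512, 530, 521):
    log.record("A4", "zygomaticus", v)
print(log.average("A4", "zygomaticus"))  # 521
```

In practice the `record` calls would be fed by whatever serial protocol the flex-sensor board speaks; the point here is only the shape of the per-pitch log.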