
Designing Safe Spaces for Virtual Reality

Fellow Facebook AR/VR Experiences Design Team member Andrea Zeller (Content Strategist) and I recently wrote a chapter for an upcoming book on design ethics. The chapter instructs VR practitioners on how to fold consent ideology and body sovereignty into their VR design practice in order to foster safer, more inclusive virtual spaces and interactions.

Facebook Research Preview: Designing Safe Spaces for Virtual Reality (see below for the paper's abstract and citation).

Abstract

Virtual Reality (VR) designers accept the ethical responsibility of removing a user's entire world and superseding it with a fabricated reality. These unique immersive design challenges are intensified when virtual experiences become public and socially driven. As female-identifying VR designers in 2018, we see an opportunity to fold the language of consent into the design practice of virtual reality, as a means to design safe, accessible virtual spaces.

Chapter from: DeRosa, Andrew, and Laura Scherling, eds (2020), Ethics in Design and Communication: Critical […]

Inspiring Auteurism in 360° and VR Film

I recently whipped up a new talk based on my work at Facebook designing tools for immersive media creators, and delivered it, titled "Auteurism in 360°," at FITC's Spotlight AR/VR 2017 conference in Toronto.

"Film cannot be art, for it does nothing but reproduce reality mechanically." (Rudolf Arnheim in 1933, opening his book "Film as Art" by quoting criticism from his contemporaries.)

In the early-to-mid 20th century, the artistic merit of film had yet to be fully recognized. This mirrors the present state of 360 media and VR filmmaking: we have yet to find artistic equivalents to classical cinematic techniques (framing, montage, transition, zoom, synchresis, etc.) in immersive filmmaking.

"Auteurism in 360 Degrees" unpacks methodologies for translating the styles, standards, and practices of 2D filmmaking into 360 media and VR film, all from the perspective of a Facebook Product Designer actively working on 360 media editing and sharing tools.

Speaking about VR Interfaces at FITC

I’ll brought my FutureType talk (Using the History of Typography to Inform the Future of UI) to the FITC Conference 2017 in Toronto. With the rising popularity of immersive media, design thinkers need to rethink UI standards around a new modality. As communication becomes more gestural, conversational, and less typographic, we will need to ensure it does not become abstract. In a world threatened by fake news, ensuring a future of clear, nuanced, and truthful communication will be incredibly important. This talk unpacks a condensed history of typography and justifies why said history can be used as a jumping off point for designing the future of user interfaces. We will discuss how to reconsider typographic history—and the nuances of phonetic letterforms—when inventing new user interfaces for a future of modular, invisible, and alternate reality devices.  

I met Ray Kurzweil at the Grammys

Ray Kurzweil attended the Grammys this year (2015) to receive the Technical Grammy Award for innovation in music technology. I was there, and I picked his brain for as long as he would allow. Here is a photo of me standing beside one of my heroes, exploding with joy.

Digitizing Facial Movement During Singing

Original concept: a machine that uses flex sensors to detect movement and change in facial muscles/mechanics during phonation, specifically singing. The sensor data can then be reverse-engineered for output.

Why? Because of a fascination with the physics of sound, the origin of the phonetic alphabet (a Phoenician/Greek war tool later adapted by the Romans), and the mechanics of the voice (much facial movement/recognition research at the moment leans toward expression rather than sound generation).

I found two exceptional pieces on not just the muscles of the face but the muscular and structural mechanics of speech, and two solid journal articles about digitizing facial tracking. After reading the better part of The Mechanics of the Human Voice, and being inspired by the Physiology of Phonation chapter, we decided to develop a system of sensors to log the muscle position and contraction of singers during different pitches […]
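For illustration, here is a minimal sketch of what the sensor-logging side might look like on an Arduino. The pin assignments, sensor count, sample rate, and serial format are all assumptions; the post doesn't describe the project's actual wiring or logging pipeline.

```cpp
// Hypothetical sketch: stream flex-sensor readings during phonation.
// Assumes each flex sensor sits in a voltage divider on an analog pin;
// pin choices and sample rate are illustrative, not from the project.

const int NUM_SENSORS = 4;                    // assumed sensor count
const int SENSOR_PINS[NUM_SENSORS] = {A0, A1, A2, A3};
const unsigned long SAMPLE_INTERVAL_MS = 50;  // ~20 Hz logging

void setup() {
  Serial.begin(115200);  // readings stream to a host for logging/analysis
}

void loop() {
  // One CSV row per sample: timestamp, then one column per sensor.
  Serial.print(millis());
  for (int i = 0; i < NUM_SENSORS; i++) {
    Serial.print(',');
    Serial.print(analogRead(SENSOR_PINS[i]));  // 0-1023 flex reading
  }
  Serial.println();
  delay(SAMPLE_INTERVAL_MS);
}
```

Rows logged while a singer holds known pitches could then be correlated with muscle position and contraction, per the goal described above.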

Building My First Synthlophone

A trial run of the digitized xylophone. The keys are made from blue plexiglass, laser-cut to open/close with velcro strips, with their wiring flowing discreetly out of the bottom. The keys sit on metal rods separated by O-rings, and the mallets are soft vibraphone mallets. Analog readings are pulled from MEAS flexi-piezos, fed into an Arduino Mega programmed as a MIDI out, and then into a Nord Modular Micro.
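As a rough sketch of that piezo-to-MIDI path, here is how a single key might be handled on an Arduino configured as a MIDI out. The pin, threshold, debounce window, note mapping, and velocity scaling are all assumptions for illustration; the project's actual firmware isn't published in the post.

```cpp
// Hypothetical sketch: turn one key's piezo strike into a MIDI note.
// Threshold, note number, and debounce time are illustrative values.

const int PIEZO_PIN = A0;        // one key's MEAS flexi-piezo
const int HIT_THRESHOLD = 100;   // ignore readings below this (0-1023)
const byte MIDI_NOTE = 60;       // middle C for this key (assumed mapping)
const unsigned long DEBOUNCE_MS = 50;

unsigned long lastHit = 0;

void sendMidi(byte status, byte data1, byte data2) {
  Serial.write(status);
  Serial.write(data1);
  Serial.write(data2);
}

void setup() {
  Serial.begin(31250);           // standard MIDI baud rate
}

void loop() {
  int reading = analogRead(PIEZO_PIN);
  if (reading > HIT_THRESHOLD && millis() - lastHit > DEBOUNCE_MS) {
    byte velocity = map(reading, HIT_THRESHOLD, 1023, 1, 127);
    sendMidi(0x90, MIDI_NOTE, velocity);  // note on, channel 1
    sendMidi(0x80, MIDI_NOTE, 0);         // immediate note off (percussive)
    lastHit = millis();
  }
}
```

On a Mega, each key's piezo would get its own analog pin and note number, and the resulting MIDI stream would drive the Nord Modular Micro as described.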

Audio Software Mirror Redux

The audio software mirror has evolved yet again, becoming the Conversation Cloud Generator: a wearable device that generates on-screen word clouds by listening to conversation and surveying syntax and volume.

The interface is concealed within wrappings of wireform mesh (a thinner, more pliable version of the material that covers standard microphones) and sewn to a waist belt. The innocuous wearable contains two microphones and an Arduino. One microphone takes volume readings via the Arduino while the second sends four-second audio clips (WAV files) directly to the computer.

Via Processing, the audio data is sent to Google Speech, and the returned result is saved as text to a data file, accompanied by its corresponding volume readings (matched at a four-second delay). At the user's leisure, the word cloud is generated via mouse click. Word size is determined by merging the volume values with the frequency of word use […]
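The post doesn't give the exact merging formula, but a minimal sketch of that word-sizing step might look like the following, where each transcribed word arrives paired with the volume reading of the four-second clip it came from. Weighting frequency by average volume is an assumption, not the project's documented scheme.

```cpp
// Hypothetical word-sizing pass: merge per-word frequency with the
// volume of the clips the word appeared in. The weighting scheme
// (frequency * average volume) is an illustrative guess.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Each entry: a transcribed word and the volume reading (0-1023)
// of the four-second clip it came from.
std::map<std::string, double> wordSizes(
    const std::vector<std::pair<std::string, int>>& samples) {
  std::map<std::string, int> counts;
  std::map<std::string, double> volumeSums;
  for (const auto& [word, volume] : samples) {
    counts[word] += 1;
    volumeSums[word] += volume;
  }
  std::map<std::string, double> sizes;
  for (const auto& [word, count] : counts) {
    double avgVolume = volumeSums[word] / count;  // louder words render larger
    sizes[word] = count * avgVolume;              // frequent words render larger
  }
  return sizes;
}

int main() {
  auto sizes = wordSizes({{"hello", 400}, {"hello", 600}, {"world", 300}});
  for (const auto& [word, size] : sizes)
    std::cout << word << " -> " << size << '\n';
}
```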