All posts by: ellecortese@gmail.com

Speaking about Interfaces at FITC

I brought my FutureType talk (Using the History of Typography to Inform the Future of UI) to the FITC Conference 2017 in Toronto. With the rising popularity of immersive media, design thinkers need to rethink UI standards around a new modality. As communication becomes more gestural and conversational, and less typographic, we will need to ensure it does not become abstract. In a world threatened by fake news, ensuring a future of clear, nuanced, and truthful communication will be incredibly important. This talk unpacks a condensed history of typography and explains why that history can serve as a jumping-off point for designing the future of user interfaces. We will discuss how to reconsider typographic history, and the nuances of phonetic letterforms, when inventing new user interfaces for a future of modular, invisible, and alternate-reality devices.

I met Ray Kurzweil at the Grammys

Ray Kurzweil attended the Grammys this year (2015) to receive the Technical Grammy Award (for innovation in music technology). I was there, and I picked his brain for as long as he would allow. Here is a photo of me, standing beside one of my heroes, exploding with joy.

Digitizing Facial Movement During Singing

Original Concept: a machine that uses flex sensors to detect movement and change in facial muscles/mechanics during phonation, specifically singing. The sensors can then be reverse-engineered for output. Why? Because of a fascination with the physics of sound, the origin of the phonetic alphabet (a Phoenician/Greek war tool later adapted by the Romans), and the mechanics of the voice (much facial movement/recognition research at the moment leans toward expression rather than sound generation). I found two exceptional pieces on not just the muscles of the face but the muscular and structural mechanics of speech, and two solid journal articles about digitizing facial tracking. After reading the better part of The Mechanics of the Human Voice, and being inspired by the Physiology of Phonation chapter, we decided to develop a system of sensors to log the muscle position and contraction of singers during different pitches […]
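The excerpt cuts off at the logging plan, but the core of it is simple: sample each flex sensor and write timestamped rows to serial for later analysis. Here is a minimal Arduino sketch of that logging loop, assuming the flex sensors are wired as voltage dividers into the analog inputs; the pin assignments, sensor count, and sample rate are placeholders, not the project's actual values.

const int sensorPins[3] = {A0, A1, A2};  // flex sensors placed over facial muscles (assumed pins)
const unsigned long sampleMs = 50;       // 20 Hz sampling (placeholder rate)

void setup() {
  Serial.begin(9600);
  Serial.println("ms,sensor1,sensor2,sensor3");  // CSV header for the log
}

void loop() {
  // One CSV row per sample: timestamp, then each sensor's flex reading.
  Serial.print(millis());
  for (int i = 0; i < 3; i++) {
    Serial.print(',');
    Serial.print(analogRead(sensorPins[i]));
  }
  Serial.println();
  delay(sampleMs);
}

Logging plain CSV keeps the pitch-by-pitch comparison easy: each singing session becomes one file that can be lined up against the pitch being sung.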

Building My First Synthlophone

Digitized xylophone trial run. The keys are made from blue plexiglass, laser-cut to open/close with Velcro strips, and have their wiring flow discreetly out of the bottom. The keys sit on metal rods, separated by O-rings, and the mallets are soft vibraphone mallets. Analog readings are pulled from MEAS flexi-piezos, fed into an Arduino Mega programmed as a MIDI out, and then into a Nord Modular Micro.
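As a sketch of that signal path, here is roughly what the Mega's piezo-to-MIDI loop could look like; the pin choices, strike threshold, and note map are assumptions, not the build's actual values.

const int piezoPins[4] = {A0, A1, A2, A3};  // one flexi-piezo per key (assumed)
const byte notes[4]    = {60, 62, 64, 65};  // hypothetical note map: C4, D4, E4, F4
const int threshold    = 100;               // minimum reading that counts as a strike

void setup() {
  Serial1.begin(31250);  // MIDI baud rate; Serial1 TX feeds the MIDI out jack
}

void noteOn(byte note, byte velocity) {
  Serial1.write(0x90);  // Note On status byte, channel 1
  Serial1.write(note);
  Serial1.write(velocity);
}

void loop() {
  for (int i = 0; i < 4; i++) {
    int reading = analogRead(piezoPins[i]);
    if (reading > threshold) {
      // Scale the piezo reading into a 1-127 MIDI velocity.
      byte velocity = map(reading, threshold, 1023, 1, 127);
      noteOn(notes[i], velocity);
      delay(50);            // crude debounce so one strike sends one note
      noteOn(notes[i], 0);  // velocity 0 doubles as Note Off
    }
  }
}

Sending velocity 0 instead of a separate Note Off message is a standard MIDI shortcut that keeps the outgoing stream simple for the Nord to consume.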

Audio Software Mirror Redux

The audio software mirror has evolved yet again, becoming the Conversation Cloud Generator: a wearable device that generates on-screen word clouds by listening to conversation and surveying syntax and volume. The interface is concealed within wrappings of wireform mesh (a thinner, more pliable version of the material that covers standard microphones) and sewn to a waist belt. The innocuous wearable contains two microphones and an Arduino. One microphone takes volume readings via the Arduino while the second sends four-second audio clips (WAV files) directly to the computer. Via Processing, the audio data is sent to Google Speech, and the returned result is saved as text to a data file, accompanied by its corresponding volume readings (offset by the four-second delay). At the user's leisure, the word cloud is generated via mouse click. Word size is determined by merging the volume values with the frequency of word use […]
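That final merge of volume and frequency is the interesting step. Here is an illustrative C++ sketch of one way it could work; the data structures, scaling constants, and sizing formula are mine, not the project's actual Processing code.

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct WordStat {
  int count = 0;        // how many four-second clips contained the word
  float volumeSum = 0;  // summed volume readings for those clips
};

// Merge frequency with average volume into a font size (illustrative formula).
float fontSize(const WordStat& s) {
  float avgVolume = s.volumeSum / std::max(s.count, 1);
  return 12.0f + s.count * 4.0f + avgVolume * 0.5f;
}

int main() {
  // Each entry pairs a transcribed word with the volume reading
  // logged for its four-second clip.
  std::vector<std::pair<std::string, float>> log = {
      {"hello", 40.0f}, {"cloud", 55.0f}, {"hello", 70.0f}};

  std::map<std::string, WordStat> stats;
  for (const auto& [word, volume] : log) {
    stats[word].count++;
    stats[word].volumeSum += volume;
  }

  for (const auto& [word, s] : stats)
    std::cout << word << " -> " << fontSize(s) << "pt\n";
}

Under this scheme a word said often but quietly and a word said once but loudly can end up at similar sizes, which matches the post's intent of surveying both frequency and volume.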

Multi-Serial PhotoCell Theremin

Started with the old Theremin #2, used a photocell instead of the rangefinder, added a second photocell, and tweaked the pitch-shift button to jump octaves (e.g., a D# is still a D# when the button is pressed). One photocell controls the note: it takes five readings into an array and maps the result between 100 and 500 if the pitch button is pressed, or 50 and 250 otherwise. The second photocell controls the duration of the note by dividing the reading by 20, and the switch, as previously mentioned, controls the octave jump. In Processing, the note, duration, and switch are collected as serial data and used to oscillate the colours on the sketch's grid. The note controls the opacity, the duration affects the movement, and the button does two things: it controls the green value in fill() and affects the note reading (opacity) via pitch shifting. Arduino code: int analogCell1 = […]
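The code excerpt above is truncated; a minimal reconstruction of the sketch it describes might look like the following, where the pin numbers and every name beyond analogCell1 are assumptions drawn from the description.

const int octaveButton = 2;  // pitch-shift switch (assumed on digital pin 2)

void setup() {
  pinMode(octaveButton, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  // Photocell 1: five readings gathered in an array, then averaged.
  int readings[5];
  long sum = 0;
  for (int i = 0; i < 5; i++) {
    readings[i] = analogRead(A0);
    sum += readings[i];
  }
  int analogCell1 = sum / 5;

  // Pressing the button doubles the mapped range, so the octave
  // jumps while a D# stays a D#.
  bool pressed = (digitalRead(octaveButton) == LOW);
  int note = pressed ? map(analogCell1, 0, 1023, 100, 500)
                     : map(analogCell1, 0, 1023, 50, 250);

  // Photocell 2: duration is simply the reading divided by 20.
  int duration = analogRead(A1) / 20;

  // Send note, duration, and switch state to Processing as serial data.
  Serial.print(note);
  Serial.print(',');
  Serial.print(duration);
  Serial.print(',');
  Serial.println(pressed ? 1 : 0);

  delay(duration);
}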