I’ll be bringing my FutureType talk (Using the History of Typography to Inform the Future of UI) to the FITC Conference in Toronto this April. Talk overview: With the rising popularity of immersive media, design thinkers need to rethink UI standards around a new modality. As communication becomes more gestural, conversational, and less typographic, we will need to ensure it does not become abstract. In a world threatened by fake news, ensuring a future of clear, nuanced, and truthful communication will be incredibly important. This talk unpacks a condensed history of typography and argues that this history can serve as a jumping-off point for designing the future of user interfaces. We will discuss how to reconsider typographic history—and the nuances of phonetic letterforms—when inventing new user interfaces for a future of modular, invisible, and alternate-reality devices. Tickets available.
Hacking a USB keyboard with a PS/2 adapter to function as a keyboard controller for a Gameboy sound card. Powered by a blank Gameboy cartridge flashed with LSDJ.
Ray Kurzweil attended the Grammys this year (2015) to receive the Technical Grammy Award (for innovation in music technology). I was there. I picked his brain for as long as he would allow me. Here is a photo of me, standing beside one of my heroes, exploding with joy.
Digitizing Facial Movement During Singing: Original Concept. A machine that uses flex sensors to detect movement and change in facial muscles/mechanics during phonation, specifically singing. The sensors’ readings can then be reverse-engineered for output. Why? Because of a fascination with the physics of sound, the origin of the phonetic alphabet (a Phoenician/Greek war tool later adapted by the Romans), and the mechanics of the voice (much current facial movement/recognition research leans toward expression rather than sound generation). Found two exceptional pieces on not just the muscles of the face but the muscular and structural mechanics of speech, AND two solid journal articles about digitizing facial tracking. After reading the better part of The Mechanics of the Human Voice, and being inspired by the Physiology of Phonation chapter, we decided to develop a system of sensors to log the muscle position and contraction of singers during different pitches […]
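The logging step above might be sketched like this (Python rather than Arduino C, for brevity). Everything here is illustrative: the sensor count, the values, and the assumption that the flex-sensor readings arrive as a list of integers (the serial-read code is omitted):

```python
import csv
import io

def log_sample(writer, pitch_label, sensor_values):
    """Append one row: the sung pitch plus the raw flex-sensor readings."""
    writer.writerow([pitch_label] + list(sensor_values))

# Example: log three sensors across two sung pitches.
buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["pitch", "s0", "s1", "s2"])   # header: one column per flex sensor
log_sample(w, "C4", [412, 388, 405])
log_sample(w, "G4", [455, 391, 420])
```

A table like this, accumulated across many singers and pitches, is what would let the sensor data be reverse-engineered for output later.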
Developments in turning medical sensors into music-generating devices (via Arduino, Gameboys & a Nord Micro Modular).
I made another animated overlay commercial (for SEVENTEEN + Lord & Taylor).
Digitized xylophone trial run. Keys are made from blue plexiglass, laser-cut to open/close with velcro strips, and have their wiring discreetly flow out of the bottom. Keys sit on metal rods, separated by O-rings; mallets are soft vibraphone mallets. Analog readings are pulled from MEAS flexi-piezos, fed into an Arduino Mega programmed as a MIDI out, into a Nord Modular Micro.
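The piezo-to-MIDI-out step can be sketched as follows (in Python rather than Arduino C, for clarity). The strike threshold, the key-to-note layout, and the linear velocity scaling are all illustrative assumptions, not the actual firmware:

```python
def piezo_to_note_on(reading, key_index, threshold=50, channel=0):
    """Convert a raw piezo strike reading (0-1023) into a 3-byte MIDI
    note-on message. Below the threshold, no event is generated."""
    if reading < threshold:
        return None
    velocity = min(127, reading * 127 // 1023)  # scale strike force to 0-127
    note = 60 + key_index       # illustrative layout: key 0 is middle C
    status = 0x90 | channel     # MIDI note-on status byte for this channel
    return bytes([status, note, velocity])

# A medium strike on the third key:
piezo_to_note_on(512, 2)   # → b'\x90\x3e\x3f' (note 62, velocity 63)
```

On the Arduino these three bytes would simply be written out over serial at the MIDI baud rate for the Nord to receive.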
The audio software mirror has evolved yet again, becoming a … The Conversation Cloud Generator is a wearable device that generates on-screen word clouds by listening to conversation and surveying syntax and volume. The interface is concealed within wrappings of wireform mesh (a thinner, more pliable version of the material that covers standard microphones) and sewn to a waist belt. The innocuous wearable contains two microphones and an Arduino. One microphone takes volume readings via the Arduino while the second sends four-second audio clips (WAV files) directly to the computer. Via Processing, the audio data is sent to Google Speech and the returned result is saved as text to a data file, accompanied by its corresponding volume readings (matched by the four-second delay). At the user’s leisure, the word cloud is generated via mouse click. Word size is determined by merging the volume values with the frequency of word use […]
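The sizing rule — merging volume with word frequency — might look something like this (Python rather than Processing, for brevity; the base size, scale factor, and the choice to merge via each word’s loudest reading are all illustrative assumptions):

```python
from collections import Counter

def word_sizes(words, volumes, base=12, scale=4):
    """words: flat list of transcribed words; volumes: parallel list of
    volume readings (one per word, from that word's four-second clip).
    A word is drawn larger the more often it appears and the louder
    it was spoken."""
    counts = Counter(w.lower() for w in words)
    loudest = {}
    for w, v in zip(words, volumes):
        w = w.lower()
        loudest[w] = max(loudest.get(w, 0), v)
    # font size = base + (frequency term) + (volume term)
    return {w: base + scale * counts[w] + loudest[w] // 10 for w in counts}
```

Each entry in the returned dict maps a word to the font size the Processing sketch would draw it at when the cloud is generated.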
Developments in building the capacitive-sensitive touch wooden keyboard for the consonance/dissonance unity game.
Started with the old Theremin #2, used a photocell instead of the rangefinder, added a second photocell, and tweaked the pitch-shift button to jump octaves (e.g. a D# is still a D# when the button is pressed). One photocell controls the note (takes five readings into an array and maps the result between 100 & 500 if the pitch button is pressed and 50 & 250 otherwise); the second photocell controls the duration of the note by dividing the reading by 20; and the switch, as previously mentioned, controls the octave jump. In Processing, the note, duration, and switch are collected as serial data and used to oscillate the colours on the sketch’s grid. The note controls the opacity, the duration affects the movement, and the button does two things: it controls the green value in fill() AND affects the note reading (opacity) via pitch shifting. Arduino code: int analogCell1 = […]
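The mapping logic above can be sketched in Python (Arduino’s map() reproduced by hand). Two assumptions worth flagging: the photocell range of 0–1023 and averaging the five readings before mapping — the original array handling may differ:

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Equivalent of Arduino's map(): linear rescale with integer math."""
    return (x - in_lo) * (out_hi - out_lo) // (in_hi - in_lo) + out_lo

def read_note(samples, pitch_pressed):
    """Average five photocell readings, then map into the note range:
    100-500 with the pitch button held, 50-250 otherwise."""
    avg = sum(samples) // len(samples)
    if pitch_pressed:
        return map_range(avg, 0, 1023, 100, 500)
    return map_range(avg, 0, 1023, 50, 250)

def read_duration(reading):
    """Second photocell: duration is simply the raw reading divided by 20."""
    return reading // 20
```

Note how the pressed range (100–500) is exactly double the unpressed one (50–250): the same hand position yields a doubled value, which is what makes a D# stay a D# an octave up when the button is held.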