Project from MIT Media Lab’s Tangible Interfaces Course ’18
Verbal conversation is the simplest way to transfer information between people without any digital augmentation. However, as society becomes more global and people come from all sorts of backgrounds, it can be difficult to convey meaning without supplementary channels. For example, when there is a language barrier, it is natural to fall back on other means of communication, whether hand gestures or images. Our approach is an on-body device that displays images related to the context of the sentence being spoken, further aiding the conversation and reinforcing the saying, "you are what you say."

We use a Raspberry Pi 3 equipped with a lapel microphone to record audio, with the Google Voice API handling speech-to-text conversion. Our system then queries Google Images for the most relevant photo for a keyword extracted from each sentence and displays that image on a wearable liquid crystal display (LCD).
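The writeup does not specify how the keyword is chosen from each transcribed sentence, so the sketch below uses a simple stopword-filter heuristic as an illustrative stand-in; the capture, speech-to-text, image-search, and display steps are shown only as hedged placeholders, since they depend on the Pi's hardware and on Google's APIs.

```python
import re

# Illustrative stopword list; the real project may use a different
# keyword-selection strategy entirely.
STOPWORDS = {
    "a", "an", "the", "is", "are", "was", "were", "be", "to", "of",
    "and", "or", "in", "on", "at", "it", "this", "that", "i", "you",
    "we", "they", "he", "she", "my", "your", "for", "with", "went",
}

def pick_keyword(sentence: str) -> str:
    """Return one content-bearing word to use as the image-search query."""
    words = re.findall(r"[a-z']+", sentence.lower())
    candidates = [w for w in words if w not in STOPWORDS]
    if not candidates:
        return ""
    # Prefer longer words as a crude proxy for content-bearing terms.
    return max(candidates, key=len)

# The surrounding pipeline (hypothetical helper names, not the project's API):
#   text = transcribe(record_audio())        # lapel mic + speech-to-text
#   url  = first_image_result(pick_keyword(text))  # Google Images query
#   show_on_lcd(url)                         # wearable LCD driver
```

For example, `pick_keyword("I went hiking in the mountains")` selects `"mountains"` as the search term.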