Tailored in its functionality to teaching in design degree programs... an interface for modern teaching
What does the rhythm of the waves sound like? Ever wonder how that deep blue could be a tone? Immerse yourself in a new dimension of synaesthetic perception - with our music-generating camera.
Synsia is an AI-driven image-to-sound sampler that translates visual information into sound. Take a photo or video, add your favorite samples, and Synsia will generate a sound loop based on your visual and audio footage. You can personalize the sound and character of the loop with various settings and modulation options.
With Synsia as a portable device, you can take photos and videos and generate new sounds from them. Combined with existing samples from a library, unique loops can be created. The recorded motif determines the character of the sound: dark colors, for example, can lead to more melancholic music, patterns in the image drive the rhythm, and the motif itself suggests the genre. The AI composes something new from all this information, which can be adjusted as desired using three rotary controls for Expression (mood), Chaos (randomness) and Complexity (how many parameters from the image/video are included). Further settings can be made via the large touch display on the bar at the side. To operate these with one hand, there is a large rotary wheel combined with a small one on the front. All sounds can be exported afterwards, e.g. for Ableton.
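The mapping described above, from image features and the three knobs to loop parameters, could be sketched roughly as follows. Every function name, formula, and threshold here is our own illustrative assumption about how such a mapping might work, not Synsia's actual algorithm:

```python
import colorsys
import random

def loop_parameters(avg_rgb, pattern_density, expression, chaos, complexity):
    """Hypothetical mapping from image features and knob positions
    (all values 0.0-1.0) to loop parameters. Names and formulas are
    illustrative assumptions, not Synsia's actual algorithm."""
    _, lightness, _ = colorsys.rgb_to_hls(*avg_rgb)
    # Darker motifs lean melancholic; the Expression knob shifts the mood.
    mood = 0.7 * lightness + 0.3 * expression
    # Busier patterns and higher Complexity raise the tempo.
    tempo = 60 + int(80 * pattern_density * complexity)
    # Chaos adds controlled randomness on top of the deterministic mapping.
    tempo += int(random.uniform(-1.0, 1.0) * chaos * 20)
    scale = "minor" if mood < 0.5 else "major"
    return {"mood": mood, "tempo_bpm": tempo, "scale": scale}
```

With the Chaos knob at zero the mapping is fully deterministic; turning it up perturbs the result, so the same photo can yield different loops on each pass.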
Two different pictures - two different sounds, genres and settings
Converting visual information into auditory signals requires a combination of computer vision and artificial intelligence. These are the main components and types of software we need.
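As a rough sketch of the computer-vision side, the following pure-Python function extracts a few simple visual features that an AI composer could consume; the specific feature set and its computation are our assumptions for illustration:

```python
def extract_features(pixels):
    """Extract simple visual features from an image given as a list of
    rows, each row a list of (r, g, b) tuples with values 0-255.
    The feature set is illustrative, not Synsia's actual pipeline."""
    w = len(pixels[0])
    flat = [(r / 255, g / 255, b / 255) for row in pixels for (r, g, b) in row]
    n = len(flat)
    # Overall color and brightness: dark vs. light motif.
    avg_rgb = tuple(sum(p[i] for p in flat) / n for i in range(3))
    brightness = sum(sum(p) / 3 for p in flat) / n
    # Crude "pattern" measure: mean absolute brightness difference
    # between horizontally adjacent pixels (a flat image scores 0.0).
    gray = [[sum(px) / 3 / 255 for px in row] for row in pixels]
    diffs = [abs(row[x + 1] - row[x]) for row in gray for x in range(w - 1)]
    pattern_density = sum(diffs) / len(diffs) if diffs else 0.0
    return {"avg_rgb": avg_rgb, "brightness": brightness,
            "pattern_density": pattern_density}
```

A uniform white image yields maximum brightness and zero pattern density, while a high-contrast checkerboard scores high on pattern density, matching the idea that visual patterns drive the rhythm.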
All sounds can be exported and imported via Bluetooth or USB-C, and images/videos can be transferred the same way. Various devices and headphones can also be connected to the bottom of the camera.
As the AI is intended to be open source, it can be continuously improved and the software remains up-to-date.
The housing is made of recycled plastic. The speaker is covered with fabric, and the handle is made of woven fabric. The rubber dots around the lens and on the handle allow the device to rest flat on its front without scratching the lens. Important operating elements are made of high-quality CNC-milled aluminum. The AI requires a lot of space in the housing because it needs a large processor; alternatively, processing could be offloaded via Wi-Fi. We chose teal and lime as the color palette because they reflect a contrast of calm and energy.
Design Details
So far it is a concept that still needs to be tested for feasibility. The next steps in this project are therefore to talk to engineers and discuss whether the hardware requirements for the software are met. Will the data be processed in the camera or sent to an external server via an app?
Other important questions are whether and how we can reduce costs. According to our bill of materials, the costs per device currently amount to approx. €990-2,100.
We also want to review sustainability, but have already made important decisions in this regard. Our housing is screwed together, which allows for repair. We also opted for open-source software so that the product can be developed further for a long time.
"Synsia" is an invented word derived from "synesthesia", a condition in which the stimulation of one sensory channel leads to sensations in a secondary sensory channel; for example, hearing colors. This is essentially what our device does, so the name comes very close to its function.
The target audience for the device is quite large: anyone from hobby musicians to professional sound designers to photographers to people who just want to have some fun exploring new sounds. In our eyes, the majority of people can be a target group.
What is innovative about Synsia is that it combines the AI aspect and the DAWless jamming/music-producing world in one specific device.
Synsia Workflow
During our research, we spent a lot of time on the topic of AI and tried out existing image-to-sound applications. We also conducted user testing in the course to check whether sounds can be assigned to specific images. We found a certain basic consensus within the group during the test, but here and there people also assigned completely different images to loops.
We also conducted an interview to build a picture of a persona. "Experimenting to come up with new sounds" was emphasized, which is what we focused on when creating Synsia.
Examples of Design Thinking methods that helped us shape our concept
During the process, we kept thinking about the shape of the device. At first, we wanted to make a device with a removable camera, but after the midterm presentation, we switched to an all-in-one device.
We wanted to create a special feature with the screen. So we experimented with the shape and the interface for a while.
A lot of the semester was spent researching and thinking about what AI can and cannot do in our project. It was difficult to work out how all of this could function and to find a consensus as a group. Due to other courses and initial difficulties in reaching agreement, as we all had very different interests, we arrived at the design very late in the semester. We then decided to put a lot of work into the design itself and to stage the functionality of Synsia in an elaborate video instead of building a detailed 1:1 model. This decision was very important for us: in a short space of time we developed a well-thought-out concept, and we are happy with the result.