ada
visual voice user interface concept
Programming to validate your data visualisation can be time-consuming and nerve-wracking. What if we didn’t have to program at all, but could instead use skills such as pointing and explaining, and thus validate our concept in the shortest possible time?
introduction
topic and goal
Ada is a machine-learning-based prototyping tool for tablets that can be used to validate data visualisation concepts quickly and easily. In cooperation with the smart assistant Ada, you can get from the idea to the finished data visualisation in a short time through intuitive sketching and explaining. The abstract process of coding is replaced by natural voice and gesture commands.
concept
matching the technology to the problem
The process of data visualisation can be divided into two main parts. First, there is data analysis and concept development. This part is usually done quite quickly; after all, it is one of the core competences of designers. The second part, validating the concept with real data, is often more time-consuming and sometimes nerve-wracking for designers.
We asked ourselves the question:
What if we didn’t have to program at all, but could instead use skills such as pointing and explaining, which we as humans have mastered anyway, and thus validate our concept in the shortest possible time?

This is possible through the combination of various machine learning components such as voice-to-text, natural language processing and computer vision.
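To make this combination more tangible, here is a minimal, self-contained sketch of how such a component chain could fit together. Every function below is a toy stand-in for an ML model; all names and the keyword grammar are invented for illustration and are not part of any real Ada implementation.

```python
# Toy sketch of the component chain: voice-to-text, NLP and computer
# vision turning a sketch plus a spoken explanation into a machine-
# readable visualisation concept. Each function stands in for a model.

def transcribe(audio: str) -> str:
    # Stand-in for voice-to-text: we pretend the "audio" is already text.
    return audio.lower()

def parse_intent(text: str) -> dict:
    # Stand-in for NLU: naive keyword spotting instead of a language model.
    intent = {}
    if "bar" in text:
        intent["chart"] = "bar"
    if "over time" in text:
        intent["x"] = "time"
    return intent

def detect_shapes(strokes: list[str]) -> list[str]:
    # Stand-in for computer vision: the strokes arrive already labelled.
    return [s for s in strokes if s in {"rectangle", "circle", "line"}]

# "Explaining" plus "sketching" yields a machine-readable concept:
concept = {
    "intent": parse_intent(transcribe("Show sales as bars over time")),
    "shapes": detect_shapes(["rectangle", "rectangle", "line"]),
}
print(concept)
# {'intent': {'chart': 'bar', 'x': 'time'}, 'shapes': ['rectangle', 'rectangle', 'line']}
```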
communicating with machines
Programming is nothing other than communication with machines. For this, we as humans have to break down our goal into logical units and communicate them to the machine as instructions in the form of code. At the moment, we are therefore forced to adopt a computer’s way of thinking and working, as well as its language. With the help of our ML-supported smart assistant (Ada), this abstract process of coding is replaced by natural speech and gesture commands. So instead of humans having to learn the language of the machine, the machine understands the language of humans.

hybrid interfaces

The shift towards more human interaction means that the design of the interface has to break away from the conventions, patterns and design approaches of classic visual applications. With language as an interface, many functions and much information slip into the conversation – i.e. into the context of communication – and therefore no longer need to be represented visually. Thinking in terms of static screens, on which all content must always be visible, also falls away. Instead, communication with the machine approaches that between two people. All this results in a hybrid interface reduced to a few visual elements, in which the classic visual component is supplemented by an auditory level.

interaction between man and machine
On the technical side, a lot happens in the background. In the following, individual aspects from the diagram are briefly discussed.

sketch
At the beginning, the user sketches their concept. Although the smart assistant is inactive during this phase, information is already being collected for the later implementation of the visualisation. For example, computer vision (CV) and machine learning (ML) perform pattern recognition to identify the sketched shapes. In concrete terms, this means that the order in which the user scribbles is tracked. From this, a possible animation sequence can be derived, or other aspects can be related to each other.
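As a rough illustration, the following sketch shows what such stroke tracking could look like. The stroke format, the classify_stroke stub and the idea of deriving the animation order from timestamps are assumptions made for this example, not the project’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list[tuple[float, float]]  # pen positions in drawing order
    t_start: float                     # timestamp when the stroke began

def classify_stroke(stroke: Stroke) -> str:
    """Stand-in for the CV/ML shape recogniser."""
    # A real system would run a trained model on the point sequence;
    # here we just use a trivial heuristic for illustration.
    return "line" if len(stroke.points) < 10 else "shape"

def animation_sequence(strokes: list[Stroke]) -> list[str]:
    """Derive a possible animation order from the order of sketching."""
    ordered = sorted(strokes, key=lambda s: s.t_start)
    return [classify_stroke(s) for s in ordered]

strokes = [Stroke([(0, 0), (1, 1)], t_start=2.0),
           Stroke([(i, i * i) for i in range(20)], t_start=0.5)]
print(animation_sequence(strokes))  # ['shape', 'line']
```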
explain
The next step – explaining the concept – is mainly about natural language processing (NLP). Specifically, voice-to-text (V2T) comes into play during this step: the words spoken by the user are converted into text by a speech recognition program. This text is then broken down into its constituent parts by natural language understanding (NLU) to determine the intended meaning of a sentence.
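As a rough illustration of the NLU part, the snippet below uses spaCy, an existing NLP library, to break a transcribed sentence into its grammatical parts. The transcription is assumed to have happened already, and the intent extraction is deliberately simplistic; a real assistant would use a far more capable language model.

```python
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def extract_intent(transcript: str) -> dict:
    """Break a sentence into parts to guess the intended command.

    Very simplified: the root verb is taken as the action, and its
    direct/prepositional objects as the targets.
    """
    doc = nlp(transcript)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    objects = [tok.text for tok in doc if tok.dep_ in ("dobj", "pobj")]
    return {"action": root.lemma_, "targets": objects}

# Example input: text that voice-to-text could have produced.
print(extract_intent("Map the sales column to the bar height"))
# e.g. {'action': 'map', 'targets': ['column', 'height']}
```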
queries
If the user asks a question, or the smart assistant has queries during the explanation process, natural language generation (NLG) comes into play in addition to V2T and NLU. In a nutshell, this means in our case that registered gaps in the data set, conflicts in pattern recognition, or conflicts between visual and auditory input are first translated into text by the machine, which is then converted into an audio file and played back.
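A minimal sketch of that query loop follows, assuming a simple template-based NLG stage and the off-the-shelf pyttsx3 library for text-to-speech. The conflict format and templates are invented for illustration; a production system would use a learned NLG model rather than string templates.

```python
import pyttsx3  # pip install pyttsx3 -- offline text-to-speech

def generate_query(conflict: dict) -> str:
    """Stand-in NLG: turn a detected conflict into a natural-language question."""
    templates = {
        "data_gap": "I found no values for {field}. Should I leave that area empty?",
        "shape_conflict": "Did you mean the {shape} to represent {field}?",
    }
    return templates[conflict["type"]].format(**conflict["slots"])

def speak(text: str) -> None:
    """Text-to-speech: render the generated sentence as audio and play it."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# Example: a gap registered in the data set triggers a spoken query.
speak(generate_query({"type": "data_gap", "slots": {"field": "revenue 2019"}}))
```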
communication between man and machine
Communication between man and machine is a central element of the interface and determines whether the tool feels intuitive to the user, especially when the interface is reduced to a minimum. In our case, besides the verbal communication with Ada, the visual level of communication is also of great importance, in order to reflect in the interface, as closely as possible, the often non-verbal, unobtrusive communication between two people.
Our tool gives the user visual feedback in two different ways. Ada’s status is communicated via the central icon: spoken input as well as Ada’s answers are reflected in a lively motion design of the icon. At the same time, what has been understood is highlighted in the sketch. In the event of incomprehension or queries, a marker is displayed in the sketch in addition to a discreet “head-shaking” icon animation.
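One way to model this feedback behaviour is a small state machine that maps the assistant’s status to an icon animation and an optional sketch overlay. The states and animation names below are invented for illustration only.

```python
from enum import Enum, auto

class AdaState(Enum):
    IDLE = auto()        # waiting, subtle resting animation
    LISTENING = auto()   # user is speaking, icon moves with the voice
    UNDERSTOOD = auto()  # input parsed, matching sketch elements highlighted
    CONFUSED = auto()    # conflict or gap detected, "head-shaking" animation

# Mapping from status to the visual feedback described above (names invented):
# (icon animation, optional sketch overlay)
FEEDBACK = {
    AdaState.IDLE: ("rest", None),
    AdaState.LISTENING: ("pulse", None),
    AdaState.UNDERSTOOD: ("nod", "highlight_sketch"),
    AdaState.CONFUSED: ("head_shake", "show_marker"),
}

def feedback_for(state: AdaState) -> tuple[str, str | None]:
    """Return the icon animation and sketch overlay for a given state."""
    return FEEDBACK[state]

print(feedback_for(AdaState.CONFUSED))  # ('head_shake', 'show_marker')
```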

project information
course
Invention Design II
September 2020 – February 2021
Semester 4
HfG Schwäbisch Gmünd
contact information
any questions?
let’s get in touch
If you have any questions about the project or want to know more about it, just write me a short message.
I welcome any suggestions and comments.