Tuesday, May 30, 2017

Sonifying Plant Moisture Using a Simple Algorithmic Arpeggiator

Bringing together a number of projects related to the Raspberry Pi, sensors, and node-red, this post describes the sonification of sensor values in a relatively simple but practical context.

After putting together the automated plant watering system and adding a physical actuator to it, here we look at another way of communicating the status of the system to the user: sound and music!

Mapping sensor values to music

The concept of mapping sensor values to music in the context of new digital musical instruments is a deep and fascinating area of research (ok, I might be a bit biased here since this is the subject of my PhD work... :P). In a nutshell, the mapping problem can be laid out in terms of the following components:

Input -> Mapping -> Synthesis

The input consists of the sensor signals that correspond to some physical phenomenon of interest. Here we have a value that gives us a sense of the moisture level (to be precise: it is difficult to know exactly how much water is in the soil, and I'm not even sure what the appropriate units to measure it would be... but what we do get is a relative number that moves up and down as the soil dries out, and that's what we're interested in here!). In our current situation, it is simply a single value coming from the moisture sensor, perhaps scaled to a more convenient range (let's say 0-100...)
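
For instance, if the sensor is read through an ADC, the raw reading can be normalized with something like the following sketch. (The 10-bit raw range here is just an assumption for illustration; calibrate raw_min/raw_max against your own dry and wet readings.)

    def normalize(raw, raw_min=0, raw_max=1023):
        """Rescale a raw ADC reading to a convenient 0-100 moisture value."""
        raw = min(max(raw, raw_min), raw_max)  # clamp out-of-range readings
        return 100.0 * (raw - raw_min) / (raw_max - raw_min)

    print(normalize(512))  # -> ~50.0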

The mapping is the part of the system where the sensor values are translated in a meaningful way into control parameters that drive the sound-producing module. While a rather abstract concept that is usually implemented entirely in software in most digital systems, it is an important part that ultimately defines how the system behaves (i.e. what kind of output it produces for a given input).

The synthesis component refers to the part that actually generates sound - typically attached to some kind of physical sound-producing device. Here we build a small algorithmic arpeggiator that takes in parameters and generates notes, emitted in real time through a MIDI port that can be used to control any hardware synthesizer. (It probably makes sense to look into using a software synth on the Pi itself for a more standalone solution in the future...)

Design of Mapping

The general behaviour was inspired by the system presented in the Music Room project. In that project, two values (speed and proximity between two people) are used to drive an algorithmic composer that responds in real time to generate music in a "classical" style. The sensor values are mapped to concepts like valence and arousal, which in turn affect the tempo and key of the music being produced. (For more details, check out the paper/video!) In our case we take a simplified version of the same concept.




Algorithmic Arpeggiator/Sequencer

The sequencer is implemented as a Python script, building on one of the examples in the mido library. It simply selects randomly from an array of notes and emits a note-on followed by a note-off message after a certain duration. I expose the relevant parameters via OSC to control the tempo (i.e. note duration) and the key (each scale name changes the contents of the note array). The Python code is available here, and the OSC address/message handling is quite straightforward. Of course, the possibilities for incorporating more complex and interesting musical concepts are endless, but we leave it here for now... :)
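
For reference, here is a minimal sketch of the idea (not the exact script linked above). It assumes mido with a backend that supports virtual ports (e.g. python-rtmidi) and the python-osc library; the scale contents, port name, and velocities are illustrative:

    import random
    import threading
    import time

    import mido
    from pythonosc import osc_server
    from pythonosc.dispatcher import Dispatcher

    # One note array per "key"; MIDI note numbers, rooted on middle C.
    SCALES = {
        "pentatonic": [60, 62, 64, 67, 69, 72],
        "major": [60, 62, 64, 65, 67, 69, 71, 72],
        "minor": [60, 62, 63, 65, 67, 68, 70, 72],
    }

    state = {"notes": SCALES["pentatonic"], "duration": 0.5}  # seconds per note

    def set_key(address, name):
        # OSC handler for e.g. "/key minor": swap the active note array.
        if name in SCALES:
            state["notes"] = SCALES[name]

    def set_tempo(address, bpm):
        # OSC handler for e.g. "/tempo 120": convert BPM to a note duration.
        state["duration"] = 60.0 / float(bpm)

    dispatcher = Dispatcher()
    dispatcher.map("/key", set_key)
    dispatcher.map("/tempo", set_tempo)

    # Listen for OSC on the port that node-red sends to (7002 in this post).
    server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 7002), dispatcher)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # A virtual MIDI output that a hardware or software synth can connect to.
    with mido.open_output("arpeggiator", virtual=True) as port:
        while True:
            note = random.choice(state["notes"])
            port.send(mido.Message("note_on", note=note, velocity=64))
            time.sleep(state["duration"])
            port.send(mido.Message("note_off", note=note))

Since each note is chosen independently, there is no sequencer state to reset: key and tempo changes arriving over OSC simply take effect on the next note.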

Software Architecture

Since most of our existing system is built in node-red, we simply add the necessary nodes to talk to the sequencer. The OSC node takes care of formatting the message, and we then pipe it out through a UDP node to the local port where the Python script is listening.
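
If you want to poke at the sequencer without the node-red flow, a couple of lines with python-osc's UDP client will do (same addresses and port as above):

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7002)  # where the sequencer listens
    client.send_message("/key", "minor")
    client.send_message("/tempo", 90)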


node-red Configuration

Here's what the node-red configuration looks like. The top function node divides the moisture range into "pentatonic", "major", and "minor" scales as the moisture value decreases. The tempo map function below provides an exponential scaling of the note durations, which causes a rather sharp change as you reach a "critically low" moisture value (to be fine-tuned).
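
In Python terms (the actual function nodes are written in JavaScript inside node-red), the logic is roughly the following; the thresholds and curve constants are placeholders to be tuned:

    import math

    def key_for_moisture(moisture):
        # Wetter soil gets a "happier" scale; thresholds are placeholders.
        if moisture > 66:
            return "pentatonic"
        elif moisture > 33:
            return "major"
        return "minor"

    def tempo_for_moisture(moisture, base_bpm=60, max_bpm=240, k=4.0):
        # Exponential scaling: the tempo climbs slowly at first, then very
        # sharply as the moisture value approaches the "critically low" end.
        dryness = 1.0 - moisture / 100.0
        curve = (math.exp(k * dryness) - 1.0) / (math.exp(k) - 1.0)
        return base_bpm + (max_bpm - base_bpm) * curve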



The blue OSC nodes ("key" and "tempo") take care of formatting the messages, which are then sent to port 7002 on the same host, where the Python sequencer is running. The entire flow is available here.

This is what the dashboard looks like, showing the changes in "key" and tempo (in Beats Per Minute) as the moisture decreases:


Audio/Video recording to come...

Soft Synths

It is possible to do all of the sound generation on the RPi itself. I have experimented with amsynth running on the Pi, and after battling a bit with ALSA configurations, managed to get the onboard sound card to generate the actual audio as well. The universality of MIDI means you can have it any way you like!
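
In that kind of setup, the sequencer can also talk to the soft synth directly instead of exposing a virtual port: with mido you can list the available MIDI outputs and open the synth's input port. (The port name below is illustrative; use whatever mido reports once amsynth is running.)

    import mido

    print(mido.get_output_names())  # e.g. ['amsynth:MIDI IN 128:0', ...]
    port = mido.open_output("amsynth:MIDI IN 128:0")  # name is illustrative
    port.send(mido.Message("note_on", note=60, velocity=64))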

