
Friday, May 25, 2018

Fun with Home Automation Part 1: The most convoluted doorbell

This is the first of a two-part series on using a music keyboard for home automation. Here we describe using a motion detection system to trigger two audio events:

1.) MIDI notes on a keyboard (because why not)
2.) A spoken notification on your Google Home device (this might actually be useful to some people).

The result is that when someone approaches our front door, two notes are sounded on a music keyboard, and the Google Home speaker talks to us as well. The future is now!

In this example, I receive motion events from my security camera software (iSpy) via an HTTP GET request. The actual camera feed comes from a streaming Raspberry Pi server (running RPi-Web-Cam Interface). iSpy allows me to view multiple streams and manage motion detection for each stream. I also (as a lazy option) store stills from motion events in a Dropbox folder, so I can view them easily on other devices without any custom application. This particular setup probably warrants a separate post at some future date...

Back to the main feature in question: the doorbell itself!

Here I'm using node-red again, running on a Raspberry Pi. Here's the overall system block diagram:


And here's what the flow looks like: [node-red flow code here]

On top of a stock node-red install, you will need the following two add-ons:

node-red-contrib-midi: for talking to the MIDI port
node-red-contrib-google-home-notify: for sending text-to-speech snippets to your Google Home device.

The flow is relatively simple: at the top left you see the incoming motion GET request. I assemble a simple HTML reply and send it back for testing purposes, so you can trigger the event from your browser and it will return an HTML page with "OK" on it.
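If you want to simulate the motion event without the camera software, a plain GET request does the trick. Here's a tiny Python sketch; the host, port, and path are placeholders, so substitute whatever your http-in node is actually configured with:

import requests

# Placeholder URL -- use the host, port, and path of your own http-in node.
NODE_RED_URL = "http://raspberrypi.local:1880/motion"

resp = requests.get(NODE_RED_URL)
print(resp.status_code)   # expect 200
print(resp.text)          # the little HTML page with "OK" on it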

1. Triggering MIDI notes

The Raspberry Pi running the node-red server also has a MIDI keyboard (in this case, a cheap Casio CTK-2300) connected via USB. The midi out object will automatically find any class-compliant ports and list them in a drop-down menu.

To emit the MIDI events, we have two triggers in the middle of the flow that turn a particular MIDI note on and then off (you need to emit both the note-on and note-off messages, otherwise the note will be stuck on forever, even after it fades out and becomes inaudible). I put a delay on the second note so the two are played one after the other. The message format is simply an array containing the raw MIDI bytes. You can take a look at the trigger objects (or the midi out object's info) to see the exact messages (notes) I'm sending.
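For reference, here is a minimal Python sketch of the same note-on/note-off logic using the mido library (not the node-red flow itself; the note numbers, timings, and port choice are just examples):

# Minimal doorbell "ding-dong" sketch (pip install mido python-rtmidi).
import time
import mido

print(mido.get_output_names())     # list the class-compliant MIDI ports
port = mido.open_output()          # or pass the exact port name from the list above

def play_note(note, velocity=100, duration=0.5):
    # A note-on must always be followed by a note-off, or the key stays "down".
    port.send(mido.Message('note_on', note=note, velocity=velocity))
    time.sleep(duration)
    port.send(mido.Message('note_off', note=note, velocity=0))

play_note(76)   # E5 -- "ding" (example pitch)
time.sleep(0.1)
play_note(72)   # C5 -- "dong" (example pitch)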

2. Triggering Google Home

When we first got our Google Home device, one thing I really wanted to do was to emit custom events in the form of audio notifications on the speaker. It turns out one of the easiest ways is to use google-home-notifier. The gist of how it works: on the local network, you simply need to know the IP address of your Google Home speaker, and then audio can be directed straight to it! The notifier application does a bit of text-to-speech on your input, transmits the result to the Google Home, and that's it! Much simpler than I imagined. Obviously, if you want more complex two-way interactions you'll probably have to dig into the actual Google Assistant API...
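The node-red addon wraps google-home-notifier (a Node.js library), but the same "cast audio straight at the speaker" idea can be sketched in Python with pychromecast. The device name and the audio URL below are placeholders, and pychromecast's API has shifted a bit between versions, so treat this as a rough illustration rather than the method used in the flow:

import pychromecast

# Discover the speaker by its friendly name (as shown in the Google Home app).
chromecasts, browser = pychromecast.get_listed_chromecasts(friendly_names=["Living Room speaker"])
cast = chromecasts[0]
cast.wait()                 # connect and wait until the device is ready

mc = cast.media_controller
mc.play_media("http://my-server.local/doorbell.mp3", "audio/mp3")   # placeholder audio/TTS URL
mc.block_until_active()

browser.stop_discovery()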

Tuesday, May 30, 2017

Sonifying Plant Moisture using a simple Algorithmic Arpeggiator

Putting together a number of projects related to the Raspberry Pi, sensors, and node-red, this post describes the sonification of sensor values in a relatively simple but practical context.

After putting together the automated plant watering system and adding a physical actuator to it, here we look at other ways of communicating the status of the system to the user: using sound and music!

Mapping sensor values to music

The concept of mapping sensor values to music in the context of new digital musical instruments is a deep and fascinating area of research (ok, I might be a bit biased here, since this is the subject of my PhD work... :P). In a nutshell, the mapping problem can be laid out in terms of the following components:

Input -> Mapping -> Synthesis

The input consists of the sensor signals that correspond to some physical phenomenon of interest. Here we have a value that gives us a sense of the humidity level (to be precise: it is difficult to know exactly how much water is in the soil, and I'm not even sure what the appropriate units to measure it would be... but what we do get is a relative number that moves up and down as the soil dries out, and that's what we're interested in here!). In our current situation, it is simply a single value coming from the moisture sensor, perhaps scaled to a more convenient range (let's say 0-100...).

The mapping is the part of the system where the sensor values are translated in a meaningful way into control parameters that drive the sound-producing module. While a rather abstract concept that is usually implemented entirely in software in most digital systems, this is an important part that ultimately defines how the system behaves (i.e. what kind of output it produces for a given input).

The synthesis component refers to the part that actually generates sound - typically attached to some kind of physical sound-producing device. Here we build a small algorithmic arpeggiator that takes in parameters and generates notes, emitted in real time through a MIDI port, which can be used to control any hardware synthesizer. (It probably makes sense to look into using a software synth on the Pi itself for a more standalone solution in the future...)

Design of Mapping

The general behaviour was inspired by the system presented in the Music Room project. There, two values (speed and proximity between two people) are used to drive an algorithmic composer that responds in real time to generate music in a "classical" style. The sensor values are mapped to concepts like valence and arousal, which in turn affect the tempo and key of the music being produced (for more details check out the paper/video!). In our case we use a simplified but similar version of the concept.




Algorithmic Arpeggiator/Sequencer

The sequencer is implemented as a Python script, building on one of the examples in the mido library. It simply selects randomly from an array of notes and emits a note-on followed by a note-off message after a certain duration. I expose the relevant parameters via OSC to control the tempo (i.e. note duration) and the key (each scale name changes the contents of the note array). The Python code is available here, and the OSC address/message handling is quite straightforward. Of course, the possibilities for incorporating more complex and interesting musical concepts are endless, but we leave it here for now... :)
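The linked script is the real thing; as a rough sketch of the same idea (mido for MIDI out, python-osc for the control parameters; the scale contents, OSC addresses, and port number are illustrative):

# Sketch of the arpeggiator: pick random notes from the current scale, send
# note-on/note-off via mido, and accept /key and /tempo over OSC.
# Requires: mido, python-rtmidi, python-osc.
import random
import threading
import time

import mido
from pythonosc import dispatcher, osc_server

SCALES = {                                   # example note sets (MIDI note numbers)
    "pentatonic": [60, 62, 64, 67, 69],
    "major":      [60, 62, 64, 65, 67, 69, 71],
    "minor":      [60, 62, 63, 65, 67, 68, 70],
}
state = {"notes": SCALES["pentatonic"], "duration": 0.25}

def set_key(address, name):
    state["notes"] = SCALES.get(name, state["notes"])

def set_tempo(address, bpm):
    state["duration"] = 60.0 / float(bpm)    # one note per beat

def run_osc_server(port=7002):
    disp = dispatcher.Dispatcher()
    disp.map("/key", set_key)
    disp.map("/tempo", set_tempo)
    osc_server.ThreadingOSCUDPServer(("0.0.0.0", port), disp).serve_forever()

threading.Thread(target=run_osc_server, daemon=True).start()

out = mido.open_output()                     # first available MIDI output
while True:
    note = random.choice(state["notes"])
    out.send(mido.Message('note_on', note=note, velocity=80))
    time.sleep(state["duration"])
    out.send(mido.Message('note_off', note=note, velocity=0))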

Software Architecture

Since most of our existing system is built in node-red, we simply add the necessary nodes to talk to the sequencer. The OSC node takes care of formatting the message, and then we pipe it through a UDP object to the local port where the Python script is listening.


node-red Configuration

Here's what the node-red configuration looks like. The top function node divides the moisture range into "pentatonic", "major", and "minor" scales as the moisture value decreases. The tempo map function below provides an exponential scaling of the note durations, which causes a rather sharp change as you reach a "critically low" moisture value (to be fine-tuned).
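In the flow these are small JavaScript function nodes; here is the same mapping logic sketched in Python. The thresholds and tempo range are examples only - the post doesn't list the exact numbers, and they're meant to be fine-tuned anyway:

import math

def moisture_to_key(moisture):
    # Wetter (happier) soil gets a brighter scale.
    if moisture > 66:
        return "pentatonic"
    elif moisture > 33:
        return "major"
    return "minor"

def moisture_to_tempo(moisture, bpm_min=40, bpm_max=240):
    # Exponential scaling: tempo rises sharply as moisture approaches
    # the "critically low" end of the 0-100 range.
    x = max(0.0, min(1.0, moisture / 100.0))   # normalize to 0..1
    return bpm_min + (bpm_max - bpm_min) * math.exp(-5 * x)

print(moisture_to_key(80), moisture_to_tempo(80))   # pentatonic, slow
print(moisture_to_key(10), moisture_to_tempo(10))   # minor, fast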



The blue OSC nodes ("key" and "tempo") take care of the formatting of messages, and here they're sent to port 7002 on the same host where the Python sequencer is running. The entire flow is available here.
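It's also handy to poke the sequencer directly, without node-red in the loop. A few lines of python-osc do it; the /key and /tempo addresses mirror the two OSC nodes above, so adjust them if your script listens on different ones:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7002)   # host/port where the sequencer listens
client.send_message("/key", "minor")          # switch scales
client.send_message("/tempo", 180)            # speed things up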

This is what the dashboard looks like, showing the changes in "key" and tempo (in Beats Per Minute) as the moisture decreases:


Audio/Video recording to come...

Soft Synths

It is possible to do all the sound generation on the RPi itself. I have experimented with amsynth running on the Pi, and after battling a bit with ALSA configurations, managed to get the onboard soundcard to generate the actual audio as well. The universality of MIDI means you can have it any way you like!


Tuesday, May 02, 2017

Playing a MIDI keyboard in node-red

While this is probably not a typical use case for an IoT platform, it is possible to play a MIDI keyboard in node-red via the GUI dashboard using a relatively simple setup.

For this somewhat unusual exercise, you will need:

- A Raspberry Pi (you can do this on a desktop platform, but what's the fun in that?)
- A USB-MIDI capable keyboard (you can of course use a USB-MIDI adapter with an older MIDI keyboard that lacks USB)
- An install of the most recent node-red, with the dashboard. The only additional node you'll need is node-red-midi.

For further details check out the detailed writeup here.

Here's what it looks like in action. The computer monitor shows the flow as well as the dashboard UI. Excuse my rat's nest under the desk.


Some neat features:
- Works on any platform in the browser
- Allows concurrent connections, so more than one person can play with it at the same time on different devices 

Some obvious limits I can think of:
- Limited UI from node-red-dashboard for this purpose; a row of buttons is not a great interface for an instrument
- No multi-touch support
- Not tested for performance (latency, etc.)

Next step: hooking up some music to the plant moisture sensor!

Wednesday, July 06, 2016

Jeux d’orgues on iPad with Yamaha CP33

Jeux d'orgues is a neat application that contains a number of French organ samples (link here). There's also an iOS version and an Android version (called Opus 1). Using a class-compliant USB-MIDI adapter, or alternatively connecting directly to the USB-MIDI port, it is possible to play the instrument using external controllers.

Here we see it in action with my Yamaha CP-33. The neat thing is that the app supports MIDI CC/PC mappings to trigger different stops, which means I can change them using the built-in patch keys of the keyboard!
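(For anyone curious what those messages look like from the computer side rather than the keyboard's patch keys, here is a tiny mido sketch. The program and controller numbers are placeholders - the actual stop assignments depend entirely on how you map them inside the app.)

import mido

port = mido.open_output()   # or name the USB-MIDI port explicitly
port.send(mido.Message('program_change', program=3, channel=0))                # example PC
port.send(mido.Message('control_change', control=20, value=127, channel=0))    # example CC toggle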


Tuesday, June 17, 2014

Air Organ

I was fortunate enough to have access to a MIDI-enabled Casavant organ at our church. The organ is a fascinating instrument in many ways, but one in particular is the fact that it was the first instrument where the mapping between input and output could be changed on the fly, a feature that is otherwise exclusive to new digital musical instruments.

With the ever-improving Leap Motion SDK, and some work-related motivation to "yarpify" things, I got the following running after struggling with some typos in the SYSEX messages used to control the organ.


The above shows one simple mapping: X (left-right) controls the pitch, and moving the hand forward goes from no sound, to a single stop, to a second stop sounding notes a third higher. What was immediately interesting was that the digital control of the organ is extremely fast, and glissing through in this manner created runs that are basically impossible to play on a standard keyboard (well, maybe if you practiced some two-hand technique where you can time the black notes in between the white ones...). Also, it was very apparent that a simple linear X-position-to-pitch mapping is highly unnatural when you don't have the tangible feedback of a physical, rectangular keyboard.
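For the curious, the mapping itself boils down to very little code. The sketch below assumes a hand position already normalized to 0..1 (e.g. taken from a Leap Motion frame); the note output is plain mido, while the stop changes on the real organ go over organ-specific SYSEX messages that aren't reproduced here:

import mido

out = mido.open_output()            # the port connected to the organ
NOTE_LOW, NOTE_HIGH = 36, 96        # example pitch range

def x_to_note(x):
    # Linear left-right position to pitch -- the part that feels unnatural
    # without a physical keyboard under your fingers.
    return int(NOTE_LOW + x * (NOTE_HIGH - NOTE_LOW))

def z_to_stops(z):
    # Reaching forward: 0 = silent, 1 = first stop, 2 = add the stop a third up.
    return min(2, int(z * 3))

def update(x, z, _state={"note": None}):
    note = x_to_note(x) if z_to_stops(z) > 0 else None
    if note != _state["note"]:
        if _state["note"] is not None:
            out.send(mido.Message('note_off', note=_state["note"], velocity=0))
        if note is not None:
            out.send(mido.Message('note_on', note=note, velocity=100))
        _state["note"] = note
    # Engaging/releasing the stops themselves happens over organ-specific
    # SYSEX, which is not reproduced here.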