Beam Beats is an interactive and tangible rhythm sequencer that translates the geometry of beacons on the ground into rhythms and polyrhythms thanks to a rotating laser beam. This experimental MIDI instrument is about investigating self-similarities in polyrhythms, as described in this post.
Update: Here’s the video of a demo at Dorkbot at La Gaité Lyrique in Paris
Before I report more on this project with demos and videos, here are some pictures to illustrate the work currently in progress, in particular thanks to laser-cutting techniques:
The brand new laser-cut acrylic structure
The prototype laser head and retro-reflection light sensor
The Arduino to detect the light variations and convert them into MIDI notes
Very early design of beacons, each with its own light sensor
This prototype works, here’s a picture of the laser beam rotating:
I’ve been working on the Arduino source code to add the ability to send MIDI clock in order to sync Beam Beats to other MIDI devices.
Now I mostly need to settle on the design for the active beacons (sensor stands) and the passive beacons (retroreflective adhesive stands), and find out what to play with this sequencer…
On a Sunday afternoon, fifteen friends, all non-musicians except for two amateurs, took turns at the machine set, around 8 at a time, to improvise electro or hip-hop rhythmic grooves. Of course there were food and drinks as well.
The goal was to introduce music making in an attractive way: no pressure, no constraint, no reason to be shy, just have fun!
Don’t be shy
Some arrived saying “no no no, I will never sing in the mike”, but ended up grabbing the mike to record voice loops into the Kaoss Pad… Others spent time figuring out how to play something musical on machines they were touching for the first time. It was amazing how everybody got into it!
Here is a time lapse overview, unfortunately silent:
The machine set was primarily composed of an Akai MPC500 with various programs (drums, stabs, acapella chops, etc.), a Korg Kaoss Pad 3 with a Shure SM58 mike and a Korg microKorg, all MIDI clock-ed together, and all mixed into a Yamaha MG12/4 with a Boss SE-70 reverb send. There were also a 49-key Casio keyboard and a Yamaha SHS10 mini handheld keyboard, both used as MIDI controllers to drive an EMU Turbo Phatt. Various circuit-bent toys were also pre-mixed and then fed into the Kaoss Pad as well, to get the best out of them.
Every machine has its own appeal: the Kaoss Pad 3 is the most appealing at first, but it then gets more difficult to master. The microKorg, thanks to its sync-ed arpeggiator and its modulation wheel, is also rewarding very quickly. The MPC is somewhat harder, since you must be rather accurate when playing for the rhythm to sound good. The EMU Phatt is very easily rewarding, especially when using the Beats programs or other programs made of several sounds laid out over the keyboard.
Fun or accurate: this is the question
One note about loops, either on the Kaoss Pad or on the MPC: while they help improve the accuracy of the music (a loop repeats perfectly every time), they also kill a lot of the fun, since you only have to play once, record, and then you are done! On the other hand, playing the same thing repeatedly (or at least trying to) helps you warm up more and more!
Thanks all for joining us for this party, and see you for next time!
Following my ongoing work on a theory of rhythms and a corresponding physical instrument using lasers, here is a version of the same idea implemented on an Arduino: a generative sequencer. The idea is to generate rhythms, and perhaps melodies, from one rhythm seed, then use mutated copies of it to create something more interesting, all in real time using knobs and buttons.
This is not ‘yet another step sequencer’ but really a generative music device that builds a real-time MIDI loop based on an underlying theory described in a previous post.
This is work in progress, and is shown ‘as is’ for the sake of getting feedback.
Approach
The approach is to generate a “seed” of rhythm that is then copied a few times into “instances”, each instance being distorted in its own way. The controls for the human player (or programmer) are therefore all about adjusting the deformations (transforms actually) to apply to each instance, and of course defining the seed.
Eventually each instance is wired to a MIDI note, channel etc. to remote control a synthesizer or a drum machine or any MIDI setup, to generate actual sounds.
Principle: one seed, many transformed instances
The maths
Given a rhythm seed of length length, composed of pulses each of duration m, then:
for each instance k of the seed and each pulse i,
pulse(k, i) happens at time t = phase(k) + i · m · stretch(k), with t < length
where phase(k) and stretch(k) are the phase and stretch settings of instance k.
Realtime control of the sequencer is therefore all about setting the phase and stretch values for each instance, once the pulse number and the pulse duration of the seed have been globally set.
Inversely, for a given instance k, there is a pulse at time t if:
there exists an integer i ≥ 0 such that t = phase(k) + i · m · stretch(k)
i.e. i = (t - phase(k)) / (m · stretch(k)) is a non-negative integer
Thinking in MIDI ticks (24 per quarter note), in 4/4 and for 1 bar, length = 4 × 24 = 96 ticks; phase is in [-24 .. 24], stretch is in [4:1 .. 1:4] and m is in [12 .. 48], by steps of 12 ticks.
The implementation is very simple: for each instance of the seed, given its phase and stretch settings, whenever the modulo condition above is true we emit its MIDI note, with the set velocity on the set MIDI channel.
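To make the formulas concrete, here is a small Python sketch of the same pulse logic (illustrative only, not the Arduino code; positive, integer m and stretch values are assumed for simplicity):

```python
# Sketch of the pulse condition described above (illustrative, not the
# actual Arduino code). Times are in MIDI ticks, 24 per quarter note.

def pulse_times(phase, stretch, m, length):
    """All times t < length where the instance pulses: t = phase + i*m*stretch."""
    times = []
    i = 0
    while True:
        t = phase + i * m * stretch
        if t >= length:
            break
        if t >= 0:
            times.append(t)
        i += 1
    return times

def is_pulse(t, phase, stretch, m):
    """True if the instance pulses at tick t (the modulo form)."""
    num = t - phase
    return num >= 0 and num % (m * stretch) == 0

# One bar of 4/4 at 24 ticks per quarter note: length = 96.
# Seed: m = 12 ticks (8th notes), instance with no deformation:
print(pulse_times(0, 1, 12, 96))   # every 8th note: 0, 12, 24, ...
# Same seed shifted by half a beat and stretched 2x:
print(pulse_times(12, 2, 12, 96))  # 12, 36, 60, 84
```

Both forms are equivalent; the Arduino uses the modulo form because it needs no memory of previous events.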
As usual, the pace of the loop is primarily based on the value from the Tempo potentiometer.
Overview of the circuit, with the switches and the knobs
Adding some swing
8th note swing
The EMU SP-1200, early Linn machines, Roland 909, Akai MPC and many other machines are famous for their swing quantize, a feature that delays every other note by a certain amount in order to create a groovy feeling (see Swung Note).
Different machines express the swing factor in different ways; we will stick to the MPC format, expressed as a percentage from 50% (no swing, play straight) to 75% (hard shuffle).
For 8th-note swing, this swing factor is the ratio of the duration of the first 8th note to the duration of the whole pair of 8th notes, in percent.
In the Arduino code of our generative sequencer, we chose to do a fixed swing for 8th notes only.
A big constraint is that we are limited to a resolution of 24 ticks per quarter note, which is not a lot! By contrast, MPCs have a 96 ppq resolution. Because a hard swing of 70% translates into barely 2 MIDI ticks of delay at 24 ppq, doing the swing on top of the ticks would not be accurate at all!
The only solution is to vary the internal tempo before and after each 8th note. The drawback (or advantage) is that the MIDI clock being sent will also move, reflecting the same swing. Since the Swing knob value is actually between 0 and 25 (to be read as 50% to 75%), the two alternating tick periods t+ and t- are given by:
t+/- = (50 +/- swing) * t / 50
where t is the base loop period without swing
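A quick Python sketch of this formula shows that the pair of swung 8th notes still adds up to two straight 8th notes, so the average tempo is preserved (illustrative numbers):

```python
# Sketch of the swing-by-tempo-modulation idea: with a base tick period t,
# the tick period alternates between t+ and t- on each 8th note, so the
# average tempo (and the bar length) is unchanged.

def swung_periods(t, swing):
    """swing in 0..25, read as 50%..75% MPC-style swing."""
    t_plus = (50 + swing) * t / 50.0   # tick period during the long 8th
    t_minus = (50 - swing) * t / 50.0  # tick period during the short 8th
    return t_plus, t_minus

# 120 BPM: quarter note = 500 ms, 24 ticks/quarter -> base tick ~20.83 ms
base = 500.0 / 24
long_tick, short_tick = swung_periods(base, 16)  # 66% swing
print(round(long_tick, 2), round(short_tick, 2))
# the pair still sums to two straight ticks:
print(round(long_tick + short_tick, 2) == round(2 * base, 2))  # True
```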
Video Demo
Here is a video demo. There are only 3 instances, selected by switches 1, 2 and 3; the first switch selects the GLOBAL settings: step duration (quarter, 8th etc.), swing factor and tempo. Each instance can have its Phase, Stretch, MIDI note, velocity and MIDI channel set. Here I have preset the MIDI channels: instance 1 on channel 1 (the microKorg), and instances 2 and 3 on channel 2 (the MPC with a drum kit).
The goal is to build a simple beat by only adjusting the parameters.
Please note that some parts of the code are no longer used, such as the big constant arrays, and some comments are not up to date (e.g. no prime numbers anymore).
All analog inputs are wired to simple knobs. Digital inputs 8, 9, 10, 11 are the four buttons used to switch pages. Digital output 12 is the activity LED (showing when a knob is active within the knob pagination). MIDI out is on the Tx pin.
/*
* Generative rhythm sequencer, more details at: http://cyrille.martraire.com
*
* Knob mapping according to a grid 2^n . prime^m, against the MIDI 24 ticks/quarter.
*
* Knob pagination to multiplex the knobs several times with LED to show activity.
*
* Memory-less event handling thanks to maths relationships.
*
* MIDI note-on output on any of the 16 channels, and MIDI clock according to tempo.
*/
//---------- USER INPUT AND PAGINATION -----------
#define PAGE_NB 4
#define KNOB_NB 6
#define FIRST_PAGE_BUTTON 8
#define PROTECTED -1
#define ACTIVE 1
#define SYNC_LED 12
// the permanent storage of every value for every page, used by the actual music code
int pageValues[PAGE_NB][KNOB_NB];
// last read knob values
int knobsValues[KNOB_NB];
// knobs state (protected, enable...)
int knobsStates[KNOB_NB];
// current (temp) value just read
int value = 0;
// the current page id of values being edited
int currentPage = 0;
// signals the page change
boolean pageChange = false;
//temp variable to detect when the knob's value matches the stored value
boolean inSync = false;
int maxValue = 890;
int scaleLen = 15;
//int *scale = scale3; // unused legacy scale table (the constant arrays were removed)
int step = 60;
int center = 30;
int coeff = 10;
//---------- GENERATIVE MUSIC CODE ---------
unsigned int cursor = 0;
int bars = 1;
int length = bars * 4 * 24;
// INPUTS
int PHASE = 0;
int STRETCH = 1;
//int DIRECTION = 2;
int NOTE = 2;
int DURATION = 3;
int VELOCITY = 4;
int CHANNEL = 5;
// GLOBAL KNOBS (seed and global settings)
int seedDuration = 24;
int seedTimes = 8;
int instanceNb = 4;
int swing = 0;//0..25% (on top of 50%)
//
int loopPeriod = 125/6;//120BPM
int actualPeriod = loopPeriod;
//instance i
int phase = 0;
int stretch = 1;
int note = 48;
int duration = 24;
int velocity = 100;
int channel = 1;
// custom map function, with min value always 0, and out max cannot be exceeded
long mapC(long x, long in_max, long out_min, long out_max)
{
if (x > in_max) {
return out_max;
}
return x * (out_max - out_min) / in_max + out_min;
}
// for each instance, and for the given cursor, is there a pulse?
// (int parameters, since phase may be negative, see the [-24 .. 24] range above)
boolean isPulse(int phase, int stretch){
int num = cursor - phase;
int denum = seedDuration * stretch / 4;
return num >= 0 && num % denum == 0;
}
// Sends a MIDI tick (expected to be 24 ticks per quarter)
void midiClock(){
Serial.write(0xF8); // Serial.print(0xF8, BYTE) only works on pre-1.0 Arduino
}
// plays a MIDI note for one MIDI channel. Does not check that
// channel is less than 15, or that data values are less than 127:
void noteOn(char channel, char noteNb, char velo) {
midiOut(0x90 | channel, noteNb, velo);
}
// sends a raw MIDI message: Status, Data1, Data2, no check
void midiOut(char cmd, char data1, char data2) {
Serial.write(cmd); // Serial.print(x, BYTE) only works on pre-1.0 Arduino
Serial.write(data1);
Serial.write(data2);
}
// read knobs and digital switches and handle pagination
void poolInputWithPagination(){
// read page selection buttons
for(int i = FIRST_PAGE_BUTTON;i < FIRST_PAGE_BUTTON + PAGE_NB; i++){
value = digitalRead(i);
if(value == LOW){
pageChange = true;
currentPage = i - FIRST_PAGE_BUTTON;
}
}
// if page has changed then protect knobs (infrequent)
if(pageChange){
pageChange = false;
digitalWrite(SYNC_LED, LOW);
for(int i=0; i < KNOB_NB; i++){
knobsStates[i] = PROTECTED;
}
}
// read knobs values, show sync with the LED, enable knob when it matches the stored value
for(int i = 0;i < KNOB_NB; i++){
value = analogRead(i);
inSync = abs(value - pageValues[currentPage][i]) < 20;
// enable knob when it matches the stored value
if(inSync){
knobsStates[i] = ACTIVE;
}
// if knob is moving, show if it's active or not
if(abs(value - knobsValues[i]) > 5){
// if knob is active, blink LED
if(knobsStates[i] == ACTIVE){
digitalWrite(SYNC_LED, HIGH);
} else {
digitalWrite(SYNC_LED, LOW);
}
}
knobsValues[i] = value;
// if enabled then mirror the real time knob value
if(knobsStates[i] == ACTIVE){
pageValues[currentPage][i] = value;
}
}
}
In the post “Playing with laser beams to create very simple rhythms” I explained a theoretical approach that I want to materialize into an instrument. The idea is to create complex rhythms by combining several times the same rhythmic patterns, but each time with some variation compared to the original pattern.
Several possible variations (or transforms, since a variation is generated by applying a transform to the original pattern) were proposed, starting from a hypothetical rhythmic pattern “x.x.xx..”. Three linear transforms: Reverse (“..xx.x.x”), Roll (“.x.x.xx.”) and Scale 2:1 (“x...x...x.x.....”) or 1:2 (“xxx.”), and some non-linear transforms: Truncate (“xx..”) etc.
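These pattern transforms are easy to sketch in code; here is an illustrative Python version on strings of 'x' (pulse) and '.' (rest). The roll amount and the truncation length are free parameters of the transform, chosen arbitrarily here:

```python
# Sketch of the transforms discussed above, applied to rhythm patterns
# written as strings of 'x' (pulse) and '.' (rest).

def reverse(p):
    return p[::-1]

def roll(p, n=1):
    """Rotate the pattern right by n steps (the amount is a free choice)."""
    n %= len(p)
    return p[-n:] + p[:-n]

def scale_up(p):
    """Scale 2:1 -- each step becomes two steps."""
    return ''.join(c + '.' for c in p)

def scale_down(p):
    """Scale 1:2 -- keep every other step."""
    return p[::2]

def truncate(p, n):
    """Non-linear: keep only the first n steps."""
    return p[:n]

seed = "x.x.xx.."
print(reverse(seed))      # ..xx.x.x
print(scale_up(seed))     # x...x...x.x.....
print(scale_down(seed))   # xxx.
print(truncate(seed, 4))  # x.x.
```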
Geometry + Light = Tangible transforms
The very idea behind the various experiments made using laser light beams and LDR sensors is to build an instrument that proposes all the previous transforms in a tangible fashion: when you move physical object, you also change the rhythm accordingly.
Let’s consider a very narrow light beam turning just like the hands of a clock. Let’s suppose that our clock has no number written around, but we can position marks (mirrors) wherever on the clock surface. Still in the context of creating rhythms, now assume that every time a hand crosses a mark (mirror) we trigger a sound. So far we have a rhythmic clock, which is a funny instrument already. But we can do better…
Going back to our rhythmic pattern “x.x.xx..”, we can represent it with 4 mirrors positioned on the same circle. On the illustration below this original pattern is displayed in orange, each mirror shown as an X. If we now link these 4 mirrors together with some adhesive tape, we have built a physical object that represents a rhythmic pattern. The rotating red line represents the laser beam turning like the clock hand.
Illustration of how the geometry of the rhythmic clock physically represents the transforms
Now that we have a physical pattern (the original one), we can of course create copies of it (we need more mirrors and more adhesive tape). We can then position the copies elsewhere on the clock. The point is that where you put a copy defines the transform applied to it, compared with the original pattern:
If we just shift a pattern left or right while remaining on the same circle, then we are actually doing the Roll transform (changing the phase of the pattern) (example in blue on the picture)
If we reverse the adhesive tape with its mirrors glued on, then of course we also apply the Reverse transform to the pattern (example in grey on the picture)
If we move the pattern to another (concentric) circle, then we are actually applying the Scale transform, where the scaling coefficient is the fraction of the radius of the circle over the radius of the circle of the original pattern (example in green on the picture)
Therefore, simple polar geometry is enough to provide a complete set of physical operations that fully mimic the desired operations on rhythmic patterns. And since this geometry is deeply related to how the rhythm is made, the musician can understand what’s happening and how to place the mirrors to get the desired result. The system is controllable.
To apply the Truncate transform (that is not a linear transform) we can just stick a piece of black paper to hide the mirror(s) we want to mute.
If we layer the clock we just described, with one layer for each timbre to play, then changing the timbre (yet another non-linear transform) can be done by physically moving the pattern (mirrors on adhesive tape) across the layers.
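The geometric mapping above can be sketched numerically. Assuming mirrors taped at fixed arc-length positions and a beam sweeping at constant angular speed, the hit time of a mirror is its angle, i.e. its arc length divided by the circle radius, so moving the tape to a smaller circle stretches the rhythm in time (the numbers below are purely illustrative):

```python
import math

# Sketch: mirrors taped at fixed arc-length positions; moving the tape to a
# circle of a different radius changes the angular extent, i.e. the time scale.

def hit_times(arc_positions, radius, period=8.0):
    """Beam hit times (in beats) for mirrors at the given arc lengths on a
    circle of the given radius; one full revolution lasts `period` beats."""
    return [period * (s / radius) / (2 * math.pi) for s in arc_positions]

# The same tape (same arc positions) on two concentric circles:
tape = [0.0, 2.0, 4.0, 5.0]          # arbitrary arc lengths, in cm
print(hit_times(tape, radius=10.0))  # original circle
print(hit_times(tape, radius=5.0))   # half radius -> times doubled (Scale 2:1)
```

Shifting the tape along the same circle adds a constant to every hit time (Roll), and flipping it reverses the order (Reverse), matching the list above.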
From theory to practice
Second prototype, with big accuracy issues
Though appealing in principle, this geometric approach is hard to implement in a physical installation, mostly because of accuracy issues:
The rotating mirror must rotate perfectly, with no axis deviation; any angular deviation is doubled on reflection and leads to a large position deviation of the light spot at a distance: delta = distance · tan(2 · deviation angle)
Each laser module is already not very accurate: the light beam is not always perfectly aligned with the body. Fixing that would require careful tilt and pan adjustment of the laser module support
Positioning the retroreflectors in a way that is accurate yet keeps them easy to add, move or remove is not that easy; furthermore, even if in theory the retroreflectors reflect all the incoming light back to where it came from, in practice maximum reflectance happens when the light hits the reflector orthogonally, which is useful to prevent missed hits
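The deviation formula in the first point can be checked with a quick sketch (the tilt and distance values are illustrative):

```python
import math

# delta = distance * tan(2 * deviation_angle): a mirror axis error is
# doubled on reflection, then amplified by the distance to the target.

def spot_deviation(distance_m, deviation_deg):
    return distance_m * math.tan(math.radians(2 * deviation_deg))

# A mere 0.5 degree axis error, 2 m away from the rotating mirror:
print(round(spot_deviation(2.0, 0.5) * 100, 1), "cm")  # about 3.5 cm
```

At beacon scale, a few centimetres of drift is easily enough to miss a retroreflector entirely, which is why the axis accuracy matters so much.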
Don’t hesitate to check these pages for progress, and any feedback much appreciated.
Now we want to put that into practice to build a musical instrument. Let’s consider that we want to do something inspired by the theremin, but simpler to play, more fun to look at, and easier to build as well.
Requirements
Here is now a case study: We want to build an instrument that must be:
Playable by friends who know very little about music: in other words, really easy to play
Attractive for friends that enjoy the fun of playing and jamming together: must be expressive
Suitable for groovy electronic music (house, hip-hop, electro, acid-jazz, trip-hop and a bit of techno)
Able to participate within a small band among other instruments, mostly electronic machines
With a funky look and visually attractive when performing
Design and justification
The finished instrument
Based on these requirements, and after some trial and error, we go for the following:
Physical finger guidance, because it is too hard to keep the hands at the same position in the air (req. 1)
Bright LEDs plus a light sensor as the primary way of playing, for the funky look and a visually attractive performance (req. 5)
Discrete pitch as the primary way of playing, with pitch quantization against a pentatonic scale: easy and good-sounding (but tied to one fixed tonality unless configuration buttons are added) (req. 1)
Expression on the pitch: once a note is pressed, pitch bend events modulate the pitch as on a guitar or a real theremin; this only happens after a note is pressed, so as not to conflict with the primary pentatonic scale (req. 2)
Additional expression using a distinct light sensor mapped to a MIDI Continuous Controller (controller 1: modulation wheel) (req. 2)
Continuous rhythm control to start the project simply, with a plan to quantize it to 16th notes against a MIDI clock later (a tradeoff to keep things simple right now; it should be even simpler given req. 1)
MIDI controller rather than an integrated synthesizer, to allow for very good sounds generated by external professional synthesizers (req. 3)
Internal scale setup within the range C3 to C5, to dedicate the instrument to playing solos on top of an electronic rhythm (req. 4)
Arduino implementation (easy to implement pitch quantization logic and expression controls)
Construction
The very simple circuit
The Arduino board
An aluminium rail (from a DIY shop) is fixed to an empty salt box used as the body (hence the name “Salty Solo”)
The main sensor and the expression sensor are two simple LDRs connected through a voltage divider to two Arduino analog inputs
Two buttons are simply connected to two Arduino digital inputs.
The MIDI out DIN plug is connected to the Arduino Tx pin.
The rest is in the Arduino software!
I have decorated the salt box with adhesive stuff…
Playability and fixes
At first try, playing the Salty Solo is not that easy! A few problems appear:
Reaction time (or “latency”) is not good
Moving the light with the left hand is not very fast, which impedes playing a melody that sounds like one.
Also, there is a kind of conflict between the note quantization, which rounds to the closest in-scale note, and the expression control, which allows “bending” notes.
The first problem was solved by increasing the Arduino loop frequency to 100 Hz (wait period = 10 ms); to avoid sending MIDI continuous controller and pitch bend messages too often, we then only send them (when needed) once every few loops.
For the second problem, a workaround was found: the second button triggers the next note in the scale, and pressing both buttons together triggers the second next note in the scale. In the (almost) pentatonic scale we use, this usually means jumping to the third or to the fifth. This kind of jump is very common when playing guitar or bass guitar, thanks to their multiple strings, and it does help play faster while moving the left hand much less. With two buttons like this it is much easier to play melodic fragments.
The last problem was solved somewhat by accident: because of a bug in the code, the pitch bend only happens on the highest note. This preserves the most useful case, playing guitar hero on a very high note while bending it a lot; on the other notes, sliding the left hand descends or ascends the scale instead. Further playing and external opinions will probably help tweak this behaviour over time.
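The two-button jump workaround can be sketched like this (hypothetical names and an assumed C major pentatonic table; the real Arduino code may differ):

```python
# Sketch of the two-button jump workaround: the light sensor picks a base
# index in the scale, the second button jumps one scale step, and both
# buttons together jump two steps (roughly a third or a fifth, depending
# on the position in the scale).

SCALE = [48, 50, 52, 55, 57, 60, 62, 64, 67, 69, 72]  # C major pentatonic, C3..C5

def played_note(base_index, next_pressed, both_pressed):
    jump = 2 if both_pressed else (1 if next_pressed else 0)
    return SCALE[min(base_index + jump, len(SCALE) - 1)]

print(played_note(0, False, False))  # 48: the sensed note itself
print(played_note(0, True, False))   # 50: one scale step up
print(played_note(0, False, True))   # 52: two scale steps up (a third here)
```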
How can we design musical instruments that both musicians and non-musicians can play and enjoy?
In the previous part of this series, we stated that a musical instrument must provide “A way for musicians and non-musicians to make music easily: musical assistance“.
We will focus on that point in this post.
In the previous post, when discussing the various controls an instrument provides to make music, we already noted that discrete controls are easier to use, since they automatically enforce playing in tune or in rhythm; this was already a simple case of musical assistance.
The ideal instrument
Out of every possible arrangement of sounds, very few can be considered music. Therefore, a tool that could make every possible musical sound and no non-musical sound would be the ideal instrument.
Such an instrument would have to be very smart and understand what music is. It would have to be playable as well. This instrument probably cannot exist, but designing an instrument is about trying to go there.
Empower the tools so that they can empower the user: towards instruments that always play in tune
The more we can understand what music is, and more specifically what it is not, the more we can embed that understanding into the instrument so that it can assist the player by preventing non-musical events. Preventing non-musical sounds helps the non-musician, while losing no freedom (or very little) for the expert musician.
To illustrate that approach, let us consider a simplified musical system made of an infinity of notes. If we know that we want to play Western music, we know we can reduce our note set down to 12; if we know we want to play in the Blues genre, then we know we will only use 6 notes out of this 12-note set; if we know that our song is in A minor, then we know exactly which 6 notes we need. Going further, if we know we want to play a walking bass line, we might end up with only 3 playable notes: the task of playing has become much simpler!
This idea already exists de facto in the form of the “black keys only” trick: the subset of black keys on the piano keyboard forms a pentatonic scale (a scale with only 5 notes) that sounds beautiful in whatever combination you play:
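This "reduce the note set" idea is exactly what pitch quantization does, for instance in the Salty Solo described earlier. A minimal sketch, assuming a major pentatonic scale and one common MIDI numbering where C3 = 48 (the scale table and sensor range are illustrative, not the instrument's exact code):

```python
# Sketch of quantizing a continuous sensor value to a pentatonic scale, in
# the spirit of the "black keys only" trick: every reachable note is in tune.

PENTATONIC = [0, 2, 4, 7, 9]  # major pentatonic degrees, in semitones

def sensor_to_note(value, v_max=1023, root=48, octaves=2):
    """Map a raw sensor reading (0..v_max) to a note of a pentatonic scale
    spanning `octaves` octaves above `root` (a MIDI note number)."""
    steps = [root + 12 * o + d for o in range(octaves) for d in PENTATONIC]
    steps.append(root + 12 * octaves)  # top note closing the range
    index = value * (len(steps) - 1) // v_max
    return steps[index]

print(sensor_to_note(0))     # 48 (C3)
print(sensor_to_note(1023))  # 72 (C5)
```

Whatever the sensor does, only the 11 in-scale notes can come out: the non-musical notes have been designed away.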
With this approach in mind, let’s now have a look at what music is, and more specifically what are its constraints on how to arrange sounds together.
Musical constraints
Music aims at providing aesthetic pleasure by means of organised sounds. This typically imposes constraints on what can be played.
Music (more precisely, musical pleasure) happens when enough surprise meets enough expectation at the peak of the Wundt curve (shown at left): expectation is typically caused by conventional constraints (e.g. western music conventions) and by the internal consistency of the piece of music, which enable the listener to anticipate what will happen next; on the other hand, surprise is caused by small deviations from these expectations that create interesting tension.
In short, too much expectation means boring music, whereas too much surprise means just noise.
Musical constraints
In almost every popular genre of music, conventional constraints require adherence to a melodic scale (e.g. major or minor), to a rhythmic meter (e.g. 4/4 or 6/8), and almost always to a tonality. These constraints form a “background grid” on top of which a given piece of music draws a specific motif.
Layers of constraints
On top of these conventional constraints, a piece of music also exhibits an internal consistency. This often happens through some form of repetition between its component parts (think of the fugue, the verse-chorus-verse-chorus structure, or the obvious theme repetition in classical music). These constraints are usually written into the musical score, or into the simpler theme and chord chart in jazz notation.
Performance freedom
Musical performance, for instance on a concert stage, is therefore all about the freedom remaining within these constraints. When playing a fully written score, this freedom is reduced to the articulations between the notes, playing louder or softer, or a little faster or slower. In jazz the freedom is greater, as long as you more or less follow the scale, tonality, chords and rhythm of the song: this is called improvisation. Improvisation on the saxophone or most classical instruments requires being trained to use only the subset of notes suited to each song (“practice your scales”).
Of course, if a player uses his available freedom excessively, he runs the risk of losing the audience, and listeners might consider it noise, not music. On the other hand, if he plays too perfectly or “mechanically”, they might get bored.
Layers of constraints
We can consider the above constraints as layers stacked on top of each other (see picture): at the top, the category “pleasant music” defines aesthetic constraints (if one wants to play unpleasant music then the freedom is greater and there are fewer constraints). Below this layer, western music brings its constraints (e.g. meter, tempered scale, tonality); then each genre adds its own (tempo range, timbres, rhythmic patterns etc.); and at the bottom layer, each piece of music finally defines the most restrictive constraints, in the form of the written score, chord chart etc.
For a very deep and rigorous exposé on this topic, I recommend the excellent book How Music REALLY Works, which is full of precious insights.
Applications
The more we can embody these constraints into the instrument, the easier it will be to play. As an example, if our constraints impose that we can only play 6 different notes, an instrument that lets us play 12 different notes is not helpful: we run the risk of playing the 6 notes that are “out of tune”! The ideal instrument should only offer the right 6 notes.
Harmonic table keyboard
The new C-Thru Harmonic Table keyboard (USB)
If we want to play chords, or arpeggios, the usual piano keyboard layout is not the most straightforward one, because you have to learn the fingering for each chord.
For that particular purpose, the keyboard can be improved, and people have done just that: it is called the Harmonic Table layout.
Here is an article that explains more about it, and here is the website (also with explanations) of the company C-Thru that is launching such a commercial keyboard at the moment.
The beauty of this keyboard is that every chord of the same type has the very same fingering, as opposed to the piano keyboard:
Same fingering for every chord of the same type (picture from C-Thru)
Auto-accompaniment
In some musical genres, such as salsa, waltz, bossa-nova etc., there is a very strong rhythmic pattern that constrains the song very much, especially for the accompanying instruments.
Manufacturers have long embedded these constraints in the form of an auto-accompaniment feature built into popular electronic keyboards. The rhythmic patterns for many genres are stored in memory, and when you play a chord on the left part of the keyboard, the chord is played not as your fingers hit the keys but according to the chosen rhythmic pattern. This enables many beginners and amateur musicians to play complex accompaniments and melodies easily. The same system can also play predefined bass lines, also stored in memory.
Going further, some electronic keyboards also add shortcuts for the most common chord types: you only have to press one note to trigger the major triad chord, and if you want the minor triad chord you just press the next black key to trigger it, etc. This is called “auto chord” on my almost-toy Yamaha SHS10.
A Yamaha electronic keyboard with auto-accompaniment and auto-chord features
However, this kind of accompaniment does restrict the music space severely, and the result therefore becomes boring very quickly. Auto accompaniment is now considered “infamous”, extremely “kitsch”, and very amateur-sounding. But just because this idea has been pushed too far does not mean it is a bad idea in itself, or forever…
Samplers and Digital Audio Workstations
Though classical instruments play single sounds (usually single notes or single chords), samplers, loopers and groove boxes can trigger full musical phrases at the touch of a button. These in turn can be played in rhythm, or quantized, as described in the previous post. Here the idea is to have musical constraints already built into musical building blocks, waiting to be used together.
Going further, DJs play complete records in the rhythm of the previous record (beatmatching), and increasingly take care of the records’ harmony (harmonic mixing): they actually build a long-form musical piece, with a dramatic progression from the opening up to a climax, rests etc. In this respect such a DJ mixing session or “playlist” can be compared to a symphony, except that the DJ uses full ready-made records instead of writing raw musical lines for each instrument of the orchestra.
Though not really “live” instruments, recent software applications for wannabe music producers such as Garage Band or Magix Music Maker, and to some extent many more professional Digital Audio Workstations (DAWs), have taken the approach of providing huge libraries of ready-made music chunks. From full drum loops to 2-bar riffs of guitar, brass, piano or synthesizer, to complete synth melodies and even pieces of vocals, you can create music without playing any music at all in the first place.
A very important aspect is that these chunks are carefully prepared to fit together (same key or automatically adjustable key, same tempo or automatically adjustable tempo, pre-mastered), so that whatever combination of chunks you choose, it cannot sound bad!
Again, we can consider that this latter approach embeds the knowledge from music theory that a song has one tempo, which must be shared by every musical phrase, and one tonality, which must also be shared by every musical phrase.
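The "automatically adjustable tempo and key" mentioned above boils down to two ratios. Here is a hedged sketch of just the arithmetic (the actual time-stretching and pitch-shifting are left to the audio engine; the function name is mine):

```python
# Sketch: how much a ready-made loop must be stretched and transposed to fit
# a song -- the arithmetic behind "automatically adjustable" loops.

def fit_loop(loop_bpm, loop_key, song_bpm, song_key):
    """Return (time stretch ratio, transposition in semitones).
    Keys are pitch classes 0..11 (C = 0); transpose along the shortest path."""
    stretch = song_bpm / loop_bpm
    semitones = (song_key - loop_key + 6) % 12 - 6  # shortest interval, -6..+5
    return stretch, semitones

# A 100 BPM loop in G (7) dropped into a 120 BPM song in C (0):
print(fit_loop(100, 7, 120, 0))  # (1.2, 5)
```

Because every chunk is run through the same two adjustments, any pair of chunks ends up sharing the song's tempo and tonality, which is precisely the music-theory constraint being enforced.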
When creating a new project you must set the tempo and tonality for the whole song:
Startup settings for new project in Garage Band
Then you can browse the available loops and musical fragments; whenever you choose one, it is adjusted to fit the song’s tempo and key.
Garage Band Loops browser
Just like auto accompaniment, this idea results in good-sounding but often uninspired music (“Love in This Club” by Usher, a US number one hit, was however produced using three ready-made loops from Logic Pro 8, which can be considered the very professional version of Garage Band, as shown here). Again, this approach enables a lot of people to have fun making music easily, with a good reward.
Conclusion
Instruments that are more musically aware can use their knowledge of music theory to assist human players more: they can prevent hitting a note that would be out of tune, correct the timing, enforce the key, the tempo etc.
Instruments that trigger bigger chunks of music such as loops and ready-made musical phrases (e.g. samplers, groove boxes etc.) can be considered the most attractive for non-musicians. Playing a musical chunk or a record yields instant gratification, more than most other instruments do; however, making something really musical out of it still requires training and practice.
The key issue is therefore to find a balance between the amount of knowledge the instrument must embed and the remaining freedom to express emotions and to try unconventional ideas, in other words to be really creative. The problem with musical constraints is that even if they apply almost always, there is always a case where we must violate them!
A note on visual feedback
Just like cyclists do not look at the bicycle when they ride, musicians hardly look at their instrument when they play. Instead they look at the conductor, at their score, or at their band mates to share visual hints of what is going to happen next. When being taught a musical instrument, not looking at one's fingers is one of the first pieces of advice you are given.
Instruments have to provide feedback, and almost always this feedback is what you hear.
However, in a live concert or performance, people in the audience really expect to see something happening, hence there is a clear need for a visual show.
Visual excitement for the audience happens when:
We can see the instruments: the view of an instrument can already be exciting
There is physical involvement from the player(s): they must do something and be busy enough doing visible things, gestures, attitude, dancing etc. On the other hand, a guy in front of a computer, clicking the mouse from time to time, is not very attractive.
There is enough correlation between the gestures and attitude of the player and the music. If you can see the singer's lips making the sound you hear, the pianist's shoulders moving as he reaches for the higher notes you also hear, the drummer hitting the drums you hear, then the visual stimuli and the music combine to create additional excitement.
The player is having fun: having fun on stage is a great way to excite an audience!
How can we design musical instruments that both musicians and non-musicians can play and enjoy?
“A musical instrument is an object constructed or used for the purpose of making music. In principle, anything that produces sound can serve as a musical instrument.” (Wikipedia)
What is a musical instrument?
In practice, hardly anybody calls a hammer a musical instrument, and hardly anybody considers an iPod a musical instrument either. These two objects are each missing something essential: the hammer lacks a way to assist the player in making musical sounds, whereas the iPod lacks real-time control to shape and alter the music.
Intuitively, musical instruments are tools to make music that are convenient for that purpose, and that provide enough live control to the human player.
Claves, perhaps the simplest instrument
Therefore, to design a musical instrument, we need to address:
A way to generate sounds (every way to originate sounds is classified here); we won’t discuss that point in this post
A way for musicians and non-musicians to make music easily, musical assistance (we will discuss this point partially in this post)
A way for musicians and non-musicians to control the music being made: plenty of control (we will discuss this point in detail in this post)
For non-musicians, it is obvious we have to address the second point (musical assistance) more, whereas for musicians the third point (plenty of control) will probably be the most important one, otherwise they will be frustrated.
Of course the latter two points conflict, so the design of a musical instrument will require finding a balance between them.
Musical controls: Playability Vs. Expressiveness
“Playability is a term in video gaming jargon that is used to describe the ease by which the game can be played” (Wikipedia).
Here we will say an instrument has a good playability if it is easy to play, by musicians and especially by non-musicians.
Music is made of sounds that share essential attributes (the list is far from complete):
Pitch: this represents how “high” or “low” a sound is. Women typically sing at a higher pitch than men
Rhythm: this represents how the sounds are distributed over the metrical grid, in other words the pattern of sounds and silences
Articulation: this represents the transition or continuity between multiple notes or sounds, e.g. linked notes (legato) or well separated notes
Tempo: this represents the overall “speed” of the music
Dynamics: this represents how “loud” or “soft” a sound is
Timbre: this is the “colour” of the sound, as in “a piano has a different timbre than a saxophone”; we can sometimes talk about “brighter” or “duller” timbres, etc.
Special effects and modulations: electronic effects are endless, and so are modulations; the best-known modulations are the singer's vibrato, or the special way to “slide” into the start of notes on the saxophone.
Pitch and rhythm are by far the primary controls, and the playability of an instrument is strongly linked to them: an instrument that is difficult to control in pitch or rhythm will surely be particularly difficult to play overall.
On top of pitch and rhythm, every musician, experienced or not, demands enough other controls to be as expressive as possible.
Continuous pitch Vs. discrete pitch
Continuous pitch instruments can produce sounds of any pitch (lower or higher note) continuously, such as the violin, cello, theremin or the human voice (and also the fretless bass guitar).
Discrete pitch instruments include the piano, flute, trumpet and the usual guitar: every instrument with a keyboard, frets, valves, or explicit finger positions.
Continuous pitch instrument by Feromil, particularly hard to play
Because discrete pitch instruments automatically enforce that every note played belongs to at least one scale (usually the chromatic scale), they are considered easier to learn and to play than continuous pitch instruments, where the player must find the notes using his/her ears alone (although after deep training this becomes instant habit).
Discrete pitch instruments have a better playability than continuous pitch instruments.
The chromatic scale contains 12 notes, of which the (main) tonality of a song usually uses only 5 to 7 (diatonic, pentatonic or blues scales). It is usually up to the player to practice intensively to be able to play in any such sub-scale without thinking about it, but one could imagine a simpler-to-play instrument that could be configured to offer only the notes that fit within a song (to be rigorous we should then deal with the problem of modulation, but that would take us too far).
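The idea of an instrument restricted to the song's scale is easy to sketch in code. Assuming MIDI note numbers, the hypothetical helper below snaps any requested note to the nearest note of a configured scale, so playing out of key becomes impossible:

```python
# Sketch of a "configurable scale" instrument: only notes belonging to the
# song's scale can sound. Scale tables and function names are illustrative.

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # diatonic major scale, as semitone offsets
MINOR_PENTA = [0, 3, 5, 7, 10]   # minor pentatonic scale

def snap_to_scale(note, root=60, scale=MAJOR):
    """Snap a MIDI note number to the nearest note of the given scale."""
    allowed = [n for n in range(128) if (n - root) % 12 in scale]
    return min(allowed, key=lambda n: abs(n - note))

# With the root at middle C (60), C# (61) is corrected to a scale note:
print(snap_to_scale(60))  # already in the scale, unchanged
print(snap_to_scale(61))  # snapped to a neighbouring white key
```

A real instrument would also need the modulation handling mentioned above, but the principle, a filter between the player's gesture and the sound, stays the same.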
In contemporary music, instruments are sometimes expected to support microtonal scales, i.e. much more than 12 notes.
The great advantage of continuous pitch instruments is that they offer extreme control over the pitch and its variations: one can create a vibrato on a violin by quickly wobbling the finger on the fingerboard, or create a portamento by sliding slowly between notes. The counterpart of that control is the huge training effort required.
Some instruments with a fixed pitch (simple drums, triangle, claves, wood sticks etc.) are obviously the easiest to use with respect to the pitch.
Playing in rhythm Vs. quantized playback
Most classical instruments require you to trigger the sound at the right time with no help: you have to feel the rhythm, and then you must be accurate enough with your hands to trigger the sound (hit the drum, bow the string etc.) exactly when you want it. Again, this often requires training. By analogy with pitch, we can consider this a continuous control of the rhythm.
Playing the pads on the MPC to trigger sounds or to record a rhythmic pattern
Electronic groove boxes (the Akai MPC in particular) and electronic sequencers do take care of triggering sounds accurately on behalf of the musician, thanks to their quantization mechanism: you play once while the machine is recording, the machine adjusts what you played against a metrical grid, and then repeats the “perfect” rhythm forever. We can consider this a discrete control of the rhythm.
Note that the grid does not have to be as rigid as one might imagine: it can accommodate very strong swing, and quantization can also be applied partially, fixing the timing a little while keeping some of the subtle human inaccuracy that can be desirable.
Discrete rhythm instruments (quantized rhythm) have a better playability than continuous rhythm instruments.
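The quantization just described, including the partial variant, fits in a few lines. This is a minimal sketch of the principle, not Akai's actual algorithm; the grid size and `strength` parameter names are my own:

```python
# MPC-style quantization sketch: recorded note times (in beats) are pulled
# toward a grid. strength=1.0 snaps fully; lower values keep some of the
# human timing. Illustrative only, not any vendor's implementation.

def quantize(times, grid=0.25, strength=1.0):
    """Pull each time toward the nearest multiple of `grid` (a 16th note
    when grid=0.25 beats)."""
    quantized = []
    for t in times:
        target = round(t / grid) * grid          # nearest grid line
        quantized.append(t + (target - t) * strength)
    return quantized

played = [0.02, 0.27, 0.49, 0.77]  # slightly off a 16th-note grid
print(quantize(played))                      # fully snapped: on the grid
print(quantize(played, strength=0.5))        # halfway: "humanized" feel
```

Swing can be layered on top by offsetting every other grid line, which is exactly how the non-rigid grids mentioned above work.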
Talking about rhythm, the theremin is a bit special since notes are hardly triggered at all; instead one just controls the sound intensity (dynamics) with the loop antenna on the left. The rhythm is more than continuous…
Realtime quantization can only adjust notes later in time. One could imagine an instrument where notes are only triggered when two conditions are met: the player requests a note, and a machine-generated rhythmic grid triggers a pulse. This would be a form of realtime quantization, which would make the instrument more accessible to non-musicians, especially if their sense of rhythm is poor.
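The two-condition idea can be sketched as a tiny gate: a note request is held until the next machine pulse, so the player can never sound off the grid. The class below is an illustrative event loop, not any real instrument's firmware:

```python
# "Two conditions" gating sketch: a note fires only when the player has
# requested it AND the rhythmic grid emits a pulse. Names are illustrative.

class GatedTrigger:
    def __init__(self):
        self.pending = None          # note requested, waiting for a pulse

    def press(self, note):
        """Condition 1: the player asks for a note (possibly off-beat)."""
        self.pending = note

    def pulse(self):
        """Condition 2: the grid ticks. Fire any pending note, exactly
        on time. Returns None if nothing was requested since last pulse."""
        note, self.pending = self.pending, None
        return note

gate = GatedTrigger()
gate.press(60)           # player presses slightly early...
print(gate.pulse())      # ...the note fires exactly on the pulse: 60
print(gate.pulse())      # no request since then: None
```

In a real instrument the pulses would come from a MIDI clock or internal timer; the gate itself is all the "quantization" needed.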
A vintage analogue step sequencer is an electronic device with, for each pulse of a fixed rhythmic grid, a button to decide whether to sound or not and a knob to adjust the pitch of the sound. Each column of buttons and knobs is scanned in turn to trigger sounds on each beat. Playing such a sequencer does not require timing accuracy (it is a discrete rhythm instrument) but does require accuracy in setting each pitch (because each pitch is set with a rotary knob, it is a continuous pitch instrument).
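That scanning behaviour is easy to model: a list of (gate, pitch) pairs cycled through, one step per beat. The pattern values below are made up for illustration:

```python
# Toy model of an analogue step sequencer: each step has an on/off gate
# (button) and a pitch (knob); steps are scanned in turn, one per beat.

import itertools

steps = [  # (gate, pitch) for an 8-step pattern -- illustrative values
    (True, 60), (False, 0), (True, 63), (True, 65),
    (True, 60), (False, 0), (True, 67), (False, 0),
]

def scan(steps, beats):
    """Yield the pitch to trigger on each beat (None = silence),
    wrapping around the pattern forever."""
    for gate, pitch in itertools.islice(itertools.cycle(steps), beats):
        yield pitch if gate else None

print(list(scan(steps, 10)))   # the pattern wraps around after 8 steps
```

Note that the rhythm is entirely decided by the gates (discrete), while in hardware each pitch would be a continuously variable voltage from the knob, matching the classification above.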
To sum up this analysis of pitch and rhythm controls as either continuous or discrete (or absent), the simple chart above shows various instruments sorted by their continuous or discrete pitch and rhythm controls. It reads like this: “The violin is a continuous pitch and continuous rhythm instrument; a groove box has no control over pitch and is a discrete rhythm instrument”.
The more we move toward the upper right corner, the more difficult the instrument usually is (and probably the more expressive as well). The more we move down toward the bottom left corner, the easier the instrument is (and probably the more frustrating as well). We have therefore defined a criterion to estimate the playability (or, the other way round, the expressiveness) of an instrument.
Guitar Hero has been put in the chart, in the bottom left corner, even though it is not an instrument (as judged by a U.S. court), since the actions on the fake guitar do not control any sound at all; they only serve to increase one's score.
Expressiveness through other controls
Strange instrument by Jean-François Laporte (Can)
As listed above, playing music makes use of many controls in addition to pitch and rhythm.
Many classical instruments that rely on physical principles (strings, air pressure…) provide the musician with a lot of natural controls: changes in the way a sax player breathes, hitting a drum in the centre or near the rim, sliding changes in pitch on a string etc.
Electronic instruments used to be rather poor in the expressive potential they provided; however, there is now a huge range of devices that help modulate the sound in realtime: pedals, breath controllers, knobs and sliders, keyboard aftertouch (pressure sensitivity after the key has been pressed).
Various modulation wheels or devices for electronic keyboards from various manufacturers
Typical modulation controls for electronic keyboards handle the pitch bend (a continuous modulation of the pitch, as an exception to the discrete pitches of the keyboard) and the vibrato (a fast, vibrating modulation of the pitch, again as an exception to the discrete pitches of the keyboard). They are often wheels, and sometimes joysticks or even touchpads.
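Under the hood these wheels usually speak MIDI: the pitch-bend message carries a 14-bit value (0 to 16383, centre 8192) split into two 7-bit data bytes after a status byte. A small helper, with an illustrative name, shows the encoding:

```python
# Building a standard MIDI pitch-bend message (status 0xE0 + channel,
# then LSB and MSB of a 14-bit value centred on 8192).

def pitch_bend_message(bend, channel=0):
    """bend in [-8192, 8191]; 0 means the wheel is at rest. channel is 0-15."""
    value = bend + 8192              # shift into the 0..16383 range
    lsb = value & 0x7F               # low 7 bits
    msb = (value >> 7) & 0x7F        # high 7 bits
    return bytes([0xE0 | channel, lsb, msb])

print(pitch_bend_message(0).hex())      # wheel centred: "e00040"
print(pitch_bend_message(8191).hex())   # wheel fully up: "e07f7f"
```

How many semitones the full range covers is up to the receiving synthesizer (commonly ±2), which is why the same wheel can feel very different from one instrument to another.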
A turntable can be considered an instrument, albeit a very limited one, if it provides a way to play a record at a faster or slower pace, hence controlling the tempo.
Conclusion
Non-musicians prefer instruments with less control, or with discrete controls that are easier to get right, so that they can have fun without the risk of being out of tune or out of rhythm. We have examined this trade-off for the primary musical controls, pitch and rhythm, including a way to estimate their playability.
Homemade breakbeat controller by Michael Carter aka Preshish Moments (USA)
However, when non-musicians become really interested in getting closer to music, they do not want a one-button-does-it-all music box such as Beamz. Just like professional musicians, they want enough control to express emotions through the instrument.
At the other end, being both a continuous pitch and continuous intensity instrument, the theremin is definitely one of the most difficult instruments to play: “Easy to learn but notoriously difficult to master, theremin performance presents two challenges: reliable control of the instrument’s pitch with no guidance (no keys, valves, frets, or finger-board positions), and minimizing undesired portamento that is inherent in the instrument’s microtonal design” (Wikipedia).
This series of posts follows a recent and very interesting discussion with Uros Petrevski. Many pictures were shot at the Festival Octopus in Feb 2009.