Kandinsky said he conceived his works in musical terms: his paintings were like symphonies in which each element of the composition – each line, shape and colour – responded to criteria of sound composition. Yellow is a high-pitched sound, lines are organized into rhythmic clusters, the edges of triangles generate crystalline timbres, circles sound low…
In his words: “A painter who finds no satisfaction in mere representation, however artistic, in his longing to express his inner life, cannot but envy the ease with which music, the most non-material of the arts today, achieves this end. He naturally seeks to apply the methods of music to his own art.”
This “synesthetic” approach to painting has its mirror image in much of the music that – driven by the 20th-century avant-gardes, and especially by the emergence of abstract art – has sought alternative systems of musical writing outside the classical code of the staff and traditional notation. We are talking about graphic notation, piano rolls, sonograms…
These series of drawings – generated through computer code and plotted with analogue tools – are the starting point for the works that follow. How can we transform drawing, whether generative or manual, into musical composition?
But how can we establish a graphic code that allows us to represent music?
To answer this question, we first need to understand the nature of music. A “classical definition” tells us that “music is the art that, by ordering sounds over time, produces an aesthetic and/or emotional effect on the listener.” From here, we could discuss the role that harmony, rhythm, and timbre play, since the most cutting-edge currents of music set aside the systems of tonality and periodic pulsation and embrace noise as a timbral material with which to build musical discourse. So that no one gets upset, let us settle on the idea of music as the art of ordering sounds: a geometry of time.
From this geometric perspective, it becomes easier to start answering the problem of graphic musical representation. At the very least, the concepts of rhythm and duration seem simple to solve. If we encode time from left to right on a piece of paper – as we do with writing – so that what happens “before” in time sits to the left of the paper and what happens “after” sits to the right, we already have the canvas ready for rhythmic writing.
Now, we draw a sequence of shapes on the paper and read them from left to right: wider shapes will generate longer sounds, while narrower shapes generate shorter ones; greater density – more shapes with less distance between them – will create faster, more compact rhythms. Easy, right?
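By way of illustration, here is a minimal Python sketch of this left-to-right reading – the `shapes` list and the `TIME_SCALE` constant are hypothetical choices of ours, not part of any of the tools described below – mapping each shape’s horizontal position to an onset and its width to a duration:

```python
# Minimal sketch: reading shapes on a page as rhythm.
# Each shape is (x, width) in arbitrary page units; time flows left to right.
# TIME_SCALE (seconds per page unit) is our own hypothetical choice.

TIME_SCALE = 0.01  # 1 page unit = 10 ms

shapes = [(10, 5), (30, 20), (55, 5), (62, 5), (69, 5)]  # (x, width)

events = [(x * TIME_SCALE, w * TIME_SCALE) for x, w in shapes]

for onset, duration in events:
    print(f"sound at {onset:.2f}s, lasting {duration:.2f}s")
```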
Sonògraf is an open-source audiovisual instrument designed for music and visual arts education in elementary schools. Based on the concept of image sonification, it lets boys and girls transform their drawings into music. Sonògraf was awarded a Guthman prize, the most prestigious international award for innovation in new musical instruments.
However, we still need to solve a couple of problems. Let’s start with the easiest one: pitch.
When we talk about pitch, we refer to how high or low a sound is. For example, the song of birds or mosquitoes is high-pitched, while the sound of a tractor’s engine or a double bass is low. Whether a sound is high- or low-pitched depends on “frequency” – the speed of the air’s vibration. Remember that the mosquito’s high-pitched sound comes from the rapid beating of its wings or, in other words, from how many times per unit of time this beating generates waves of compression and decompression in the surrounding air. The same principle applies to guitar strings: the low (bass) strings complete fewer vibration cycles per unit of time than the high-pitched ones, so the resulting atmospheric pressure waves occur more slowly or more quickly.
In the end, pitch is all about speed. The faster, the higher.
And pay attention, because if we think about it for a moment, we will quickly see that rhythm and pitch form a single continuum. A sufficiently fast rhythm turns into pitch and, vice versa, a very low pitch dissolves into rhythm: around twenty pulses per second, our hearing stops counting separate events and starts perceiving a continuous tone. That is the case with a sports car’s or a Harley-Davidson’s engine accelerating from zero: pop pop p p p p prprprprpp pprrrrruuuummmn.
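One way to hear this continuum for yourself is to synthesize a train of clicks whose rate sweeps from a few pulses per second up into the audible pitch range. The sketch below is ours – the sweep range and the output filename are arbitrary – and uses numpy plus the standard wave module:

```python
# Sketch: a click train accelerating from rhythm (~2 Hz) into pitch (~200 Hz).
import numpy as np
import wave

SR = 44100
duration = 8.0
t = np.arange(int(SR * duration)) / SR

# Click rate rises exponentially from 2 Hz to 200 Hz over the sweep.
rate = 2.0 * (100.0 ** (t / duration))
phase = np.cumsum(rate) / SR  # integrated rate = number of elapsed cycles
clicks = (np.diff(np.floor(phase), prepend=0.0) > 0).astype(np.float32)

with wave.open("sweep.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((clicks * 32767 * 0.5).astype(np.int16).tobytes())
```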
The scientific convention – as arbitrary as any other human convention – proposes that frequency (which academia measures in “hertz”, or cycles per second) be represented on the Y axis of a graph: lower frequencies below, higher frequencies above. Well, why not?
We could take alternative paths that would also make sense, such as using colour to identify pitch (and, in fact, we could spin a whole scientific circumlocution around the parallels between the vibrational frequency of the air and the electromagnetic spectrum to justify it). But for now, let’s keep the previous solution. Low down, high up, indeed.
And back to our piece of paper: note that the previous rhythmic figures can also be organized in pitch space by placing them higher or lower on the page. Hey, this is starting to resemble music!
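In a sketch, mapping the vertical axis to frequency could look like the function below. The exponential curve is our assumption (it makes equal vertical distances correspond to equal musical intervals), and the page height and frequency range are arbitrary choices:

```python
# Sketch: vertical position on the page -> frequency.
# y = 0 is the bottom of the page (low); y = PAGE_HEIGHT is the top (high).
PAGE_HEIGHT = 100.0
F_LOW, F_HIGH = 55.0, 3520.0  # arbitrary range: A1 to A7

def y_to_freq(y: float) -> float:
    """Exponential mapping: equal steps in y give equal musical intervals."""
    return F_LOW * (F_HIGH / F_LOW) ** (y / PAGE_HEIGHT)

print(y_to_freq(0.0))    # 55.0 Hz (bottom)
print(y_to_freq(50.0))   # 440.0 Hz (middle)
print(y_to_freq(100.0))  # 3520.0 Hz (top)
```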
Clash/Blend is an audiovisual installation for video projection and hyperorgan. It premiered at the Chapel of Reconciliation in Berlin during the Aggregate Festival, feeding the chapel’s pipe organ with a stream of random, real-time-generated graphic scores.
Now, we must deal with dynamics: volume. How do we assign loudness to the interpretation of a sound? Again, there are infinite possible solutions, all of them somewhat arbitrary, like any representation of reality other than reality itself (as with Borges’ map). Our proposal – because we believe it is the most intuitive solution within the arbitrariness – is to use gradients to represent volume. The more a shape contrasts with the background, the louder it sounds: if our paper is white, black would be the loudest possible sound, white would be – obviously – silence, and the different gradations of grey would provide the expressive nuances of volume, from pianississimo to fortississimo. A figure painted with a white-to-black gradient would produce a crescendo: from absolute silence to the maximum possible loudness. We also appreciate alternative systems that may read more directly depending on the context, such as encoding volume in the thickness of the shapes.
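A minimal sketch of this gradient convention, assuming a white page, a grey level normalized between 0 (white) and 1 (black), and an arbitrary −60 dB silence floor of our own choosing:

```python
# Sketch: grey level -> loudness.
# 0.0 = white (the paper, silence); 1.0 = black (maximum volume).

SILENCE_DB = -60.0  # our arbitrary floor: quieter than this reads as silence

def grey_to_amplitude(grey: float) -> float:
    """Map contrast with the (white) background to a linear amplitude."""
    if grey <= 0.0:
        return 0.0                  # white: silence
    db = SILENCE_DB * (1.0 - grey)  # black (1.0) -> 0 dB; near-white -> -60 dB
    return 10.0 ** (db / 20.0)

# A white-to-black gradient becomes a crescendo:
for grey in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"grey {grey:.2f} -> amplitude {grey_to_amplitude(grey):.4f}")
```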
But for now, let’s keep the gradients and return to our piece of paper. In addition to rhythm and pitch, we can now add dynamic expression: crescendos, diminuendos, accents, fortepianos… We’re doing well.
The installation ppff (pianissimo/fortissimo) transforms a grand piano into an interpreter of an ever-changing graphic music score.
Finally, we must resolve timbre. Again, we face thousands of possibilities, because this is a free, useless, artistic, arbitrary exercise. We set aside our vital frustration, fuelled by late turbo-capitalism, and take a moment of our existence to solve the issue. As usual, we start by defining “timbre”.
When we talk about “timbre”, we mean the character of a sound (whether produced by an instrument, a voice, an object, an animal…). What allows us to differentiate one sound from another beyond its pitch, its dynamics, or its internal rhythm? What makes a C# on a piano different from a C# on a guitar?
The explanation is simple yet complicated. Reality is complex; materials vibrate in whimsical ways, and the wing of a mosquito does not generate the same vibrational pattern as the string of a harpsichord. Similarly, the geometric complexity of the atmospheric pressure waves caused by the sound of a clarinet is different from that of a cello.
Our hearing, in its glorious sophistication, allows us to perceive all these nuances and – fortunately – differentiate between a fart and a cello, even when they are tuned to the same note.
A scientific analysis of the issue would conclude that sounds are made up of multiple simultaneous frequencies, and that the characteristic spectral fingerprint of each sound lies in the number and magnitude of this set of frequencies.
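In code, this view of timbre is simply additive synthesis: summing sine waves of different frequencies and magnitudes. A minimal sketch follows; the partial amplitudes are invented for illustration, not measured from any real instrument:

```python
# Sketch: timbre as a sum of simultaneous frequencies (additive synthesis).
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second

def partials_to_sound(f0: float, magnitudes: list[float]) -> np.ndarray:
    """Sum harmonics of f0; magnitudes[k] weights the (k+1)-th partial."""
    out = np.zeros_like(t)
    for k, mag in enumerate(magnitudes):
        out += mag * np.sin(2 * np.pi * f0 * (k + 1) * t)
    return out / max(1.0, sum(magnitudes))  # rough normalization

# Two invented spectra on the same pitch: same note, different character.
bright = partials_to_sound(220.0, [1.0, 0.8, 0.7, 0.6, 0.5, 0.4])
mellow = partials_to_sound(220.0, [1.0, 0.3, 0.05])
```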
Espectres is an audiovisual installation based on the real-time synthesis of visual spectrograms. While spectrograms are usually graphical representations of prerecorded sounds, in Espectres the images are generated BEFORE the sound. An image sonification process then transforms these code-generated images into sound. Rhythms, harmonies, and timbres are scrupulously encoded within the rule system that governs the procedural creation of the images.
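One common way to implement this kind of image sonification – a sketch of the general technique, not necessarily the actual pipeline of Espectres – is to treat the image as a spectrogram driving a bank of sine oscillators: each row is a frequency, each column a slice of time, and brightness controls amplitude:

```python
# Sketch: image-as-spectrogram sonification.
# Rows -> frequencies (row index 0 is taken as the lowest frequency; flip
# the image first if its row 0 is the top), columns -> time slices,
# pixel brightness -> oscillator amplitude.
import numpy as np

SR = 44100
COL_DUR = 0.05  # seconds per image column (arbitrary)

def sonify(image: np.ndarray, f_low=55.0, f_high=3520.0) -> np.ndarray:
    rows, cols = image.shape
    freqs = f_low * (f_high / f_low) ** (np.arange(rows) / (rows - 1))
    n = int(SR * COL_DUR)
    t = np.arange(cols * n) / SR
    out = np.zeros_like(t)
    for r in range(rows):
        amps = np.repeat(image[r], n)  # step-hold each column's brightness
        out += amps * np.sin(2 * np.pi * freqs[r] * t)
    return out / rows

# A random 16x40 "score": forty time slices of sixteen partials.
audio = sonify(np.random.rand(16, 40))
```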
Given the difficulty of representing, in a graphic score, the multiple frequencies that constitute the characteristic timbres of instruments, we conclude that life is short and that we have no time to waste on such complexity. Each differentiated timbre can simply be represented by a colour. Shazam! Violin: yellow; drum: green; harpsichord: blue.
We then add the timbre (and chromatic!) dimension to our score, allowing us to create polyphonic compositions for multiple instruments. Rhythm, pitch, volume, and timbre.
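As a closing sketch of the colour convention: below, each colour simply selects a different waveform. The colour/instrument pairs echo the ones above, and the waveforms are our own crude stand-ins for the real instruments:

```python
# Sketch: colour selects timbre. Waveforms stand in for real instruments.
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second

TIMBRES = {
    "yellow": lambda f: np.sin(2 * np.pi * f * t),           # "violin": sine
    "green":  lambda f: np.sign(np.sin(2 * np.pi * f * t)),  # "drum": square
    "blue":   lambda f: 2 * (f * t - np.floor(f * t + 0.5)), # "harpsichord": saw
}

def play_shape(colour: str, freq: float) -> np.ndarray:
    return TIMBRES[colour](freq)

note = play_shape("blue", 261.6)  # a sawtooth C4
```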
In fact, at this point, we have it! John Cage would be proud!
FORMS – Quartet de Corda is a multimedia concert. The musicians read graphic scores on a screen in front of them, as if playing a video game, and transform them into music. These same scores are the main element of the concert’s visual scenography, allowing the audience to read and anticipate the music to come.
From here, after this immersion in audiovisual transmutations, a field of research opens up that goes beyond scores for instrumental ensembles. This idea of deeply linking music to image, sound to light, and the geometry of time to the geometry of space allows us to conceive fascinating works in which the senses blur. Lasers that make music and light harmonies. Kinetic instruments and concerts for constellations. Sculptures of solid light in A minor.
We call it Visual Music.
Signes is an audiovisual kinetic canvas. It consists of a matrix of rotating segments that range from order to chaos according to a set of graphic composition rules. Each light segment has an associated sound frequency, turning the canvas into a visual musical instrument where movement, colour and harmony are interwoven.
Espills is a solid-light sculpture. Thanks to the densification of the atmosphere with artificial haze, light becomes visible, allowing the creation of three-dimensional compositions that float in the air. A set of sonification rules transforms the geometric data feeding the lasers into music.
Astres is an ongoing research project. Conceived as a laser mapping of the starry sky, it has led to a series of stellar-tracking tools, developed in collaboration with UPC professor Francesc Rey, that allow stars to be precisely tracked throughout the night for any date and geolocation. Although the research is still in progress, one of its possible final forms is a hybrid between a concert and an astronomy lecture, in which the stars and the laser light are sonified to create an audiovisual experience.
Acknowledgements
To Irma Vilà i Òdena for her participation in the publication of this article.
Recommended citation: VILANOVA, Santi. A geometry of time. Mosaic [online], October 2024, no. 201. ISSN: 1696-3296. DOI: https://doi.org/10.7238/m.n201.2406