MARCEL VESGA




Third experimentation session

Category: + info and process · Jun 21st, 2013

[ylwm_vimeo]56986051[/ylwm_vimeo]

We have been trying out the different effects that can be used in the interface in relation to the movements of the right hand (to measure the violin bow hold). We based our work on the five different bow strokes from the last session. The movement of the right hand is not always properly reflected in the interface; after several tests, we realized we were not analyzing the depth factor. Our next attempts will focus on capturing the movements more accurately. More videos are coming next time!

Second experimentation session

Category: + info and process · Jun 21st, 2013

[ylwm_vimeo]56952054[/ylwm_vimeo]

We started by analyzing the sound of the open violin strings: G, D, A and E. Their frequencies are far away from each other, so we assumed they would be easier to pick up. Later we tried to analyze consecutive notes played a little faster. The result was positive: the program picks up the different frequencies without problems.
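As a point of reference (a rough sketch, not part of our patch), the following Processing snippet prints the nominal frequencies of the four open strings from their standard MIDI note numbers, assuming A4 = 440 Hz tuning; it shows how far apart the four frequencies are:

    // Nominal open-string frequencies, assuming standard A4 = 440 Hz tuning.
    // MIDI note numbers: G3 = 55, D4 = 62, A4 = 69, E5 = 76.
    int[] openStringMidi = {55, 62, 69, 76};
    String[] stringNames = {"G", "D", "A", "E"};

    // Standard conversion from a MIDI note number to a frequency in Hz.
    float midiToFreq(int midi) {
      return 440.0 * pow(2.0, (midi - 69) / 12.0);
    }

    void setup() {
      for (int i = 0; i < openStringMidi.length; i++) {
        // Prints roughly: G = 196 Hz, D = 294 Hz, A = 440 Hz, E = 659 Hz
        println(stringNames[i] + " = " + midiToFreq(openStringMidi[i]) + " Hz");
      }
    }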

How does this process work?

We have a feature-extraction analyzer which captures sound through a standard laptop microphone (it would also be possible to do it with any other one). We focused on the “raw MIDI pitch”, which basically tells us at which frequency the violin is vibrating. This MIDI data is then translated into OSC (Open Sound Control) messages, which can later be used to control visual environments in order to produce sound visualizations.
The same thing happens with the Kinect. Different motion features are captured, such as the positions and speeds of the joints of both arms (hand, elbow and shoulder), and these are also translated into OSC messages. All of this information can now be read and used by sound synthesis or animation software such as Processing or Quartz Composer.
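As an illustration of that last step, here is a minimal sketch (under our own assumptions, not our actual patch) of how Processing can receive such OSC messages with the oscP5 library; the address pattern “/pitch” and the port 12000 are made up for the example and would have to match whatever the analyzer actually sends:

    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    float currentPitch = 0;   // last raw MIDI pitch received

    void setup() {
      size(400, 400);
      // Listen for incoming OSC messages on port 12000 (assumed port).
      osc = new OscP5(this, 12000);
    }

    void oscEvent(OscMessage msg) {
      // "/pitch" is a hypothetical address pattern for the raw MIDI pitch.
      if (msg.checkAddrPattern("/pitch")) {
        currentPitch = msg.get(0).floatValue();
      }
    }

    void draw() {
      background(0);
      // Map the pitch onto a simple visual parameter, e.g. the height of a dot.
      float y = map(currentPitch, 40, 100, height, 0);
      ellipse(width / 2, y, 20, 20);
    }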

The next steps include translating the information from both movements and sound into visual representations.

First experimentation session

Category: + info and process · Jun 21st, 2013

[ylwm_vimeo]56952053[/ylwm_vimeo]

Our research has already started. We have tried two different pieces of software to work with the Microsoft Kinect: Ethno Tekh’s Ethno Tracker and FAAST. While Talía played the violin, we observed the movements and the parts of the body that each skeleton tracker detects. Elbow, wrist and hand were the joints to be analysed. Having tested different bow strokes from different perspectives, we noticed that the Ethno Tracker picked up the signal better when the violin player was in front of the machine. Here are the results for the different bowing techniques:

  • Spiccato: elbow and speed of the right hand are detected, but without a lot of variation in the position.
  • Staccato: speed of the right hand and elbow position are detected.
  • Détaché: speed of the right hand and elbow position are detected.
  • String crossing: speed and position of the elbow and the right hand are detected.
  • Martelé: speed and position of the elbow and the right hand are detected.

The results were quite satisfactory. We have already seen what it is possible to analyse with the Kinect. The next step is to try to train the machine to recognize the bow strokes by itself. This step is not strictly necessary yet for drawing the graphics in the interface.
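As a rough illustration of the kind of feature such a recognizer could start from (this is only a sketch under our own assumptions, not something we have implemented), the snippet below estimates the speed of the right hand from two consecutive joint positions reported by the tracker:

    // Previous and current right-hand positions (hypothetical variables; in
    // practice they would be updated from the skeleton tracker's OSC messages).
    PVector prevHand = new PVector(0, 0, 0);
    PVector currHand = new PVector(0, 0, 0);
    float prevTime = 0;   // seconds

    // Hand speed (units per second) estimated from two consecutive frames.
    float handSpeed(PVector prev, PVector curr, float dt) {
      return PVector.dist(prev, curr) / dt;
    }

    void draw() {
      float now = millis() / 1000.0;
      float dt = now - prevTime;
      if (dt > 0) {
        float speed = handSpeed(prevHand, currHand, dt);
        // A stroke recognizer could combine this speed with the elbow position;
        // the threshold below is purely illustrative.
        if (speed > 1.5) {
          println("fast bow stroke? speed = " + speed);
        }
      }
      prevHand = currHand.copy();
      prevTime = now;
    }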

Let’s keep up the good work!

State of the art

Category: + info and process · Jun 21st, 2013

Some examples of what has been done so far in this field…

There are different references which work as mobile apps or as a sort of video game:

SoundBrush iPad App

Pitch Painter iPad App

Aquasonic App

Windharfe App

Rehearsal: An app for practicing musicians

All of the previous references were designed as mobile applications, and as stand-alone rather than collaborative apps.

The use of these apps in pedagogic work and their efficacy still has to be further researched and analysed.

Björk’s Biophilia Album App. A well-known application, very interactive and visually compelling, although focused on the artist’s album.

First steps: 1. Understand sound, digital sound, acoustics and the musical sound

Category: + info and process · Jun 21st, 2013

In order to come up with an idea of how to start working out the relations between visuals and sound, we first had to gain a deeper understanding of sound, especially of the relations between the physical aspects of sound, digital sound, acoustics and musical sound.

Summary

A theoretical understanding of sine waves, harmonic tones, inharmonic complex tones and noise is useful for understanding the nature of sound. However, most sounds are actually complicated combinations of these theoretical descriptions, changing from one instant to another. For example, a bowed string might include noise from the bow scraping against the string, variations in amplitude due to variations in bow pressure and speed, changes in the prominence of different frequencies due to bow position, changes in amplitude and in the fundamental frequency (and all its harmonics) due to vibrato movements in the left hand, etc.
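To make the idea concrete, here is a toy Processing function (our own illustration; the number of harmonics, their amplitudes and the noise level are arbitrary) that builds a “bowed-string-like” waveform as a sum of harmonics plus a little noise:

    // A toy complex tone: the fundamental f0 plus a few harmonics (2*f0, 3*f0, ...)
    // and a small amount of noise as a crude stand-in for bow scraping.
    float complexTone(float t, float f0) {
      float[] harmonicAmps = {1.0, 0.5, 0.33, 0.25};   // arbitrary amplitudes
      float sample = 0;
      for (int k = 0; k < harmonicAmps.length; k++) {
        sample += harmonicAmps[k] * sin(TWO_PI * f0 * (k + 1) * t);
      }
      sample += random(-0.05, 0.05);   // arbitrary noise level
      return sample;
    }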

Digital representation of sound (As explained in the MAX/MSP help guide)

“To understand how a computer represents sound, consider how a film represents motion. A movie is made by taking still photos in rapid sequence at a constant rate, usually twenty-four frames per second. When the photos are displayed in sequence at that same rate, it fools us into thinking we are seeing continuous motion, even though we are actually seeing twenty-four discrete images per second. Digital recording of sound works on the same principle. We take many discrete samples of the sound wave’s instantaneous amplitude, store that information, then later reproduce those amplitudes at the same rate to create the illusion of a continuous wave.”
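A minimal sketch of that idea in Processing (the 44100 samples-per-second rate is just the usual CD-quality value, and the 440 Hz sine is an arbitrary example):

    int sampleRate = 44100;                     // samples per second
    float freq = 440.0;                         // frequency of the wave in Hz
    float[] samples = new float[sampleRate];    // one second of sound

    void setup() {
      // Take the instantaneous amplitude of the wave at each sample time
      // and store it: the sound is now just a list of numbers.
      for (int n = 0; n < samples.length; n++) {
        float t = n / float(sampleRate);        // time of this sample in seconds
        samples[n] = sin(TWO_PI * freq * t);    // instantaneous amplitude
      }
      println("stored " + samples.length + " numbers for one second of sound");
    }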

But we also needed to understand its limits and advantages:

  • Since a digital representation of sound is just a list of numbers, any list of numbers can theoretically be considered a digital representation of a sound.
  • Any sound in digital form (whether it was synthesized by the computer or was quantized from a “real world” sound) is just a series of numbers. Any arithmetic operation performed with those numbers becomes a form of audio processing.
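Continuing the sketch above, here is a tiny example of the second point: multiplying every stored number by a constant is already a (very simple) form of audio processing, namely a volume change. The function name is our own illustrative choice.

    // Scale every sample by a gain factor: gain = 0.5 halves the amplitude
    // (quieter), gain = 2.0 doubles it (louder, and possibly clipping).
    float[] applyGain(float[] input, float gain) {
      float[] output = new float[input.length];
      for (int n = 0; n < input.length; n++) {
        output[n] = input[n] * gain;
      }
      return output;
    }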

Sound explained from its acoustic properties:

*Pitch: is a perceptual attribute of sounds, defined as the frequency of a sine wave that is matched to the target sound in a psychoacoustic experiment. If the matching cannot be accomplished consistently by human listeners, the sound does not have pitch.

*Fundamental frequency: is the corresponding physical term and is defined for periodic or nearly periodic sounds only. For these classes of sounds, fundamental frequency is defined as the inverse of the period. In ambiguous situations, the period corresponding to the perceived pitch is chosen.

*Melody: is a series of single notes arranged in a musically meaningful succession.

*Chord: is a combination of three or more simultaneous notes. A chord can be consonant or dissonant, depending on how harmonious the pitch intervals between the component notes are.

*Harmony: refers to the part of musical art or science which deals with the formation and relations of chords.

*Harmonic analysis: deals with the structure of a piece of music with regard to the chords of which it consists.

*Musical meter: this term has to do with the rhythmic aspects of music. It refers to the regular pattern of strong and weak beats in a piece of music. Perceiving the meter can be characterized as a process of detecting moments of musical stress in an acoustic signal and filtering them so that underlying periodicities are discovered. The perceived periodicities (pulses) at different time scales together constitute the meter. Meter estimation at a certain time scale takes place, for example, when a person taps a foot to music (a rough sketch of this idea follows after these definitions).

*Timbre, or sound colour, is a perceptual attribute which is closely related to the recognition of sound sources and answers the question of “what something sounds like”. Timbre is not explained by any simple acoustic property, and the concept is therefore traditionally defined by exclusion: “timbre is the quality of a sound by which a listener can tell that two sounds of the same loudness and pitch are dissimilar”.
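A very rough sketch of the meter idea mentioned above (our own illustration, not a real meter estimator): given a list of “stress” values over time, one per analysis frame, a simple autocorrelation can look for the lag at which the signal best repeats itself, i.e. the underlying pulse. The frame rate and lag range would be assumptions of the application.

    // Find the lag (in frames) at which the stress signal correlates best with
    // a shifted copy of itself; that lag corresponds to the strongest pulse.
    int bestPeriod(float[] stress, int minLag, int maxLag) {
      int best = minLag;
      float bestScore = -1;
      for (int lag = minLag; lag <= maxLag; lag++) {
        float score = 0;
        for (int n = 0; n + lag < stress.length; n++) {
          score += stress[n] * stress[n + lag];
        }
        if (score > bestScore) {
          bestScore = score;
          best = lag;
        }
      }
      return best;
    }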

The four aspects of musical sound:

After all, music is simply sound. When we arrange these characteristics in such a way that we find the result “pleasing” to listen to, we call it music, although the term “pleasant” is closely related to the subjectivity of perception.

(1) Pitch – the highness or lowness of sound
(2) Duration – the length of time a musical sound continues
(3) Intensity – the loudness or softness of a musical sound
(4) Timbre/Tone color – the distinctive tonal quality of the producing musical instrument.

Explained in more depth here