The most basic acoustic theory concerns the properties of a sine wave: its wavelength, amplitude and frequency, which between them explain the pitch and volume of a sound. These are labelled on the diagram below. The wavelength is the distance over which a full cycle of the wave is completed, from peak to peak for example. Wavelength relates to the pitch of a sound: when the peaks are closer together the wave completes more cycles per second, and higher frequency sounds are higher in pitch. The amplitude is measured from the centre line (0, the equivalent of silence) up to the highest point the wave reaches, which is the peak. This defines the volume of the sound the wave creates: a larger amplitude is a louder sound, a smaller amplitude a quieter one. Finally the frequency, which isn't labelled, refers to the complete cycles mentioned above, as the number of cycles per second is the frequency, measured in hertz (Hz). These are the basic properties that define the character of a sound, and they apply to far more complex waves than this.
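As a minimal sketch of these properties, here's how a sine wave could be generated digitally in Python (assuming NumPy is installed; the names and values here are mine, just for illustration). Amplitude scales the volume, frequency sets the pitch, and wavelength follows from the speed of sound:

```python
import numpy as np

SAMPLE_RATE = 44_100          # samples per second
DURATION = 1.0                # seconds
FREQUENCY = 440.0             # cycles per second (Hz) -- concert A
AMPLITUDE = 0.5               # peak height above the centre line

# One time value per tick of the sample clock.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# The sine wave itself: amplitude controls volume, frequency controls pitch.
wave = AMPLITUDE * np.sin(2 * np.pi * FREQUENCY * t)

# Wavelength (in metres) follows from the speed of sound in air (~343 m/s).
SPEED_OF_SOUND = 343.0
wavelength = SPEED_OF_SOUND / FREQUENCY
print(f"{FREQUENCY} Hz has a wavelength of roughly {wavelength:.3f} m")
```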
This is applied in synthesis because waves are generated in the synthesiser and, depending on the type of synthesis, are either added together or subtracted from, with these properties altered to shape the sound being created. Another thing changed is the harmonics, which are whole-number multiples of the fundamental frequency defined by the initial waveform, such as the sine wave. These are what give a sound its broader character and timbre, and they're what is altered in subtractive synthesis. In more complex synthesis like FM, non-integer multiples are introduced, creating inharmonic, clashing tones. In relation to real musical instruments, acoustic guitars for example, timbre is defined by things like the quality of materials, construction and assembly, and poor-quality instruments can create non-integer harmonics accidentally. To take the comparison between acoustic guitars and synths further, the string of an acoustic guitar is the equivalent of an oscillator, which is what creates the sound when prompted to: in a guitar when the string is plucked, and in an analogue synth when a voltage is sent to that component.
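To illustrate the difference between harmonic and inharmonic content, here's a rough Python sketch (NumPy assumed, values chosen arbitrarily): the first tone sums whole-number multiples of a fundamental, while the second uses simple FM with a non-integer ratio, which scatters energy onto partials that aren't whole-number multiples:

```python
import numpy as np

SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of samples
fundamental = 110.0                         # Hz

# Harmonic tone: whole-number multiples of the fundamental, each one
# quieter than the last, summed into a single wave.
harmonic_tone = sum(
    (1.0 / n) * np.sin(2 * np.pi * (n * fundamental) * t)
    for n in range(1, 8)
)

# Simple FM: modulating the phase at a non-integer ratio of the carrier
# produces partials that are not whole-number multiples of the
# fundamental, giving the clashing, inharmonic character described above.
mod_ratio = 2.37        # deliberately non-integer
mod_index = 3.0         # depth of modulation
modulator = np.sin(2 * np.pi * (mod_ratio * fundamental) * t)
fm_tone = np.sin(2 * np.pi * fundamental * t + mod_index * modulator)
```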
There are further important theoretical points linking synthesis and the instruments which generate sound with air passing through pipes, like pipe organs and woodwinds such as flutes. In these instruments air oscillates inside the pipe before it exits at the opposite end to the one it enters, creating the series of harmonics which make up the sound of the note. In an instrument like a flute or clarinet, where the player blows into one end, the opposite end has a matching boundary air pressure, which is what causes the sound to loop back into the cylinder, creating the standing wave that generates the harmonics. In wind instruments with one pipe or tube, the notes are created by effectively lengthening and shortening the distance the air can travel before escaping, by covering holes; in the case of instruments with a pipe per note, a pipe can be closed altogether to stop that note from sounding. These characteristics are what's important to consider when replicating their sound with synthesis. In an instrument where air is blown into one end, the air travels the length of the tube and back multiple times to generate the sound, so we can infer that this would call for a slower attack, as the wave builds up its harmonics while it oscillates in the cylinder. This is just one example of how you could apply the theory of acoustic instruments to synthesis to replicate them accurately.
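The standing-wave physics here can be reduced to two textbook formulas: an idealised pipe open at both ends supports every harmonic, while a pipe closed at one end (like a clarinet) supports only the odd ones. A rough Python sketch of those formulas, assuming ~343 m/s for the speed of sound:

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature

def open_pipe_harmonics(length_m: float, count: int = 5) -> list[float]:
    # Open pipe: fundamental = v / 2L, and every integer multiple sounds.
    fundamental = SPEED_OF_SOUND / (2 * length_m)
    return [n * fundamental for n in range(1, count + 1)]

def closed_pipe_harmonics(length_m: float, count: int = 5) -> list[float]:
    # Pipe closed at one end: fundamental = v / 4L, odd multiples only.
    fundamental = SPEED_OF_SOUND / (4 * length_m)
    return [n * fundamental for n in range(1, 2 * count, 2)]

print(open_pipe_harmonics(0.6))    # flute-like open tube, 60 cm
print(closed_pipe_harmonics(0.6))  # clarinet-like closed tube, 60 cm
```

The missing even harmonics in the closed pipe are a big part of the clarinet's hollow timbre, which is exactly the kind of detail you'd carry over when replicating it with synthesis.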
Finally, a theoretical similarity between a synth and an acoustic guitar is envelope settings, though the difference is that on a guitar they're varied through the player's performance technique rather than being set on the synth. The envelope is made up of four parameters that define the onset and shape of a sound: Attack, Decay, Sustain and Release. The attack is how fast the sound reaches its peak volume, so a fast attack would replicate a drum hit and a slower attack might simulate a bowed violin, for example. The decay controls how quickly the sound falls back from that peak, and the sustain is the level it then holds while the note continues. Finally, the release is like the opposite of the attack: it's how quickly the sound fades after the key stops being pressed, or in the case of a real instrument, how long the sound resonates after the string is muted or the pressure on it released. An acoustic guitar player may control the sound from the strings with performance techniques such as how hard they pluck a string, whether they pick or use their fingers, and whether they slightly mute a string with the side of their right hand to shorten its oscillation, but these are essentially the same sorts of controls as are applied in synthesis to shape sounds.
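A minimal ADSR sketch in Python (NumPy assumed, linear segments for simplicity; real synths often use curved segments) shows how the same four parameters can shape a percussive hit or a slow bowed swell:

```python
import numpy as np

SAMPLE_RATE = 44_100

def adsr(attack_s, decay_s, sustain_level, release_s, hold_s):
    """Build an amplitude envelope: rise to peak, fall to the sustain
    level, hold while the 'key' is down, then fade to silence."""
    attack = np.linspace(0.0, 1.0, int(SAMPLE_RATE * attack_s), endpoint=False)
    decay = np.linspace(1.0, sustain_level, int(SAMPLE_RATE * decay_s), endpoint=False)
    sustain = np.full(int(SAMPLE_RATE * hold_s), sustain_level)
    release = np.linspace(sustain_level, 0.0, int(SAMPLE_RATE * release_s))
    return np.concatenate([attack, decay, sustain, release])

# A percussive, drum-like shape versus a slow, bowed-string-like one.
drum_env = adsr(attack_s=0.005, decay_s=0.2, sustain_level=0.0,
                release_s=0.05, hold_s=0.0)
bowed_env = adsr(attack_s=0.8, decay_s=0.3, sustain_level=0.7,
                 release_s=1.2, hold_s=1.0)
```

Multiplying either envelope against an oscillator's output, sample by sample, gives the shaped sound.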
This theory is most easily applied to the process of subtractive synthesis, where, for example, two oscillators generate waves which pass through a filter and are shaped by an envelope, with the character of the sound formed by the removal of frequencies. It's subtractive because the harmonic content of the starting wave is large and full of multiples, and the filtering and envelope shaping that follow carve content away from the wave, step by step, to give it new audible properties. The majority of popular synthesisers create sound in this way, but other methods include frequency modulation and physical modelling, which are written about below. Here are some audio examples of the ES2 subtractive synth in Logic. The first three are a vanilla patch, where parameters are either turned off or fully on to create a very pure wave sound, and the three wave examples show the different tonal qualities between waveforms: the vanilla wave is a square wave, followed by the sine and sawtooth. The sawtooth is the harshest, grittiest-sounding wave and lends itself to electric bass sounds, whereas the sine wave is softer and would be more suited to a synth pad sound, or a classic Wurlitzer keyboard emulation.
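As a sketch of the subtractive signal path described above (assuming NumPy and SciPy; not how the ES2 works internally, just the general idea): two slightly detuned sawtooth oscillators, rich in harmonics, are summed and then a low-pass filter carves the upper harmonics away:

```python
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

# A naive sawtooth oscillator: bright, full of integer harmonics.
def saw(freq):
    return 2.0 * ((freq * t) % 1.0) - 1.0

# Two oscillators, slightly detuned to thicken the sound.
raw = 0.5 * saw(110.0) + 0.5 * saw(110.5)

# The subtractive step: a low-pass filter removes harmonics above 800 Hz.
b, a = butter(4, 800.0, btype="low", fs=SAMPLE_RATE)
filtered = lfilter(b, a, raw)
```

Sweeping the cutoff frequency over time, or shaping it with an envelope, is where the classic subtractive character comes from.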
Physical Modelling: Physical modelling differs from the prior methods in that it follows mathematical formulas to create and shape sounds. These formulas aim to replicate certain sounds through similarities in the attack, decay, sustain and release of real instruments; for example, the fast attack of a drum would be replicated by an algorithm defining a sound with a fast attack to simulate the drum hit. However, the algorithms and processes of physical modelling can be so detailed and advanced that this type of synthesis can go further than just replicating the properties of acoustic instruments, and can begin to emulate the finest details of any sound's distinct timbre. This is a big advantage for those who have physical modelling capabilities but cannot afford expensive analogue synths, for example, as with physical modelling it's possible to identify what makes a particular analogue synthesizer so distinctive and aim to recreate it as closely as possible, to the point where in a multitrack recording it may be difficult to tell that you're hearing a replication of an analogue synth and not the real thing. Physical modelling synths are exclusively digital, as computing is the only way the algorithms can be applied to a sound, so some people may suggest that digital synths lack the weight of analogue, but these sounds can be so accurate that, when treated with analogue filtering between the output and the recording device or speakers, they can sound distinctly analogue and true to what they're emulating. Also, as this is the most recently developed of the synthesizer types I've investigated and was being made as computer technology was advancing, companies like Yamaha were keen to combine the two technologies to stay cutting edge.

An example of a physical modelling synthesizer is the Yamaha VL1, which was described in a Sound on Sound review from 1994 as "a major change in the way electronic musical instruments are made and perceived". The synth was a step on from the sample-and-synthesis process of some previous instruments, where a sampled sound was filtered and amplified to create the finished sound, as it had such accurate mathematical models generating the initial sound that it could produce better and more manipulable sounds as a result. It also improved the performance elements of playing, which sample synths couldn't: if you wanted to trill a note, as you might while playing a melody on a Rhodes keyboard for example, a sample synth couldn't shape the sound for this because the samples never sounded like anything other than a pure note, but the Yamaha VL1 could do far better. It had controllers specific to real instruments, like vibrato, which would be used on a violin or other orchestral string instrument, which again increased the accuracy of the sounds.
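One of the simplest physical modelling algorithms, and a useful illustration of the idea (this is not the VL1's method, which modelled wind and string instruments in far more detail), is Karplus-Strong plucked-string synthesis. A noise burst stands in for the pluck, and a delay line with averaging stands in for the string, so high harmonics die away faster than low ones, just as on a real string. A minimal Python sketch, assuming NumPy:

```python
import numpy as np

SAMPLE_RATE = 44_100

def karplus_strong(freq, duration_s=1.0, damping=0.996):
    """Model a plucked string: a delay line seeded with noise (the
    pluck) is repeatedly averaged, which rolls off the high harmonics
    over time while the low ones ring on."""
    period = int(SAMPLE_RATE / freq)              # delay length sets pitch
    buffer = np.random.uniform(-1, 1, period)     # the 'pluck' excitation
    out = np.empty(int(SAMPLE_RATE * duration_s))
    for i in range(len(out)):
        out[i] = buffer[i % period]
        # Average each sample with its neighbour and damp it slightly.
        buffer[i % period] = damping * 0.5 * (
            buffer[i % period] + buffer[(i + 1) % period]
        )
    return out

pluck = karplus_strong(110.0)   # an A string, roughly guitar-like
```

Even this tiny model captures the fast attack and natural decay of a plucked string without any sampling, which is the core appeal of the approach.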
Here is an audio example of the three synthesiser types I've talked about, each using a bass sound to represent how it sounds. The first is the ES2, then the FM1 and finally Logic's Sculpture synth, which is its physical modelling synth. You will hear from this that the ES2 and FM1 synths sound very electronic, and you can clearly hear different elements blended together in the sound, which comes from the use of two or more oscillators. The Sculpture bass, however, sounds like a real electric bass guitar, with nothing synthetic or fake about it; though the ES2 and FM1 sounds could be made to emulate a real instrument more closely, they would never be as accurate as Sculpture, and are better suited to creating experimental sounds.
There are some differences and similarities between these types of synthesis, and their intended uses also differ, although sometimes the best results are achieved by ignoring the intended use and changing the parameters until an interesting sound is found. The clear difference between physical modelling synthesis and the other types is that it is the most focussed on emulating real instruments, where the others are focussed far more on the interesting, avant-garde sounds that could be produced. This means that for a conventional pop recording, a physical modelling synth like the Yamaha VL1 would be used for layering sounds, for example providing a synthetic version of a string instrument to thicken up or complement the real one, which could be done with the other synthesiser types but would be more difficult. FM synthesis produces far more abstract results than the other types, and therefore finds its place in more avant-garde music, ambient techno or other electronic genres, where the other synthesisers wouldn't necessarily have as much of a place.
Importance of Synthesis: Synthesis is a crucial component of modern recordings and musical styles, whether a synthesized sound is responsible for playing the hook of a dance track that makes it memorable or, at the other end of the spectrum, is simply used to reinforce the sound of an electric bass guitar, not playing a major part compositionally but making all the difference to the depth and weight of the sound we're hearing. Synthesis allows us to create sounds we couldn't otherwise humanly produce. For example, if we relate back to the simple theory that an oscillator is equivalent to the string of an acoustic guitar, then the person plucking the string is the equivalent of an LFO controlling the pitch of that oscillator, and we get far more capability from the LFO than from the human. An LFO can modulate the oscillator in musical terms, like 1/16th-note divisions, with total accuracy to produce the desired sound, whereas a human asked to do the same thing never could. Furthermore, using the LFO to control the filter cutoff creates a modulated effect used heavily in dance and electronic music today, which again a human could not recreate with their playing, so you could say that synthesis has managed to birth its own genres of music entirely, built upon its huge capability to create sounds in these ways. Since synthesis was first used in popular music and film in the late 60s and early 70s it has been synonymous with forward progression: it created the "futuristic" sounds of science fiction film soundtracks, and it has been a key tool for musical innovation too, as bands like Faust in Germany, and Throbbing Gristle and Brian Eno in England, relied heavily upon synthesizers to take what might otherwise have been rock music, in Faust's case, into experimental electronic, industrial electronic and ambient music, all based upon sounds created by synthesisers.

Synthesis is also important to the advancement of music technology and related products. For example, any sequencer with software instruments will include sampled real instruments, but also instruments modelled upon real ones, which create very realistic recreations as a result of detailed investigation into the characteristics of the acoustic instrument and how to emulate it with synthesis. Hardware and software synthesis has also developed as research in other areas has inadvertently contributed to it, as with FM synthesis, which works similarly to the transmission of radio waves with a strong carrier wave, as I mentioned earlier. Advancements like this have all had knock-on effects on music and on the music technology products we make music with.
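To make the LFO point concrete, here's a small Python sketch (NumPy assumed, tempo and depth values mine): a slow sine wave, synced to 1/16th notes at 120 bpm, nudges an oscillator's pitch up and down with a regularity no human player could match:

```python
import numpy as np

SAMPLE_RATE = 44_100
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE   # two seconds

# The LFO: a slow control-rate sine wave, never heard directly.
# 1/16th notes at 120 bpm = (120/60 beats per second) * 4 = 8 cycles/sec.
lfo_rate = (120 / 60) * 4
lfo = np.sin(2 * np.pi * lfo_rate * t)

# Vibrato: the LFO sweeps the oscillator's pitch +/- 10 Hz around 220 Hz.
# Integrating the instantaneous frequency gives the oscillator's phase.
base_freq = 220.0
depth_hz = 10.0
phase = 2 * np.pi * np.cumsum(base_freq + depth_hz * lfo) / SAMPLE_RATE
vibrato_tone = np.sin(phase)
```

Routing the same LFO to a filter's cutoff instead of the pitch would give the pulsing, wobbling effect described above.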
To conclude, I've investigated the properties of musical instruments and applied that theory to suggest how they could be replicated with a synth's controls, giving an overview of subtractive synthesis in relation to these acoustic principles, and then a comparison with frequency modulation synthesis and physical modelling synthesis. Finally, I've spoken about the importance of synthesis to the progression of music and music technology.



