
13 The engineer and the artist
13.1 Introduction

Until now, I have concentrated upon recovering the original sound (or the intended original sound) from analogue media. But now we enter what, for me, is the most difficult part of the subject, because we must now study the ways in which the artists modified their performances (or their intended performances) for the appropriate media. I find this difficult, because there is little of an objective nature; effectively it is subjectivism I must describe.

But I hope you agree these subjective elements are very much part of sound recording history. Since the industry has undergone many revolutions (both technological and artistic), I hope you also agree that understanding these subjective elements is a vital part of understanding what we have inherited. It may also help you understand what we have not inherited - why some people were considered “born recording artists” and others weren’t, for example. (And I shall continue to use the contemporary word “artists”, even though the word “performers” would be better).

The structure of this chapter will basically be chronological, although it will switch halfway from pure sound to sound accompanying pictures. For the period 1877 to 1925 (the approximate years of “acoustic recording”), we examined some of the compromises in the previous chapter. We lack a complete list of the methods by which artists and record producers modified performances for the recording horn(s), but it is clear the idea was to simulate reality using all the tools available, including hiring specialist vocalists, elocutionists, and instrumentalists. The effect upon the performers was totally untypical, and we saw that this raises severe problems for us today. Should we even be trying to reproduce “the original sound”?

We also saw how the acoustic process revolutionised popular music by dictating the three-minute dance hit, its instrumentation, and its simple rhythmic structures. On the other hand, Edmundo Ros’ complex Latin-American rhythms were almost inaudible before full frequency range recording was developed.

Meanwhile in classical music, acoustic repertoire was biassed by the lack of amplification. This prevented any representation of large choral works, any use of instruments dating from the eighteenth century or before, or the employment of massed strings or other large sound sources such as organs. Whilst a cut-down symphony orchestra with (perhaps) eight string instruments was certainly feasible in the 1920s, it still remained common for brass bands and special recording ensembles to perform repertoire composed for a classic symphony orchestra.

But with electronic amplification, for the first time the performers could “do their own thing” without having to modify their performances quite so much. And the art of sound recording could begin.

13.2 The effect of playing-time upon recorded performances

By 1902, Berliner’s seven-inch disc records had become outgrown, although several companies continued to make seven-inch discs for the niche market of seven-inch gramophone turntables. The two dominant sizes for the next fifty years were ten-inch and twelve-inch records. (I shall continue to use these expressions rather than the metric equivalents, because (a) they were notional sizes only, some records being slightly larger and some slightly smaller; and (b) when official standards were established in 1954-5, the diameters weren’t “round figures”, either in inches or millimetres!)

Seven-inch discs and two-minute cylinders continued until 1907; but the “standard song” (inasmuch as there is such a thing) might be cut to one verse and two choruses only - a mere travesty of the ten-inch version, which could have a much better structure. The three-minute melody became the cornerstone of the popular music industry for the next seventy-five years. It survived the transition to seven-inch 45rpm “singles,” and was only liberated by twelve-inch 45rpm “disco singles” in 1977.

Three-minute melodies were also optimum for dancing. These always had a distinct ending with a major chord of the appropriate key, signalling dancers to stop and applaud each other. From about 1960 many songwriters had come to hate this type of ending, and introduced “the fade-ending” (done electronically) to eliminate the burden. But radio disc jockeys, desiring to “keep the music moving” while promoting their show, began to “talk over” the fade ending as soon as its volume allowed.

At this point I shall introduce a story about the “protest singer” Bob Dylan. In 1965 he decided to break the boundaries of the “three-minute single”, and asked his company (US Columbia) the maximum duration which could be fitted on a 45rpm side. He was told “six minutes”, this being the duration of an “extended play” record (with two tunes on each side). So he wrote Like A Rolling Stone with a very unusual structure, so it would be impossible for Columbia to cut the master tape with a razor blade. Each verse and each chorus started one beat ahead of a bar line, so an edit would scramble the words. It had twenty-bar verses, followed by twelve-bar choruses, and twelve-bar instrumental “link passages”. Although I have no evidence, I surmise he deliberately blew the “link passages” on his harmonica badly, so no-one could ever edit two link passages together and shorten the song that way! The result ran for precisely five minutes and fifty-nine seconds - a specific case of sound recording practices altering a performance.

Twelve-inch “78s”, introduced in 1903, played for between four and five minutes per side. When vocal records predominated, this was adequate; hardly any songs or arias were longer than that. But all these timings are very nominal. Any collector can think of records considerably shorter than these, and Peter Adamson has listed many of the longest-playing 78rpm discs (Ref. 1). Some of the latter illustrate the point I made in section 5.4 about engineers deliberately slowing their recording turntable, to help fit a long tune onto one side.

But I raise this matter so we may understand what happened when pieces of music longer than four minutes were recorded. At first the music would be shortened by the musical director brutally cutting it for the recording session. It was not until 20th November 1913 that the first substantially uncut symphony was recorded (Beethoven’s Fifth, conducted by Nikisch) (Ref. 2). This established the convention that side changes should be at a significant point in the music, usually at the end of a theme. But when a chord linked the two themes, it was common practice to play the chord again at the start of the new side. This is a case where we might have to make significant differences between our archive copy and our service copy.

I shall now point out a couple of less obvious features of this particular situation. In Britain, that symphony was first marketed on eight single-sided records priced at six shillings each. If we rely on official figures for the cost of living (based on “the necessities of life”), this translates in the year 2000 as almost exactly one hundred and fifty British pounds. Shortly afterwards it was repackaged on four double-sided records at six shillings and sixpence each, so it dropped to about eighty-two pounds. Although it isn’t unique in this respect, Beethoven’s “Fifth” could be divided into four movements, each of which occupied one double-sided record. Therefore it was feasible for customers to buy it a movement at a time - for slightly over twenty pounds per movement!
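
(To check these figures: take the eight single-sided discs at six shillings - 48 shillings in all - as the £150 anchor. One 1913 shilling then corresponds to roughly £150 ÷ 48 ≈ £3.13, so the four double-sided discs at six shillings and sixpence - 26 shillings in all - come to about 26 × £3.13 ≈ £81, and each individual record to about £20.30.)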

Although I described it as “substantially uncut”, the exposition repeats of both the first and last movements have gone. In 1926 Percy Scholes wrote: “In the early days of . . . ‘Sonata’ Form it was usual to repeat the Enunciation. This was little more than a convention, to help listeners to grasp the material of the Movement” (Ref. 3). Because both repeats would have added ten pounds to the price, and pedants could always put the soundbox back to the start of the side at the end of the first exposition, you can see that quite a number of issues arose from the cost, the limited playing time, and how the music was divided up.

I apologise if this section has concentrated upon Western “classical” music, but that is what I find myself handling most frequently. Of course, many other types of music can be affected. For example, the modifications to performances of Hindustani classical music (in this case, very severe ones) have recently been studied by Suman Ghosh. (Ref. 4)

In more complex cases, a service copy might tamper with the original performance very significantly. One such case is where there was “an overlap changeover.” This writer was once involved in acquiring a performance of Bach’s Brandenburg Concerto No.2, during which an outgoing theme overlapped an incoming theme. The work had been conducted by a significant British conductor; but the recording had remained unpublished, and unique test pressings were made available by the conductor’s widow. She was quite adamant that the repeated bar should be axed; and this was before the days of digital editors which make such a task simple! So some nifty work with a twin-track analogue tape recorder was necessary to get the music to sound continuous. Readers will have to decide the solutions to such problems themselves; but this was the actual case in which “cultural pressures were so intense” that I could not do an archive copy, as I mentioned in section 2.5.

The other complex case is where disc sides changed, forcing a break in the reproduction. Some conductors minimised the shock of a side change by performing a ritardando at the end of the first side. (Sir Henry Wood was the principal proponent of this technique). Of course it makes complete musical nonsense with a straight splice; but we now have technology which allows a change of tempo to be tidied, thereby presumably realising the conductor’s original intentions. Again, you have to understand why these performance modifications occurred before you can legitimately recreate “the original intended sound.”

Yet record review magazines definitely show that customers weren’t happy with music being broken up like this. To reduce their concerns, commercial record companies introduced the “auto coupling” principle. Large sets of records could be purchased in an alternative form, having consecutive sides on different physical discs. An “auto changer” held a pile of discs above the turntable on an extended spindle, and automatic mechanisms removed the pickup at the end of each side, dropped another disc on top of the first, and put the pickup into the new groove. When half the sides had been played, the customer picked up the stack, turned it over without re-sorting the records, and loaded it on the spindle for the second half of the music.

If your institution has such records, you should remember that the “manual” version and the “auto coupling” version can give you two copies of the same recording, usually under different catalogue numbers. (This may help with the power-bandwidth product). As a small aside to this issue, I personally maintain that when 78rpm sides were sold in auto coupling versions, this is definite evidence that the conductor (or someone, at least) intended us to join up the sides today.

Most long music continued to be recorded in “four-minute chunks” until about 1950. If the subject matter was intended to be a broadcast rather than a disc record, obviously we must rejoin the sides to restore “the original intended sound,” even though it might have been split into four-minute chunks for recording purposes. Nothing else was possible when the power-bandwidth product of 78s meant sounds could only be copied with an audible quality loss. A breakthrough of sorts occurred in about 1940, when American Columbia started mastering complete symphonic movements on outsized (44cm) nitrate discs, probably using the same technology as “broadcast transcriptions” used in the radio industry. This seems to have been in anticipation of the LP (Long Playing record). I say “breakthrough of sorts”, because although the nitrate had a better power-bandwidth product (in the power dimension, anyway), it meant a note perfect performance for fifteen minutes or more. Practical editing still wasn’t feasible.

Copy-editing of disc media seems to have been pioneered by the Gramophone Company of Great Britain in 1928, but the results were not very convincing for quality, and almost every company avoided it whenever possible. “Auto couplings” reduced the effects of side changes, but still meant one large break in the music where the stack of records had to be turned over. For broadcasters this was intolerable; so they made disc recordings with their sides arranged in a different pattern, known as “broadcast couplings.” They assumed two turntables would be available for transmitting the discs, and the odd-numbered sides would come from the discs on one turntable and the even-numbered sides from the discs on the other, while consecutive sides were never on the same physical disc. These couplings resulted from continuous disc recordings made with two disc-cutting turntables; and if the recording was made on double-sided blanks, “broadcast couplings” automatically resulted. This writer regards these situations as meaning we must join up the sides on the service copy.

Broadcasters often used “overlap changeovers” - passages recorded in duplicate on the “outgoing” and “incoming” discs. The incoming disc (heard on headphones) would be varispeeded manually during a broadcast, until it synchronised with the “outgoing” disc; after which a crossfade from one to the other would be performed. Today we simply choose the passage with the better power-bandwidth product, and discard the other version.
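
For readers making such joins digitally, the following is a minimal sketch (in Python) of how the duplicated passage can be located by cross-correlation so that the two transfers meet at a sample-accurate point. The file names, the probe and search durations, and the fade length are illustrative assumptions, and both transfers are assumed to be mono files at the same sample rate; one may equally butt-join at the alignment point and keep only the passage with the better power-bandwidth product.

```python
import numpy as np
import soundfile as sf
from scipy.signal import correlate

out_side, fs = sf.read("side3.wav")    # ends with the duplicated passage
in_side, fs2 = sf.read("side4.wav")    # begins with the duplicated passage
assert fs == fs2

probe = out_side[-10 * fs:]            # last 10 seconds of the outgoing side
search = in_side[:30 * fs]             # first 30 seconds of the incoming side
corr = correlate(search, probe, mode="valid")  # sliding dot product
offset = int(np.argmax(corr))          # search[offset : offset + n] best matches probe
n = len(probe)

# Equal-gain crossfade ending exactly where the outgoing side ends.
# Because both versions carry the same programme (they are correlated),
# an equal-gain fade avoids a level bump at the join.
fade = fs // 2                         # half a second
g = np.linspace(0.0, 1.0, fade)
mixed = out_side[-fade:] * (1 - g) + in_side[offset + n - fade : offset + n] * g
joined = np.concatenate([out_side[:-fade], mixed, in_side[offset + n:]])
sf.write("joined.wav", joined, fs)
```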

Meanwhile, the craft of disc editing was perfected by broadcasters. “Jump cuts” were practised - the operator would simply jump the stylus from one part of a disc to another, in order to drop a question from an interview for example. Surviving nitrates often bear yellow “chinagraph” marks and written instructions showing how they were meant to be used. And the disc medium permitted another application for broadcasters, which vanished when tape was introduced - the “delayed live recording.” It was quite practicable to record (say) a football commentary continuously. When a goal was scored, the pickup could be placed on the disc (say) thirty seconds earlier, and the recording was reproduced while it was still being made. (Only now is digital technology allowing this to happen again).
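
In digital terms, the “delayed live recording” maps naturally onto a circular buffer: samples are written continuously while a read point follows a fixed distance behind the write point. Below is a minimal sketch of the idea with a thirty-second delay; the block size, sample rate, and buffer capacity are illustrative assumptions, and a real implementation would let the operator jump the playback point on demand, just as the pickup was repositioned on the disc.

```python
import numpy as np

class DelayedPlayback:
    """Circular buffer whose read point trails the write point, so the
    recording is reproduced while it is still being made."""

    def __init__(self, delay_seconds, fs, capacity_seconds=60):
        self.buf = np.zeros(capacity_seconds * fs, dtype=np.float32)
        self.delay = delay_seconds * fs
        assert self.delay < len(self.buf)  # buffer must hold at least the delay
        self.pos = 0                       # write position

    def process(self, block):
        """Store one incoming block; return the audio from delay_seconds
        earlier (silence until that much time has elapsed)."""
        idx = (self.pos + np.arange(len(block))) % len(self.buf)
        out = self.buf[(idx - self.delay) % len(self.buf)].copy()
        self.buf[idx] = block
        self.pos = (self.pos + len(block)) % len(self.buf)
        return out

fs = 48000
dp = DelayedPlayback(delay_seconds=30, fs=fs)
commentary = np.random.randn(90 * fs).astype(np.float32)  # stand-in input
for block in np.split(commentary, 90):     # ninety one-second blocks
    to_transmitter = dp.process(block)     # always thirty seconds behind
```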

Optical film, and later magnetic tape, allowed “cut and splice” editing. I shall talk about the issues for film in section 13.13, but it was quickly realised by English Decca that you couldn’t just stick bits of tape together to make a continuous performance for long-playing records. If the music just stopped and started again (which happened when mastering on 78s), a straight splice meant the studio reverberation would also suddenly stop, very disturbing because this cannot occur in nature. When mastering on tape, it was vital to record a bar or two of music preceding the edit point, so the studio would be filled with suitable reverberation and a straight splice would be acceptable.

From this point, reviewers and other parties started criticising the process of mastering on tape, because record companies could synthesise a perfection which many artists could not possibly give. However, this criticism has evaporated today. The public expects a note perfect performance, and the art of the record producer has evolved to preserve the “spirit” of an artistic interpretation among the hassles of making it note perfect. Thus, such commercial sound recordings and recorded broadcasts differ from reality, and this branch of music is now accepted as being a different medium.

13.3 The introduction of the microphone

Apart from telephony (where high fidelity was not an issue), microphones were first used for radio broadcasting in the early 1920s. This was the first significant use of electronic amplification. Although early microphones lacked rigorous engineering design, it is clear that they were intended to be channels of “high fidelity” (although that phrase did not come into use until about 1931). That is to say, the microphone was meant to be “transparent” - not imposing its identity on the broadcast programme - and microphones rapidly evolved to meet this criterion.

Customers soon recognised that the commercial record industry had better musicians than broadcasters could afford. On the other hand, broadcasts lacked the constant hiss and crackle of records, and the “megaphone quality” which those media added to artists’ voices. Record sales began to decline against the new competition.

As we saw in chapter 5, the solution was developed by Western Electric in America. Using rigorous mathematical procedures (itself a novel idea), they designed apparatus with rigorous behaviour. Much of this apparatus was used for public address purposes at the opening of the British Empire Exhibition in 1924, which was also the first time the voice of King George V was broadcast; this survives today in a very poor acoustic recording! Electrical methods of putting the sound faithfully onto disc had to wait a little longer, until the patent licensing situation was sorted out. As far as I can establish, the earliest Western Electric recording to be published at the time was mastered by Columbia in New York on 14th February 1925 (Ref. 5). Nobody (myself included) seems to have noticed the significance of its recorded content. It comprised a session by “Art Gillham, the Whispering Pianist”! Although he clearly isn’t whispering, and the voice might have been captured by a speaking tube instead of an acoustic recording horn, with the same effect as a microphone, it would have been impossible to publish his accompanying himself on the piano any other way.

A performance technique which was often believed to depend on technology is “crooning.” Crooners began in radio broadcasting, when listeners wore headphones and so needed no electronic amplification; this reinforced the impression of a vocalist singing quietly into the listener’s ear. But the art of the “crooner” soon became completely inverted. Before the days of public address, successful crooners had to be capable of making themselves heard like operatic tenors, while still projecting an intimate style of vocal delivery. Contemporary writers frequently criticised crooners for being vocally inept, comparing them with opera singers for whom “Louder” was often considered “Better”; but in fact, a crooner’s craft was very much more complex and self-effacing.

For some years, microphones suffered because they were very large, which affected their performance on sounds arriving from different directions; I shall discuss the significance of this in the next section. As I write this, another consideration is that microphones cannot be subject to unlimited amplification - most microphones have less sensitivity than a healthy human ear. I know of only one which can just about equal the human ear. It is made for laboratory noise measurement applications, but it is a big design, so it suffers the size disadvantage I’ve just mentioned. In practice, the best current studio microphones have a noise performance a full 8dB worse than the human hearing threshold. So, for high fidelity recording which also takes into account “scale distortion”, we still have a little way to go.

13.4 The performing environment

Since prehistoric times, any sounds meant to have an artistic impact (whatever that means!) have evolved to be performed in a space with certain acoustic properties. This is true whether the location is a cathedral, an outdoor theatre, or a woodland in spring. It is an important factor which may influence the performances we hear today. It took nearly half a century for sound recording technology to be capable of handling it. From 1877 until 31st March 1925 nearly all sound recordings were made without any worthwhile representation of the acoustic environment. On that date, US Columbia recorded six numbers from a concert given by the Associated Glee Clubs of America in the Metropolitan Opera House, New York. (Ref. 6).

Although these had been preceded by radio broadcasts from outside locations, they were the first published recordings to demonstrate that the acoustic environment was an important part of the music. And another decade went by before even this was widely appreciated. It wasn't just that the end product sounded better. Musicians and speakers were also affected, and usually gave their best in the correct location. I shall now give a brief history of the subject. In fact, it is a very complex matter, worthy of a book on its own; I shall just outline the features which make it important to restoration operators.

I can think of only one acoustic recording where the surrounding environment played a significant part - the recording of the Lord Mayor of London, Sir Charles Wakefield, in 1916 (Ref. 7). In this recording it is just possible to hear a long reverberation time surrounding the speech, which must have been overwhelming on location. Apparently the Gramophone Company’s experts thought this was so abnormal that they took the precaution of insisting that “Recorded in Mansion House” was printed on the label. As far as I can ascertain, this was the first case where the location of a commercial recording was publicly given, although we now know many earlier records were made away from formal studios.

When the microphone became available, together with the electronic amplifier, it was possible to get away from the unnatural environment of the acoustic recording studio. Because there was a continuously rolling recording programme, it took several months for the new advantages to be realised. In the meantime there were some horrible compromises. Serious orchestral performances, made in acoustic recording studios and recorded electrically, sounded what they were - brash, boxy, and aggressive. It was no longer necessary for artists to project their performances with great vigour to get their music through the apparatus, but they continued to perform as though it was. Sir Compton Mackenzie’s comments about the first electrically recorded symphony have been much quoted (Ref. 8), but anyone hearing the original recording (even when correctly equalised) must agree with him.

It was first necessary to get away from the acoustic recording rooms. In Britain some choral recordings were tried from the “Girls’ Canteen” at the factory in Hayes. Next, recording machinery was installed at Gloucester House in the centre of London, and various locations were tried by means of landlines. Soon, half a dozen halls and places of worship in London became recording studios on a temporary or permanent basis.

Before the end of the year, some unsung genius discovered The Kingsway Hall, which became London’s premier audio recording location until the late 1980s. The first half of 1926 was the heyday of “live recording,” with performances from the Royal Albert Hall and the Royal Opera House being immortalised; but the difficulties of cueing the recording, the unpredictability of the dynamics, and the risk of background noises, slowed this activity. Instead, the bias turned in favour of recordings made by landline from natural locations, but as private performances rather than public ones.

Early microphones were essentially omnidirectional, so they picked up more reverberation than we would consider normal today. Furthermore, their physical bulk meant they picked up an excess of high frequencies at the front and a deficient amount at the sides and back. This affected both the halls which were chosen, and the positioning of the mikes. Since we cannot correct this fault with present-day technology, we must remember that “woolly bass” is quite normal, and contemporary artists may have modified their performances to allow for this.

Volume changes in a performance become evened out when more reverberation is picked up. This reduces transient overloads (particularly troublesome on grooved media). All this explains why we may find recordings which are both “deader” and “livelier” than we would normally expect today.

The presence of any reverberation at all must have seemed very unusual after acoustic recording days. This explains what happened after the world’s first purpose-built recording studios were opened at Abbey Road in 1931. The biggest studio, Studio 1, was intended for orchestras and other large ensembles; and the first recorded works, including “Falstaff” and the Bruch Violin Concerto, show that its sound was the equal of present-day standards. But then some “acoustic experts” were brought in, and the natural reverberation was dampened. The following year similar recordings were done in the same place with a reverberation time about half its previous duration; and this remained the norm until about 1944. Many artists hated the acoustics so much that sessions had to resume at the Kingsway Hall.

The two smaller studios suffered less alteration. Studio 2 (used for dance bands) seems to have been deliberately designed to work with the microphones, since it had a surprisingly even amount of reverberation at all frequencies. Its reverberation was longer than present-day fashion would dictate, but is not grossly distasteful to us. Studio 3 was intended for solo piano, speech, and small ensembles up to the size of a string quartet. It was practically dead, and you can hardly hear the acoustics at all. It may well have affected the performers, but there is nothing to show it.

Microphones with frequency responses which were equally balanced from all directions came into use in 1932 - the first “ribbon” microphones, with a bi-directional polar diagram. This polar diagram was not intuitive, and many artists found the new microphones difficult to work with. An omnidirectional equivalent with similarly uniform directionality did not appear until 1936 (the “apple and biscuit” microphone, or Western Electric 607).

Probably the creative use of acoustics reached its peak in radio drama. In radio there are only three ways to indicate the “scenery” - narration, sound effects, and acoustics. Too much of the first two would alienate listeners, but acoustics could provide “subliminal scenery.” When Broadcasting House opened in London in 1932, a suite of drama studios with differing acoustics could be combined at a “dramatic control panel”. There was also a studio for live mood music, and another for the messier or noisier sound effects. An entertaining account of all this may be found in the whodunit Death At Broadcasting House (Ref. 9), but the art of deliberate selection of acoustics did not cease in 1932.

The arrival of ribbon microphones meant that actors could also give “depth” to their performances by walking round to the side and pitching their voices appropriately; even in the 1960s, this was a source of considerable confusion to newcomers, and scenes had to be “blocked” with the rigour of stage performances. Using this technique, the listener could be led to identify himself with a particular character subliminally. Meanwhile, studio managers were manipulating “hard” and “soft” screens, curtains, carpets, and artificial reverberation systems of one type or another, to get a wider variety of acoustics in a reasonable amount of space (Ref. 10). It is the writer’s regret that these elements of audio craftsmanship never took root in America.

An “echo chamber” formed one of the radio drama rooms at Broadcasting House. It contained a loudspeaker and a microphone in a bare room for adding a long reverberation time (for ghosts, dungeons, etc). It was not particularly “natural” for music purposes, but film companies soon adopted the idea for “beefing up” the voices of visually attractive but less talented singers in musical films. The Abbey Road studios had an echo chamber of their own for popular vocalists from the mid-1940s, and so did other recording centres where popular music was created. But, to be any good, an echo chamber had to be fairly big; and it was always liable to interference from passing aircraft, underground trains, toilets being flushed, etc. Engineers sought other ways of achieving a similar effect.

One of the most significant was a device introduced by the Hammond Organ Company of America - the “echo spring.” This basically comprised a disc cutting-head mechanism generating torsional vibrations into a coil of spring steel wire, with a pickup at the other end. Hammond made several sizes of springs with different reverberation times. They were used in early electronic musical instruments, and later by enterprising recording studios, and the only difficulty was that they tended to make unmusical twanging noises on transient sounds.

This difficulty was eliminated by the “reverberation plate”, invented by Otto Kühl in 1956 (Ref. 11). This comprised a sheet of plain steel, two metres by one metre and a millimetre thick, suspended in a frame, with a driver in the middle and a pickup on one edge. An acoustic damping plate nearby could be wound closer or further away to mop up sounds emitted by the steel, causing the reverberation time to vary from about 1 second to over 4.5 seconds at will. While it was not completely immune from interference, it was much less susceptible. And because it was smaller than a chamber and could do a wider range of acoustics, nearly every studio centre had one.

In the early 1970s digital signal processing technology came onto the scene. The “Lexicon” was probably the first, and was immediately taken up by pop studios because it permitted some reverberation effects impossible in real life. Today, such units are made by many companies with many target customers at many prices, and developments are still happening as I write.

There are two points I want to make about all these systems - the “chamber,” the “spring,” the “plate” and the “digital reverb.” The first is that they all add reverberation to an existing signal, and if the first signal has reverberation of its own, you get two different lots, which is somewhat unnatural. The other is that the same quantity of reverberation is added to all the components of that signal - so if it is applied to a vocalist with an orchestral backing, both get the same amount of reverberation, irrespective of whether one is supposed to be in the foreground and the other in the background. Although it is possible to limit these disadvantages by means of filters, and by means of “echo send buses” on sound mixing consoles, you should know they exist. I mention them so you may be able to detect when someone has interfered with a naturally reverberated signal and modified the “original sound.”
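
The second point can be made concrete with a minimal sketch, in which random noise stands in for real tracks and a synthetic impulse response stands in for a real chamber, plate, or digital unit (all illustrative assumptions). Reverberating the finished mix treats foreground and background identically; per-track “echo sends” do not.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)

# Crude synthetic "room": exponentially decaying noise as an impulse response.
t = np.arange(int(1.5 * fs)) / fs
ir = rng.standard_normal(len(t)) * np.exp(-t / 0.4)
ir /= np.sqrt(np.sum(ir ** 2))

vocal = rng.standard_normal(fs)        # stand-ins for real tracks
backing = rng.standard_normal(fs)

# Reverb added to the finished mix: both components get the same amount,
# whether or not one of them is supposed to be in the background.
mix = vocal + backing
mix_reverb = mix + 0.3 * fftconvolve(mix, ir)[:len(mix)]

# "Echo send" style: each track feeds the reverb at its own level, so the
# backing can be made distant (wet) while the vocal stays close (dry).
send = 0.1 * vocal + 0.5 * backing
mix_sends = vocal + backing + fftconvolve(send, ir)[:len(vocal)]
```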

Since most artists instinctively pitch their performances according to the space to be filled, there is rarely any doubt whether a performance was being done “live” or especially for the microphone. But the distinction could become blurred in the mid-1930s, when public address systems first became normal in the bigger halls (Ref. 12). Yet there is another stabilising factor - the effect of reverberation time on the tempo of music or speech. If a performance goes too fast in a live acoustic, the notes or words become blurred together, and a public address system cannot always save the situation. The lesson for us today is that if we hear a conventional performing artist in an unusually “live” or an unusually “dead” environment, we cannot assume that the tempo of his performance will be “normal.”

13.5 “Multitrack” issues

Anyone who asserts that “multitrack recording” was invented in the 1960s is wildly wrong; but I shall be considering its birth (which was for the film industry) in section 13.13. As far as I am aware, the first multitracked pure sound recording was made on 22nd June 1932 by a banjo player called “Patti” (real name Eddie Peabody). On a previous date he had recorded the first layer of sound. This was then processed into a shellac pressing, and played back on a “Panatrope” (an electrical reproducer), while Patti added a second layer. This record (UK Brunswick 1359) was given to me by a musician who was completely unable to understand how Patti had done it!

A month later RCA Victor added a new electrically recorded symphony orchestra to a couple of acoustically recorded sides by Enrico Caruso (section 12.21). In September 1935 the soprano Elisabeth Schumann recorded both title parts of the evening prayer from “Hänsel and Gretel” by Humperdinck. In all these cases, the basic rhythm had been laid down first, so the second layer could be performed to fit it.

After the war, magnetic tape recording (which gave instant playback) eliminated the processing delay with disc (we shall cover that problem in sections 13.9 and 13.18). A most remarkable system was invented by guitarist Les Paul, who was joined by his wife Mary Ford as a vocalist (or several vocalists). Les Paul had one of the earliest Ampex professional (mono) tape recorders, and he had its heads rearranged so the playback head was first, then the erase head, and finally the record head. He planned out his arrangements on multi-stave music paper, then normally began by recording the bass line which would suffer least from multiple re-recordings. He and his wife would then sight read two more staves as the tape played back, mixing the new sounds with the original, and adding them to the same piece of tape immediately after it had been erased. By this process they built up some extremely elaborate mixes, which included careful use of a limiter to ensure all the parts of Mary Ford’s vocals matched, “double-speed” and “quadruple-speed” effects, and “tape echoes.”

Magnetic tape mastering caused the original idea to be reborn in Britain in 1951, using two mono tape machines. This avoided the loss of the entire job whenever there was a wrong note. For a three-month period EMI Records thrashed the idea, but without any great breakthroughs. Perhaps the most ambitious was when Humphrey Lyttelton emulated a traditional jazz band in “One Man Went To Blow.”

Coming to the mid-sixties, multitrack recording was introduced largely because it was necessary to mix musical sources with wide differences in dynamics. So long as the balance between the different instruments could be determined by the teamwork of arranger, conductor, and musician, the recording engineer’s input was secondary. But electrically amplified instruments such as guitars meant the natural vocal line might be inaudible. Also many bands could not read music, so there was a lot of improvisation, and accurate manual volume controlling was impossible.

Early stage amplifiers for vocals resulted in nasty boxy sounding recordings, so vocalists were persuaded to perform straight into high fidelity microphones in the studio. The engineers re-balanced the music (usually helped by limiters to achieve predictable chorus effects), added reverberation, and supplied a small number of exotic effects of their own making (such as “flanging.”) All this only worked if the “leakage” from one track to another was negligible; thus pop studios were, and still are, very well damped. Another consequence was that vocalists were deprived of many of the clues which helped them sing in tune; realtime pitch correctors weren’t available until about 1990.

The next British experiments involved the same system as Humphrey Lyttelton a decade before, but using a “twin track” tape recorder, in which either track could be recorded (or both together). The Beatles LP “With The Beatles” was compiled this way in 1963, with the instrumental backing being recorded on one track, and the vocals added on the other. In Britain, the resulting LP was issued in “stereo” (Parlophone PCS3045), which is really a misnomer. There is no stereo “spread”; all the backing is in one loudspeaker, and the vocals in the other!

Within a couple of years, multitrack recording allowed “drop-ins” for patching up defective passages within one track, while international magnetic tape standards allowed musicians on opposite sides of the world to play together. To make this work, the recording head would be used in reverse as a playback head, so the musicians could hear other tracks while everything remained in synchronism. Modern studio equipment is specifically designed to provide very sophisticated sounds on the musicians’ headphones. Not only does this enable them to hear their own voices when the backing is excitingly loud, but a great deal of reverberation is sometimes added so they get the “third vital clue” (section 12.4) to help them sing in tune. This may be used even though no reverberation is contemplated for the final mix. Very sophisticated headphone feeds are also needed so each musician can hear the exact tracks he needs to keep exactly in tempo, without any other musicians or a conductor.

By the late 1960s, the creative side of popular music making was using all the benefits of multitrack tape and sophisticated mixing consoles. This enabled musicians to control their own music much more closely, although not necessarily advantageously. The creative input of a record producer became diluted, because each musician liked to hear himself louder than everyone else, with arguments vigorously propounded in inverse proportion to talent! So the record producer’s role evolved into a combination of diplomat and hype-generator, while popular music evolved so that musical devices such as melody, rubato, and dynamics were sacrificed to making the music loud (as we saw in section 11.6). The actual balance would be dictated by a “consensus” process. It did not mean sterility; but it slowed the radical changes in popular music of previous decades. This writer sheds an occasional tear for the days when talented engineers like Joe Meek or Geoff Emerick could significantly and creatively affect the sound of a band.

In the 1970s, some downmarket pop studios had a poster on the wall saying “Why not rehearse before you record?” This was usually a rhetorical question, because all good pop studios were at the frontiers of music. They had to be; it might take several weeks before any results could appear in shops. But soon the conventional idea was being stood upon its head. Instead of a recording studio emulating the performance of a group in front of the public, the public performances of musicians emulated what they had achieved in the recording studio.

Most of the technology of a recording studio suddenly found itself onstage for live performances. The sophisticated headphone monitoring needed in a dead studio became the “monitor bins” along the front of the stage. These were loudspeakers which were not provided for the benefit of the audience, but merely so the performers could hear each other. Meanwhile, at dance halls, the live band disappeared; and “rap”, “scratching” and the heavy unremitting beat brought music to the stage where melody, rubato, and dynamics were totally redundant.

The technology of “live sound” engineers was soon being applied in the world of classical music - particularly in opera, where the sheer cost of productions meant that sophisticated public address systems were used so bigger audiences could be served. Strictly speaking, recording engineers did not influence this trend in music. But the recording engineers and the live sound engineers now work together, and between them they will affect the history of music (and drama) even more radically than in the past.

13.6 Signal strengths

In previous chapters we saw that “louder is always better.” Today, people doing subjective reviews of hi-fi equipment are well aware that if one system is a fraction of a decibel louder than another, it will seem “better”, even though human hearing can only detect a change of a couple of decibels on any one system. As a corollary, acoustic gramophone designers were straining to improve the loudness of their products for many years, rather than their frequency ranges.

Yet it was some time before electrically recorded discs were issued at a volume later considered normal. We saw in chapter 5 that there was sometimes a strict “wear test.” Test pressings were played on an acoustic gramophone; if they lasted for thirty playings without wear becoming audible, they might be considered for issue; and if they lasted one hundred playings, that “take” would definitely be preferred. The result, of course, was that louder records failed the test. So most published records were many decibels quieter than the Western Electric or Blumlein systems could actually do. We see this today, when unissued tests sometimes appear which present-day technology can replay very easily. They sound particularly good, because the dynamics are less restricted, and the signal-to-noise ratio is better. The message to us is that the published version might not necessarily be the artist’s or producer’s preferred choice.

Volumes drifted upwards in the 1940s and 1950s, and limiters helped make records consistently loud as we saw in Chapter 10. The ultimate was reached when recording technology significantly affected the course of musical history in 1963. Quite suddenly, the standard “pop group” of drums, vocals, and three electric guitars (lead, rhythm, and bass), became the norm. Standard musical history attributes this to the success of The Beatles. But this does not explain why similar instrumentation was adopted for genres other than the teenage heart-throb market, such as “surfing” groups, country-and-western bands, rhythm and blues exponents, and even acoustic protest singers like Bob Dylan. The reason, I believe, is the RIAA Equalisation Curve (chapter 5). Yes, I’m afraid that’s right. Seven-inch popular “singles” recorded to this equalisation could hold this instrumentation very precisely without distortion at full volume, so the “louder is better” syndrome was again involved. Indeed, this writer was cutting pop masters in those years, and can swear that the ease of cutting the master was directly proportional to how closely the performance matched this norm.

The subliminal louder-is-better syndrome keeps cropping up throughout the whole of sound recording history, and I will not repeat myself unnecessarily; but I shall be enlarging the topic when we talk about sound accompanying pictures. But more recently, we have become aware that the reproducing equipment of the time also played a part. For example, melody, harmony, and rubato could be reproduced successfully on 1960s record players of the “Dansette” genre. When more ambitious sound systems became available in the mid-1970s, these musical features became less important. A heavy steady beat became the principal objective; a “Dansette” could never have done justice to such a beat.

In section 13.2 we noted how the maximum playing times of various media influenced what became recorded, and how the three-minute “dance hit” ending with a definite chord led the market from 1902 to the mid-1960s. We also saw how others preferred to escape this scenario by performing an electronic “fade” at the end. In 1967, The Beatles were to demolish the whole idea by publishing a song “Strawberry Fields Forever” with a “false fade ending.” Any radio disc jockey who tried introducing the next record over the fade would find himself drowned by the subsequent “fade up”, while the fame of the group meant there would be thousands of telephoned complaints at the switchboard. So here is another example of how artists were influenced - creatively - by sound engineering technology.

13.7 Frequency ranges

This brings us to the matter of frequency responses. In principle, engineers could always affect the frequency response of an electrical recording by using circuits in a controllable manner. Yet the deliberate use of equalisation for artistic effect seems to have been a long time in coming. To judge from the historical recorded evidence, the facility was first used for technical effect, rather than artistic effect. The situation is reminiscent of 1950s hi-fi demonstrations, comprising thundering basslines and shrieking treble that were never heard in nature.

In 1928 the Victor Company made a recording of the “Poet and Peasant” Overture with bass and treble wound fully up (Ref. 13). But the effect was constrained by the limitations of the Western Electric cutter system (which was specifically designed to have a restricted bandwidth), and the effect sounds ludicrous today. I have no evidence for what I am about to say, but it is almost as if Victor wanted a recording which would sound good on a “pre-Orthophonic” Victrola.

According to fictional accounts, controllable equalisation was first used in the film industry in attempts to make “silent” movie stars sound better; but I have no documentary proof of this. Yet watching some early films with modern knowledge shows that relatively subtle equalisation must have been available to match up voices shot at different times and on different locations.

Until the loudspeakers owned by the public had reasonably flat responses, there was little point in doing subtle equalisation. Engineers working in radio drama, for example, had to use brutally powerful filtering if they wanted to make a particular point (a telephone voice, or a “thinks” sequence). Conventional equalisation as we know it today was unknown. Indeed, the researchers planning such installations had only just started to learn how to get a wide flat frequency response. They considered technically illiterate artistic nutters would be quite capable of misusing the facility to the detriment of their employers. This was definitely the case when I entered the industry as late as 1960; both in broadcasting and in commercial recording, the only way we could affect the frequency response was by choice of (and possibly deliberate misuse of) the microphone(s). In any case, the philosophy of today’s restoration operator (the objective copy should comprise “the original intended sound”) means that we hardly ever need to attack such equalisation, even when we know its characteristics.

However, it might be worth noting one point which illuminates both this and the previous section. The louder-is-always-better syndrome resulted in the invention of something called “the EQ mix.” This took advantage of multitrack recorders, and allowed the band to participate in the mixing session in the following way. Equalisation controls on each track could be adjusted to emphasise the most characteristic frequencies of each instrument, for example higher frequencies to increase the clarity of a rhythm guitar. (This would normally be done with something called “a parametric equaliser”, which can emphasise a narrow or a wider range of frequencies by a fixed amount. It is actually the wrong word, because it is not the kind of “equaliser” to equalise the characteristics of something else, and it does not restore relative phases). If each track had different frequencies emphasised, the resulting mixed sound would be louder and clearer for the same peak signal voltage, and the louder-is-better syndrome was invoked again.
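
For the technically curious, a peaking (“parametric”) section of the kind described can be sketched with the widely published “audio EQ cookbook” biquad formulas. The centre frequency, bandwidth (Q), and gain below are illustrative assumptions, not values any particular studio used; with a negative gain the same section becomes a corrective dip.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """Boost (or, with negative gain_db, cut) a band centred on f0 Hz."""
    a = 10.0 ** (gain_db / 40.0)       # amplitude factor at the peak
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)

fs = 44100
rhythm_guitar = np.random.randn(fs)    # stand-in for a real track
brighter = peaking_eq(rhythm_guitar, fs, f0=3000.0, gain_db=6.0, q=1.0)
```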

13.8 Monitoring sound recordings

This usually means listening to the sound as the recording (or mixing) is taking place. (It also means “metering” when applied to electrical mixing, but I shall not be thinking about meters in this section).

Professional recording engineers could not monitor acoustic recordings rigorously, although they got as near to it as they could. The musicians were put in a bare room, as I mentioned earlier; but the room would be divided in two by a full height partition. The experts worked on the other side of this with the actual recording machine, and the recording horn poked through a relatively small hole in the wall. Thus the experts heard the music through this hole, and (with experience) they were able to detect gross errors of balance. Communication with the artists was usually by means of a small hinged window or “trap door” in the wall. But many defects could not be heard without playback, which is the subject of the next section.

Electrical recording permitted engineers to hear exactly what was coming through the microphone, but it seems to have been some time before the trap door system was abandoned and soundproof walls and/or double glazing substituted. The Western Electric system involved a loudspeaker with a direct radiating conical cone driven by a vibration mechanism exactly like the disc cutterhead. But this would not have been particularly loud, so there was no risk of a howl-round through the trap door. However, it was just sufficient for the engineer at the amplifier rack to hear the effects of altering the amplification, and many kinds of gross faults would be noticed when the trapdoor was shut. The fact that engineers could both monitor the recorded sound, and control what they were doing, was an enormous step forward.

Landline recordings from other locations permitted “infinite acoustic separation” between engineer and artists. In practice, there would be what we would now call a “floor manager,” although I do not know the contemporary term for the individual. This would be another engineer at the other end of the landline with the musicians, connected with the first engineer by telephone. The first engineer would guide the floor manager to position the artists and microphone by listening to his loudspeaker, and the floor manager would cue the first engineer to start and stop the recordings.

Photographs taken at the opening of the Abbey Road studios in 1931 show the trapdoor system was still in use, although the engineer would have needed lungs of iron to shout down a symphony orchestra at the other end of Studio 1! It seems to have been about 1932 or 1933 before double-glazed windows were introduced, together with “loudspeaker talkback.” Now engineers heard what was coming from the studio with even greater clarity than the public. Although sound mixing definitely antedated this, especially in broadcasting, the new listening conditions enabled multi mike techniques to be used properly for commercial recordings for the first time. I could now write many thousands of words about the engineer’s ability to “play God” and do the conductor’s job; but if our objective copy is defined as carrying “the intended original sound,” we shall not need to take this into account.

In the world of radio broadcasting, a different “culture” grew up. Broadcast monitor areas were conventionally like medium-sized sitting rooms. Broadcasting was considered an “immediate” medium. In Europe, anyway, broadcasts were considered better when they were “live,” and this was certainly true until the mid-1950s. Thus the principal use of recording technology was so complete programmes could be repeated. It was therefore considered more important for a recording to be checked as it was being made, rather than being kept in a virgin state for mass production. In Europe three media were developed in the early 1930s to meet this need - magnetic recording on steel tape, nitrate discs, and Philips-Miller (mechanical, not photographic) film. With each of these media, engineers could, at the throw of a switch, compare the “line-in” signal with the “reproduced” signal. (Of course, this became practically universal with magnetic tape).

This meant the recording engineer usually had to be in a separate room from the engineer controlling the sound, because the monitoring was delayed by the recording and reproducing mechanisms. “Recording channels” comprising at least two machines and a linking console were set up in acoustically separate rooms. This also kept noises such as the hiss of a swarf pipe away. And, as the recording equipment became more intricate and standards rose, the art of sound recording split into two cultures - one to control the sound, and one to record it.

This split remained until magnetic tape recording equipment became reliable enough for the sound-balancing engineer to start the machines himself using push buttons. Then, in the 1960s, the cultures grew closer again. Tape recorders found their way back into the studio monitoring areas, tape editing could be done in close cooperation with the artists, and the “tape op” was often a trainee balance engineer.

13.9 The effects of playback

Although the engineer might play two rôles - a rôle in the recorded performance and a rôle in the technical quality - the former consideration was shared by the artist himself when it became possible to hear himself back. Nowadays, we are so used to hearing our own voices from recordings that it is difficult for us to recall how totally shocked most adults were when they heard themselves for the first time - whether they were speakers, singers, or instrumentalists. This capability played an important part in establishing what we hear from old recordings nowadays. But, important though it may be, it is rather more difficult to ascertain what part it played! So in this section, I shall confine myself to a brief history of the process, and hopefully this will inspire someone else to write about the psychology.

Early cylinder phonographs were capable of playing their recordings almost as soon as they were completed, but there seems to have been little long term effect on the performances. The basic power-bandwidth product was very poor; so the artistic qualities of the recording would have been smothered by massive technical defects. Also the phonograph was (in those days) a new technology interesting in itself, rather than being thought of as a transparent carrier of a performance. Finally the limited playing time meant it was considered a toy by most serious artists, who therefore did not give cylinders the care and attention they bestowed on later media.

Early disc records, made on acid etched zinc, could be played back after ten or twenty minutes of development in an acid bath. Gaisberg recalled that artists would eagerly await the development process (Ref. 14); but it seems the acid baths did not work fast enough for more than one trial recording per session. Thus, it may be assumed that the playback process did not have an effect significantly more profound than cylinder playback.

When wax disc recording became the norm around 1901, the signal-to-noise ratio improved markedly. Almost immediately the running time increased as well, and VIP artists started to visit the studios regularly. Thus playback suddenly became more important; yet the soft wax would be damaged by one play with a conventional soundbox. The experts had to tell artists that if a “take” were played back for any reason, the performance would have to be done all over again. Trial recordings for establishing musical balance and the effects of different horns or diaphragms might be carried out, but the good take couldn’t be heard until it had been processed and pressed at the factory.

The factory might be hundreds of miles away, and the time for processing was normally a week or two. So it can be assumed that the artists had relatively little chance to refine their recorded performances by hearing themselves back. Indeed, anecdotal evidence suggests almost the opposite. Artists were so upset by the hostile conditions of a recording session, that the facility was principally used for demonstrating why they had to perform triple forte with tuba accompaniment whilst being manhandled.

Two kinds of wax were eventually developed for disc recording, a hard wax which was relatively rugged for immediate playback on a fairly normal gramophone, and a soft wax which had much less surface noise and was kept unplayed for processing. (Ref. 15). But with electrical recordings, meaningful playback at last became possible. The soft-wax and hard-wax versions could be recorded at the same time from the same microphone(s). Furthermore, Western Electric introduced a lightweight pickup especially for playing waxes (Ref. 16). It was reported that “A record may be played a number of times without great injury. At low frequencies there is little change and at the higher frequencies a loss of about 2TU” (i.e. two decibels) “per playing.”

But clearly this wasn’t good enough for commercial recording, and it is quite certain that soft waxes were never played before they were processed. (See also Ref. 17). The decision was then swayed by considerations like the time wasted, the costs of two Western Electric systems (one was expensive enough), the cost of the waxes (thirty shillings each), and so on. Frankly, playback was a luxury; but for VIP artists at least, it was sometimes done. It seems principally to have been used in film studios; Reference 18 makes it clear that actors always crowded into a playback room to hear themselves, and modified their performances accordingly.

We saw in the previous section that instantaneous playback was an essential requirement for a broadcasting station, so from about 1932 onwards we may assume that playback to the artists might have been possible at any time. However, we can also see that it was very unlikely in certain situations - mastering on photographic film being the obvious case. Also, many broadcast recordings were made off transmission for a subsequent repeat; if the artist heard himself at all, it was too late to correct it anyway.

But I mention all this for the following reason. Let us suppose, for the sake of argument, that a VIP pianist heard a trial recording made with a Western Electric microphone. Would he have noticed that the notes around 2.9kHz were recorded 7dB louder than the others, and would he have compensated during the actual take? I stress this is a rhetorical question. There is no evidence this actually happened. In any case, the defects of the Western Electric moving-iron loudspeaker cone were greater than 7dB, and moving the microphone by six inches with respect to the piano would cause more variation than that as well. But here we have an important point of principle. Whenever playback of imperfect recordings might have taken place, today we must either make separate archive, objective, and service copies, or document exactly what we have done to the archive copy! It is the only way that subsequent generations can be given a chance to hear an artist the way he would have expected, as opposed to the actual sound arriving at the microphone.

The characteristics of contemporary monitoring equipment are not important in this context. Although a loudspeaker might have enormous colouration by present-day standards, it would colour the surface noise in such a way that listeners could instinctively reject its effects on the music. Certainly it seems people can “listen through” a loudspeaker, and this faculty is more highly developed when the listeners are familiar with the system, and when it’s a good system as well. Thus the judgements of professional engineers may dominate. In any case, the issue does not arise if there was a post-production process - a film going through a dubbing theatre, or a multitrack tape undergoing “reduction” - because that is the opportunity to put matters right if they are wrong. It is also less likely when trial recordings were made and played back immediately with essentially flat responses. The writer’s experience is that, when the artist showed any interest in technical matters at all, considerations concerning the balance arising from microphone positioning dominated. Finally, when the artist heard test pressings - particularly at home, where the peculiarities of his reproducing equipment would be familiar to him - the issue does not arise.

Although I was unable to find an example to back up my hypothetical question about the Western Electric microphone, there is ample subjective evidence that this was a real issue in areas where lower quality recording equipment predominated. It is particularly noticeable on discs recorded with a high-resistance cutterhead. In chapter 5 we saw this resulted in mid-frequencies being emphasised at the expense of the bass and treble. We can often establish the peak frequency very accurately and equalise it. But when we do, we often find that the resulting sound lacks “bite” or “sparkle.” The engineers were playing test pressings on conventional gramophones whose quality was familiar from the better records of their competitors, and they relocated the musicians or rearranged the music to give an acceptable balance when played back under these conditions. You can see that potentially it is a very complex issue. It was these complexities which caused the most passionate debate at the British Library Sound Archive, and made me decide to write this manual to ventilate them. I have mentioned my solution (three copies); but it is the only way I can see of not distorting the evidence for future generations.
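
To make the equalisation step concrete, here is a minimal Python sketch of the kind of peaking cut described above, using the standard “RBJ cookbook” biquad. The centre frequency, depth and Q in the example are illustrative only, not measurements from any particular cutterhead; and remember that a technically “flat” result may belong on the objective copy rather than the service copy, for the reasons just given.

    import numpy as np
    from scipy.signal import lfilter

    def peaking_cut(x, fs, f0, gain_db, q):
        """Biquad peaking EQ (RBJ cookbook); a negative gain_db cuts the
        mid-frequency resonance a high-resistance cutterhead emphasised."""
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        return lfilter(b / a[0], a / a[0], x)

    # e.g. a 10dB cut at an assumed 2.5kHz resonance with a Q of 2:
    # flattened = peaking_cut(transfer, 44100, 2500.0, -10.0, 2.0)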

13.10 The costs of recording, making copies, and playback

Clearly, these are further features we must take into account when we study the interaction between performers and engineers. But the trouble with this section is that money keeps changing its value, while different currencies are always changing their relationships. A few years ago I wrote an account of how the prices of British commercial sound records changed over the years (Ref. 19). I attempted to make meaningful comparisons using the official British “cost of living index”, which (when it commenced in July 1914) represented the price of necessities to the average British “working man”. By definition, this was someone paid weekly in the form of “wages”. It automatically excluded people who received “salaries” (usually paid quarterly in arrears), so they had no effect upon the “cost of living index”. Yet for several decades, only salaried people could afford sound recordings.

I faced yet more financial problems in the original article, but here I shall not let them derail me - the effects upon this section are not very significant. So I am taking the liberty of dealing with costs from a purely British viewpoint, and because neither Britain nor America experienced the runaway inflation suffered in many other countries, I have simply decided to stick with my original method. I shall just express costs in “equivalent present-day pounds sterling” (actually for the end of the year 2000), and leave readers to translate those figures into their own currencies if they wish.

But I had better explain that British prices were quoted in “pounds, shillings and pence” until 1971. (There were 12 pence to a shilling, and 20 shillings to a pound; and the symbol for pence was “d”.) In 1971 Britain changed to decimal coinage with 100 “new pence” to the pound; but the pound stayed the same, and forms the basis for my comparisons.
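
For readers who wish to follow the arithmetic in the rest of this section, a minimal sketch of the conversion follows, assuming only the definitions just given. The inflation multiplier is not an official index; it is inferred from the figures quoted below (£31 5s 0d in the early 1890s becoming “no less than £2000 today” implies a factor of about 64).

    def lsd_to_pounds(pounds, shillings=0, pence=0):
        # 20 shillings to the pound, 12 pence to the shilling
        return pounds + shillings / 20 + pence / 240

    EARLY_1890S_MULTIPLIER = 64          # inferred from the text, not official
    price = lsd_to_pounds(31, 5, 0)      # the 150-dollar phonograph, £31 5s 0d
    print(round(price * EARLY_1890S_MULTIPLIER))   # -> 2000, as quoted below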

The next dimension is that we should differentiate the costs of the recording media (and the overheads of recording them) from the prices people paid out of their pockets for any mass-produced end results. This was particularly important between the wars, when making satisfactory disc masters cost astronomical sums. This was only workable because mass production and better distribution made the end results affordable; and I shall study this dimension, because it radically affected what was recorded.

Two more dimensions are what the purchaser received for his money - first playing time, and second sound quality. For the former I have picked a “nominal playing time,” about the average for records of that period. But I shan’t say much about the quality (or power-bandwidth product), except to note when it altered sound recording history.

And there is a whole spectrum for the end results, from vocal sextets in Grand Opera to giveaway “samplers.” Unless I say otherwise, I shall assume the “mainstream” issues in a popular series - in HMV language, the “plum label” range - rather than “red label and higher”, or “magenta label and lower.”

13.10.1 The dawn of sound recording

There is no doubt that news of Edison’s invention stimulated heavy demand for music in the home, but this demand was fanned to blazing point by three unplanned circumstances.

First, Edison’s 1877 Tinfoil phonograph was not designed to allow recordings to be changed. After the tinfoil had been unwrapped from the mandrel, it was practically impossible to put it back again on the machine which recorded it, let alone a different machine.

Bell and Tainter’s “Graphophone” was the first machine to allow the medium to be changed, and Edison soon adopted the principle; yet I don’t know any written history which even mentions this breakthrough! The next circumstance was that both the subsequent “Graphophones” and “Improved Phonographs” were targeted at businessmen for dictation purposes; the idea of using them for entertainment was resisted by nearly everyone in the trade. Finally, the various organisations marketing sound recording in North America and overseas got themselves into a horrible entanglement of permissions and licenses, which made it practically impossible to decide what was legal and what was not - even if anyone could have seen the whole picture, which nobody did.

After the first Edison wax cylinder phonographs began to fail as dictating machines, it was possible to buy one for a hundred and fifty dollars, which translates into £31. 5s. 0d. in British money of the time. When you take inflation into account, it makes no less than £2000 today. In his book “Edison Phonograph - the British Connection,” Frank Andrews describes the early days of pre-recorded entertainment in Britain. It seems this did not start until 1893, and then only by accident. “The J. L. Young Manufacturing Company” was marketing Edison dictation phonographs imported from America. Although the Edison Bell Phonograph Corporation of London had purchased twenty Letters Patent on the use of phonographs and graphophones (to give them a legal monopoly in Britain), Young’s phonographs bore a plate from the Edison works stating that their sale was unrestricted except for the state of New Jersey. With typical American xenophobia, the designer of the plate had forgotten the Edison Bell monopoly in Britain.

The Earl of Winchelsea was at Young’s premises buying some blank cylinders for dictation purposes. The music hall singer Charles Coburn also happened to be trying a phonograph by singing into it, and the Earl overheard him and asked if the cylinder was for sale. Young sold it to him, and apparently Coburn carried on all day singing unaccompanied into the machine. Other customers were told the recorded cylinders were for sale, and the whole stock was cleared by 11 o’clock next morning. Unfortunately, history doesn’t relate how much they fetched.

The first case where we know the price is also a case we cannot date - told by Percy Willis to Ogilvie Mitchell (author of “Talking Machines”, a book published in 1922). Willis admitted he had been breaking American law, since he was the first to smuggle a phonograph out of America to Ireland in about 1892, where he made £200 in five days just by exhibiting it. He returned to the States for more machines, hid them under fruit in apple barrels, and recorded many cylinders which he sold at a pound apiece. Thus the cost was one pound for three minutes or so - about £64 when you take inflation into account.

Wax cylinder equipment was maintained for specialist purposes for another four decades, until electrical disc recording displaced it. Besides the “family records” made by phonograph owners, the format was widely used for collecting traditional music and dialect. It was powered by clockwork, so it could be used in the field without any power supply problems. A blank cylinder weighed just short of 100 grams. I do not know its original price, but it would have been about a shilling in 1906 (about three pounds today), and it could be shaved (erased) a dozen times or more - even on location. This made the medium ideal for traditional music and dialect until the 1940s. Allowing for inflation, and the cost of repairs and new cutting styli, call it six pounds for two minutes.

13.10.2 Mass produced cylinders

Between 1892 and 1902 the price of a one-off commercial cylinder had fallen as low as 1/6d - the best part of a fiver at today’s prices, even though this was a “downmarket” make (“The Puck”). All these cylinders were one-offs, made by artists with enough stamina to work before several recording horns performing the same material over and over again - a circumstance which seriously restricted what we inherit today. In his book, Frank Andrews lists many other twists in the tale as other organisations tried to market cylinders. But Edison’s two “tapered mandrel” patents in Britain expired in April 1902, while Edison’s patent for moulding cylinders was disallowed; and after this, moulded cylinders could be marketed freely.

This established the situation whereby a commercial sound recording tended to be of higher quality than any other type of sound recording. This remained true throughout the eras of electrical recording, sound films, radio broadcasting, television broadcasting, and the compact disc. If the word “quality” is taken as meaning the artistic and craftsmanship aspects of sound recording as well as the technical ones, commercial sound recordings remained superior until the end of the twentieth century. Only then did it become possible to purchase equipment with full power-bandwidth product, and add time, inspiration, and perspiration to create recordings which equalled the best.

Returning to the moulding process: this was comparatively inexpensive. To use a modern phrase, it was a “kitchen table” process; amortised over some thousands of cylinders, it can be ignored for our purposes. Edison and Columbia “two minute cylinders” never dropped below 1/6d until the “four-minute cylinder” took over, when the older ones became 1/-. Bargain two-minute cylinders eventually dropped as low as 8d. - equivalent to almost exactly two pounds today. Many cheap ones were “pirated” (to use twenty-first century vocabulary, although there was no copyright in sound recordings in Britain before 1912).

13.10.3 Coarsegroove disc mastering costs

The first disc records were mastered on acid-etched zinc; but when the US Columbia and Victor companies formed their patent pool to allow discs to be mastered in wax, this considerably increased the costs of making original recordings. The slab of wax was about an inch thick (to prevent warpage and to give it enough strength to be transported to the factory for processing), and its diameter was greater than that of the manufactured record, so mothers and stampers could be separated without damage to the grooves. A wax for one ten-inch side weighed over 1.8kg - more than eighteen times as much wax as a blank cylinder - so it isn’t surprising that master waxes were quoted as costing thirty shillings each in the mid-1930s (about £54 today), plus the cost of the machinery and operators to cut them, and of course the artists’ fees!

Next came the “processing” - the galvanic work to grow masters, mothers and stampers. Here I do not know the actual costs (which were trade secrets). But in the mid-1960s when I was cutting microgroove on cellulose nitrate blanks, the first stamper cost about the same as the nitrate blank. So, ignoring the artists’ fees, you might double that 1930s figure to £120 for three minutes - and this was before anyone could play the recording back!

In the circumstances, only professional performers who gave reliable performances would normally be invited to make commercial records. This supported the idea of different classes of artist, together with the idea of different colours of label for each of the different classes. When an artist’s name sold the recording (rather than anything else), only one or two individuals with a talent for getting on with VIPs took on the job we would today call “record producer”. He and his artist would decide which version should be used after £120 had been expended making a “test pressing” of each take which might be publishable. And this is why unpublished takes seem only to survive for VIP performers.

But, before archivists fish in their pockets for wads of money, I must explain that the test pressing system went further than that. It was quite normal for the published version to exist on quite literally dozens of so-called “test pressings”, presumably for attracting new artists, for bribery, or for promotional reasons.

In the days before electronic amplification, the combination of acoustic recordings and devices like Pathé’s mechanically magnified discs or the Auxetophone pneumatic amplifier (section 12.25) could make a utilitarian (if inflexible) public address system. Otherwise, only the wealthiest amateurs (facing a cost well into three figures by modern standards, without being upset by failures) could afford to make a sound recording until the 1930s.

13.10.4 Retail prices of coarsegroove pressings

Disc pressing was much more capital intensive than cylinder moulding. Apart from the presses themselves (which cylinders didn’t need), there were ongoing costs for steam to heat the stampers, water to cool them, and hydraulic power for hundreds of tons of pressure. So for the first decade or more, disc records were even more expensive than one-off cylinders, let alone moulded cylinders.

Between 1902 and 1906, a mainstream seven-inch single-sided black label Gramophone disc retailed at 2/6d (playing time about two minutes), while the ten-inch single-sided equivalent (three minutes) cost five shillings (about £15.50 today).

In September 1912, “His Master's Voice” was actually the last label to market double-sided discs. They did so whilst ceasing to advertise in the Trade Press, which makes it difficult for me to establish how they were priced. (Clearly they were hoping not to alienate their loyal dealers, while at the same time avoiding egg on their faces). But it seems single-sided Gramophone Concert discs (then 3/6d) gave way to double-sided plum label B-prefix HMVs at the same price (equivalent to about £10.95 today). The table below shows what happened next.

The first line shows there was still room for price cuts, probably helped by the amortisation of the capital required to build the Hayes factory (construction of which began in 1907). After that, it is striking how consistent the right-hand column is. There were comparatively small alterations in the untaxed costs of double-sided plum label discs, and the principal ones were independent of World Wars or anything Chancellors of the Exchequer could do.

I shall now deal with the “extremes”, rather than “plum label” records. For many years, the most expensive “extreme” was considered to be Victor’s 1908 recording of the sextet from Lucia di Lammermoor. Historian Roland Gelatt wrote that this was priced at seven dollars specifically for its ability to attract poorer classes of people to the store like a magnet. Its British equivalent cost fifteen shillings until it was deleted in 1946, equivalent to sixteen or seventeen pounds of today’s money for four minutes of music. And it must also be said that here in Britain, this fifteen-shilling issue was eclipsed by others by Tamagno and Melba.

The cheapest records were retailed by Woolworth’s. From 1932 to 1936 they were eight inches in diameter with the tradename “Eclipse”, and from 1936 to 1939 they changed to nine inches with the tradename “Crown.” At sixpence each, this is equivalent to about £1 today. This enormous range for retail prices underwrote both the prestige and the mass production of commercial disc recordings, in a way which never happened with cylinders.

At this point I must break off for two reasons. First, subsequent prices include Purchase Tax or Value Added Tax, and second, popular Long-Playing albums were now available. To show how these fared, I have averaged in the next table the Decca LK series and the HMV CLP series (which were always within a few pence of each other).

This time, it’s apparent from both columns that, in “real terms”, prices of records steadily fell. I leave it to readers to speculate why; but the result was that popular music grew to a billion dollar industry, with all the other genres easily outclassed.

It would be nice to bring this right up to date; but at this point Resale Price Maintenance was declared illegal. (British record companies had used it because royalties to music publishers were fixed at 6.25% of untaxed retail price by law. It would have been impossible to pay the correct royalties if retailers had increased their prices; but although they now gave a “recommended retail price”, competition kept prices down, so music publishers got more than their fair share).

13.10.5 One-off disc recording

In the mid-1930s the costs of wax mastering stimulated the development of “direct disc recording” on various formulations of blanks, the most familiar being “the acetate” (mostly cellulose nitrate). Every few months, the magazine Wireless World provided listings of all types of recording blanks. Double-sided ten-inch discs (six minutes) were around 2/6d each (about five pounds today), and twelve-inch (eight or nine minutes) were about 4/6d (about nine pounds). But unlike wax cylinders, they could not be shaved or “erased”; British sound recording enthusiasts simply had to record note-perfect performances with rigorous cueing and timing, or pay the price penalty.

A side effect of this is that we often find discs with bits of the performance missing, and the exact opposite - sections of off-air recordings or “candid” eavesdroppings upon rehearsals, the like of which would never survive today. These prices had doubled by the end of the Second World War, becoming even more expensive than HMV “Red label” pressings. The discs were easily damaged as well. But the advent of microgroove fuelled a rebirth in the medium, since microgroove discs needed more careful handling anyway, and the blanks could now hold five times as much material. Linked to mastering on magnetic tape (with its editability), the strain on performers was largely eliminated. Unfortunately the prices of blanks continued to rise, passing those of manufactured LPs in about 1965; by 1972 the additional costs of the machinery, the cutters, and the engineers, meant cassette tape was better suited for one-off recording work.

13.10.6 Magnetic tape costs

For a number of reasons, disc was paramount in Britain for longer than anywhere else. When we consider open-reel magnetic tape, the relationship between quality and quantity was unlike any other medium. If in 1960 you wanted to record a half-hour radio programme (for example), you might use a “professional” seven-inch reel of standard-play tape at 7.5 inches per second (19 cm/sec) “full track” at a cost of £2. 8s. 0d.; or you could use one of four tracks on a “domestic” seven-inch long-play tape at 3.75 inches per second costing exactly the same, but the reel would hold no less than twelve such programmes. Using these figures, one half-hour radio programme would cost the equivalent of anything from £20 down to £1.67 today. Because the trades unions representing musicians and actors influenced what could be kept on broadcast tapes, broadcasting archives are now being forced to plunder such amateur tapes (often with detriment to the power-bandwidth product).
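
The arithmetic behind those figures can be made explicit. In the sketch below, the reel lengths (1200 feet of standard-play and 1800 feet of long-play tape on a seven-inch spool) and the year-2000 multiplier of about 8.3 are assumptions, chosen to reproduce the figures quoted above.

    reel_price = 2 + 8 / 20                          # £2 8s 0d in decimal pounds

    full_track = 1200 * 12 / 7.5 / 60                # minutes per pass at 7.5ips -> 32
    quarter_track = 4 * (1800 * 12 / 3.75 / 60)      # four tracks at 3.75ips -> 384

    for minutes in (full_track, quarter_track):
        n = minutes // 30                            # half-hour programmes per reel
        today = reel_price / n * 8.33                # assumed year-2000 multiplier
        print(f"{n:.0f} programme(s) per reel, about £{today:.2f} each today")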

But by 1960 prices were already falling. The earliest definite price I’ve been able to find is £2. 14s. 0d. for a seven-inch long-play tape in July 1956, and it gradually became apparent that British-made tape was much more expensive than imported tape. Its cost fell about one-third between 1960 and 1970, during which time inflation rose almost exactly the same amount, after which British tapes were matching imported ones.

None of these figures include the capital cost of any tape recorders, let alone satisfactory microphones, studio equipment, and soundproofing. This capital element was easily the biggest component of mastering on tape for commercial sales. In the 1950s this element was at its height, and formed the principal reason why the British duopoly of EMI and Decca remained unbroken. But in the 1960s it became feasible for minor recording studios to stay afloat while making recordings of commercial quality (this development had happened a decade earlier in the USA), so the only extra things needed were mass production (pressing) and distribution! The pressing consideration was prominent in the ’sixties and ’seventies: despite the commercial popular music industry being (arguably) at its height, it was very difficult to get time on a seven-inch press, so there are very few short runs of popular music pressings - only “acetates”.

By 1970 the Philips cassette had been launched as well. A blank C60 (sixty minutes) cost just under a pound, while C90s were almost exactly £1. 5s. 0d, and C120s around £1. 13s. 0d. These were “ferric” cassettes, of course. Reels of tape did not attract purchase tax, because they were deemed “for professional use”; but when value added tax started in 1973, it went onto everything.

By 1980, new “non-ferric” formulations had doubled and trebled the prices, showing a demand for getting quarts of quality into pint pots. But ferric cassettes remained almost the same in pound notes, the only difference being the extra VAT. The secret of Philips’ success was strict licensing for the format, so cassettes would always be “downwards compatible” and playable on any machine. Cassettes didn’t have different sizes, speeds, or track systems (continual difficulties with open-reel tapes); they only had different magnetic particles.

“Cartridges” were an American invention. Cassettes proved popular in cars, since there was no pickup which could be shaken out of a groove and the tape was encased so it couldn’t tangle. But American motorists favoured the cartridge because it could play continuously, without needing to be changed at awkward moments. Stereo and quadraphonic versions were offered; but they did not penetrate European markets very far, and the quadraphonic ones are almost unbelievably rare here.

13.10.7 Pre-recorded tapes and cassettes

By and large, pre-recorded tape media played second fiddle to analogue disc pressings; later, the same material might be purchased on three or four different formats. In Britain, EMI launched pre-recorded tapes in the autumn of 1954, and the reviewer in The Gramophone that November found it difficult to say anything praiseworthy except that you could tangle them round your leg and they would still play. They were 7.5ips mono half-track. The HMV HTC-series was the equivalent of a 12" plum label LP, retailing for £3. 13s. 6d. (more than double the LP). But this was a prelude to The Radio Show in August 1955, where EMI’s “Stereosonic” Tapes were launched. This was the only stereo medium in Britain for the next three years.

Seven-inch spools of tape at 7.5 inches per second were featured. They consisted of one side of the equivalent LP copied from a magnetic master tape onto two “stacked” parallel tracks (so they couldn’t exceed thirty minutes). A “plum label” selection cost £2. 4s. 0d (equivalent to £31.90 today). At this price reviewers insisted on documenting the playing time, which varied between eighteen-and-a-quarter minutes and twenty-two.

Stereophonic LPs arrived in mid-1958. Although there were attempts to charge more for stereo ones, most of the record industry put up with the inconvenience of “double inventory”, and charged the same whether the LP was mono or stereo. By 1975 virtually all new microgroove releases were stereo, and virtually all users were equipped to play them (if not to hear them in stereo).

Open reel tapes always lacked a common standard which everyone could use for “downwards compatibility”. But they had one supreme advantage which gave them a market niche - the complications of disc mastering and pressing were avoided. Therefore many organisations joined the recording industry to market pre-recorded tapes (and later cassettes) of unconventional subject matter. This would offer rich pickings for sound archives, were it not for their completely insignificant sales.

In 1971 the following “standards” were on offer:

  • 2-track mono open-reel (3.75ips, 5-inch reel)
  • 4-track mono open-reel (3.75ips, 5-inch and 7-inch reels)
  • 4-track stereo open-reel (3.75ips or 7.5ips, 5-inch and 7-inch reels)
  • 4-track stereo cartridge
  • 8-track stereo cartridge
  • Philips CC cassettes (mono and stereo compatible)

In that year, Dolby’s B-type noise reduction system made it possible to get gallons into pint pots. This was the fillip the Philips cassette needed. It became the first commercial rival to discs since the cylinder. Pre-recorded cassettes settled down at about 80% of LP prices, and this still continues today. One area - the talking book - has not succumbed to the blandishments of digital recording, and has even caused cassette duplicating plants to expand in this digital age.

This was because the audiocassette had a unique feature - you could stop it, and start again from exactly where you stopped. Book publishers suddenly became cassette publishers; but even the original publishers have usually shortened the text, and cassettes are always dearer than the unshortened printed books. So I cannot compare like with like in this situation.

13.10.8 Popular music

After the 78rpm and 45rpm “popular single” formats I mentioned earlier, the industry spent much time flailing around trying other formats. Twelve-inch LP albums began to dominate, probably because of the sleeve artwork. The “seven-inch single” began dying about 1977, and was replaced by the “twelve-inch single”. The price of crude oil (the basic material from which vinyl copolymer was made) dropped in real terms, so twelve-inch analogue single discs could be sold at much the same price as seven-inch, while the increased diameter meant noticeably cleaner quality (and louder music - the “louder is better” syndrome intruded yet again). Although this did not directly affect classical music, the same technology (and thinking) was used for a small number of twelve-inch singles of classical music (and promotional versions of classical music), until digital techniques came to the rescue.

The term “disc jockey” had first appeared in radio broadcasting. Disc jockeys had begun making regular appearances at dance halls in the early 1960s; but their principal reason for existence had always been to feel the mood of the audience, and select music to reinforce (or contrast with) this mood. The twelve-inch single led directly to specific “disco” music. This was different from previous “popular songs”; it was danced to powerful loudspeakers which actually mesmerised the dancers, changing the course of popular music. It resulted in full-time disc jockeys who worked in dance halls rather than radio stations, together with new artforms such as “scratching” and “rapping” (improvised vocals over a pre-recorded backing, using twelve-inch singles skilfully played one after the other without a break in the musical tempo). Compact discs always took second place in this environment, but manufacturers eventually got the prices low enough for CD singles to retail between £1.99 and £3.99.

13.10.9 Digital formats

We have now arrived at the period following Resale Price Maintenance. As I cannot obtain accurate figures, I will use my historical judgement. I shall simply state that digital compact discs (CDs) appeared in 1982.

As I remember, they were about fifteen pounds each, about double the equivalent LPs, but they offered three advantages which ensured success. First, they were much more difficult to damage; second, they had significantly greater power-bandwidth product; and third, they could have a rather longer playing time.

The absolute maximum permitted by the “Red Book” specification is just short of 82 minutes, although the average is about seventy. This makes it practically impossible to compare like with like again, because LPs couldn’t hold that much; and when performances were reissued, most record companies dutifully filled their CDs with extra tracks.

Today new releases from leading record companies still cost about fifteen pounds, but prices in general have exactly doubled since 1982, so they are effectively half-price. And there are innumerable bargain issues at a fraction of this. Naxos CDs, for example, are offered at £4.99 (their cassettes are £4.49). There is a whole spectrum of other inexpensive material - much of it in double packs for the price of one.

But the costs of making CDs have fallen even more rapidly. Not only is it normal to give them away with magazines, but you can now buy a dedicated audio CD recorder for only £250. If you’re a computer buff, you can battle away and eventually do the same job at about half that price. And the blank discs are verging on fifty pence apiece.

13.10.10 Conclusion for pure sound recordings

This last thought shows that the mechanical costs of making sound recordings are no longer significant; straight marketing issues are now the only considerations in distributing published sound recordings.

I’m now going to utter a sad thought, so readers who’d like a happy ending should read no further. The public can (and does) have access to great music, well performed, and recorded to standards which approach that of a healthy human ear. By and large they do not realise the technical, financial and artistic costs of getting to this point; so sound recording is no longer a “sexy” industry. It shows every sign of becoming like the water industry - you just turn a tap, and there it is. It is true films and video would be meaningless without it, but here too the resemblance to a water tap is becoming noticeable.

As I write this, music distribution via the Internet is beginning to conquer conventional retail sales. Quite frankly, the lawsuits brought by the American commercial recording industry, however successful, seem doomed to failure. The Internet is now developing alternatives to the “client-server” model, so it will be impossible to say where any one piece of copyright sound recording is actually located. And the Internet community considers “free speech” to be a Human Right which has higher priority than rewarding multinational companies, so budding musicians put their works on the Internet on the basis that “there is no such thing as bad publicity”. Unless all the Berne Treaty countries unite quickly to reward creative people (by some process I cannot even imagine), the seventeenth-century metaphor of the artist in his garret (unable to bring his creations to the public) will once again mean the world in general will be deprived of the greatest art.

It’ll take a major shortage to remind people about the cost of everything and the value of nothing.

13.11 The cinema and the performer

Before sound films, the only ways film actors could communicate meaning were by facial expression, gesture, obvious miming, and “intertitles” (the correct name for the short pieces of film inserted to show what characters were saying). Personally, I would also add what is now called “body language”; and, judging by some of the ethereal writing of the “film theorist” community, obviously there’s a Ph.D thesis here.

Academic readers of this manual will probably know about an extremely extensive literature on “film theory”, and anyone not so aware might be recommended to start with Reference 20. However, nearly all sound engineers and directors working with pictures and sound have followed their practical common sense, rather than any academic theory. The literature is in fact extremely misleading for the purposes of understanding “the original intended sound”, let alone its effects upon the performers (the subject of this chapter).

Silent film acting was of course supported by other arts, such as lighting, scenery, costumes, and editing (such as the grammar of “establishing shots” and “closeups”). During the filming, the director was able to speak to his actors and give running directions throughout, and it was common practice to have a small instrumental ensemble to create a “mood” among the actors. The whole craft of silent film had developed into a very considerable art form with its own conventions, a classic case of performers adapting to a medium. This was almost totally overthrown when sound came along - for audiences as well as performers.

Prior to the “talkies,” silent films might be accompanied by anything from a full symphony orchestra playing a closely written score, to a solo piano playing mood music to cues - according to legend, quite independently of the picture! Many cinemas employed local workers with no special musical talent. When I was younger I was regaled with stories from a Roman Catholic uncle, whose choir was hired for suitable death scenes, and the choirboys improvised sound effects for the rest of the film.

Picture palaces rarely had acoustics suitable for music, the projection apparatus was not always silenced, and I also understand it was quite normal for the audience to converse among themselves. Literate people would also read out the intertitles to those who couldn’t read; and even blind people joined the audience, where it was both safe and sociable.

13.12 Film sound on disc

When sound films were first attempted with acoustic recording, it was almost impossible to get the artist where he could be filmed without also getting the recording horn into shot. The procedure was therefore to record the soundtrack first (usually at a commercial recording studio), process the disc, and then to “film to playback.” Various arrangements of gears and belts might be needed to link the sprockets driving the film with the turntable; but essentially the subject matter could be no more flexible than acoustic recording techniques allowed - loud sounds running about four minutes.

Often the same material would be pressed and sold as an ordinary commercial record, or the disc would subsequently become separated from the film. It isn’t always apparent today when such discs were meant to accompany moving pictures. (Ref. 21). The next step was a backwards one for sound archivists. Film studios might shoot their artists “mute,” and then pay them to sing or speak from the cinema orchestra pit, viewing the film in a mirror so they could direct the full power of their voices at the audience.

Early films would be in reels about one thousand feet long. The frame rate was not standardised until the advent of sound on film. Forgive me while I write about a side issue, but it underlines the message I wish to convey. When film speeds became standardised, cameramen lost a means of artistic expression. It is an oversimplification to assume that the films of the first quarter of this century always had comic actors charging around at express speeds. When we see the films correctly projected, we can also see that the cameraman might be varying the rate at which he turned the handle so as to underline the message of the film - quite subtly “undercranking” for scenes which needed excitement, or “overcranking” for emotional scenes. The speed might even vary subtly within a single shot. This is a tool which cameramen lost when sound began, and a means of expression was taken from them. So we must always remember that the relationship between a performance and the hardware is a symbiotic one.

So the introduction of sound caused a major revolution in cinema acoustics and cinema audiences, as well as the films themselves. Fortunately, Western Electric were at the forefront with their disc technology. I say “fortunately,” because they alone had the know-how (however imperfect) to correct cinema acoustics. Although Western Electric’s prices were very high, the associated contracting company Electrical Research Products Inc. provided a great deal of acoustic advice and treatment. This usually comprised damping around and behind the screen to eliminate echo effects between the loudspeakers and the wall behind the stage, and sufficient extra damping in the auditorium to cut the reverberation time to a level suitable for syllabic speech.

It does not seem that modern methods of acoustic measurement were used; but I have found no written record that screen dialogue was too fast in a Western Electric cinema, so it seems the process was successful. At any rate, film makers never seem to have compromised the pace of their films for the cinema audience; and, by a process of “convergent evolution,” this meant that all cinemas evolved to roughly similar standards.

The running time of a 1000-foot reel of silent film might be anywhere between fifteen and twenty minutes - much more than a 78rpm disc, anyway. Between 1926 and 1932 there were two different technologies for adding sound to pictures - disc sound and optical film sound. I shall start by considering disc sound.

Disc recordings switched to 33 1/3rpm to accommodate the extra duration. Electrical recording was essential because of the amplification problem. Because the technology introduced by Western Electric was more advanced, it played the leading role in the commercial cinema for a couple of years (roughly 1927-1929). It used the technology we described in section 6.15, with the addition of three-phase synchronous generator motor systems (called “selsyns”) to link the camera(s) and the disc-cutting turntable(s). Unfortunately, it was impossible to edit discs with the ease of film, and film directors had to alter their techniques.

First came material which could be performed in one “take.” In 1926 the vast majority of American film audiences were not being entertained in dedicated “picture palaces,” but in multi-purpose entertainment halls in small towns. More than half the evening’s entertainment preceded a silent feature film, usually including a small orchestra, singers, vaudeville artists, and film “shorts” with cartoons and newsreels. So it was natural for such live acts to be recorded with synchronous sound; and since both the subject matter and the place of performance already existed, they could be shot in one take. Meanwhile, there was enormous potential for reducing costs and improving artistic standards at the point of delivery.

Also the process was much cheaper than feature film methods of production, and patents were less of a problem. The sound was recorded on disc in the usual way, while there might be one, two, or three cameras running from the same power supply. There would be a “master shot” (usually the widest angle camera) which would be continuous; then the film editor could substitute closeups or cutaways from the other camera(s) whilst keeping the overall length the same. The master disc would be processed and pressed in the usual way.

Vitaphone was the first successful film company to use synchronous sound filming. As Reference 22 points out, this process was lubricated because it was much cheaper to make a film of a vaudeville act and show it all round the country, than to pay the artists to tour the country! What is now accepted as being the first “talkie” (“The Jazz Singer”, starring Al Jolson) was initially shot using this system, with the specific idea that if it failed as a “talkie”, it would still be possible to use the material in musical “shorts” (Ref. 23). And it was not a complete talkie; the film included several reels with “silent film music” on disc.

Its fame is probably due to an accident - a piece of ad-libbed speech by Al Jolson - which generated the very idea of “talkies”, and happened to be followed almost immediately by a disc of droning silent film music. Vitaphone had also developed their technology specifically to save the costs of hiring musicians to accompany silent films; the first full-length movie with a continuous music soundtrack was recorded this way and premiered in 1926 - “Don Juan.” The full story of how thirty years of film art was junked, and why sound was economically inevitable, is given in Ref. 24.

There now followed a period of considerable confusion, when established Hollywood stars were junked because they had “unsuitable voices”. For perhaps the first and only time in the whole history of sound recording, a sound engineer had a right of veto over the performance. To some extent this is understandable. The craft suddenly lurched forwards, to a stage analogous to live television a quarter of a century later. Everything happened in realtime - the dialogue, the singing, the action, the cutaways, the camera moves and the lighting changes, all completed with a live orchestra off the set. It is not difficult to tell when this happened, because there will be long sequences shot “television fashion” with several cameras operating at once. The performers will likewise be playing “safe”, and projecting their voices because neither the camera nor the microphone could get in for tight closeups. The microphone had a 2.9kHz peak, as we saw in section 6.5.

The only way of editing sound while it was on disc was to go through another generation, using radio broadcasting techniques to cue the discs. The picture would then be “fine-cut” to match the discs. Reference 25 says the first film to be put through this process (“Old San Francisco”) had audibly degraded sound, as a result of efforts to add earthquake sound effects (presumably to a musical score). The Marx Brothers polished their performances on the legitimate stage before they went to the film studio; this ensured they could get through many minutes of slapstick action without mistakes, while they could time their act so the cinema audience could laugh without drowning dialogue.

Until modern technology came to its rescue, the second Marx Brothers film “Animal Crackers” (1930) - recorded on disc - suffered a major picture editing problem when Captain Spalding appeared to get out of his palanquin twice. This is a classic case where there are two answers to the “subjectivism” problem I mentioned at the beginning of this chapter. What would the original engineers have defined as “the original intended sound”? Would they have preferred us to use the modern tidied-up version, in which a couple of bars of music were cut and spliced while maintaining the rhythm; or would it be better to keep the faulty picture edit to underline the deficiencies of the disc system, while retaining the music as the composer wrote it?

Because early sound-on-disc films gave cinema projectionists difficulties (it might take some seconds for the lumbering mechanism to get up to speed), it became the practice for each reel to finish with a shot of an artist staring fixedly and silently at the camera for ten seconds or so, while the disc comprised unmodulated grooves. This gave the projectionists time to switch over once the next reel was running, without a visible gap. This raises difficulties today. Ideally, the “archive copy” should have the zombie-shot at full length, while the “service copy” should have the action tightened up to minimise the hiatus.

Vaudeville halls continued in action. They later filled a long-felt want during the Depression, when a sound film programme could provide a couple of hours of escapism from the troubles.

13.13 The art of film sound

The vital years of development for the new art form were 1929 to 1939. Sound-on-Film offered better prospects to the feature film industry, and RCA Photophone did the basic research and development as fast as they could. Unfortunately, they were about a year behind Western Electric, and many cinemas were equipped with the disc system. However, the ability to cut and splice the sound as well as the pictures eventually tipped the balance so far as making the films was concerned, and disc mastering virtually ceased after 1930. Before that, it is my belief that the major achievement was to get anything audible at all, rather than to impose any “art” upon it. I choose the year 1929 as being the turning point, because several things happened that year.

Firstly, microphone “booms” and “fishpoles” were adopted, allowing film actors to move freely around the set (instead of being fixed to the spot to address a microphone hidden in a flower vase). Secondly it became possible to cut and splice optical soundtracks without a loud “thump”, thanks to the invention of the “blooping” technique, where an opaque triangular patch over the splice resulted in a gradual fade-out and fade-up. Thirdly, the first true use of “audio art” occurred, when Hitchcock (in his film “Blackmail”) used deliberate electronic emphasis of the word “knife” in some incidental out of shot dialogue, to underline the guilt of the heroine who had just killed a man with a knife.

Because silent films had nearly always been screened with live music (even if the tradition of an upright piano playing “Hearts and Flowers” is a gross caricature), it was perfectly understandable for early sound films to concentrate on music at the expense of speech; and when synchronous speech became feasible, there was a learning curve while stage techniques were tried and abandoned. By 1929, actors had settled down to what is now the convention - a delivery reminiscent of normal speech, but actually projected more forcefully (somewhat analogous to vocalists’ “crooning” technique).

A few early movies slavishly kept the two components, sound and pictures, strictly together; we can see the effects in the first reel of “42nd Street” (1932), where a mechanical thump, a change in sound, and a change in picture happen every time a new actor says a line. But this is also the earliest film I have seen with a better soundtrack from a frequency response point of view; presumably it used one of RCA’s early ribbon microphones. Soon, working with sound and pictures independently became the norm, and devices such as the sound “flashback” and “thinks sequence” became part of the film editor’s vocabulary.

More important, it was possible to “dub” the film. In films, this word has two meanings. First, it means adding music and effects, and controlling the synchronous sound, to make an integrated artistic whole. Second, it can mean replacing the synchronous sound with words from another actor, or in another language. To achieve both these ends, the film industry was first to use “multitrack recording.” All these facilities excite controversy when they go wrong; but the fact that thousands of millions of viewers have accepted them with a willing suspension of disbelief shows they have an enormous artistic validity. By 1933, the film “King Kong” had pictures and sound which were nothing remotely like reality, but which convinced the cinemagoers. It could never have been possible with disc recording.

Despite the advantage of sound-on-film, it took longer to change the cinemas. Fortunately, RCA Photophone was able to invent ways round Western Electric’s patents, and they offered their equipment to cinemas at a fraction of Western Electric’s prices. But disc equipped cinemas were still operating in Britain as late as 1935, the records being transfers from the optical film equivalents, so the boot was on the other foot. If we want to see a film today whose sound was mastered on disc, we usually have to run a magnetic or optical film copy, because synchronous disc replay equipment no longer exists. Yet we are often fighting for every scrap of power-bandwidth product. Today’s sound restoration operator must view the film, and decide whether it was mastered on disc; then select the appropriate format if he has any choice in the matter.

Meanwhile, a very confusing situation arose about sound film patents, which I won’t detail here (it involved three American conglomerates and one German one). Although cross-licensing kept the film cameras running, the whole situation was not sorted out in law until the outbreak of the second World War, by which time the various film soundtrack types I mentioned in chapter 7 had all been developed. The standard position for optical soundtracks on 35mm “release prints” was also finalised by 1929.

The problem of foreign languages then arose. At first many films were simply shot in two or three languages by different actors on the same set. Reference 26 points out a different scenario, in which the comic duo Laurel and Hardy brought foreign elocutionists onto the set to teach them how to pronounce foreign languages “parrot fashion”. The resulting distortions only added to audiences’ hilarity! Meantime, the high cost of installing sound projection equipment meant most countries were a couple of years behind the United States, and the system of intertitles continued for some time.

13.14 Film sound editing and dubbing

I write this next paragraph for any naïve readers who may “believe what they hear.” As soon as it became possible to move away from disc to film sound, it also became possible to cut and splice the pictures independently of sound - and, indeed, make the “whole greater than the sum of the parts” by “laying” different sound, changing the timing of splices, and modifying sound in processes exactly analogous to various lab techniques for the pictures.

Nearly every film or video viewer today is unaware of the hard work that has gone on behind the scenes. The man-days needed to create a convincing soundtrack often outnumber those for the picture; and I am now confining my remarks to just one language.

Since the normal film-sound conventions already allowed the use of two tracks (“live sound” from the set, and music), it was natural for the next split to be between speech and sound effects. Precisely how this became the norm does not seem to have been related anywhere (another Ph.D thesis, anyone?), but it allowed Paramount to set up a studio in Joinville (near Paris) specifically for adding foreign dialogue to American films. This began operations in the spring of 1930. The studio was made available to other Hollywood film companies, and although it closed a few years later, Paris remained a centre for the “foreign dubbing” industry.

Here I use the word “dub” to mean re-voicing an actor, using syllables matching the original lip movements in the picture. As this craft developed, original “stars” would sometimes re-voice their parts back in Hollywood, particularly on location shots where low background noise could not have been assured. (The technology came too late to save the careers of silent-screen lovers with squeaky voices, or noble-looking actors with uneducated accents). My next anecdote keeps resurfacing, so I will relate it now. The Italian film industry, knowing all its films would have to be dubbed, apparently did not ask its actors to learn lines at all. Instead, they apparently counted aloud, and the dubbing actors would interpolate some reasonable dialogue afterwards!

Thus, from 1930 onwards it was normal to have three soundtracks (even for later television films), “music”, “effects”, and “dialogue”. These would often be on the same strip of celluloid, whether optically printed or magnetically recorded.

In section 13.4 above I considered the subject of “sound perspective.” Film aficionados went through a massive debate on this subject in the 1930s. Here the problem was one of “naturalness” versus “intelligibility”. Basically, the optical film soundtrack (even with “noise reduction”) did not have enough dynamic range to allow speakers a hundred yards from the camera to sound a hundred yards from the camera. The problem was exacerbated by solid walls being made out of cardboard, inappropriate studio sets giving the wrong kind of footsteps, and other acoustic phenomena.

In practice, sound recordists did the only sensible thing. They placed the microphone as close to the actors as they reasonably could (usually about a metre above their heads, but facing their faces, so fricatives and sibilants would be caught). They worked on the assumption that if a sound was meant to sound distant, this could then be simulated by volume controlling, filtering, and/or artificial reverberation in the final-mix dub. Generally this remains true to this day, even in television. Today, philosophers of film technique justify the general consistency of speech quality by saying there are always at least two people involved, the speaker and the spectator! (Ref. 27).
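
To illustrate the principle (and only the principle - the parameter values below are arbitrary, not anyone’s studio practice), here is a minimal Python sketch of the “distance” treatment just described: attenuation, high-frequency loss, and a crude artificial reverberation applied to a closely-miked dialogue track.

    import numpy as np
    from scipy.signal import butter, lfilter, fftconvolve

    def sound_distant(dialogue, fs, gain_db=-12.0, cutoff_hz=4000.0, wet=0.4):
        """Make close-miked dialogue 'read' as distant."""
        quiet = dialogue * 10 ** (gain_db / 20)       # distant sources are quieter...
        b, a = butter(2, cutoff_hz / (fs / 2))        # ...and duller (air/HF loss)
        dull = lfilter(b, a, quiet)
        t = np.arange(int(fs * 1.2)) / fs             # crude reverb: exponentially
        ir = np.random.default_rng(0).standard_normal(t.size) * np.exp(-3 * t)
        wet_sig = fftconvolve(dull, ir)[: dull.size]
        wet_sig *= np.max(np.abs(dull)) / (np.max(np.abs(wet_sig)) + 1e-12)
        return (1 - wet) * dull + wet * wet_sig       # dry/wet mix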

Another difficulty is “silence.” The effects track should normally carry a continuous element, which glues the entire scene together. Scene-changes (and time-changes) can be signalled subliminally to the viewer by changing this continuous element. In my work as a film dubbing mixer, this continuous element was the hardest part of constructing a convincing soundtrack, because everything went wrong if it disappeared. (I suspect a great deal of film music might have been composed to overcome this difficulty!) Even “natural” sound picked up on the set includes subliminal components like camera noise and air conditioning, which stick out like a sore thumb when they are absent.

Therefore, film performances may require additional background noise, which has to be moved or synthesised to cover mute reaction shots and the like. A professional sound recordist will always take a “buzz track” for every location, including at least a foot or two of film running through the camera. Generating “silence” which does not resemble a failure of the sound system is usually possible; but of course, inherently it adds to the sounds made by the performers.
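
For those who like to see the job spelled out, here is a minimal sketch of looping a buzz track to cover a longer scene, assuming the track arrives as a numpy array of samples; the equal-power crossfade hides the joins, and the quarter-second fade length is an arbitrary choice.

    import numpy as np

    def extend_buzz_track(buzz, fs, target_seconds, xfade_ms=250):
        """Loop a short location 'buzz track' to the required length,
        crossfading each join so the repeat is inaudible."""
        xf = int(fs * xfade_ms / 1000)        # crossfade length in samples
        assert buzz.size > 2 * xf, "buzz track too short to loop"
        fade = np.linspace(0, np.pi / 2, xf)
        out = buzz.copy()
        while out.size < int(fs * target_seconds):
            tail = out[-xf:] * np.cos(fade)   # old tail fades out...
            head = buzz[:xf] * np.sin(fade)   # ...new copy fades in (equal power)
            out = np.concatenate([out[:-xf], tail + head, buzz[xf:]])
        return out[: int(fs * target_seconds)]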

On the other hand, it is my duty to record that several film directors have avoided mixing and dubbing film sound on ideological grounds, while most semi-professional and amateur video systems simply eliminate any chance of working creatively with sound at all.

In the next section I shall explain how the limiter was introduced to improve the clarity of speech (by making all syllables the same amplitude on the optical soundtrack). The “louder is always better” syndrome kept creeping in, both in the cinema and on television; and when Dolby noise reduction became available for cinemas, most simply used the extra headroom to raise the volume of the soundtrack rather than to lower the background noise. Therefore it became possible to plan a sound mix to be deafening.

In “Star Wars” (1977) this fundamentally affected the dialogue, since the music and effects could be very exciting, and the dialogue was specifically written and performed so redundant lines might be “drowned.” When this film moved to television (before Dolby sound was available to the home viewer), it was necessary to re-mix the three components to prevent viewers complaining they could not hear the dialogue. Comparing the cinema version with the domestic version reveals the deliberate redundancies in the script, which stick out like a sore thumb when made audible!

Meanwhile, when commercial television began in Britain in 1955, viewers began complaining about the loudness of the advertisements. These complaints continue to this day; but if you measure the sound (either with a peak-reading meter or an RMS meter), advertisements always read lower than the surrounding programmes. This demonstrates that subject matter always contributes a significant psychological component to “loudness.” The problem got worse as audio got better, mainly because the “trailers” advertising a forthcoming feature film always utilised the most exciting (and loudest) bits (and the same happened with television trailers). Messrs. Dolby Laboratories were forced to introduce a “movie annoyance meter” in order to protect their name; but Jim Slater of the British Kinematograph Sound and Television Society is quoted as saying: “If cinemas no longer turn the sound down to suit the trailers, they will play features louder. Not everyone will like that.”
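
For readers who wish to try the measurement themselves, here is a minimal sketch (my own illustration) of the two meter readings mentioned above. The signal names and figures are invented for demonstration; the point is that material can read lower on both meters while still being judged louder.

```python
# A minimal sketch; signals are floats in the range -1..+1.
import numpy as np

def peak_db(x):
    """Peak reading in dB relative to full scale."""
    return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)

def rms_db(x):
    """RMS reading in dB relative to full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

fs = 48000
t = np.arange(fs) / fs
programme = 0.9 * np.sin(2 * np.pi * 440 * t) * np.linspace(0, 1, fs)  # level rises to a peak
advert = 0.3 * np.sign(np.sin(2 * np.pi * 440 * t))                    # dense, heavily-limited

for name, x in (("programme", programme), ("advert", advert)):
    print(f"{name}: peak {peak_db(x):+.1f} dB, rms {rms_db(x):+.1f} dB")
# The "advert" reads lower on both meters, yet dense, heavily-limited
# material of this kind is routinely judged louder by listeners.
```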

13.15 The automatic volume limiter

In the meantime, the film industry led the world in another technology - volume compression. We have dealt with the nuts and bolts of this in chapter 10, so now we will consider the effect upon performances.

Natural speech consists of syllables which vary in peak strength over a range of twelve decibels or more. When dialogue was reproduced in a cinema, it had to be clear. Clarity was more important than fidelity, and with the restricted dynamic range of the first optical soundtracks before “noise reduction”, the “variable density” soundtrack had an advantage. It gave “soft clipping” to the vowel-sounds of words, allowing an unexpectedly loud syllable to be “compressed” as it was being recorded. Although considerable harmonic distortion was added, this was reduced by the “constant amplitude” characteristic employed by optical film sound (section 8.7), and the result was preferable to the unexpectedly loud syllable.
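
As a rough analogue of this behaviour, here is a minimal sketch (my own illustration, not a model of the actual light-valve physics) of a “soft clipping” transfer curve: below the threshold the signal passes unchanged, while louder syllables are compressed progressively rather than chopped off.

```python
# A minimal sketch of a soft-clipping transfer curve (my own choice of shape).
import numpy as np

def soft_clip(x, threshold=0.5):
    """Linear below the threshold; above it, the curve flattens smoothly
    towards +/-1.0, adding harmonic distortion instead of a hard overload."""
    over = (np.abs(x) - threshold) / (1.0 - threshold)
    return np.where(
        np.abs(x) <= threshold,
        x,
        np.sign(x) * (threshold + (1.0 - threshold) * np.tanh(over)),
    )
```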

Anecdotal evidence suggests that the loud sounds of gunshots were aided by driving the soundtrack further, into “peak clipping”. I have no first-hand evidence that this was the case; but traditional movie sound for a revolver comprises a comparatively long burst of “white noise”, the spectrum of which changes little with peak clipping. Anyone who has tried to record a real revolver with a real microphone onto a well engineered digital medium will know that the pulse of the shock wave dominates. Thus, the traditional revolver noise in film Westerns is very much a child of the medium upon which it was being recorded.

Both variable area and variable density soundtracks were made by light being fed between low-mass aluminium ribbons in a “light valve”. These ribbons were easily damaged by an overload, and I have no doubt that early amplifiers driving these ribbons were deliberately restricted in power to reduce the damage. Another anecdote tells how Walt Disney, who did the first voices for “Mickey Mouse”, blew up a light valve when he coughed between two takes. Clearly there would have to be a better way.

Western Electric introduced the first “feed back limiter” (sections 11.6 and 11.7) in 1932. This could be used to even out syllables of speech and protect the light valves at the same time. From that point on, all optical sound recording machines seem to have had a limiter as an overload protector, and in my personal opinion this tool became overused. Not only did the three components (dialogue, music and effects) each have a limiter, but so did the final mix as heard in the cinema. In chapter 10 I explained why it may often be impossible to reverse the effects of a limiter; but at least we have the ethical advantage that the film company obviously intended the sound to be heard after two lots of limiting.
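
Chapter 10 covers the circuitry; purely as an illustration of the principle, here is a minimal sketch of a feedback limiter, in which the side-chain detector is driven from the output rather than the input. The threshold and time-constants are placeholders, not Western Electric’s values.

```python
# A minimal sketch of the feedback principle; not the historical circuit.
import numpy as np

def feedback_limiter(x, fs, threshold=0.5, attack_ms=2.0, release_ms=200.0):
    """Syllabic limiter; the gain detector is driven from the OUTPUT."""
    att = np.exp(-1.0 / (attack_ms * 0.001 * fs))
    rel = np.exp(-1.0 / (release_ms * 0.001 * fs))
    gain = 1.0
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        out = sample * gain
        y[n] = out
        overshoot = abs(out) / threshold
        if overshoot > 1.0:
            gain = att * gain + (1.0 - att) * (gain / overshoot)  # pull gain down fast
        else:
            gain = rel * gain + (1.0 - rel) * 1.0                 # recover slowly
    return y
```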

As I said, optical sound is recorded with a constant amplitude characteristic, meaning that high frequencies are more liable to overload. When an actor had defective teeth (I understand teeth were not as healthy in those days!), the resulting whistles could cause considerable overloads. Within a couple of years, the “de-essing” circuit was incorporated into limiters. The de-esser greatly reduced the overloads, and also improved the balance between consonants and vowels. Therefore virtually all optical film dialogue since about 1935 has been “distorted” in this manner.
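
Again purely as an illustration (my own sketch, not the historical circuit), a de-esser can be modelled as a limiter whose side-chain listens only to the high frequencies, so that gain is reduced when sibilants or whistles occur. The filter and time-constants below are placeholders.

```python
# A minimal sketch; side-chain filter and time-constants are placeholders.
import numpy as np

def highpass(x, cutoff_hz, fs):
    """One-pole high-pass: the input minus a one-pole low-pass."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    lp = 0.0
    for n, sample in enumerate(x):
        lp = (1.0 - a) * sample + a * lp
        y[n] = sample - lp
    return y

def de_ess(x, fs, threshold=0.1, cutoff_hz=5000.0, release_ms=50.0):
    """Reduce gain only when the high-frequency band exceeds the threshold."""
    side = np.abs(highpass(x, cutoff_hz, fs))        # side-chain hears sibilance only
    rel = np.exp(-1.0 / (release_ms * 0.001 * fs))
    env = 0.0
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        env = max(side[n], env * rel)                # fast attack, smooth release
        gain = 1.0 if env <= threshold else threshold / env
        y[n] = sample * gain
    return y
```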

Yet this is not the end of the difficulties. When an old film is “restored”, the audio usually goes through yet another limiter! In this case, my view is that the techniques of Chapter 10 must be invoked to “restore the original intended sound.” Fortunately, I am not alone in this; at least some film enthusiasts support the idea of optically printing the sound, so that another stage of limiting does not take place. Either way, the disadvantage is that background noise may double; only some of the techniques of Chapter 3 will be any help.

Because of the effects of the limiters (which compress the dynamic range at a syllabic rate), and because the sound has to be enormously amplified to fill a cinema with 2000 seats, various psychoacoustic distortions occur. These were compensated for empirically until 1939, when all the psychoacoustic and physical phenomena were brought together in a seminal paper (Ref. 28). The result was that speech tracks were mixed using a standard frequency curve called “The Academy Curve” or “Dialog Equalization”. This attenuated the bass and added a certain amount of extra energy in the 1kHz to 4kHz range. If you take a speech track from a film intended for screening in a cinema (this does not generally apply to television film soundtracks, although some made in established film studios may also have it), recovering the original sound may require the Academy Curve to be reversed.
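
Reversing such a curve is straightforward once its shape is known. The sketch below (my own illustration; the breakpoints are placeholders, NOT the published Academy characteristic, for which see Ref. 28) shows the principle of undoing a dialogue-equalisation curve by inverse filtering in the frequency domain.

```python
# A minimal sketch; the curve below is a PLACEHOLDER, not the published
# Academy characteristic (see Ref. 28 for the real figures).
import numpy as np

def placeholder_dialog_eq_db(f):
    """Illustrative shape only: bass roll-off plus a gentle presence lift."""
    f = np.maximum(f, 1.0)
    bass_cut = -4.0 * np.clip(np.log2(250.0 / f), 0.0, 3.0)   # to -12 dB, 3 octaves down
    presence = 3.0 * np.exp(-np.log2(f / 2000.0) ** 2)        # ~+3 dB around 2 kHz
    return bass_cut + presence

def undo_dialog_eq(x, fs):
    """Apply the inverse of the placeholder curve by FFT filtering."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    inverse_gain = 10.0 ** (-placeholder_dialog_eq_db(freqs) / 20.0)
    return np.fft.irfft(spectrum * inverse_gain, n=len(x))
```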

An automatic volume limiter has also become a tool to identify “commentary” or “narration.” In this situation, a voice describes something which is not apparent in the pictures, while the viewer must not mistake it for something happening on-screen. This is achieved by a combination of several sound engineering tricks. First, the speech is delivered much closer to the microphone than synchronous speech, so it resembles someone sitting immediately next to the listener; second, it is delivered in a “dead” acoustic, so it gains an impersonal quality; third, the limiter tends to be pressed beyond the 4dB to 8dB limit given in Chapter 10 as being the point where listeners unfamiliar with the original don’t notice anything; and finally, noise gating (a form of expansion, the opposite of compression) may be applied to eliminate intakes of breath. All these modifications of “the original sound” tend to robotise the voice, helping it to be distinguished from something “in shot”; and I once created much puzzlement for a radio drama producer when I imported this technology for the narrator of a radio play.
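
For completeness, here is a minimal sketch (my own illustration) of the noise gating mentioned above: signal falling below a threshold, such as an intake of breath, is attenuated sharply instead of being compressed. The threshold and timing are placeholders.

```python
# A minimal sketch; threshold and timing are placeholders.
import numpy as np

def noise_gate(x, fs, threshold=0.05, floor_gain=0.05, release_ms=30.0):
    """Pass signal above the threshold; attenuate sharply below it."""
    rel = np.exp(-1.0 / (release_ms * 0.001 * fs))
    env = 0.0
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        env = max(abs(sample), env * rel)            # fast attack, smooth decay
        y[n] = sample if env >= threshold else sample * floor_gain
    return y
```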

13.16 Films, video, and the acoustic environment

You might have expected the film industry to follow in the footsteps of the audio industry (section 13.4), but no. As post-production methods developed, the techniques of augmenting or replacing dialogue and sound effects grew at an incredible rate, but not the use of acoustics. This was probably because films were meant to be seen in relatively large auditoria. Although cinemas were “deader” than the average speech meeting hall, let alone a concert hall, their acoustics were always considerably “liver” than domestic rooms, swamping the relatively subtle differences between (say) a kitchen and a sitting room. So these differences were not emulated on the film soundtrack.

By 1933 it was possible to film dialogue in the teeth of a howling gale (e.g. from a studio wind machine) without sophisticated windshields or filters. At first, directors attempted to save money by shooting without sound and getting actors to voice the sequences later; but it was soon found better to use a distorted recording to guide the actor. So it suddenly became possible to film action on location, even before directional microphones were invented. But some studios also offered the facility to their top “stars” so they might have several tries at their lines without the need for picture retakes. This explains why, when replayed today in a room with a short reverberation time (or seen on TV), “star” actors sometimes sound as if they are in a different place from the rest of the cast.

Another difficulty was to get the actors to pitch their voices correctly in a quiet studio. Indeed, Reference 29 shows that, in the 1930s, different film studios had different practices in this matter. When lines are being “redubbed”, for example when simulating delivery in the teeth of a gale, the engineers quickly learnt the advantage of feeding the background sounds to the actor’s headphones at realistic volume, so the voice would be projected by the right amount for the scene being depicted. But, equally, this writer gets annoyed at hearing V.I.P. actors who obviously aren’t aware of what is supposed to be going on around them.

13.17 Making continuous performances

Continuous performances of long films pre-dated sound. Despite being distributed on 1000-foot reels, all commercial films were made on the assumption that they would be shown in cinemas with two projectors and staff to ensure continuous pictures. When semi-professional formats (like 16mm film) came on the scene, this rule still applied. Amateur cine enthusiasts dreamt of emulating a commercial cinema in their living rooms, and some actually achieved it.

However, 16mm film could be on 2000-foot reels, and as it ran at 40% of the speed of 35mm, running times could be as much as fifty minutes. Commercial feature films were transferred to narrow-gauge formats for wealthy amateurs, and were also shown in “third world” locations, aircraft, church halls, and the like. They were usually on two reels, with no alteration to the way the film was presented. Thus it is conventional for us simply to join the reels up when we transfer them to new media today. The format was also used for instructional and documentary films, but hardly any of them were as long as this.

13.18 Audio monitoring for visual media

In early film studios, the sound engineer on a “sound stage” worked at a small wheeled console on the studio floor. This helped communications with lighting staff, microphone boom operator, etc.; but the basic sound monitoring was on headphones. The signal was sent by cable to a fixed sound camera room, where another engineer looked after a film recording machine. It seems it was this second engineer who was responsible for the limiter which protected the light valve. Presumably he could communicate with the engineer on the floor if anything sounded wrong, for instance if he heard infrasonic wind noise; but I have no evidence this actually happened.

From the earliest days, the second engineer could also monitor the performance of the limiter and light valve by a photo-electric cell taking some of the light. The light might be diverted by a partially-silvered mirror before it fell on the film, or it might be located behind the film so the 4% of light not absorbed in the emulsion would be used. The switching was arranged so the loudspeaker was on “line in” before a take, and “PEC Monitor” when the sound camera was started. This monitoring was provided as an equipment check rather than an artistic check.

This monitoring point is also known to have been used as a suitable take-off point for other purposes. For example, in the late 1940s and early 1950s, quarter-inch magnetic tapes were taken for commercial records of film soundtrack music. These would therefore have been through the limiter and light valve, but not the optical film itself. (And a very misleading convention developed, whereby an album called “the Original Motion Picture Soundtrack” actually means “the Original Motion Picture Music - after rejecting unused takes”).

We also know that orchestral music for films was monitored in the conventional manner for sound-only media, with the first engineer working with loudspeakers in a soundproof listening area. This would be slightly “deader” than the “theatre optimum”, so the effects of natural reverberation from the scoring stage could be assessed. But in all other film monitoring processes (including dialogue re-recording, dubbing, and reviewing), the normal practice was to monitor in a “theatre optimum” environment, using a listening area designed to emulate a cinema-sized auditorium.

The American Standards Association established national standards for indoor theatres. It is interesting that many domestic “Dolby Stereo” decoders can supply artificial reverberation to simulate a cinema auditorium in the home. Whether this is a chicken-or-egg situation is difficult to say!

We must also remember that engineers often made recordings specifically for “internal” purposes, i.e. not for the public. It was normal practice to have an analogue audio tape recorder continually running in a television studio, for several reasons. For example, it might provide instantaneous checks on the lines of “Was the soloist flat in Verse 2?” It might provide the elusive “background atmosphere” so vital in post-production, and which was quite inaudible next to a whirring video machine.

It could supply alternative sounds which could be fitted after video editing had taken place, separate audience responses recorded for various doubtful motives, or a tape with a better power-bandwidth product than the video recorder for commercial LP releases. These tapes are known in Britain as “snoop tapes.” They were “twin-track” from a comparatively early date (the mid-1960s), and often have SMPTE/EBU timecode in the guard band. They are neither “mono” nor “stereo.” They provide a fertile hunting ground for students of television history.

13.19 Listening conditions and the target audience and its equipment

This brings us to another “cultural” issue - assumptions made by engineers about the way sound was meant to be reproduced.

Until the mid-1920s it can be presumed that, when they thought about the matter at all, engineers would assume their recordings would be played on the top-of-the-range machine made by their employer in a domestic listening environment. It would also be assumed that the playback would not last much more than four minutes without a break, and this break would be sufficiently long to allow needles to be changed, clockwork motors to be wound, etc.

Broadcasting was the medium which pioneered “freedom from interruption.” From day one, British broadcast engineers were taught that “the show must go on,” and anything which interrupted a continuous transmission was a disciplinary offence. This signalled the start of background or “wallpaper” music in the home. It had been normal in restaurants and the like for many decades, but now there was a demand for it in the home. Suddenly, there were two audiences for broadcasts (and almost immediately records). First there was the “serious listener,” who could be assumed to be paying attention and listening in reasonable conditions.

Secondly there was the “background listener,” who might be doing several other things at once, and in a noisy situation. Frankly, engineers always worked on the first of these two assumptions until at least the 1950s. But, reading between the lines, it is also clear that more than half the population would be in the latter group. Printed evidence lists extensive complaints about the relative volumes of music and announcements, the unintelligibility of speech, the disturbing nature of applause or laughter, and the ubiquity of radio reproduction in public places. In Britain we have many recordings of radio programmes dating from the same times as such complaints, and (judging by our reaction today, admittedly) it seems the serious listener would not have been concerned. But we must also remember that until 1942 British broadcasting had no standard metering system, limiters, or network continuity suites; so judging one single radio programme on its own isn’t a fair test. As we usually have only isolated radio programmes from this era, we can assume that anyone who wants to hear one today will be paying full attention.

At this time, there was only A.M. (amplitude modulated) radio. Because it had to serve both “serious” and “background” listeners, there was a tendency (in Europe, anyway) for A.M. radio to be broadcast with the minimum possible dynamic correction and the maximum frequency response. In 1960 this writer visited the BBC Home Service Continuity Suite in Broadcasting House, where there was a Quad A.M. check receiver for monitoring London’s A.M. transmitter at Brookmans Park. It was quite impossible to hear any difference between the studio sound and the transmitted sound, even on A.M. radio. Surviving nitrate discs made off-air for broadcasting artists from the mid-1930s onwards show extraordinary fidelity for the time.

But when the American Edwin Armstrong invented the F.M. (frequency modulation) system in the mid-1930s, it was suddenly possible for the two uses of radio to be split. Economic conditions made this slower in Europe, but by 1958 F.M. radio had come to most parts of Britain, duplicating what was going out on A.M. The Copenhagen Plan for A.M. broadcasting restricted the frequency range of A.M. radio to 4.5kHz to prevent mutual interference. Thus serious listeners switched to F.M. (which also had lower background noise), while car radios tended to be A.M., because A.M. was less susceptible to “fading” when driving in valleys or between high buildings. In consequence, broadcasters tended to compress the dynamic range of A.M. so it would be audible above the car engine noise, and speech processing kept up the intelligibility in the absence of high frequencies. Before long radio broadcasters were targeting their transmissions accordingly, and we must remember the effects upon off-air recordings surviving today.

The supposed target audience frequently affected the “original intended sound.” In chapter 10 we saw how the “Optimod” unit and similar devices reduce the dynamic range for car radios. The use of some form of compressor was virtually essential for most cinema and television films, because the sound mixer had to wedge three layers of sound (dialogue, music, and effects) into a dynamic range of only about 20dB. And we have also seen how excessive vertical modulation could restrict what went onto a stereo LP. But we now have a few compact disc reissues, where the stereo image is suddenly wider for low-pitched instruments of the standard orchestral layout. I am sure readers of this manual will be able to think of examples like that when they read almost any page. The basic fact is that the “medium” and the “message” are usually inextricably linked.

13.20 The influence of naturalism

Roland Gelatt has pointed out that the idea of a sound recording being a faithful representation of the original is a chimera running through a hundred years of history. It is true that many recording engineers (including those of Edison) conducted tests seeking to prove this point to sceptical consumers, and that they were often successful. It is also true that no-one with the ability to monitor sound in the manner of section 13.5 above has been able to resist running to the room with the performers and seeing what it sounds like there. But it must be confessed that the results of these experiments haven’t been very significant.

The fact is that sound recording is an art form, not a craft. Its significance as a faithful preserver of sound is usually no greater than that of the film “King Kong” documenting the architecture of New York. Whilst sound archivists clearly have a responsibility to “preserve the original sound” where it exists, their responsibilities to “preserve the original intended sound” are much greater. Let us close this chapter, and this manual, with our thanks and respects to the engineers who elevated sound recording from a craft to an art.

REFERENCES

  • 1: Peter Adamson, “Long-playing 78s” (article). The Historic Record, No. 17 (October 1990), pp. 6 - 9.
  • 2: The Gramophone Company (in Germany), matrixes 1249s to 1256s. Reissued on CD in 1991 by Symposium Records (catalogue number 1087, the first of a double-CD set).
  • 3: Percy Scholes, “The First Book of the Gramophone Record” (book), second edition (1927) page 87, London: Oxford University Press.
  • 4: “Impact of the recording industry on Hindustani classical music in the last hundred years” (paper), by Suman Ghosh. Presented at the IASA Conference, September 1999, Vienna, and reprinted in the IASA Journal No. 15, June 2000, pages 12 to 16.
  • 5: Peter Copeland, “The First Electrical Recording” (article). The Historic Record, No. 14 (January 1990), page 26.
  • 6: In America, issued on Columbia 50013D, 50016D, and 348D. In Britain, only four titles were issued, on Columbia 9048 and 9063.
  • 7: Gramophone 1462 or His Master’s Voice E.333.
  • 8: Sir Compton Mackenzie, “December Records and a few Remarks,” The Gramophone, Vol. 3 No. 8 (January 1926), p. 349. Quoted in Gelatt: “The Fabulous Phonograph,” (2nd edition), London, Cassell & Co., p. 232.
  • 9: Val Gielgud and Holt Marvell, “Death at Broadcasting House” (book). London: Rich & Cowan Ltd., 1934.
  • 10: Alec Nisbett: “The Technique of the Sound Studio” (book). London: The Focal Press (four editions)
  • 11: (reverb plate)
  • 12: R. F. Wilmut, “Kindly Leave The Stage!” (book), London: Methuen, pp. 68-9.
  • 13: Victor 35924 or His Master’s Voice C.1564 (78rpm disc), recorded Liederkranz Hall New York, 21st May 1928.
  • 14: F. W. Gaisberg, “Music On Record” (book), London, 1947, pages 33 and 44.
  • 15: P. G. A. H. Voigt, “Letter To The Editor,” Wireless World Vol. 63 No. 8 (August 1957), pp. 371-2.
  • 16: Halsey A. Frederick, “Recent Advances in Wax Recording.” One of a series of papers first presented to the Society of Motion Picture Engineers in September 1928. The information was then printed as one of a series of articles in Bell Laboratories Record for November 1928.
  • 17: Jerrold Northrop Moore, “Elgar On Record” (book), London, EMI Records Ltd., 1974. Pages 70-72 tell the story of an attempt to record Elgar’s voice during an orchestral rehearsal in 1927. The wax was played back twice on the day, and then Elgar asked for it to be processed. The resulting pressings have given difficulties when played today; many of Elgar’s words have been lost.
  • 18: Scott Eyman, “The Speed of Sound - Hollywood and the Talking Revolution” (book), New York: Simon & Schuster 1997, page 184.
  • 19: Peter Copeland, “On Recording the Six Senses” (article), Sheffield: The Historic Record, No. 11 (March 1989), pp. 34-35
  • 20: (ed. Elisabeth Weis and John Belton): Film Sound Theory and Practice (book), Columbia University Press (1985).
  • 21: Baynham Honri, “Cavalcade of the Sound Film,” based on a lecture given to the British Sound Recording Association on 20th November 1953 and published in the BSRA Journal Sound Recording and Reproduction, Vol. 4 No. 5 (May 1954), pp. 131-138.
  • 22-24: André Millard, “America On Record: A History of Recorded Sound” (book); Cambridge University Press, 1995, commencing page 147.
  • 25-26: Scott Eyman, “The Speed of Sound - Hollywood and the Talking Revolution” (book), New York: Simon & Schuster 1997. From the point of view of the evolution of sound recording practice, the information is spread in a somewhat disconnected fashion; but relevant mentions of “The Jazz Singer” may be found on pages 12-15 and 135, “Old San Francisco” on pages 128-9, and Laurel & Hardy on p. 334.
  • 27: (ed. Elisabeth Weis and John Belton): Film Sound Theory and Practice (book), Columbia University Press (1985): article “Sound Editing and Mixing” by Mary Ann Doane, pp. 57-60.
  • 28: D. P. Loye and K. F. Morgan: “Sound Picture Recording and Reproducing Characteristics” (paper). Originally presented at the 1939 Spring Meeting of the S.M.P.E. at Hollywood, California; printed in the Journal of the Society of Motion Picture Engineers, June 1939, pp. 631 to 647.
  • 29: Bela Balazs, “Theory of the Film: Sound” (article), in “Theory of the Film: Character and Growth of a New Art” (book); New York: Dover Publications, 1970. Reprinted in: (ed. Elisabeth Weis and John Belton): Film Sound Theory and Practice (book), Columbia University Press (1985), pp. 106 - 125.
