The original impulse for the creation of Cinemetrics could be said to have been to find some kind of order in the succession of shot lengths that occur over the duration of a motion picture. The most common graphical representation of the shot lengths in a film is through the successive lengths marked down the timeline of a computer non-linear editing program (an NLE), but Yuri Tsivian and Gunars Civjans invented their own graphical representation, which is the Cinemetrics graph. This shows the lengths of the shots represented by the lengths of vertical bars on the negative y-axis of a graph. The x-axis intrinsically shows the ordinal shot number in equal divisions, but it is actually calibrated with a non-linear scale that gives the cumulative time-lapse from the beginning of the film to the end of each successive shot.
At a quick glance, these Cinemetrics shot length graphs are very complex and very varied, but the first method tried for finding structures and regularities in them was to plot a polynomial curve of best fit through the shot length values. This is referred to as a “trendline”, because that is what the makers of computer spreadsheet programs call such a line, though “trendline” had long meant something slightly different in financial graphs or charts. The general polynomial equation for this curve is given by
y = Ax + Bx² + Cx³ + Dx⁴ + Ex⁵ + Fx⁶ + Gx⁷ + Hx⁸ + Ix⁹ + Jx¹⁰ + Kx¹¹ + Lx¹² … etc.
Here “y” stands for the length on the y-axis, and “x” for the ordinal number of the shot in the sequence of shots. The coefficients or parameters A, B, C etc. are the numbers determined to give the curve of best fit through the varied values using the standard “least squares” algorithm. In the case of Cinemetrics, the degree or order of the trendline curve can vary from the first degree, which is y = Ax, and which gives a straight line, through the second degree y = Ax + Bx², and so on up to the 12th degree, which has the form of the equation above, and which can have more bends in it. Mathematics dictates that the number of ups and downs or bends in a trendline is at most one less than the degree of the trendline, so a 12th degree trendline can have no more than 11 ups and downs or bends in it. (These ups and downs are properly called maxima and minima.) It is possible to have a trendline of degree greater than twelve, but it is not very convenient, for a number of reasons.
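For readers who want to experiment for themselves, here is a minimal sketch in Python (using numpy, with made-up shot lengths) of the same kind of least-squares polynomial fit. It follows the no-constant-term form of the equation above; Cinemetrics' own implementation may differ in detail.

```python
import numpy as np

def trendline_coefficients(shot_lengths, degree=3):
    """Least-squares fit of y = A*x + B*x**2 + ... (no constant term),
    where x is the ordinal shot number and y the shot length.
    For high degrees it helps to rescale x to the range 0-1 first."""
    y = np.asarray(shot_lengths, dtype=float)
    x = np.arange(1, len(y) + 1)                       # ordinal shot number
    X = np.column_stack([x**k for k in range(1, degree + 1)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)     # standard least squares
    return coeffs                                      # A, B, C, ... in order

# Example with invented shot lengths (in seconds):
lengths = [4.2, 6.1, 3.0, 2.5, 8.7, 5.5, 3.3, 7.0, 2.1, 4.8]
print(trendline_coefficients(lengths, degree=3))
```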
The hope was that recurring patterns might be observed in the trendlines across the results for a large number of films, but this hope has not been properly realised so far, though there do seem to be a few shapes that recur sometimes. A more systematic search for favoured shapes in the Cinemetrics trendlines might involve the use of scatter diagrams to see if there is any clumping of the values of the trendline coefficients about favoured values. In its ordinary form this method can only deal with second order trendlines and their two coefficients A and B, but there are ways to extend this graphical treatment to higher orders (see “A 5-Dimensional Scatter Plot” at https://informationandvisualization.de/blog/5dimensional-scatter-plot, which could obviously be extended to deal with at least one more dimension).
Another more mathematically sophisticated approach to finding recurring patterns in the sequence of shot lengths in a film is that of James Cutting and his colleagues, in Attention and Hollywood Films, by James Cutting, Jordan DeLong, and Christine Nothelfer. That paper is discussed in articles on the Cinemetrics website.
Tsivian and Civjans initially considered another means of finding a structure in the series of shot lengths, which was to use a moving or rolling average, but this was not implemented until recently. Although it does not detect simple recurring patterns in the shot lengths, it has a use at the next level of investigation, which is relating the patterns in shot lengths to the nature of what is going on dramatically in the scenes in the films concerned. This is demonstrated and discussed in my article in the “Movie Measurement Articles” section of the website.
The same sort of idea, but using the Cinemetrics trendlines to relate shot lengths to the dramatic content of the film, dates back to the very beginning of Cinemetrics, with Yuri Tsivian’s initial investigation of Intolerance using trendlines, which can also be found in the “Movie Measurement Articles” section of the Cinemetrics website. There are other examples of this approach scattered throughout the comments attached to individual Cinemetrics graphs.
A caution about this approach is sounded by taking a Cinemetrics graph for Darby O’Gill and the Little People with a 12th degree trendline added, as below:
[Graph: Cinemetrics shot length graph for Darby O’Gill and the Little People, with 12th degree trendline]
and then comparing it with a similar Cinemetrics graph created by using a random selection of shot lengths which follow the same frequency distribution.
[Graph: Cinemetrics graph of randomly ordered shot lengths drawn from the same frequency distribution, with 12th degree trendline]
Although you can see that the distribution of long and short shots is more even in the random graph, the trendline for the latter still has a pronounced set of wiggles suggesting a degree of structure equal to that in the real film, even though it actually has no meaning in relation to any film content.
Enter the Metrics
A more general and more abstract method of investigating the shot lengths in a film is to ignore the order in which they occur, and see what mass features can be found in them. The most obvious of these measures or metrics is the arithmetic mean of all the shot lengths in a film, popularly called the average. Less obvious is the median value of the shot lengths, which is that length for which half the shots in the film have greater lengths, and the other half have lesser lengths. Besides the mean (which we call the Average Shot Length in Cinemetrics) and the median, other basic statistical measures include the Standard Deviation, which measures the extent of the spread of the data about the mean value.
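As a minimal illustration (with invented shot lengths in seconds), these three measures can be computed directly; note the use of the population standard deviation, since the shots of a film are the whole population rather than a sample.

```python
import statistics

shot_lengths = [4.2, 6.1, 3.0, 2.5, 8.7, 5.5, 3.3, 7.0, 2.1, 4.8]  # invented data

asl = statistics.mean(shot_lengths)       # Average Shot Length (the mean)
median = statistics.median(shot_lengths)  # half the shots are longer, half shorter
std = statistics.pstdev(shot_lengths)     # population standard deviation

print(f"ASL = {asl:.2f} s, median = {median:.2f} s, SD = {std:.2f} s")
```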
Statistics and Shot Lengths
Much of the elaborate apparatus of present-day statistics is fortunately not necessary for investigating shot lengths. The reason for this is that most of the methodological superstructure of modern statistics is directed towards estimating how reliable a sample taken from a population is in representing the characteristics of the entire population. But in Cinemetrics we are dealing with all the shot lengths for the entire film, and this is the whole population, not a sample from it, so the question of sample reliability does not arise, and notions like “confidence levels”, “rejection of the null hypothesis” and “robustness” are irrelevant and misleading.
One way to investigate the global properties of the collection of shots making up a film is to group the shots purely according to their length. That is, one counts how many shots in a film have lengths between, say, 0 and 1 second, 1 and 2 seconds, 2 and 3 seconds, and so on. These groups of lengths are correctly called “class intervals”, but the popular name for them nowadays is “bins”. The result of grouping the shot lengths into class intervals is called a distribution, and the number of shot lengths in each class interval, or bin, is the frequency of that group of lengths. (This use of the word frequency is different to that describing how often a periodic wave goes through its cycle.) Computer spreadsheets have a tool for doing this that is described by various names. In Microsoft Excel it is a data analysis tool called “histogram”. Strictly speaking, a histogram is a kind of bar chart in which the bars touch each other, and which can indeed be used to represent a distribution.
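A short sketch of this binning, using numpy's histogram function on invented shot lengths:

```python
import numpy as np

shot_lengths = np.array([4.2, 6.1, 3.0, 2.5, 8.7, 5.5, 3.3, 7.0, 2.1, 4.8])  # invented

# One-second class intervals ("bins") from 0 up to the longest shot.
edges = np.arange(0, np.ceil(shot_lengths.max()) + 1)   # 0, 1, 2, ... seconds
counts, edges = np.histogram(shot_lengths, bins=edges)

for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.0f}-{hi:.0f} s: {n} shots")
```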
In this sort of treatment the collection of shots whose lengths are being investigated is called, in general, a population, and their lengths are the variable describing the population being studied. As the text-books tell you, the first thing you should do when investigating how a characteristic variable describing a population varies is to plot it on a graph. So here is a graph illustrating the distribution of shot lengths for Darby O’Gill and the Little People (1959).
[Graph: shot length distribution for Darby O’Gill and the Little People (1959)]
This shows, for instance, that there were 246 shots with lengths between two and three seconds out of the 1160 shots making up Darby O’Gill.
Examination of many such shot length distributions shows that they have similar-looking asymmetric profiles, as you can see in the article “The Numbers Speak” elsewhere on the Cinemetrics website. This resemblance suggests looking for a standard theoretical statistical distribution to which they conform. The best-known distribution is the Normal distribution, which is also referred to as the Gaussian distribution in Europe, and as “the bell curve” by the innumerate. This describes things like the occurrence of differing heights in a large collection of people, and many other natural phenomena. Obviously it is of no use here, because it is symmetrical in shape. But my investigations long ago suggested that for shot length frequencies, the best fit amongst the basic statistical distributions was with the Lognormal distribution. This distribution is described by the following equation for the probability density f(x), which gives the likelihood of a shot of a particular length x occurring:
f(x) = 1 / (x σ √(2π)) × exp( −(ln x − ln m)² / (2σ²) )
where m is the median value of the distribution of shot lengths, and σ is what is called the “shape factor”. The probability density has to be multiplied by the total number of shots in the distribution to get the expected number, according to the equation, within each class interval.
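As a sketch of how the expected bin counts follow from this equation, the snippet below uses scipy's lognorm distribution (which takes the shape factor as its s parameter and the median as scale). The median, shape factor and shot total used here are illustrative values only, not the fitted figures from this article.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative parameters only -- not fitted values from the article.
m, sigma, n_shots = 4.0, 0.8, 1160      # median (s), shape factor, total shots

dist = lognorm(s=sigma, scale=m)        # Lognormal with median m, shape factor sigma

edges = np.arange(0, 31)                # one-second class intervals, 0-30 s
expected = n_shots * np.diff(dist.cdf(edges))   # expected shots in each interval

for lo, hi, e in zip(edges[:-1], edges[1:], expected[:5]):
    print(f"{lo}-{hi} s: expected {e:.0f} shots")
```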
The Lognormal distribution results when the quantity under consideration, in our case shot length, is determined as a result of the probabilities associated with a large number of independent causative factors being multiplied together. The Lognormal distribution turns up in diverse areas; for instance, the separation between cars on a motorway. A current hot topic is its use to describe cancer survival rates. One would perhaps not expect it to appear in films, because people think of films as being entirely consciously constructed, and not a product of the blind forces of nature, like the other examples I have just mentioned. But the amount of conscious decision-making in making films may be less than people think, and many film-makers may often be doing what they do unthinkingly, because that is what everybody does. In films, what presumably determines the length of a shot is the simultaneous interaction of factors in the scene being filmed such as how the actor moves in the shot with respect to the closeness of the camera, the length of the lines he speaks, how the other actors react, and so on. The fact that different individuals are usually responsible for these various components of a film, from the scriptwriter to the director to the editor, assists the independence of these causes. However, once in a while a film-maker may decide to do something unorthodox on purpose (this is the aesthetic impulse, after all), and this will upset the regularity of features like the Lognormal distribution.
There are various methods of finding the values of the parameters m and σ that give the closest fit of the theoretical equation, but I still use a standard pencil and graph-paper method that dates back to the years B.C. (Before Computers), when I started this enterprise. You can find this method described in the original, and still standard text, The Lognormal Distribution by J. Aitchison and J.A.C. Brown (Cambridge University Press, 1957). Using this method, the theoretical Lognormal distribution that is the closest fit to the actual observed distribution for Darby O’Gill can be plotted as a coloured line on the graph of the distribution.
[Graph: shot length distribution for Darby O’Gill with the manually fitted Lognormal distribution superimposed]
As you can see, there is not an absolutely exact correspondence, but there is quite enough similarity to warrant seeing where this idea leads us. Since computers became widely available, programs have been developed that perform such a distribution fitting automatically, and here is the result of applying one of these to Darby O’Gill.
[Graph: shot length distribution for Darby O’Gill with a computer-fitted Lognormal distribution superimposed]
At first glance it might look as though it is just as good as my manual result, perhaps even better, so we need a precise way of comparing the accuracy of the two fits. This is provided by the Pearson correlation coefficient, r, which is an efficient and accurate way of comparing two matching sets of data, and which can take values between -1 and 1. If its value is zero, there is no correlation between the two sets, and if it is 1, the two sets match perfectly. The closer the result is to 1, the better the fit. Negative values mean that the two series vary in opposite senses, with large values in one series paired with small values in the other. One can roughly say that a value of the correlation coefficient in the range 0 to 0.09 indicates no correlation between the two series, 0.1 to 0.3 indicates a small or weak correlation, 0.3 to 0.5 a medium correlation, and 0.5 to 1.0 a large or strong correlation. However, this description depends on the circumstances, and in our case we need to be more demanding, with a value of 0.95 upwards indicating a good match. There is no sharp cut-off point where the distribution suddenly stops being Lognormal; the observed distribution just lies closer to, or further from, the ideal theoretical distribution.
In the case of the manually derived Lognormal distribution above, r = 0.975407, and for the automatic computer fitting result, r = 0.970011. But from this point onwards, I will use the coefficient R², which is the square of r, to measure the goodness of fit of the theoretical distribution to the actual observed distribution, because this exaggerates the discrepancy between the two distributions being compared, making the difference more visible, and also for consistency with the results in my article The Numbers Speak, which you can read on this website. R² takes values between 0 and 1, with the value 1 representing perfect matching, and 0 no matching at all. In this form, the match of the manually derived fit is described by R² = 0.951, and the computerized fit by R² = 0.941. So the old manual method does give a slightly better fit to the actual distribution. I have found that this is also true for the other examples I will deal with later. It would appear that some computerized curve-fitting programs are not using the best algorithm for this purpose, at least in the case of the Lognormal distribution. Hence my continued use of the old manual method, despite the greater labour involved.
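A minimal sketch of this comparison, with invented observed and expected bin counts standing in for the real data:

```python
import numpy as np

# Observed bin counts and counts predicted by a fitted Lognormal distribution.
# Both series are invented for illustration, not the article's data.
observed = np.array([120, 246, 210, 150, 110, 80, 60, 45, 30, 20])
expected = np.array([130, 240, 215, 160, 105, 75, 55, 40, 28, 22])

r = np.corrcoef(observed, expected)[0, 1]   # Pearson correlation coefficient
r_squared = r**2

print(f"r = {r:.4f}, R² = {r_squared:.4f}")
```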
Nick Redfern has checked 40 shot length records from the Cinemetrics database in his piece "Some brief notes on cinemetrics II" on his website for how well they conform to the Lognormal distribution, and he has found that 20 of them pass the strict test derived from random sampling theory that he is using. He had already done three Chaplin films in his piece "Testing normality in cinemetrics" in the same place, and altogether that gives 22 out of 43 films that conform very well to the Lognormal distribution. That proportion is rather similar to my results in my article “The Numbers Speak”, which is on the Cinemetrics site. So we can say that it is probable that about half of all films with an ASL of less than 20 seconds conform well to the Lognormal distribution. It is worth noting that of the remainder, a substantial proportion just miss out on being Lognormal according to both Nick’s and my analyses. All this is valuable information, as it leads to questions about why this distribution appears for shot lengths, and I have noted above in general terms what might be the explanation, as well as in “The Numbers Speak”. This is an area that needs more investigation, because we want causal explanations for the features of shot length distributions.
The films that seriously fail to conform to the Lognormal distribution must do so for a reason. That is, the makers must either have been forced into it, or have made at least a semi-conscious effort to break away from the norm of the Lognormal distribution, so identifying the reasons that this happens is important.
To return to the Lognormal distribution, its shape is ordinarily defined by two parameters, the median and the shape factor. So the median is a good measure to have in this case, as in others. However, in the case of the Lognormal distribution, these two factors can be derived from the mean (that is, the ASL) and the standard deviation by inverting the two relations:
σ² = ln(1 + (SD/ASL)²)

m = ASL / √(1 + (SD/ASL)²)
In other words, all you need in theory to characterize a Lognormal distribution is the standard deviation and the ASL.
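A small numerical check of this inversion, with an assumed ASL and standard deviation chosen purely for illustration; for an exactly Lognormal distribution these two lines recover the shape factor and the median:

```python
import math

asl, sd = 7.0, 7.5                          # illustrative ASL and SD, in seconds

cv2 = (sd / asl) ** 2                       # squared coefficient of variation
sigma = math.sqrt(math.log(1 + cv2))        # shape factor
median = asl / math.sqrt(1 + cv2)           # median of the fitted Lognormal

print(f"shape factor = {sigma:.3f}, median = {median:.2f} s")
```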
So for Lognormal distributions the median IS related to the ASL, and hence the ASL is useful, too. Because actual shot length distributions of feature films are not exactly Lognormal, these relationships cannot be used to derive the value of the shape factor of the Lognormal distribution that is the best fit to any particular actual shot length distribution. In any case, the mean exists as a basic characteristic of any distribution, and the ASL has been adopted by many other people as a standard measure for film statistics since I introduced it 35 years ago. This is partly because it is easy to get. You just have to know how many shots there are in a film, and the film’s length, to work it out. That is how I come to have a database of nearly 10,000 ASLs from complete films, which is very useful for stylistic comparisons. I consciously chose to call it the Average Shot Length, rather than the Mean Shot Length, because I reckoned that a smaller number of the many rather innumerate people in film studies would be put off by the former name. You can only get the median by listing all the shot lengths in a film, as in Cinemetrics. It is worth remarking that if you only consider the median shot length, you can be seriously misled about the distribution of shot lengths in the film you are considering. For instance, both The Lights of New York (1928) and The New World (2005) have median shot lengths of 5.1 seconds, so on this ground alone you might think they have similar distributions, but when you look at their other features it turns out they are very different.
[Graph: shot length distribution for The Lights of New York (1928)]

[Graph: shot length distribution for The New World (2005)]
The crucial feature is that in The Lights of New York there are a substantial number of shots with length greater than 50 seconds, in fact 12 of them, represented by the tall bar at the right end of the graph, whereas there is only one for The New World. The reason for this substantial number of long takes in The Lights of New York is that it is subject to the technical constraints on shooting and editing synchronized sound at the beginning of the sound period that I describe in Film Style and Technology: History and Analysis. Like many films made at the very beginning of the use of synchronized sound, The Lights of New York is a mixture of scenes done in long takes shot with direct sound, and other scenes that were shot wild and post-synchronized. The use of normal fast cutting in the latter was of course unconstrained by technical factors. Another well-known film that has the same hybrid form is Hitchcock’s Blackmail. However, if we also take into account the ASLs of the two films, 9.9 seconds for The Lights of New York and 6.8 seconds for The New World, this significant difference is highlighted. Although the makers of The Lights of New York and Blackmail were forced into using this peculiar sort of shot length distribution, it later proved possible for some film-makers to adopt this style intentionally, as we shall see shortly.
The Coefficient of Variation
As Yuri Tsivian has noticed, for feature films the ASL and the Standard Deviation (STD) are usually of a rather similar size. This is remarkable, and needs explanation, because for distributions in general this is not true. The ratio STD/ASL is called the “coefficient of variation”, with the standard abbreviation Cv, and in the case of the Lognormal distribution it depends solely on the Shape Factor through a slightly complicated relationship which you can look up. As I noted in "The Numbers Speak", the Shape Factor tends to be around 0.8 for shot length distributions. This is the source of the effect under discussion, but not the reason for the existence of the effect. Working out values, I find that a value of the Shape Factor of 0.83 corresponds to a Cv of 1, a Shape Factor of 1.0 to a Cv of 1.3, a Shape Factor of 0.7 to a Cv of 0.8, and so on. However, since the shot length distributions for films are not exactly Lognormal, these values are only approximate.
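For an exact Lognormal distribution, the relationship in question is Cv = √(exp(σ²) − 1), and a quick check with it reproduces the rounded values just quoted:

```python
import math

def cv_from_shape_factor(sigma):
    """Coefficient of variation of a Lognormal distribution
    with shape factor sigma: Cv = sqrt(exp(sigma**2) - 1)."""
    return math.sqrt(math.exp(sigma**2) - 1)

for sigma in (0.7, 0.83, 1.0):
    print(f"shape factor {sigma}: Cv = {cv_from_shape_factor(sigma):.2f}")
```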
Nick Redfern has observed that this feature indicates that there is greater variation in the shot lengths of sound films than in silent films, and in fact for the 181 silent features (1913-1929) in the Cinemetrics database the Coefficient of Variation is 0.97, while for the 1607 sound features in the database it is 1.14.
The distribution of values of the Coefficient of Variation for film shot lengths is fairly close to being a Normal distribution, as shown here.
[Graph: distribution of Coefficient of Variation values for sound features in the Cinemetrics database]
As mentioned, the mean value of Cv for sound features in the Cinemetrics database is 1.14. The correlation between the actual experimental values from the Cinemetrics database and the theoretical Normal distribution, as measured by R², is 0.97, which is very good.
Now, the general shape of the distribution of Cv suggests to me that mostly film-makers are unconsciously working for some sort of standard mix of long and short shots in their scene dissection, but don’t always quite hit it. However, there are also some who want to put in extra long takes beyond the normal mixture of lengths, and they are represented in the vestigial right tail of the graph above, which departs slightly from normality. The number of these film-makers is small, but who they are is important. I list the films with Cv greater than 1.9 in chronological order:
The Front Page (1931)
Rain (1932)
Citizen Kane (1941)
Lady From Shanghai (1947)
Macbeth (1948)
Forty Guns (1957)
Touch of Evil (1958)
Ride Lonesome (1959)
Verboten! (1959)
Who’s Afraid of Virginia Woolf? (1966)
Week End (1967)
En Passion (1969)
Tout va bien (1972)
Electra Glide in Blue (1973)
1900 (1976)
Paris, Texas (1984)
Wild at Heart (1990)
Amores Perros (2000)
Code Unknown (2000)
Yo (Me) (2007)
You will notice that after two early sound films, there are none from the rest of the ‘thirties. Although these films nearly all have long ASLs, Amores Perros (ASL = 4.9) shows that the association of large Cv with large ASL is not necessary. Conversely, long ASLs do not necessarily produce large values of Cv, as shown by Werckmeister Harmonies (not in the list) with an ASL of 219 seconds, but a Cv of 0.5. That is, a film-maker who is trying to do nothing but long takes will not create a film with a large Cv, though they will probably create one with a small Cv.
The distribution histogram for Ride Lonesome shows the sort of thing that is going on in its cutting, which is somewhat like that in the earliest sound films such as Lights of New York.
[Graph: shot length distribution for Ride Lonesome (1959)]
Again the film contains a number of very long takes, with 16 shots greater than 50 seconds in length. Partly because of that, the fit of the theoretical Lognormal distribution is not particularly good. These long take shots could reasonably be referred to as “outliers” in this particular case, but to disregard their existence in an investigation is to shut your eyes to the very thing that makes this film special. For this group of films, the use of scenes with very long takes mixed with scenes with ordinary cutting is a wilful and conscious choice by the director, because after around 1932 there were no longer any technical constraints on the shot lengths used.
There is also another larger group of sound films in which the director has set out to do the whole thing in long takes, and such films typically have ASLs longer than 15 seconds. For example, here is the shot length distribution for On the Beach (1959), which has an ASL of 18.4 seconds.
[Graph: shot length distribution for On the Beach (1959)]
The shape here is flattened, and there are many more of the shots in the long tail, which extends beyond the right end of the graph, with 36 shots having lengths greater than 50 seconds. In fact, 75 shots of the 407 shots making up the film have lengths greater than 30 seconds.
Median/ASL Ratio
I have noticed that there is also a relatively fixed ratio between the ASL and the Median for movies, and now, with a bit of help from Gunars Civjans, I have the proof of this. Considering all the 1520 sound fiction feature films in the Cinemetrics database, I find that the average ratio of their Median to their ASL is 0.620. The values are clustered quite close to this average value, as you can see from their distribution graph here:
[Graph: distribution of Median/ASL ratios for sound fiction features in the Cinemetrics database]
This distribution has the common Normal (or Gaussian) shape, and its Standard Deviation is 0.124. This is the reason that you can roughly predict what the Median value for any film will be, just from its ASL. Putting this another way, 82% of films have a Median/ASL ratio in the range 0.5 to 0.7. This is of course another remarkable fact, and again demands explanation.
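As a sketch of how such a tally can be made, here is the calculation for a handful of hypothetical (median, ASL) pairs; the figures are invented for illustration, not taken from the Cinemetrics database.

```python
# Hypothetical (median, ASL) pairs in seconds -- invented for illustration.
films = [(3.1, 5.0), (4.0, 6.3), (5.1, 9.9), (2.2, 3.6), (6.0, 8.8)]

ratios = [median / asl for median, asl in films]
mean_ratio = sum(ratios) / len(ratios)
in_band = sum(1 for r in ratios if 0.5 <= r <= 0.7) / len(ratios)

print(f"mean Median/ASL ratio = {mean_ratio:.3f}")
print(f"share of films in the 0.5-0.7 band = {in_band:.0%}")
```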
I had predicted to myself that removing films with an ASL of 15 seconds and higher, which is the region where the fit to the Lognormal distribution begins to break down, would sharpen the above distribution, but in fact doing this makes hardly any change to the shape of the distribution, or to its mean value. However, removing non-American films from the population does sharpen the relation slightly, without changing the Median/ASL ratio much. The 581 American sound films in the Cinemetrics database have a Median/ASL ratio of 0.626, with a standard deviation of 0.109.
When we turn to all the 186 silent fictional features in the Cinemetrics database, the Median/ASL ratio changes appreciably, to a mean value of 0.711, and a standard deviation of 0.082, as in the graph here:
[Graph: distribution of Median/ASL ratios for silent fiction features in the Cinemetrics database]
But concentrating just on the 92 American silent fiction features in the Cinemetrics database has very little effect on the result we get. (Median/ASL ratio = 0.714, with a standard deviation of 0.090).
Since the Median/ASL ratio relates approximately to the shape factor for Lognormal distributions, this means that all the sound films have fairly similar shape factors, and the silent films mostly have a different shape factor. Why there is an appreciable difference between sound and silent films in the shape of their shot length distribution is yet another interesting question.
Actually, it turns out that the difference in the Median/ASL ratio between American sound and silent is more complicated than I thought at first. Closer inspection of the American sound feature film corpus on the Cinemetrics database shows that the ratio varies a bit with the magnitude of the ASL. To take an extreme case, the mean Median/ASL ratio for the 25 feature films with ASL less than 3 seconds is 0.74, while for the 64 films with ASL less than 4 seconds, it is 0.71, and so on, till we get to the 217 films with ASL less than 7 seconds, by which point the mean ratio of Median to ASL is 0.68.
So what we have here is a correlation between the Median/ASL ratio and the ASL for American sound films. It is not a strong correlation, because when we calculate the correlation coefficient (r) for this relation for all the American sound films, it comes out at about 0.3.
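A sketch of the two calculations just described, grouping hypothetical films by an ASL ceiling and then correlating the Median/ASL ratio with the ASL; the records are invented, and the real Cinemetrics figures would of course differ.

```python
import numpy as np

# Hypothetical per-film records: (ASL, median), in seconds.
films = np.array([(2.8, 2.1), (3.5, 2.5), (4.6, 3.1), (5.9, 3.9),
                  (6.8, 4.4), (8.0, 5.0), (9.5, 5.7), (12.0, 7.0)])
asl, median = films[:, 0], films[:, 1]
ratio = median / asl

# Cumulative groups "ASL less than so many seconds", as in the text above.
for ceiling in (3, 4, 7):
    sel = asl < ceiling
    if sel.any():
        print(f"ASL < {ceiling} s: mean Median/ASL = {ratio[sel].mean():.2f}")

# Correlation between the ratio and the ASL across all the films.
r = np.corrcoef(ratio, asl)[0, 1]
print(f"r(ratio, ASL) = {r:.2f}")
```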
Turning back to the 84 American silent films in the database, we find that only 17 of them have ASLs greater than 7 seconds, so their Median/ASL ratio of 0.73 may reasonably be compared with that of 0.68 for the group of sound films with ASL less than 7 seconds. However, the remaining difference between silent films and the group of faster cut sound films remains to be addressed.
Experimental Science
Up to this point, we have just been groping around with descriptive statistics. It is time to start being real scientists, and look for the cause of the phenomenon. The most obvious difference between sound and silent films in this context is that the silent films have dialogue intertitles. The unexamined convention in film analysis is that a dialogue intertitle should be counted as a shot. But up to the editing stage, American silent films were shot with the actors speaking the lines in the script, without regard for where the dialogue intertitles would subsequently go. After the editing was completed, the film was passed to a specialist title writer, who made up the content of the intertitles based partly on what the actors were indicated to be saying in the script, and partly on his or her own imagination. (This process is described on pages 337 and 338 of Kevin Brownlow’s The Parade’s Gone By…). After this, the title cards were painted, filmed, and inserted into each print of the film at the appropriate place when the final show prints were assembled. So perhaps the intertitles are distorting the “natural” distribution of the lengths of the shots, and hence the shape of the shot length distributions of silent films. After all, the duration of a dialogue title is limited by the amount of text you can get onto one title card, and these titles were traditionally given a length that would enable the average person to read them through twice.
So the first simple thing to do is look at any silent films that have no dialogue titles. There are very few of these, with much the best known being Der letzte Mann (1925). This has a Median/ASL ratio of 0.63, just like the average sound film. This is encouraging, but it could be a lucky fluke. How to bring more silent films into the enquiry? The fairly obvious thing to do is to take the dialogue titles out of a silent film, and then see how it measures up.
So here is the shot length distribution for It (1927), counting the dialogue titles as shots in the usual way, and with the theoretical Lognormal distribution, corresponding to the shape factor and median shot length determined from the actual distribution of shot lengths, superimposed on the histogram as well.
[Graph: shot length distribution for It (1927) with dialogue titles counted as shots, with fitted Lognormal distribution]
I next removed the dialogue titles in a non-linear editing program to create a new version of the film without them. In American silent films of the ‘twenties, most dialogue titles occur between shots of the scene taken from different camera positions, so removing them does not disturb the length of the remaining shots. But in the case where the dialogue title has been cut into the middle of a continuous take, which happens less often, I ignore the new cuts that result, and measure the rejoined take as a single shot in my new version. The shot length distribution of the reduced film is shown below.
[Graph: shot length distribution for It (1927) with dialogue titles removed, with fitted Lognormal distribution]
Although it is not obvious from the graph, the theoretical distribution is actually not quite so good a fit (with R² = 0.945) as it was for the original film with the titles in it, which has R² = 0.968. But our real concern is with the Median/ASL ratio, which was 0.76, and is now reduced to 0.68, so the distinction between the shapes of the shot length distributions for sound and silent films almost vanishes in this case. So my explanation for the phenomenon looks promising.
I next treated two more silent films in the same way. Here are the median/ASL ratios (which determine the shape factor for Lognormal distributions) for three American silent films. They are given for the original form of the film, with dialogue titles treated as shots, and then for the film modified by omitting the dialogue titles.
(MOST IMPORTANT: When this has been done, if the dialogue title (or titles) are cut into the middle of what was obviously one continuous take, then the divided parts of this take are joined together again to make one continuous shot. This does NOT give the same result as leaving these fragments as separate shots in the analysis.)
[Table: Median/ASL ratios for three American silent films, with and without dialogue titles]
Now for American silent films made between 1920 and 1928 in the Cinemetrics database, the ASLs cover the range from 3.3 seconds to 7.6 seconds, and the mean value of the Median/ASL ratio is 0.72 for this group. So what we need for a comparable group of sound films are those that cover the same range of ASLs. When this comparable group has been selected, they are found to have a mean Median/ASL ratio of 0.66. The three silent films I have analysed above are indeed clustered around this lower figure after their dialogue titles are surgically removed, and sutures rejoin the parted shots.
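For anyone repeating this experiment from a shot list rather than in an editing program, the bookkeeping can be sketched as follows. The record layout used here (length, kind, take identifier) is hypothetical, but the rule is the one stated above: drop the dialogue titles, and rejoin the fragments of any take into which a title had been cut.

```python
# Each record: (length_in_seconds, kind, take_id). 'kind' is "shot" or "title",
# and shots that are fragments of the same continuous take share a take_id.
# The field names and data are hypothetical, for illustration only.
def remove_dialogue_titles(records):
    lengths = []
    prev_take = None
    for length, kind, take_id in records:
        if kind == "title":
            continue                          # drop the intertitle entirely
        if take_id is not None and take_id == prev_take:
            lengths[-1] += length             # rejoin a split take into one shot
        else:
            lengths.append(length)
        prev_take = take_id
    return lengths

sample = [(4.0, "shot", None), (3.0, "title", None),
          (6.0, "shot", "A"), (2.5, "title", None), (5.0, "shot", "A"),
          (3.5, "shot", None)]
print(remove_dialogue_titles(sample))         # -> [4.0, 11.0, 3.5]
```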
Although it is not necessary for the accuracy of my demonstration above, to let you actually see the difference between the distributions with and without dialogue titles, I add the distribution graphs for Seventh Heaven and Little Annie Rooney to that for It shown above. Here is the shot length distribution for Seventh Heaven (1927), counting the dialogue titles as shots in the usual way, and with the theoretical Lognormal distribution, corresponding to the shape factor and median shot length determined from the actual distribution of shot lengths, superimposed on the histogram as well.
[Graph: shot length distribution for Seventh Heaven (1927) with dialogue titles counted as shots, with fitted Lognormal distribution]
Then we have the shot length distribution with the dialogue titles cut out, as previously described.
[Graph: shot length distribution for Seventh Heaven (1927) with dialogue titles removed]
The correlation coefficient, which indicates the goodness of fit of the actual distribution to the theoretical Lognormal distribution using the median and shape factor derived from the actual values, goes from R² = 0.974 to R² = 0.962 on removal of the titles. Both these values imply a fairly good fit to the Lognormal distribution for this data, though a whisker better for the original distribution.
The similar results found for Little Annie Rooney are as follows:
[Graph: shot length distribution for Little Annie Rooney with dialogue titles counted as shots]

[Graph: shot length distribution for Little Annie Rooney with dialogue titles removed]
In this case the fit between the actual and theoretical distributions is given by R² = 0.876 when including the dialogue titles, going up to R² = 0.964 when they are removed. This indicates that there is an appreciably better fit to the Lognormal distribution when the dialogue titles are cut out in this particular case.
If we look at the distribution of the lengths of dialogue titles on their own for Seventh Heaven, we get the following graph. (The results are similar for Little Annie Rooney and It.)
[Graph: distribution of dialogue title lengths for Seventh Heaven]
You can see immediately that this distribution is a quite different shape to that of a Lognormal distribution, as it lacks the extended right tail, and also has broader shoulders.
Removing these dialogue titles has the appreciable effect on the original distributions that we have observed because in the typical American silent film they make up a substantial proportion of the number of shots in the film. The proportion of dialogue titles in an American silent film is usually around 15% of the shots in the film, though this is not true for slapstick comedy, which uses much less titling than ordinary dramas and comedies -- in fact about half as much. This last point means that the shape of the shot distributions for slapstick comedies will be less different from that of sound films with the same ASL than are those of ordinary silent films from equivalent sound films. This has been observed by Nick Redfern recently, in his analysis of the silent and sound films of Laurel and Hardy, which he refers to on his “Laurel and Hardy data” thread of the Cinemetrics discussion board.
Incidentally, this last part of my investigation represents the first appearance of a sub-discipline that might be called “experimental film history”, as a kind of analogue to the “experimental archaeology” that has come to the fore in archaeology in recent times.
Barry Salt, 2011