APPENDIX C
The Implications of Statistical Probability for the History of the Text[1]
Zane C. Hodges and David M. Hodges
Today, the whole question of the derivation of "text-types" through definite, historical recensions is open to debate. Indeed, E.C. Colwell, one of the leading contemporary [1975] critics, affirms dogmatically that the so-called "Syrian" recension (as Hort would have conceived it) never took place.[2] Instead he insists that all text-types are the result of "process" rather than definitive editorial activity.[3] Not all scholars, perhaps, would agree with this position, but it is probably fair to say that few would be prepared to deny it categorically. At least Colwell's position, as far as it goes, would have greatly pleased Hort's great antagonist, Dean Burgon. Burgon, who defended the Textus Receptus with somewhat more vehemence than scholars generally like, had heaped scorn on the idea of the "Syrian" revision, which was the keystone to Westcott and Hort's theory. For that matter, the idea was criticized by others as well, and so well-known a textual scholar as Sir Frederic Kenyon formally abandoned it.[4] But the dissent tended to die away, and the form in which it exists today is quite independent of the question of the value of the TR. In a word, the modern skepticism of the classical concept of recensions thrives in a new context (largely created by the papyri). But this context is by no means discouraging to those who feel that the Textus Receptus was too hastily abandoned.
The very existence of the modern-day discussion about the origin of text-types serves to set in bold relief what defenders of the Received Text have always maintained. Their contention was this: Westcott and Hort failed, by their theory of recensions, to explain adequately the actual state of the Greek manuscript tradition; and in particular, they failed to explain the relative uniformity of this tradition. This contention now finds support by reason of the questions which modern study has been forced to raise. The suspicion is well advanced that the Majority text (as Aland designates the so-called Byzantine family[5]) cannot be successfully traced to a single event in textual history. But, if not, how can we explain it?
Here lies the crucial question upon which all textual theory logically hinges. Studies undertaken at the Institut für neutestamentliche Textforschung in Münster (where already photos or microfilms of over 4,500 [now over 5,000] manuscripts have been collected) tend to support the general view that as high as 90 [95] percent of the Greek cursive (minuscule) manuscripts extant exhibit substantially the same form of text.[6] If papyrus and uncial (majuscule) manuscripts are considered along with cursives, the percentage of extant texts reflecting the majority form can hardly be less than 80 [90] percent. But this is a fantastically high figure and it absolutely demands explanation. In fact, apart from a rational explanation of a text form which pervades all but 20 [10] percent of the tradition, no one ought to seriously claim to know how to handle our textual materials. If the claim is made that great progress toward the original is possible, while the origin of 80 percent of the Greek evidence is wrapped in obscurity, such a claim must be viewed as monstrously unscientific, if not dangerously obscurantist. No amount of appeal to subjective preferences for this reading or that reading, this text or that text, can conceal this fact. The Majority text must be explained as a whole, before its claims as a whole can be scientifically rejected.
It is the peculiar characteristic of New Testament textual criticism that, along with a constantly accumulating knowledge of our manuscript resources, there has been a corresponding diminution in the confidence with which the history of these sources is described. The carefully constructed scheme of Westcott and Hort is now regarded by all reputable scholars as quite inadequate. Hort's confident assertion that "it would be an illusion to anticipate important changes of text from any acquisition of new evidence" is rightly regarded today as extremely naive.[7]
The formation of the Institut für neutestamentliche Textforschung is virtually an effort to start all over again by doing the thing that should have been done in the first place—namely, collect the evidence! It is in this context of re-evaluation that it is entirely possible for the basic question of the origin of the Majority text to push itself to the fore. Indeed, it may be confidently anticipated that if modern criticism continues its trend toward more genuinely scientific procedures, this question will once again become a central consideration. For it still remains the most determinative issue, logically, in the whole field.
Do the proponents of the Textus Receptus have an explanation to offer for the Majority text? The answer is yes. More than that, the position they maintain is so uncomplicated as to be free from difficulties encountered by more complex hypotheses. Long ago, in the process of attacking the authority of numbers in textual criticism, Hort was constrained to confess: "A theoretical presumption indeed remains that a majority of extant documents is more likely to represent a majority of ancestral documents at each stage of transmission than vice versa."[8] In conceding this, he was merely affirming a truism of manuscript transmission. It was this: under normal circumstances the older a text is than its rivals, the greater are its chances to survive in a plurality or a majority of the texts extant at any subsequent period. But the oldest text of all is the autograph. Thus it ought to be taken for granted that, barring some radical dislocation in the history of transmission, a majority of texts will be far more likely to represent correctly the character of the original than a small minority of texts. This is especially true when the ratio is an overwhelming 8:2 [9:1]. Under any reasonably normal transmissional conditions, it would be for all practical purposes quite impossible for a later text-form to secure so one-sided a preponderance of extant witnesses. Even if we push the origination of the so-called Byzantine text back to a date coeval with P75 and P66 (c. 200)—a time when already there must have been hundreds of manuscripts in existence—such mathematical proportions as the surviving tradition reveals could not be accounted for apart from some prodigious upheaval in textual history.
Statistical probability
This argument is not simply pulled out of thin air. What is involved can be variously stated in terms of mathematical probabilities. For this, however, I have had to seek the help of my brother, David M. Hodges, who received his B.S. from Wheaton College in 1957, with a major in mathematics. His subsequent experience in the statistical field includes service at Letterkenny Army Depot (Penna.) as a Statistical Officer for the U.S. Army Major Item Data Agency and as a Supervisory Survey Statistician for the Army Materiel Command Equipment Manuals Field Office (1963-67), and from 1967-70 as a Statistician at the Headquarters of U.S. Army Materiel Command, Washington, DC. In 1972 he received an M.S. in Operations Research from George Washington University.
Below is shown a diagram of a transmissional situation in which one of three copies of the autograph contains an error, while two retain the correct reading. Subsequently the textual phenomenon known as "mixture" comes into play, with the result that erroneous readings are introduced into good manuscripts, as well as the reverse process in which good readings are introduced into bad ones. My brother's statement about the probabilities of the situation follows the diagram, in his own words.

[Diagram: a transmission tree beginning with the autograph, each manuscript copied three times per generation through the fifth generation; one second-generation copy introduces the error, and mixture occurs in the later generations.]
Provided that good manuscripts and bad manuscripts are copied an equal number of times, and that the probability of introducing a bad reading into a copy made from a good manuscript is equal to the probability of reinserting a good reading into a copy made from a bad manuscript, the correct reading would predominate in any generation of manuscripts. The degree to which the good reading would predominate depends on the probability of introducing the error.
For purposes of demonstration, we shall call the autograph the first generation. The copies of the autograph will be called the second generation. The copies of the second generation manuscripts will be called the third generation and so on. The generation number will be identified as "n". Hence, in the second generation, n=2.
Assuming that each manuscript is copied an equal number of times, the number of manuscripts produced in any generation is $k^{n-1}$, where $k$ is the number of copies made from each manuscript.
The probability that we shall reproduce a good reading from a good manuscript is expressed as "p" and the probability that we shall introduce an erroneous reading into a good manuscript is "q". The sum of p and q is 1. Based on our original provisions, the probability of reinserting a good reading from a bad manuscript is q and the probability of perpetuating a bad reading is p.
The expected number of good manuscripts in any generation is the quantity $pkG_{n-1} + qkB_{n-1}$ and the expected number of bad manuscripts is the quantity $pkB_{n-1} + qkG_{n-1}$, where $G_{n-1}$ is the number of good manuscripts from which we are copying and $B_{n-1}$ is the number of bad manuscripts from which we are copying. The number of good manuscripts produced in a generation is $G_n$ and the number of bad produced is $B_n$. We have, therefore, the formulas:
(1) $G_n = pkG_{n-1} + qkB_{n-1}$ and
(2) $B_n = pkB_{n-1} + qkG_{n-1}$ and
(3) $k^{n-1} = G_n + B_n = pkG_{n-1} + qkB_{n-1} + pkB_{n-1} + qkG_{n-1}$.
If $G_n = B_n$, then $pkG_{n-1} + qkB_{n-1} = pkB_{n-1} + qkG_{n-1}$ and $pkG_{n-1} + qkB_{n-1} - pkB_{n-1} - qkG_{n-1} = 0$.
Collecting like terms, we have $pkG_{n-1} - qkG_{n-1} + qkB_{n-1} - pkB_{n-1} = 0$ and, since $k$ can be factored out, we have $(p-q)G_{n-1} + (q-p)B_{n-1} = 0$ and $(p-q)G_{n-1} - (p-q)B_{n-1} = 0$ and $(p-q)(G_{n-1} - B_{n-1}) = 0$. Since the expression on the left equals zero, either $(p-q)$ or $(G_{n-1} - B_{n-1})$ must equal zero. But $(G_{n-1} - B_{n-1})$ cannot equal zero, since the autograph was good. This means that $(p-q)$ must equal zero. In other words, the expected number of bad copies can equal the expected number of good copies only if the probability of making a bad copy is equal to the probability of making a good copy.
If $B_n$ is greater than $G_n$, then $pkB_{n-1} + qkG_{n-1} > pkG_{n-1} + qkB_{n-1}$. We can subtract a like amount from both sides of the inequality without changing the inequality. Thus we have $pkB_{n-1} + qkG_{n-1} - pkG_{n-1} - qkB_{n-1} > 0$, and we can also divide both sides by $k$, obtaining $pB_{n-1} + qG_{n-1} - pG_{n-1} - qB_{n-1} > 0$. Then $(p-q)B_{n-1} + (q-p)G_{n-1} > 0$; hence $(p-q)B_{n-1} - (p-q)G_{n-1} > 0$, and $(p-q)(B_{n-1} - G_{n-1}) > 0$. However, $G_{n-1}$ is greater than $B_{n-1}$, since the autograph was good. Consequently, $(B_{n-1} - G_{n-1}) < 0$. Therefore $(p-q)$ must also be less than zero. This means that $q$ must be greater than $p$ in order for the expected number of bad manuscripts to be greater than the expected number of good manuscripts; in other words, the probability of error must be greater than the probability of a correct copy.
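To make the recursion concrete, here is a minimal computational sketch (ours, not part of the original appendix) that simply iterates formulas (1) and (2) from a single good autograph; the particular values chosen for p and k are illustrative only.

```python
# A minimal sketch (not from the original appendix) of the expected-count
# recursion: G_n = p*k*G_{n-1} + q*k*B_{n-1} and B_n = p*k*B_{n-1} + q*k*G_{n-1},
# starting from one good autograph. The chosen p and k are illustrative.

def expected_counts(p, k, generations):
    """Return (good, bad) expected manuscript counts for each generation."""
    q = 1.0 - p
    good, bad = 1.0, 0.0                     # generation 1: the autograph alone
    history = [(good, bad)]
    for _ in range(generations - 1):
        good, bad = p * k * good + q * k * bad, p * k * bad + q * k * good
        history.append((good, bad))
    return history

if __name__ == "__main__":
    # With p = 3/4 (so p > q) and two copies per manuscript, the expected number
    # of good copies exceeds the expected number of bad copies in every generation,
    # as the derivation above requires.
    for n, (g, b) in enumerate(expected_counts(p=0.75, k=2, generations=6), start=1):
        print(f"generation {n}: expected good = {g:.2f}, expected bad = {b:.2f}")
```

Running it shows the expected proportion of good copies declining toward, but never falling below, one half whenever p exceeds q.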
The expected number is actually the mean of the binomial distribution. In the binomial distribution, one of two outcomes occurs; either a success, i.e., an accurate copy, or a failure, i.e., an inaccurate copy.
In the situation discussed, equilibrium sets in when an error is introduced. That is, the numerical difference between the number of good copies and bad copies is maintained, once an error has been introduced. In other words, bad copies are made good at the same rate as good copies are made bad. The critical element is how early a bad copy appears. For example, let us suppose that two copies are made from each manuscript and that q is 25% or ¼. From the autograph two copies are made. The probability of copy number 1 being good is ¾, as is the case for the second copy. The probability that both are good is 9/16 or 56%. The probability that both are bad is ¼ x ¼ or 1/16 or 6%. The probability that one is bad is ¾ x ¼ + ¼ x ¾ or 6/16 or 38%. The expected number of good copies is $pkG_{n-1} + qkB_{n-1}$, which is ¾ x 2 x 1 + ¼ x 2 x 0 or 1.5. The expected number of bad copies is 2 – 1.5 or .5. Now, if an error is introduced into the second generation, the number of good and bad copies would thereafter be equal. But the probability of this happening is 44%. If the probability of an accurate copy were greater than ¾, the probability of an error in the second generation would decrease. The same holds true regardless of the number of copies and the number of generations, so long as the number of copies made from bad manuscripts and the number made from good manuscripts are equal. Obviously, if one type of manuscript is copied more frequently than the other, the type of manuscript copied most frequently will perpetuate its reading more frequently.
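The arithmetic in this worked example is easy to check mechanically. The following sketch (ours, not the authors') reproduces the figures just quoted for k = 2 and q = ¼.

```python
# A small check (ours) of the arithmetic in the worked example: k = 2 copies
# per manuscript and q = 1/4.
from fractions import Fraction

p, q, k = Fraction(3, 4), Fraction(1, 4), 2

both_good = p * p                        # 9/16, about 56%
both_bad = q * q                         # 1/16, about 6%
one_bad = p * q + q * p                  # 6/16, about 38%
assert both_good + both_bad + one_bad == 1

expected_good = p * k * 1 + q * k * 0    # pkG_1 + qkB_1 with G_1 = 1, B_1 = 0
expected_bad = k - expected_good

print(f"P(both copies good)  = {both_good} ≈ {float(both_good):.0%}")
print(f"P(both copies bad)   = {both_bad} ≈ {float(both_bad):.0%}")
print(f"P(exactly one bad)   = {one_bad} ≈ {float(one_bad):.0%}")
print(f"P(at least one bad)  = {both_bad + one_bad} ≈ {float(both_bad + one_bad):.0%}")
print(f"expected good copies = {float(expected_good)}, expected bad = {float(expected_bad)}")
```

The last probability printed, about 44 percent, is the chance that at least one of the two second-generation copies is bad, which appears to be the figure the example cites.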
Another observation is that if the probability of introducing an incorrect reading differs from the probability of reintroducing a correct reading, the discussion does not apply.
This discussion, however, is by no means weighted in favor of the view we are presenting. The reverse is the case. A further statement from my brother will clarify this point.
Since the correct reading is the reading appearing in the majority of the texts in each generation, it is apparent that, if a scribe consults other texts at random, the majority reading will predominate in the sources consulted at random. The ratio of good texts consulted to bad will approximate the ratio of good texts to bad in the preceding generations. If a small number of texts are consulted, of course, a non-representative ratio may occur. But, in a large number of consultations of existing texts, the approximation will be representative of the ratio existing in all extant texts.
In practice, however, random comparisons probably did not occur. The scribe would consult those texts most readily available to him. As a result, there would be branches of texts which would be corrupt because the majority of texts available to the scribe would contain the error. On the other hand, when an error first occurs, if the scribe checked more than one manuscript he would find all readings correct except for the copy that introduced the error. Thus, when a scribe used more than one manuscript, the probability of reproducing an error would be less than the probability of introducing an error. This would apply to the generation immediately following the introduction of an error.
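The effect of sample size on such consultations can be made quantitative. The sketch below is our illustration, not the authors': it assumes each consulted manuscript is an independent random draw in which the correct reading appears with probability f, and asks how often the correct reading is in the majority of the sample.

```python
# An illustration (our construction) of the random-consultation point: if a
# fraction f of extant manuscripts carry the correct reading and a scribe
# consults m of them, each treated as an independent draw, the chance that the
# correct reading holds the majority of his sample rises quickly with m.
from math import comb

def majority_correct(f, m):
    """P(more than half of m consulted manuscripts carry the correct reading)."""
    return sum(comb(m, j) * f**j * (1 - f)**(m - j) for j in range(m // 2 + 1, m + 1))

if __name__ == "__main__":
    for m in (1, 3, 5, 9, 15):               # odd sample sizes, so no ties occur
        print(f"consulting {m:2d} manuscripts: P(majority correct) = {majority_correct(0.75, m):.3f}")
```

With f at 75 percent, even a handful of consultations makes a correct majority in the sample very likely, which is the point made above; the non-random, local consultation described in the last paragraph is precisely what this simple model leaves out.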
In short, therefore, our theoretical problem sets up conditions for reproducing an error which are somewhat too favorable to the error. Yet even so, in this idealized situation, the original majority for the correct reading is more likely to be retained than lost. But the majority in the fifth generation is a slender 41:40.[9] What shall we say, then, when we meet the actual extant situation where (out of any given 100 manuscripts) we may expect to find a ratio of, say, 80:20? It at once appears that the probability that the 20 represent the original reading in any kind of normal transmissional situation is poor indeed.
Hence, approaching the matter from this end (i.e., beginning with extant manuscripts) we may hypothesize a problem involving (for mathematical convenience) 500 extant manuscripts in which we have proportions of 75% to 25%. My brother's statement about this problem is as follows:
Given about 500 manuscripts of which 75% show one reading and 25% another, given a one-third probability of introducing an error, given the same probability of correcting an error, and given that each manuscript is copied twice, the probability that the majority reading originated from an error is less than one in ten. If the probability of introducing an error is less than one-third, the probability that the erroneous reading occurs 75% of the time is even less. The same applies if three, rather than two, copies are made from each manuscript. Consequently, the conclusion is that, given the conditions described, it is highly unlikely that the erroneous reading would predominate to the extent that the majority text predominates.
This discussion applies to an individual reading and should not be construed as a statement of probability that copied manuscripts will be error free. It should also be noted that a one-third probability of error is rather high, if careful workmanship is involved.
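One way to explore a problem of this shape is to simulate the copying process directly. The sketch below is our construction, not the authors' calculation: it uses two copies per manuscript, a one-third chance that any copy reverses the reading of its exemplar (error introduced or corrected), and nine copying generations, so that 512 manuscripts survive at the end, and it simply counts how often the erroneous reading captures at least 75 percent of that final generation. The modelling details, and therefore the exact figures it reports, are our assumptions.

```python
# A Monte Carlo sketch (ours) of the kind of problem posed above: a branching
# copying process with k = 2 copies per manuscript and a 1/3 chance that any
# copy reverses its exemplar's reading, run for nine generations (512 final
# manuscripts). It estimates how often the erroneous reading ends up on at
# least 75% of the final generation.
import random

def simulate(generations=9, k=2, q=1/3):
    """Return the fraction of manuscripts with the bad reading in the last generation."""
    current = [True]                            # True = correct reading; the autograph
    for _ in range(generations):
        nxt = []
        for exemplar in current:
            for _ in range(k):
                flipped = random.random() < q   # error introduced, or corrected
                nxt.append(exemplar if not flipped else not exemplar)
        current = nxt
    return current.count(False) / len(current)

if __name__ == "__main__":
    random.seed(0)
    trials = 10_000
    bad_majorities = sum(simulate() >= 0.75 for _ in range(trials))
    print(f"P(erroneous reading on at least 75% of manuscripts) ≈ {bad_majorities / trials:.4f}")
```

Such a simulation does not reproduce the authors' exact calculation, but it illustrates how heavily the odds are stacked against an error coming to dominate three quarters of a tradition under symmetric copying conditions; indeed, with so high a value of q even the correct reading rarely keeps a 75 percent share in this model, which bears out the closing remark that a one-third probability of error is rather high.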
It will not suffice to argue in rebuttal to this demonstration that, of course, an error might easily be copied more often than the original reading in any particular instance. Naturally this is true, and freely conceded. But the problem is more acute than this. If, for example, in a certain book of the New Testament we find (let us say) 100 readings where the manuscripts divide 80 percent to 20 percent, are we to suppose that in every one of these cases, or even in most of them, this reversal of probabilities has occurred? Yet this is what, in effect, contemporary textual criticism is saying. For the Majority text is repeatedly rejected in favor of minority readings. It is evident, therefore, that what modern textual critics are really affirming—either implicitly or explicitly—constitutes nothing less than a wholesale rejection of probabilities on a sweeping scale!
Surely, therefore, it is plain that those who repeatedly and consistently prefer minority readings to majority readings—especially when the majorities rejected are very large—are confronted with a problem. How can this preference be justified against the probabilities latent in any reasonable view of the transmissional history of the New Testament? Why should we reject these probabilities? What kind of textual phenomenon would be required to produce a Majority text diffused throughout 80 percent of the tradition, which nonetheless is more often wrong than the 20 percent which oppose it? And if we could conceptualize such a textual phenomenon, what proof is there that it ever occurred? Can anyone, logically, proceed to do textual criticism without furnishing a convincing answer to these questions?
I have been insisting for quite some time that the real crux of the textual problem is how we explain the overwhelming preponderance of the Majority text in the extant tradition. Current explanations of its origin are seriously inadequate (see below under "Objections"). On the other hand, the proposition that the Majority text is the natural outcome of the normal processes of manuscript transmission gives a perfectly natural explanation for it. The minority text-forms are thereby explained, mutatis mutandis, as existing in their minority form due to their comparative remoteness from the original text. The theory is simple but, I believe, wholly adequate on every level. Its adequacy can be exhibited also by the simplicity of the answers it offers to objections lodged against it. Some of these objections follow.
Objections
1. Since all manuscripts are not copied an equal number of times, mathematical demonstrations like those above are invalid.
But this is to misunderstand the purpose of such demonstrations. Of course the diagram given above is an "idealized" situation which does not represent what actually took place. Instead, it simply shows that, all things being equal, statistical probability favors the perpetuation in every generation of the original majority status of the authentic reading. And it must then be kept in mind that the larger the original majority, the more compelling this argument from probabilities becomes. Let us elaborate this point.
If we imagine a stem as follows:

[Stem: A, the autograph, branching into copies (1) and (2), with the error arising in copy (2).]
in which A = autograph and (1) and (2) are copies made from it, it is apparent that, in the abstract, the error in (2) has an even chance of perpetuation in equal numbers with the authentic reading in (1). But, of course, in actuality (2) may be copied more frequently than (1) and thus the error be perpetuated in a larger number of later manuscripts than the true reading in (1).
So far, so good. But suppose:

[Stem: the same tree extended one generation; (1) is copied as 3 and 4, (2) is copied as 5, 6, 7, and 8, and a new error, (b), arises in copy 8.]
Now we have conceded that the error designated (a) is being perpetuated in larger numbers than the true reading (a), so that "error (a)" is found in copies 5-6-7-8, while "true reading (a)" is found only in copies 3 and 4. But when "error (b)" is introduced in copy 8, its rival ("true reading (b)") is found in copies 3-4-5-6-7.[10] Will anyone suppose that at this point it is at all likely that "error (b)" will have the same good fortune as "error (a)" and that manuscript 8 will be copied more often than 3-4-5-6-7 combined?
But even conceding this far less probable situation, suppose again:

[Stem: the tree extended again, with copy 8 now copied more often than copies 3-7 combined, so that "error (b)" predominates; a further error, (c), arises in copy 19.]
Will anybody believe that probabilities favor a repetition of the same situation for "error (c)" in copy 19?
Is it not transparent that, as manuscripts multiply and errors are introduced farther down in the stream of transmission, the probability is drastically reduced that the error will be copied more frequently than the increasingly large number of rival texts?
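The diminishing prospects of a late error can also be illustrated numerically, though only under a deliberately simplified model of our own which the appendix does not use: assume every new copy picks its exemplar uniformly at random from all manuscripts then in existence and reproduces its reading exactly. Under that assumption, the chance that a freshly introduced error holds the majority by the time the tradition has grown to a given size falls off sharply as the number of rival manuscripts carrying the true reading increases.

```python
# A rough illustration under a simplified copying model of our own (not the
# appendix's): one manuscript carries a fresh error while `rivals` manuscripts
# carry the true reading; each later copy picks its exemplar uniformly at
# random from all existing manuscripts and reproduces its reading faithfully.
# Estimate how often the error holds a majority once `final_size` manuscripts exist.
import random

def error_overtakes(rivals, final_size=500, trials=5_000):
    wins = 0
    for _ in range(trials):
        bad, total = 1, rivals + 1
        while total < final_size:
            if random.random() < bad / total:   # the chosen exemplar is a bad copy
                bad += 1
            total += 1
        wins += bad > total / 2
    return wins / trials

if __name__ == "__main__":
    random.seed(0)
    for rivals in (1, 2, 5, 10):
        print(f"{rivals:2d} rival good manuscripts: "
              f"P(error ends in the majority) ≈ {error_overtakes(rivals):.3f}")
```

With a single rival the error succeeds roughly half the time, which corresponds to the concession made for "error (a)"; with five or ten rivals, the situation of "error (b)" and "error (c)", success becomes rare. The model and its exact figures are ours, but the direction of the effect is the one argued above.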
Thus to admit that some errors might be copied more frequently than the rival, authentic reading in no way touches the core of our argument. The reason is simple: modern criticism repeatedly and systematically rejects majority readings on a very large scale. But, with every such rejection, the probability that this rejection is valid is dramatically reduced. To overturn statistical probabilities a few times is one thing. To overturn them repeatedly and persistently is quite another!
Hence, we continue to insist that to reject Majority text readings in large numbers without furnishing a credible overall rationale for this procedure is to fly blindly in the face of all reasonable probability.
2. The Majority text can be explained as the outcome of a "process" which resulted in the gradual formation of a numerically preponderant text-type.
The "process" view of the Majority text seems to be gaining in favor today among New Testament textual scholars. Yet, to my knowledge, no one has offered a detailed explanation of what exactly the process was, when it began, or how—once begun—it achieved the result claimed for it. Indeed, the proponents of the "process" view are probably wise to remain vague about it because, on the face of the matter, it seems impossible to conceive of any kind of process which will be both historically credible and adequate to account for all the facts. The Majority text, it must be remembered, is relatively uniform in its general character with comparatively low amounts of variation between its major representatives.[11]
No one has yet explained how a long, slow process spread out over many centuries as well as over a wide geographical area, and involving a multitude of copyists, who often knew nothing of the state of the text outside of their own monasteries or scriptoria, could achieve this widespread uniformity out of the diversity presented by the earlier forms of text. Even an official edition of the New Testament—promoted with ecclesiastical sanction throughout the known world—would have had great difficulty achieving this result, as the history of Jerome's Vulgate amply demonstrates.[12] But an unguided process achieving relative stability and uniformity in the diversified textual, historical, and cultural circumstances in which the New Testament was copied imposes impossible strains on our imagination.
Thus it appears that the more clearly and specifically the "process" view may come to be articulated, the more vulnerable it is likely to be to all of the potential objections just referred to. Further, when articulation is given to such a view, it will have to locate itself definitely somewhere in history—with many additional inconveniences accruing to its defenders. For, be it remembered, just as history is silent about any "Syrian recension" (such as the one Hort imagined), so also history is silent about any kind of "process" which was somehow influencing or guiding the scribes as manuscripts were transmitted. Modern critics are the first to discover such a "process", but before accepting it we shall have to have more than vague, undocumented assertions about it.
It seems not unfair to say that the attempt to explain the Majority text by some obscure and nebulous "process" is an implicit confession of weakness on the part of contemporary criticism. The erosion of Westcott and Hort's view, which traced this text to an official, definitive recension of the New Testament, has created a vacuum very hard indeed to fill. More than ever, it appears, critics cannot reject the Majority text and at the same time also explain it. And this is our point! Rejection of the Majority text and credible explanation of that text are quite incompatible with each other. But acceptance of the Majority text immediately furnishes an explanation of this text and the rival texts as well! And it is the essence of the scientific process to prefer hypotheses which explain the available facts to those which do not!
[1]This appendix is an edited abstract from "A Defense of the Majority-Text" by Zane C. Hodges and David M. Hodges (unpublished course notes, Dallas Theological Seminary, 1975) used by permission of the authors.
[2]His statement is: "The Greek Vulgate—The Byzantine or Alpha texttype—had its origin in no such single focus as the Latin had in Jerome" (italics in the original). E.C. Colwell, "The Origin of Texttypes of New Testament Manuscripts," Early Christian Origins, p. 137.
[3]Ibid., p. 136. Cf. our discussion of this view under "Objections."
[4]Cf. F.G. Kenyon, Handbook to the Textual Criticism of the New Testament, pp. 324ff.
[5]Kurt Aland, "The Significance of the Papyri for Progress in New Testament Research," The Bible in Modern Scholarship, p. 342. This is the most scientifically unobjectionable name yet given to this text form.
[6]Ibid., p. 344.
[7]Ibid., pp. 330ff.
[8]B.F. Westcott and F.J.A. Hort, The New Testament in the Original Greek, II, 45.
[9][N.B.—the fifth generation is represented by all three lines; in other words, each MS of the fourth generation was copied three times, just as in the other generations.]
[10]By "error (b)" we mean, of course, an error made in another place in the text being transmitted from the autograph. We do not mean that "error (b)" has been substituted for "error (a)." Hence, while copies 5-6-7 contain "error (a)," they also contain the original autograph reading which is the rival to "error (b)."
[11] The key words here are "relatively" and "comparatively." Naturally, individual members of the Majority text show varying amounts of conformity to it. Nevertheless, the nearness of its representatives to the general standard is not hard to demonstrate in most cases. For example, in a study of one hundred places of variation in John 11, the representatives of the Majority text used in the study showed a range of agreement from around 70 percent to 93 percent. Cf. Ernest C. Colwell and Ernest W. Tune, pp. 28,31. The uncial codex Omega's 93 percent agreement with the Textus Receptus compares well with the 92 percent agreement found between P75 and B. Omega's affinity with the TR is more nearly typical of the pattern one would find in the great mass of minuscule texts. High levels of agreement of this kind are (as in the case of P75 and B) the result of a shared ancestral base. It is the divergencies that are the result of a "process" and not the reverse.
A more general, summary statement of the matter is made by Epp, ". . . the Byzantine manuscripts together form, after all, a rather closely-knit group, and the variations in question within this entire large group are relatively minor in character." (Eldon Jay Epp, "The Claremont Profile Method for Grouping New Testament Minuscule Manuscripts," p. 33.)
[12] After describing the vicissitudes which afflicted the transmission of the Vulgate, Metzger concludes: "As a result, the more than 8,000 Vulgate manuscripts which are extant today exhibit the greatest amount of cross-contamination of textual types." (Text of the New Testament, p. 76.) Uniformity of text is always greatest at the source and diminishes—rather than increases—as the tradition expands and multiplies. This caveat is ignored by the "process" view of the Majority text.