A CRITIQUE OF "BRAINWASHING" EVIDENCE IN LIGHT OF DAUBERT:

SCIENCE AND UNPOPULAR RELIGIONS

 

By James T. Richardson and Gerald Ginsburg (University of Nevada, Reno); from: Helen Reece (ed.) (1998), Law and Science: Current Legal Issues, Vol. 1. Oxford University Press, pp. 265-288.

 

A paper originally prepared for presentation at the Law and Science Seminar presented by the Faculty of Laws, University College London, June 30-July 1, 1997.

ABSTRACT

This paper first briefly discusses the landmark Daubert decision concerning admissibility standards for scientific evidence. Then the applicability of the decision to social and behavioral science evidence, including psychological syndromes, is examined, leading to the conclusion that such evidence is within the purview of Daubert. We then apply the Daubert guidelines to evidence often used in so-called "cult/brainwashing" cases to demonstrate a proposed new syndrome of "destructive cultism." The history of the admissibility of such evidence under pre-Daubert criteria is discussed as well. An examination of brainwashing evidence in light of the four "guidelines" from Daubert leads to the general conclusion that such evidence should not be admitted under either set of criteria or any other rigorous rules of admissibility. A discussion of why such evidence is nonetheless sometimes admitted closes the paper.

INTRODUCTION

A revolution occurred in American evidence law in 1993 when the U.S. Supreme Court rendered the Daubert decision, overturning 70 years of law governing the area of novel scientific evidence. Previously, admissibility of expert evidence had been governed in federal courts (and many state courts as well) by the Frye rule of "general acceptance," which meant that novel scientific evidence could not be admitted unless the methods and principles under which it was found had achieved general acceptance within the relevant discipline(s). This rule of admissibility was a conservative one, and was subjected to much criticism for this and other reasons [1].

The Frye rule was incorporated into the Daubert guidelines, so the notion of general acceptance was not entirely abandoned. However, other guidelines included by the Daubert court go far beyond Frye, and have major implications for all scientific evidence, including, we will claim, social and behavioral evidence being proffered to courts (Richardson, Ginsburg, Gatowski, and Dobbin, 1995).

The other three guidelines include: (1) establishing the "falsifiability" of a theory being presented; (2) the "known or potential error rate" associated with applying the theory; and (3) whether the findings have been subjected to peer review and publication in scientific forums. This list is not exhaustive, as the court also said that other sound criteria have been suggested: "To the extent that they focus on the reliability of evidence as ensured by the scientific validity of its underlying principles, all these versions may well have merit..." (113 S. Ct. 2786, 2796-2797, note 12 (1993)). Thus the U.S. Supreme Court has weighed in heavily in favor of requiring judges to make sound, scientifically based decisions concerning admissibility of proffered evidence claimed to be scientific [2].

The issue of admissibility of scientific evidence is, of course, not confined to the U.S. It is clear that most Western countries, at least, are grappling with how to handle such evidence (Gatowski, Dobbin, Richardson, Nowlin, and Ginsburg, 1995). Courts in many other countries are dealing with this and related issues, as more and more allegedly scientific evidence is offered in various types of court cases. Interest in this issue is evidenced by how much attention has been given the Daubert decision itself outside the U.S. (see Odgers and Richardson, 1995; Richardson, 1994; Freckleton, 1994; Roberts, 1995 for examples) [3].

DAUBERT AND SOCIAL AND BEHAVIORAL SCIENCE EVIDENCE

A problem immediately arose with the Daubert decision in the U.S. because it was not clear that the court intended to apply the decision to social and behavioral science evidence. Some have claimed that this was not the intention of the court, and that such evidence falls under the "technical, or other specialized knowledge" phrase of Federal Rule of Evidence 702. Others, including the present authors, believe that social and behavioral evidence should be forced to meet the more rigorous guidelines of Daubert (Underwager and Wakefield, 1993; Richardson, 1994; Richardson et al., 1995; Moore, 1997). There are court decisions on either side of this issue, as well, which leaves the situation ambiguous. Richardson et al. (1995: 12) note that the Supreme Court itself referred several times in the Daubert decision to a case that turned on the admissibility of evidence concerning the reliability of eyewitness testimony (U.S. v. Downing, 753 F.2d 1224 (3d Cir. 1985)), which suggests that the court did not intend to differentiate between types of scientific evidence.

PSYCHOLOGICAL SYNDROMES AND SCIENTIFIC VALIDITY

It is true that nowhere in Daubert did the court refer to psychological syndromes, thus leaving open interpretations of its applicability to such evidence. This paper assumes the application of Daubert to social and behavioral science evidence, including psychological syndromes [4]. We will assert that psychological syndromes are based on claims that can only be viewed as scientific, and that therefore the claims should be subjected to rigorous criteria such as appear in Daubert before they are accepted as scientific evidence.

To take this position suggests that we are dubious about much psychological syndrome evidence, and that we generally agree with several other critics of such evidence being admitted (Freckleton, 1992, 1994; Underwager and Wakefield, 1993) [5]. Our reasoning is based on the failure of most such evidence to meet criteria such as falsifiability, as well as criteria related to the establishment of reliable error rates. We admit that some psychological evidence, including even some syndrome evidence, can sometimes meet the other Daubert guidelines of being generally accepted within certain disciplinary groups and being published in peer-reviewed journals for those groups. But we would point out that these characteristics by themselves do not guarantee the scientific validity of evidence.

Indeed, we would even suggest that political and popular opinion about certain issues can influence decisions to admit allegedly scientific evidence. For instance, it seems clear that decisions to admit Child Sex Abuse Accommodation Syndrome evidence have been influenced by popular concern about child sex abuse (Freckleton, 1994). And similar considerations may have influenced the rapid spread of acceptance of repressed memory evidence by the courts (Loftus, 1993). See Richardson (1996) and Gatowski, Dobbin, Richardson, and Ginsburg (1997) for other examples of syndrome evidence that are scientifically problematic.

Herein we will examine in detail the evidentiary basis for one particular type of allegedly scientific evidence offered in recent decades in cases involving controversial new religions. While the ideas underpinning use of such evidence in "cult" cases have a weak scientific basis, they do enjoy considerable popular support (Bromley and Breschel, 1992; Richardson, 1992), and have been accepted in courts of law (Richardson, 1991, 1996; Anthony, 1990; Anthony and Robbins, 1992, 1995).

"BRAINWASHING", "MIND CONTROL", AND "DESTRUCTIVE CULTISM

There have been hundreds of court cases involving controversial new religions over the past several decades, since they came into prominence in the late 1960s (Richardson, 1995b; Bromley and Robbins, 1993). Many of the cases involved some claim that participation in such groups was coerced and destructive. The evidentiary theory is sometimes referred to as "brainwashing" or "mind control," popularized notions sometimes associated with what some have claimed is a new psychological syndrome - "destructive cultism" (Shapiro, 1977; Clark, 1978). Supposedly "cult" recruiters use this powerful psychotechnology to recruit unsuspecting youth, who then evidence "destructive cultism" in their behaviors (but see Richardson and Kilbourne, 1983 and Richardson, 1993).

Early in the court cases involving such ideas there was a willingness to accept the claims that brainwashing and mind control were being used, sometimes with apparently quite dramatic effects. Such evidence seemed relevant to cases involving the "cult problem," and it was often offered by a professional mental health specialist such as a psychologist or psychiatrist, usually with little effective rebuttal testimony [6].

Initially, such evidence was used to undergird requests for conservatorships, to assist in obtaining court orders sanctioning "deprogramming" of participants (Anthony and Robbins, 1995). Later, accusations of "brainwashing" and "mind control" supported civil action claims of intentional infliction of emotional distress or false imprisonment. And some deprogrammers who were charged with kidnapping in criminal actions or with false imprisonment in civil actions used brainwashing notions to underpin their "necessity" defenses (Anthony, 1990; Richardson, 1995) [7]. After about two decades in which such brainwashing based claims and defenses were generally accepted, such evidence became more problematic, as those being negatively affected by its acceptance tried to rebut it, or even have it disallowed completely (Anthony and Robbins, 1995).

BRAINWASHING AND THE FRYE RULE

This previous history of the use of brainwashing based testimony developed under a climate of admissibility dominated by the Frye rule of general acceptance. Those offering brainwashing based testimony were usually qualified as experts and allowed to offer their views, either because judges thought such testimony particularly relevant, or because they thought it was generally accepted among relevant disciplines.

As indicated, most earlier cases (late 1960s through the 1970s) did not see much serious effort to rebut such testimony, and typically judges and juries seemed readily to agree with such views. Appeal courts sometimes did not agree with jury decisions, and some losses were experienced, but overall, triers of fact seemed willing to accept a view of reality concerning "cults" that included notions of brainwashing and mind control.

Later, however, a number of social and behavioral scientists, as well as some professional organizations, became concerned about the offering of such testimony, which was viewed by many as not scientifically sound and quite prejudicial, as well as misrepresenting the relevant disciplines. A few scientists (including the first author) worked on cases as rebuttal witnesses or as consultants, assisting attorneys in handling such evidence when it was offered (Anthony, 1990; Richardson, 1997). Some social scientists (again including the first author) also helped develop a few amicus briefs that were submitted in some cases on appeal, usually for professional organizations and individual scholars in the field (Richardson, 1997) [8].

At the trial level it was found that asking better cross-examination questions or offering rebuttal witnesses was generally not adequate, because juries would nonetheless find in favor of the side offering brainwashing based perspectives (see DeWitt, Richardson, and Warner, 1997, for evidence relevant to this point). Given this circumstance, in a few cases an effort was made to seek a separate "Frye" hearing on the validity and reliability of the brainwashing based testimony being offered. Thus, pre-trial hearings were held in some cases to determine whether the brainwashing evidence should be accepted, or, on appeal, a ruling might be entered requiring a retrial after a separate evaluation of brainwashing based testimony that had been offered and was significant to the case (Anthony, 1990).

In one major case, U.S. v. Fishman (Case No. CR-88-0616-DLJ, N.D. Cal. 1990), a decision was made to flatly reject brainwashing based testimony on the basis of the Frye rule of general acceptance. The defendant had sought to argue, as a defense to mail fraud charges, that he had been brainwashed by Scientology and that this was what led to his breaking the law. This defense was rejected after a spirited exchange during the pre-trial phase of the case that included submission of considerable material critical of brainwashing theories (Anthony and Robbins, 1992, 1995).

In a case decided after Fishman and referring to it as precedent, Green and Ryan v. Maharishi et al. (U.S.D.C. No. 87-0015 and 0016 (1991)), brainwashing based testimony was again rejected. This time the rejection occurred after an initial trial court decision in favor of civil plaintiffs seeking damages for allegedly being brainwashed into participating in the Divine Light Mission. On appeal, the decision was overturned and the case remanded for further evaluation of the testimony of a well-known brainwashing "expert." [9] The trial court's review of brainwashing testimony was then conducted using a criterion of "substantial acceptance," a lesser standard than general acceptance, and such testimony was still rejected under this lesser standard.

These two federal court decisions have been cited since in other cases, and they have apparently had something of a deterrent effect. Claims of brainwashing and mind control are heard less often in courts across America now (Anthony and Robbins, 1995). This does not mean that new religions are winning more cases, however, as negative sentiments against newer controversial religious groups run deep, fed by tragic events such as those at Waco and, more recently, in San Diego, with the mass suicide of 39 people. However, for whatever reasons, there do appear to be fewer overt uses of brainwashing based claims in contemporary court actions.

BRAINWASHING EVIDENCE AND DAUBERT

Anthony and Robbins (1995: 523) raised an interesting question concerning application of Daubert to brainwashing based testimony and claims in "cult cases." They suggest that the 1993 decision might allow a "regression" back to a time when more brainwashing based testimony was admitted. They may have been suggesting that the greater discretion granted to judges in Daubert could allow those who were not sympathetic to "cults" to admit more evidence against them. However, later in the same paper (Anthony and Robbins, 1995: 531) they seem to suggest that Daubert will not work in a more liberal fashion to let in more of the kinds of testimony that had finally been ruled inadmissible under Frye. They say (Anthony and Robbins, 1995: 531, n. 74) [10]:

Vague and indeterminate identifications of the purported theoretical foundation of proffered testimony...will probably fail to meet criteria of falsifiability. More precise applications of "classical" models of brainwashing and related constructs to formally voluntary associations may also fail to pass muster due to the difficulty of specifying precisely how much non-physical coercion produces involuntariness.... Also pertinent is the general vagueness and lack of reliable diagnostic criteria for "Atypical Dissociative Disorder" supposedly produced by cultist mind control. (references omitted)

Richardson (1996: 883, n. 37), in a lengthy critique of brainwashing ideas and how they have spread to other legal systems around the world, says, "Under recent, more rigorous, criteria established in Daubert... for the acceptance of scientific evidence, brainwashing-based claims should...be excluded." In another paper, after a discussion of the consequences of applying Daubert criteria to child sex abuse cases, another comment relevant to this paper appears (Richardson, 1994: 34, n. 50):

In another area of high emotion and considerable misinformation, a new syndrome of "destructive cultism" has seen use. The author and others have been involved in criticizing questionable psychological and psychiatric testimony alleging negative effects of participation in new religions groups (..."cults") that cannot be substantiated by well-done research. (references omitted)

Thus we have seen several comments, made in passing and usually in footnotes, on the issue of the application of Daubert criteria to brainwashing based testimony. Herein we apply the four guidelines of scientific reliability from Daubert to such testimony in a more direct and thorough fashion, in an effort to establish with more precision whether such testimony meets these rigorous criteria. We focus on the two more complex guidelines, falsifiability and error rates, but comment on the other two guidelines as well.

Falsifiability and Brainwashing

As far as we are aware, no case has yet seen the evaluation of brainwashing based testimony in light of Daubert. But if this were done, it seems reasonable that the analysis would start with perhaps the most difficult of the criteria, falsifiability. Richardson (1994: 19-27) discusses the concept of falsifiability in some detail, including its application to social and behavioral science evidence. He notes that the Daubert opinion cites major figures in the philosophy of science, including Sir Karl Popper, whose writings made the term falsifiability well-known. Popper claimed that scientific knowledge in a given area developed in an incremental way, through the establishment of testable hypotheses at the boundaries of knowledge. By testable he meant that a method could be devised to put the hypothesis to a test that it could actually fail.

This kind of precision requires a clear statement about what is predicted by the theory being tested. For instance, it is not scientifically defensible to make the following two claims about the effects of child sex abuse:

1. If a child being examined for possible sex abuse does not like their genitals being examined, this is evidence for their having experienced sexual trauma through sex abuse.

2. If a child seems not to mind having their genitals examined this means that they are used to such activity, which is evidence that they have been sexually abused.

Similarly, the following two statements about actions of a perpetrator of child sex abuse cannot be defended as scientifically justified [11]:

1. He shows guilt and remorse, which indicates his guilt.

2. He shows no guilt and remorse, indicating that he is in denial about the abuse he perpetrated, a sure sign of his guilt.

What these examples illustrate is a problem of falsifiability. If any behaviors, including exactly opposite behaviors, can be said to support a theory, then the theory cannot be falsified. If a theory cannot be falsified, according to Popper it is not a scientific theory. Under Daubert such theories should not be admissible as scientific evidence.
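The logical defect can be made concrete in a few lines of code. The sketch below is our own illustration, not drawn from any cited testimony; the function name and observation labels are hypothetical. It encodes the paired claims above as a single classification rule and shows that every possible observation counts as support:

    # A minimal sketch of an unfalsifiable claim: the paired assertions
    # above encoded as one rule. All names here are hypothetical.
    def supports_abuse_claim(reaction: str) -> bool:
        """Caricature of claims 1 and 2: every reaction counts as evidence."""
        if reaction == "dislikes examination":
            return True  # read as evidence of sexual trauma (claim 1)
        else:
            return True  # read as being "used to such activity" (claim 2)

    # The only two logically possible observations:
    observations = ["dislikes examination", "does not mind examination"]

    # Every observation supports the claim, so no test of it can be failed:
    print(all(supports_abuse_claim(obs) for obs in observations))  # True

Because the rule returns the same verdict whatever is observed, observation carries no information about the claim, which is precisely Popper's point.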

Popper used Marxism and Freudian thought as two examples of major theories that cannot, by definition, be tested (Popper, 1952). The latter of these two is relevant to our concerns herein, since much of modern psychiatry and clinical psychology is Freudian-based. For instance, the much-cited Diagnostic and Statistical Manual published by the American Psychiatric Association (1980, 1995) is derived in large part from Freudian thought, and its many entries of mental disorders often lack solid scientific basis (Kirk and Kutchins, 1992; Richardson, 1993). To the extent that testimony based on the DSM is not supported by sound science, such evidence should not be admitted under Daubert.[12]

The issue of brainwashing testimony raises a serious falsifiability problem. Part of the problem is definitional: it is difficult to state clearly what the claim that participants have been brainwashed really means, and it is therefore difficult to state a hypothesis about brainwashing of participants that is truly falsifiable.

Some would apparently say that anyone in a new religion has been brainwashed, by definition, because otherwise it is not logical that they would choose to participate in such strange groups (Singer, 1979; Clark, 1978). Others might say that anyone who would give up a promising career to join an exotic religious group and sell flowers and books on streets and in airports has been brainwashed. These are not testable notions, of course, but are normative statements that reveal a view of human volition that some may find offensive. Such statements are saying, in effect, that because a person is not doing what was expected, and because their behavior is not easily understood, they have been brainwashed.

If one asserts that brainwashing causes a fundamental and permanent change in a person, then testable ideas become available, although their testability requires specification of what kind of change is expected, as well as of the mechanisms causing the change (not to mention a definition of what is meant by permanent). For instance, one could assert that some basic and negative personality change would occur in people who have been brainwashed. This might be demonstrated through longitudinal research that shows change over time that can be attributed to a process that people agree is brainwashing [13]. Little such research has been done, and much of what there is seems to show ameliorative effects of participation (Galanter, 1978; Kilbourne and Richardson, 1984) or no long-lasting negative effects (Taslimi et al., 1991).

Another acceptable scientific method that could yield evidence on the effects of joining is the use of standardized personality assessment instruments. One could hypothesize that those who join new religions are significantly different from the "normals" against whom the tests were standardized. However, when such research has been done, the results are usually quite opposite to what might have been predicted from a brainwashing based hypothesis about negative effects (see Richardson, 1985 and 1995 for reviews of a large amount of such research). Few differences from normals have been found, and even some of those are hard to characterize in negative terms [14]. Thus, when tested in this way, the hypothesis of major damaging change is refuted or falsified.
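By contrast with the unfalsifiable claims above, this kind of assessment research has the form of a genuinely falsifiable test. The following sketch is our own illustration of that form, assuming a T-scored instrument with a normative mean of 50 and standard deviation of 10; the sample values are hypothetical:

    # A hedged illustration of a falsifiable version of the damage
    # hypothesis: compare members' mean score on a standardized,
    # T-scored instrument against published norms (mean 50, SD 10).
    import math

    def one_sample_z(sample_mean, n, norm_mean, norm_sd):
        """z statistic for a sample mean tested against known norms."""
        return (sample_mean - norm_mean) / (norm_sd / math.sqrt(n))

    # Hypothesis: members score markedly higher on a distress scale
    # than normals. All numbers below are hypothetical.
    z = one_sample_z(sample_mean=51.2, n=100, norm_mean=50.0, norm_sd=10.0)
    print(f"z = {z:.2f}")  # z = 1.20, short of the conventional 1.96 cutoff

The point is not the particular numbers but the form of the exercise: the hypothesis specifies in advance a result that would count as failure, and in the research just reviewed that failure is in fact what is usually observed.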

If one assumes that brainwashing leads to permanent changes, then again the evidence refutes the notion, thus falsifying the assertion. Most of the controversial groups are small and suffer extremely high attrition rates (Bird and Reimer, 1982; Richardson, van der Lans and Derks, 1986). Brainwashing, whatever else it is, thus seems rather ineffectual, with most participants leaving of their own volition after a time in the groups. The brainwashing hypothesis, in so far as it implies permanency, fails a straightforward test of that permanent effect.

Aside from the problems of definition and of empirical research results, there is also difficulty with the possible Freudian basis for asserting that participants have been brainwashed so that they are suffering from a new syndrome called "destructive cultism" (Shapiro, 1977). Shapiro, whose son had become a member of the Hare Krishna movement, apparently triggering his interest (Richardson, 1992), laid out the new syndrome in some detail. Richardson and Stewart (1997) offer the following synopsis of the new syndrome, using language from Shapiro (1977: 83):

He described destructive cultism as a distinct syndrome with a number of discernible characteristics. It strikes most commonly during adolescence or young adulthood, among "idealistic people under stresses of rapid physical and emotional development." "Change in personality is the most prominent characteristic of this syndrome," and the change may happen suddenly or gradually. People suffering from destructive cultism "may adopt new and unusual eating habits and peculiarities in dress and hair style." They may engage in begging for support, and they may develop "hatred and disrespect for the family, the parents, and society in general." They may display "a constantly somber and grave attitude," and become preoccupied with religious rituals and beliefs. Sufferers may also lose personal identity and "change in mannerisms is common." Some afflicted with destructive cultism enter into a "trance-like state, which is intensified during periods of prayer, meditation and rituals." Shapiro closed his section on the syndrome by stating: "Destructive cultism is a sociopathic illness which is spreading rapidly throughout the U.S. and the rest of the world in the form of a pandemic."

Reviewing this description in terms of the points just made about testability and precision clearly shows the problematic nature of the newly proposed syndrome. It can happen "suddenly or gradually," and it may or may not cause a number of different behaviors [15].

Although the "newly discovered" syndrome has not yet been placed in the DSM, clinical psychologist Margaret Singer has inserted references to "cults" in the DSM itself, and she and others (Anthony and Robbins, 1995) regularly use the DSM as a basis for testimony, including even citing DSM mental disorders that make no reference to "cults." (Anthony, 1990; Richardson, 1991; 1993). Given the unscientific basis of much of the DSM, this raises a serious question about admissibility of assertions about the deleterious effects of participation in new religions under Daubert criteria.

Thus, on balance, it appears that the assertion that participants in new religions are brainwashed lacks scientific support. What research has been done refutes such notions, and leads to other conclusions about why people participate and to what effect(s).

Error Rates and Brainwashing

Daubert instructs judges, who are given a definite "gate-keeping" role, to consider the "known or potential rate of error" of a scientific theory offered as evidence. Voice spectrography is used as an example in Daubert, leading to the conclusion that the error rate concept involves the possibility of wrongly classifying someone. There are, of course, two ways that a misclassification can occur (see the sketch following this list):

1. A decision can be made that someone who is actually speaking is not doing so (a "false negative"), or

2. A decision can be made that someone who is not speaking is doing so (a "false positive").
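Establishing a rate of error therefore requires a validation study in which a classifier's decisions are scored against known ground truth. The sketch below shows the basic arithmetic; it is our own construction with hypothetical data, since no such validation study exists for brainwashing claims:

    # A hedged sketch of how a "known or potential rate of error" is
    # computed: score decisions against ground truth and count both
    # kinds of misclassification. The data below are hypothetical.
    def error_rates(decisions, truths):
        """Return (false positive rate, false negative rate)."""
        false_pos = sum(d and not t for d, t in zip(decisions, truths))
        false_neg = sum(t and not d for d, t in zip(decisions, truths))
        positives = sum(truths)               # truly positive cases
        negatives = len(truths) - positives   # truly negative cases
        return false_pos / negatives, false_neg / positives

    decisions = [True, True, False, True, False, True]   # "brainwashed?"
    truths    = [True, False, False, False, False, True] # ground truth
    fp, fn = error_rates(decisions, truths)
    print(f"false positives = {fp:.2f}, false negatives = {fn:.2f}")
    # -> false positives = 0.50, false negatives = 0.00

For voice spectrography such data can at least in principle be collected, because the ground truth (who was actually speaking) is knowable. For "brainwashing" there is no agreed ground truth against which decisions could be scored, so the rate of error is not merely unreported but, given the definitional problems discussed above, arguably unknowable.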

Many examples can be cited from the social and behavioral science literature, including the examples used above of trying to decide whether a child has been sexually abused or whether someone is a perpetrator of sexual abuse. Is someone a spouse batterer or a drug courier, or are they suffering from some psychological syndrome such as rape trauma syndrome? These types of questions require classification schemes that have immense legal implications. An affirmative answer to each of them can cause, or at least contribute to, a person losing their freedom and being labeled an abuser, a rapist, or a drug dealer.

Brainwashing based testimony has severe problems passing muster on the error rate guideline, unless one adopts the problematic logic cited above that anyone who participates has, by definition, been brainwashed. If that approach is not applied then any claim that someone has been brainwashed must be accompanied by a list of specific indicators of the quality of being brainwashed, so that people can be properly classified in a properly designed research methodology.

One approach sometimes taken by brainwashing proponents is to use the DSM, citing characteristics of selected disorders such as Atypical Dissociative Disorder, which does include a mention of "cults" in its description, or Post-traumatic Stress Disorder, which does not mention involvement in "cults" (Richardson, 1993). Suffice it to say that this type of classification process is fraught with difficulties. Both beauty and mental disorder are in the eye of the beholder. If one is looking for signs of mental disorder, then one can probably find them, especially given the broadness of definitions such as the one proposed by Shapiro (1977) for "destructive cultism," described above.
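The breadth problem also has a well-known statistical consequence. The following sketch applies Bayes' rule to show why broad criteria applied to a rare condition yield mostly false positives; it is our own illustration, and all of the rates in it are hypothetical, since no real prevalence or accuracy figures exist for "destructive cultism":

    # A hedged Bayes' rule illustration: with a rare condition, broad
    # criteria that catch most true cases still flag far more non-cases.
    # All rates below are hypothetical.
    def positive_predictive_value(base_rate, sensitivity, fp_rate):
        """P(truly has condition | criteria say so), by Bayes' rule."""
        true_pos = base_rate * sensitivity
        false_pos = (1.0 - base_rate) * fp_rate
        return true_pos / (true_pos + false_pos)

    # Suppose 2% of those examined truly fit some condition, the criteria
    # catch 90% of them, but their breadth also flags 20% of everyone else:
    ppv = positive_predictive_value(base_rate=0.02, sensitivity=0.90,
                                    fp_rate=0.20)
    print(f"PPV = {ppv:.2f}")  # PPV = 0.08: over nine in ten of those
                               # labeled would be misclassified

The broader the criteria, the higher the flagging rate among non-cases, and the more completely false positives come to dominate the resulting classifications.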

The Shapiro case is actually quite instructive on this point of classification and error rates. Edward Shapiro, the son of psychiatrist Eli Shapiro, was incarcerated and transported from New York to Boston on the basis of having been classified as "incompetent as a result of mind control" because of his membership in the Hare Krishna movement, an opinion rendered by another psychiatrist, John Clark, after very limited contact with the son (Richardson, 1992: 236).

Edward Shapiro was kept in a mental institution, McLean Hospital in Boston, for two weeks while a team of mental health professionals evaluated him. When those doctors then testified in a conservatorship hearing that he was not suffering from any mental disorder and that he should be released, Dr. Clark insisted that mind control techniques were so powerful that they could actually conceal mental illness (a claim with implications for falsifiability, of course). Clark recommended that Edward be kept in the hospital for further observation under conditions of "stress testing," which meant that Shapiro was not to be allowed to see any friends or attorneys, and could not practice his religion [16].

The Shapiro case clearly shows a classification disagreement. Psychiatrist Clark, as well as the father, who was also a psychiatrist, was attempting to gain court approval of a certain classification of the son Edward. To do so would undergird the father's request for a permanent conservatorship over the son. But a team of doctors from McLean Hospital disagreed with the classification entirely. They testified that Edward was not suffering from any mental disorder at all, and that he was competent to manage his personal affairs.

The court decided to follow the expert advice of the McLean team of mental health professionals, perhaps because of the plain outrageousness of some of the assertions of Dr. Clark. But this outcome does not always obtain (Bromley, 1983; LeMoult, 1983), and judges and juries have often shown a willingness to accept brainwashing based testimony in less egregious circumstances (Richardson, 1991; Anthony and Robbins, 1995).

To accept such assertions is to make false positive errors of classification. People who are not "brainwashed" are said to be, and leaders of religious groups are said to "brainwash" when they recruit members, definitions of reality with significant public policy and individual freedom implications. Such classification schemes fail to recognize the normality of the recruitment and persuasion processes used in most newer religious groups, and the fact that many other human groups use similar techniques to gain participants.

Thus it seems clear that brainwashing based testimony does not meet the "known or potential rate of error" guideline from Daubert. Therefore it should be disallowed, even if the proffered testimony seems to mesh well with the values and normative positions of the gatekeepers and triers of fact in cases involving exotic religions.

Peer Review and Publication

This guideline seems straightforward on its face, but contains some problematic elements. It would seem reasonable simply to look at the major refereed journals in a field of scientific endeavor to see what has been published in those journals. Or it would seem reasonable to examine the resume of a proposed expert to see whether they have published their results in major journals in their field of study. This is the approach taken when evaluating most scientific testimony and the experts offering it.

In the field of social and behavioral science this straightforward approach has some value. The most prominent scientific journal in the world, Science, has, for instance, published at least one article extremely critical of much of the clinical psychological testimony offered in courts of law in America (Faust and Ziskin, 1988). On the specific issue of participation in new religions, the refereed journal with the largest circulation of any in the field of behavioral sciences, The American Psychologist, has published a paper quite critical of brainwashing based theories as they apply to contemporary religious groups (Kilbourne and Richardson, 1984).

However, problems arise with the assessment of peer review and publication. Two major issues are which disciplines are relevant and who count as the peers of those offering testimony. The Science article mentioned earlier aroused strong reaction, especially among clinical psychologists, who defended their turf vigorously. The American Psychologist article that critiqued brainwashing based theories was itself criticized in a newer "anti-cult" journal that is attempting to achieve respectability (Richardson and Stewart, 1997).

There appear to be major disciplinary differences on the issue of cult brainwashing. Most sociologists and psychologists of religion reject brainwashing theories, whereas many clinical psychologists and psychiatrists apparently give them some credence (Richardson, 1997). The views of the latter are also much more widely accepted among the general public, journalists, and organs of mass media, a circumstance with important implications for admissibility (Gatowski et al., 1997).

Thus, journals in the field of psychiatry may publish brainwashing based articles with no suggestion of the problematic nature of the evidence for such theories. Also, more popular "anti-cult" publications and the popular media disseminate brainwashing theories quite readily. Many fewer people are aware of the vast literature from sociology and psychology of religion, the results of which refute brainwashing theories, and which depend on much more mundane theories to explain participation in such groups and movements.

Similar comments can be made about peer review. When the difference of opinion about the efficacy of expert evidence breaks down along disciplinary lines, this means that apparent peers can probably be found who share the perspective of the one offering the testimony. Indeed, a clinical psychologist or a psychiatrist might even disagree that a sociologist or social psychologist would be a peer (and the sociologists and social psychologists might return the favor).

Thus, gatekeeper judges deciding about admission of brainwashing based testimony can admit evidence that is questionable to many, but do so under Daubert by simply referring to journals or to peer groups that reflect values supportive of those offering the evidence. Because of this problem, it is important that judges making admissibility decisions understand the science behind whatever evidence is being presented. Judges must be able to ascertain what disciplines can and should have something relevant to say about the admissibility of testimony being offered as scientifically based evidence. If information required to make a proper admissibility decision is not being proffered, then the judge must consider how to obtain that information.

In the case of a decision under Daubert concerning admission of brainwashing based testimony, it seems clear that information about publication in major journals would be appropriate, as would information about what professionals in any interested disciplines think of the evidence being offered. To limit consideration to just one discipline on something such as a brainwashing based proffer is possibly to bias the decision process in favor of a discipline that has a vested interest in the outcome of the decision (Kilbourne and Richardson, 1984).

General Acceptance

This criterion from the 1923 Frye decision was retained in Daubert, with the understanding that it does make a difference how widely supported a scientific conclusion is among an identified "relevant scientific community." The court said: "Widespread acceptance can be an important factor in ruling particular evidence admissible." The implication of the term widespread is one of inclusiveness, but the court does not explicitly note the problem, referred to above, of possible disciplinary differences on the value and soundness of a given piece of scientific evidence. And there is no apparent recognition of the fact that popular opinion can potentially influence admissibility decisions in cases involving issues of great interest to the general public or to official authorities (Gatowski et al., 1997).

We have seen some troubling admissibility decisions made in recent years in controversial, high-interest areas such as child sex abuse, repressed memories, rape trauma syndrome, and so-called "cult/brainwashing" cases as well. Judges acting as gatekeepers need to withstand pressures from the media or other sources to accept allegedly scientific evidence, and instead assess the scientific basis of such evidence and make decisions accordingly.

In the particular case of brainwashing based testimony, a number of interest groups would weigh in in favor of admitting evidence that might be useful in exerting social control over exotic religions or in offering therapy to participants. Some of those interest groups would be represented by people with impressive professional credentials. However, in this particular area of knowledge it would seem imperative also to seek information based on serious in-depth research done in the field by researchers operating without any preconceived normative notions about the groups being studied. General acceptance in this circumstance, then, should encompass more than just the therapeutic community.

Expansion of the notion of the relevant scientific community led to the decisions, cited earlier, rejecting brainwashing based testimony under the Frye rule (Fishman and Green). When psychologists and sociologists of religion became involved, and when their research results were presented to the court, this contributed to an understanding by the courts that more than a few people were involved and had relevant information for the court, and that the brainwashing based testimony being offered did not meet the general acceptance criterion [17].

CONCLUSIONS

Simply put, brainwashing based testimony was eventually ruled inadmissible under Frye, and it should also be ruled inadmissible under Daubert. Such evidence does not meet any of the four guidelines, and it is explicitly refuted or falsified under some reasonable tests of operationalized hypotheses about the effects of alleged brainwashing. Also, error rate data do not support brainwashing based hypotheses; indeed, it is difficult even to devise a plausible test of a classification scheme that makes any sense. The peer review and publication guideline is not met when it is properly understood as involving a broader range of disciplines, as is also the case with the general acceptance criterion.

Given the problematic nature of scientific support for brainwashing based theories as they are applied to participants in new religions, it is reasonable to ask why such evidence was ever admitted, and why it is sometimes still admitted (Richardson, 1996; Anthony and Robbins, 1995). The most plausible answer has to do with the operation of biases, prejudices, and misinformation in these cases that involve controversial parties and issues or, as Kassin and Wrightsman (1988) say: cases "involving emotional topics over which public opinion is polarized."

"Cult/brainwashing" cases certainly fit such a description, raising the possibility alluded to earlier in the paper that sometimes judges and juries may have been acting out of bias or misinformation in these cases involving such controversial groups. Their judgment about the scientific viability of the brainwashing based testimony may have been clouded by values and views about the role of such groups in society. Considerably anecdotal evidence about specific cases suggests that this may be the case (see Anthony and Robbins, 1992; Richardson, 1991) and there is also some experimental evidence supporting the point (DeWitt, Richardson, and Warner, 1997).

The possibility that admissibility decisions would be affected by such extraneous factors needs further research, of course, but even the possibility of such a development should give all involved with the legal system pause for thought.

 

REFERENCES

American Psychiatric Association. (1980). The Diagnostic and Statistical Manual. 3rd edition revised. Washington, D.C.: American Psychiatric Association.

Anthony, D. (1990). Religious movements and brainwashing litigation: Evaluating key testimony. In T. Robbins and D. Anthony (Eds.), In Gods We Trust, 2nd ed. (pp. 295-344). New Brunswick, NJ: Transaction Books.

Anthony, D. and Robbins, T. (1992). Law, social science, and the 'brainwashing' exception in the First Amendment. Behavioral Sciences & the Law, 10, 5-30.

Anthony, D. and Robbins, T. (1995). Negligence, coercion, and the protection of religious belief. Journal of Church and State, 37, 509-536.

Anthony, D., Robbins, T. and McCarthy, J. (1980). Legitimating repression. Society, 17, 39-42.

Barker, E. (1981). Who'd be a Moonie? A comparative study of those who join the Unification Church in Britain. In B. Wilson (Ed.), The social impact of new religious movements. New York: Rose of Sharon Press.

Barker, E. (1989). New religious movements. London: HMSO.

Beckford, J. (1978). Accounting for conversion. British Journal of Sociology, 29, 249-262.

Beckford, J. (1985). Cult controversies. London: Tavistock.

Bird, F. and Reimer, B. (1982). Participation rates in new religious movements and para-religious movements. Journal for the Scientific Study of Religion, 21, 1-14.

Bohn, T. and Guttman, J. (1989). The civil liberties of religious minorities. In M. Galanter (Ed.), Cults and new religious movements (pp. 211-238). Washington, D.C.: American Psychiatric Association.

Bromley, D. (1983). Conservatorships and deprogramming: Legal and political prospects. In D. Bromley and J. Richardson (Eds.), The brainwashing/deprogramming controversy (pp. 267-293). New York: Edwin Mellen.

Bromley, D. and Breschel, E. (1992). General population and institutional elite support for control of new religious movements: Evidence from national survey data. Behavioral Sciences & the Law, 10, 39-52.

Bromley, D. and Richardson, J. (1983). The brainwashing/deprogramming controversy. Toronto: Edwin Mellen Press.

Bromley, D. and Robbins, T. (1993). The role of government in regulating new and unconventional religions. In J. Wood and D. Davis (Eds.), The Role of Government in Monitoring and Regulating Religion in Public Life. Waco, TX: Dawson Institute of Church-State Studies, Baylor University.

Clark, J. (1978). Problems in referral of cult members. Journal of the National Association of Private Psychiatric Hospitals, 9(4), 19-21.

Clark, J. (1979). Cults. Journal of the American Medical Association, 242, 279-281.

Delgado, R. (1977). Religious totalism. Southern California Law Review, 51, 1-99.

Delgado, R. (1979). Religious totalism as slavery. New York University Review of Law and Social Change, 4, 51-68.

DeWitt, J., Richardson, J., and Warner, L. (1997). Novel scientific evidence in controversial cases: A social psychological examination. Law and Psychology Review, 21, 1-26.

Dillon, J. and Richardson, J. (1994). The 'cult' concept: A politics of representation analysis. SYZYGY: Journal of Alternative Religion and Culture, 3, 185-197.

Durham, C. (1994). Treatment of religious minorities in the United States. In European Consortium for Church and State Research, The Legal Status of Religious Minorities in the Countries of the European Union (pp. 323-379). Thessaloniki: Sakkoulas Publications.

Faust, D. and Ziskin, J. (1988). The expert witness in psychology and psychiatry. Science, 241, 31-35.

Flynn, F. (1987). Criminalizing conversion: The legislative assault on new religions. In J. Day and W. Laufer (Eds.), Crime, values, and religion (pp. 153-191). Norwood, NJ: Ablex Publishing.

Fort, J. (1985). What is 'brainwashing,' and who says so? In B. Kilbourne (Ed.), Scientific Research and New Religions: Divergent Perspectives (pp. 57-63). San Francisco: American Association for the Advancement of Science.

Freckleton, I. (1992). Battered woman syndrome. Alternative Law Journal, 17, 39-41.

Freckleton, I. (1994). When plight makes right: The forensic abuse syndrome. Criminal Law Journal, 18, 29-49.

Galanter, M. (1978). The 'relief effect:' A sociobiological model of neurotic distress and large group therapy. American Journal of Psychiatry, 135, 588-591.

Galanter, M. (1980). Psychological induction into the large-group: Findings from a modern religious sect. American Journal of Psychiatry, 137, 1574-1579.

Galanter, M. (1989). Cults and new religious movements. Washington, D.C.: American Psychiatric Association.

Galanter, M. and Buckley, P. (1978). Evangelical religion and meditation: Psychotherapeutic effects. Journal of Nervous and Mental Disease, 166, 685-691.

Galanter, M., Rabkin, R., Rabkin, F. and Deutsch, A. (1979). The 'Moonies:' A psychological study of conversion and membership in a contemporary religious sect. American Journal of Psychiatry, 136, 165-169.

Gatowski, S., Dobbin, S., Richardson, J., Nowlin, C., and Ginsburg, G. (1995). Diffusion of scientific evidence: A comparative analysis of admissibility standards in Australia, Canada, England, and the United States. Expert Evidence, 4, 86-92.

Gatowski, S., Dobbin, S., Richardson, J., and Ginsburg, G. (1997). Globalization of behavioral science evidence about battered women: A theory of production and diffusion. Behavioral Sciences & the Law, forthcoming.

Guttman, J. (1985). The legislative assault on new religions. In T. Robbins, W. Shepherd, and J. McBride (Eds.), Cults, culture, and the law. Chico, CA: Scholars Press.

Kassin, S. and Wrightsman, L. (1988). The American jury on trial.

Kelley, D. (1983). Deprogramming and religious liberty. In D. Bromley and J. Richardson (Eds.) The brainwashing/deprogramming controversy (pp.309-318). New York: Edwin Mellen Press.

Kilbourne, B. and Richardson, J. (1981). Cults versus families: A case of misattribution of cause? Marriage and Family Review, 4, 81-100.

Kilbourne, B. and Richardson, J. (1984). Psychotherapy and new religions in a pluralistic society. American Psychologist, 39, 237-251.

Kilbourne, B. and Richardson, J. (1985). Social experimentation: Self process or social role? International Journal of Social Psychiatry, 31, 13-22.

Kilbourne, B. and Richardson, J. (1986). The communalization of religious experience in contemporary religious groups. Journal of Community Psychology, 14, 206-212.

Kilbourne, B. and Richardson, J. (1988). A social psychological analysis of healing. Journal of Integrative and Eclectic Psychotherapy, 7, 20-34.

Kuner, W. (1983). New religions and mental health. In E. Barker (Ed.), Of gods and men: New religious movements in the West (pp. 255-263). Macon, GA: Mercer University Press.

Kirk, S. and Kutchins, H. (1992). The selling of DSM: The rhetoric of science in psychiatry. New York: Aldine de Gruyter.

LeMoult, J. (1983). Deprogramming members of religious sects. In D. Bromley and J. Richardson (Eds.), The brainwashing/deprogramming controversy (pp. 234-257). New York: Edwin Mellen.

Moore, T. (1997). Scientific consensus & expert testimony: Lessons from the Judas Priest trial. American Psychology-Law News, 17, 3-14.

Muffler, J., Langrod, J., Richardson, J., and Ruiz, P. (1997). Religion. In J. Lowinson, P. Ruiz, R. Millman, and J. Langrod (Eds.), Substance abuse: A comprehensive textbook, 3rd Edition (pp. 492-499). Baltimore: Williams & Wilkins.

Odgers, S., and Richardson, J. (1995). Keeping bad science out of the courtroom: Changes in American and Australian expert evidence law. University of New South Wales Law Journal, 18, 108-129.

Popper, K. (1952). The Open Society and Its Enemies. Princeton, NJ: Princeton University Press.

Richardson, J. (1985a). Active versus passive converts: Paradigm conflict in conversion/recruitment research. Journal for the Scientific Study of Religion, 24, 163-179.

Richardson, J. (1985b). Psychological and psychiatric studies of new religions. In L. Brown (Ed.), Advances in psychology of religion (pp. 209-223). New York: Pergamon Press.

Richardson, J. (1989). The psychology of induction. In M. Galanter (Ed.), Cults and new religious movements (pp. 211-238). Washington, D.C.: American Psychiatric Association.

Richardson, J. (1991a). Cult/brainwashing cases and the freedom of religion. Journal of Church and State, 33, 55-74.

Richardson, J. (1992a). Mental health of cult consumers: Legal and scientific controversy. In J. Schumaker (Ed.), Religion and mental health (pp. 233-244). New York: Oxford University Press.

Richardson, J. (1992b). Public opinion and the tax evasion trial of Reverend Moon. Behavioral Sciences & Law, 10, 53-64.

Richardson, J. (1993a). Definitions of cult: From sociological- technical to popular negative. Review of Religious Research, 34, 348-356.

Richardson, J. (1993b). Religiosity as deviance: Use and misuse of the DSM with participants in new religions. Deviant Behavior, 14, 1-21.

Richardson, J. (1993c). A social psychological critique of "brainwashing" claims about recruitment to new religions. In J. Hadden and D. Bromley (Eds.), The handbook of cults and sects in America (pp. 75-97). Greenwich, CT: JAI Press.

Richardson, J. (1994). Dramatic changes in American expert evidence law: From Frye to Daubert. The Judicial Review: Journal of the Judicial Commission of New South Wales, 2, 13-36.

Richardson, J. (1995a). Clinical and personality assessment of participants in new religions. International Journal for the Psychology of Religion, 5, 145-170.

Richardson, J. (1995b). Legal status of minority religions in the United States. Social Compass, 42, 249-264.

Richardson, J. (1995c). Manufacturing consent about Koresh: A critical analysis of the role of the media in the Waco tragedy. In S. Wright (Ed.), Armageddon in Waco (pp. 153-176). Chicago: University of Chicago Press.

Richardson, J. (1995d). Minority religions ("cults") and the law: Comparisons of the United States, Europe, and Australia. University of Queensland Law Journal, 18, 183-207.

Richardson, J. (1995e). Minority religions, religious freedom, and the new pan-European political and judicial institutions. Journal of Church and State, 37, 39-59.

Richardson, J. (1995f). Two steps forward, one back: Psychiatry, psychology, and the new religions. International Journal for the Psychology of Religion, 5, 181-185.

Richardson, J. (1996a). "Brainwashing" claims and minority religions outside the United States: Cultural diffusion of a questionable concept in the legal arena. Brigham Young University Law Review, 1996, 873-904.

Richardson, J. (1996b). Journalistic bias toward new religious movements in Australia. Journal of Contemporary Religion, 11, 289-302.

Richardson, J. (1997). Sociology, "brainwashing" claims about new religions, and freedom of religion. In P. Jenkins and S. Kroll-Smith (Eds.), Sociology on trial: Sociologists as expert witnesses (pp. 115-134). New York: Praeger.

Richardson, J. and Kilbourne, B. (1983). Classical and contemporary applications of brainwashing theories: A comparison and critique. In D. Bromley and J. Richardson (Eds.), The brainwashing/deprogramming controversy (pp. 29-45). New York: Edwin Mellen.

Richardson, J., Ginsburg, G., Gatowski, S., and Dobbin, S. (1995). Problems Applying Daubert to psychological syndrome evidence. Judicature, 79(1), 10-16.

Richardson, J. and Stewart, M. (1997). Medicalization of participation in new religions: A test and extension of the Conrad and Schneider model. Unpublished paper.

Richardson, J., Stewart, M. and Simmonds, R. (1979). Organized miracles. New Brunswick, NJ: Transaction Books.

Richardson, J., van der Lans, J., and Derks, F. (1986). Leaving and labeling: Voluntary and coerced disaffiliation from religious social movements. In K. Lang and G. Lang (Eds.), Research in Social Movements, Conflicts, and Change, Vol. 9. Greenwich, CT: JAI Press.

Richardson, J. and van Driel, B. (1997). Journalist attitudes toward new religions. Forthcoming, Review of Religious Research.

Robbins, T. and Anthony, D. (1982). Deprogramming, brainwashing, and the medicalization of deviant religious groups. Social Problems, 29, 283-297.

Robbins, T., Anthony, D., and Curtis, T. (1975). Youth culture religious movements: Evaluating the integrative hypothesis. Sociological Quarterly, 16, 48-64.

Robbins, T. and Bromley, D. (1992). Social experimentation and the significance of American new religions. In M. Lynn and D. Moberg (Eds.), Research in the Social Scientific Study of Religion, Vol. 4 (pp. 1-28). Greenwich, CT: JAI Press.

Roberts, P. (1995). Admissibility of expert evidence: lessons from America. Expert Evidence, 4, 93-100.

Rosen, A. and Nordquist, T. (1980). Ego developmental level and values in a yogic community. Journal of Personality and Social Psychology, 39, 1152-1160.

Saks, M. (1988). Enhancing and restraining accuracy in adjudication. Law and Contemporary Problems, 51, 243- .

Saliba, J. (1993). The new religions and mental health. In J. Hadden and D. Bromley (Eds.), The handbook of cults and sects in America (pp. 99-113). Greenwich, CT: JAI Press.

Schein, E., Schneier, I., and Barker, C. (1961). Coercive Persuasion. New York: Norton.

Shapiro, E. (1977). Destructive cultism. American Family Physician, 15(2), 80-83.

Shapiro, R. (1983). Of robots, persons, and the protection of religious beliefs. Southern California Law Review, 56, 1277-1318.

Sherwood, C. (1991). Inquisition: The persecution and prosecution of the Reverend Sun Myung Moon. Washington, D.C.: Regnery Gateway.

Shupe, A. and Bromley, D. (l980). The new vigilantes. Beverly Hills, CA: Sage.

Singer, M. (1978). Therapy with ex-cult members. Journal of the National Association of Private Psychiatric Hospitals, 9(4), 14-18.

Singer, M. (1979). Coming out of the cults. Psychology Today, 12, 72-82.

Singer, M. (1983). Transcript of testimony in George v. ISKCON. Los Angeles District Court; 248 pages.

Singer, M. (1994). Cults in our Midst. San Francisco, CA: Jossey-Bass.

Straus, R. (1979). Religious conversion as a personal and collective accomplishment. Sociological Analysis, 40, 158-165.

Taslimi, C., Hood, R., and Watson, P. (1991). Assessment of former members of Shiloh: The Adjective Checklist 17 years later. Journal for the Scientific Study of Religion, 30, 306-311.

Underwager, R. and Wakefield, H. (1993). A paradigm shift for expert witnesses. Issues in Child Sex Abuse Accusations, 5, 156-167.

van Driel, B. and Richardson, J. (1988a). The categorization of new religious movements in American print media. Sociological Analysis, 49, 171-183.

van Driel, B. and Richardson, J. (1988b). Print media treatment of new religious movements. Journal of Communication, 38, 377-391.

Wright, S. (1995). Armageddon in Waco. Chicago: University of Chicago Press.

Notes

1. A major reason for criticism of the Frye rule that is germane here is that general acceptance does not always equate to scientific reliability. As Saks (1988) has noted, some forms of forensic evidence that are readily accepted in courts have little scientific research to support them, while other, more scientifically based techniques may not be accepted.

2. Not all the justices shared this view, however, as Chief Justice Rehnquist filed a dissent (joined by Justice Stevens) to this part of the opinion, which said (at 4811):

I defer to no one in my confidence in federal judges; but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its "falsifiability," and I suspect some of them will be, too. I do not doubt that Rule 702 confides to the judge some gatekeeping responsibility in deciding questions of the admissibility of proffered expert testimony. But I do not think it imposes on them either the obligation or the authority to become amateur scientists in order to perform that role.

3. The first author has been asked to make presentations on implications of Daubert to appeal court judges in Canada and in four of the six states in Australia, as well as contribute to scholarly publications on the topic.

4. The term syndrome implies that the condition being so described has an actual medical basis; thus being able successfully to designate some characteristic or behavior as a syndrome has value, both in terms of social control and economically.

5. Similar comments might be made about psychological profiles, such as the airline hijacker profile, the drug courier profile, etc., but we will not develop this idea herein.

6. One particular clinical psychologist, Dr. Margaret Singer, claims to have testified in over 40 such cases (Richardson, 1991).

7. A necessity defense basically is a claim that if a law is not broken, then a greater harm will result. The classic example often used in law school classes is that it is all right to "break and enter" a burning building to rescue people in the building. Apparently the analogous logic is that a greater harm is caused by leaving an adult in a new religious group than is caused by forcibly removing them for purposes of deprogramming. Such defenses represent a way to avoid the constitutional provisions against evaluation of religious beliefs and practices, if such defenses are allowed, simply because triers of fact are required to evaluate the extent of the harm that would be done by not removing the person, and that assessment seems to call for learning about the beliefs and practices of the group in question.

8. The amicus briefs presented data-based research conclusions refuting the brainwashing theories being offered, and are available from the first author upon request. See Anthony (1990) for the most thorough critical analysis of brainwashing based testimony, and also see Richardson and Kilbourne (1983) for a comparison of classical and contemporary uses of such theories as ideological weapons.

9. Dr. Margaret Singer was the expert in this case, as well as one of the two experts disallowed from offering brainwashing based testimony in the Fishman case.

10. A similar footnote appears in Odgers and Richardson, 1995: 119, n. 51).

11. See Underwager and Wakefield (1993) for a discussion of this and the previous example, and Richardson et al. (1995) for related discussion.

12. Such evidence might be admitted on other grounds, of course, but not as scientifically based.

13. Getting agreement on what constitutes brainwashing would be quite difficult, however, since the term itself is a popular and not a scientific term (see Richardson and Kilbourne, 1983; Anthony 1990; Richardson, 1991, 1996).

14. For instance, Rosen and Nordquist (1980) found that participants in Ananda Cooperative Village, a Hindu-based communal group in Northern California, were more compassionate than were normal religious people in the U.S.

15. The description is also overtly anti-religious, of course, and efforts to promote such views have been very disquieting to a number of religionists and civil libertarians (see Kelley, 1983; Guttman, 1985; Bohn and Guttman, 1989 for examples).

16. Shapiro was ordered released by the court, which did not accept Clark's expert advice, and Clark was eventually reprimanded by his state medical board for appearing to equate participation in a religious group with mental illness (Richardson, 1992).

17. It is worth noting that legal and constitutional arguments also played a major role (Anthony and Robbins, 1992), but it seems clear that evidence from researchers not agreeing with the brainwashing theory was also important.

