Simply put, just because a computer scores test results and generates a narrative report does not mean that the information is accurate or scientific. Any test results must be evaluated in the full context of the subject’s life history and information from other reliable sources. Any self-report instrument is inherently subjective and may be grossly misleading in the absence of independent clinical evaluation and independent verification of the information presented by a subject. The MCMI-III has been challenged in peer-reviewed journal articles, with the conclusion that it fails to meet the standard of scientific reliability under the Daubert standard. (R. Rogers, R. Salekin and K. Sewell, “Validation of the Millon Clinical Multiaxial Inventory for Axis II Disorders: Does it Meet the Daubert Standard?,” Law and Human Behavior, Vol. 23, No. 4 (1999), at p. 438.) In fact, several evaluations of the MCMI-III have concluded that it fails to meet the Daubert standard, have recommended that it not be used in a forensic setting, have expressed “serious concerns” about its “diagnostic accuracy and construct validity,” and have found that it provides a “wrong diagnosis in more than 4 out of 5 cases.” Reviewers of this flawed instrument rhetorically ask, “Why would psychologists ever wish to use a test in a forensic setting that they cannot adequately justify?” The MCMI-III test generates computerized conclusions. The results of this test alone should not be relied upon as “objective” and accurate, and reliance on the test alone may call into question the reliability of any conclusions drawn from the instrument.
[C]linicians who use the interpretive report should not be lulled over time into uncritical acceptance; they should routinely compare the statements generated against independent clinical evidence. Otherwise, test reports may take on a kind of Barnum effect, causing readers to conclude that such reports are accurate, not because they are, but because they present attributes that are so general or common that they apply to almost everyone. Moreover, the mysterious and seemingly quantitative and exact power of computers has no doubt imbued the reports, at least for some, with an undue measure of scientific merit and clinical acumen.
In contrast to, and in response to, the Rogers/Salekin/Sewell journal review of the MCMI-III, subsequent studies have argued for the validity and reliability of the MCMI-III instrument. If contesting the MCMI-III, counsel should search all blogs and articles that support the validity of the MCMI-III as a valid “objective” testing instrument and determine whether the supporting articles have any relationship to the publisher of the MCMI-III instrument, as the authors of any such articles would have an economic interest in confirming the validity of the test. It is therefore critical that when confronted with the results of an MCMI-III report, as in challenging any so-called “objective” psychological test, counsel investigate the purpose of the instrument, the manner in which it is administered, and how it is ultimately scored. First, counsel should obtain a copy of the training and/or scoring materials related to the instrument. Many training and scoring manuals set forth the limitations of the instrument and caution against its use in forensic settings. It is surprising how often tests are used and relied upon in contexts that are inappropriate. Second, counsel should research the history of the use of the instrument and the peer-reviewed journal articles evaluating it. It is remarkable that in some cases the creator or author of the instrument itself warns against misuse, or journal articles report “false positives,” or even conclude that the instrument fails to satisfy accepted legal standards of scientific reliability. Third, counsel should obtain the raw data upon which the expert scored the results of the test and have the test rescored (the rescoring to be done by counsel’s own qualified expert).
Remarkably, whether through erroneous administration, simple error, or intentional skewing, the scoring of the raw data itself may be incorrect. Fourth, the predicate population used to create the instrument may reflect a demographic population that is not consistent with the subject being tested. The origin, history, and purpose of the test should be reviewed to determine whether its application is appropriate to the subject or the purpose for which it is employed. Fifth, always consult the manuals or journal articles to determine whether there have been efforts at statistical validation. Even tests in common use may acknowledge validity rates far below 100%. An acknowledged validity rate of 60 to 70% may be acceptable for consideration in a clinical evaluation for treatment, but how reliable is the test when it carries an acknowledged error rate of 30 to 40% in a forensic context? If the potential for being wrong is 30%, just how credible is it to rely on that test in making a legal determination? Last, be aware of the context of the subject and the purpose of the forensic examination. Any self-report instrument is by its very nature potentially inaccurate. Assess whether the factual circumstances of the subject, and the subject’s knowledge of the purpose for which the test will be used, have the capacity to skew the results and render the resulting scores unreliable. All of these areas of inquiry may provide valuable fodder for cross-examination or for motions to exclude or strike the test results. The results of all “objective” psychological tests should be closely evaluated and challenged. It is important to understand the limitations of such testing both when “objective” test results are used against your clients and when they are used to advance the interests of your clients.
When your expert(s) rely upon such “objective” tests, it is critical that in their preparation they understand, accept, and can credibly explain any limitations of the instruments upon which they rely.
Robert J. Craig, “Overview and Current Status of the Millon Clinical Multiaxial Inventory,” Journal of Personality Assessment, 72(3), 390-406 (1999).
Id. The MCMI-III consisted of 175 true/false questions that may be completed in 20 to 30 minutes. The MCMI-IV now consists of 195 true/false questions that may be completed in 25 to 35 minutes.
Id. It should be noted that in the fall of 2015 the MCMI-IV was released, and the efficacy of that version has yet to be fully reviewed in peer-reviewed journals. Presumably, the recent revision and release of the MCMI-IV occurred, in part, due to the need to reconcile the instrument with the latest edition of the Diagnostic and Statistical Manual of Mental Disorders, the DSM-5™.
Theodore Millon, PhD, DSc, Roger Davis, PhD, and Carrie Millon, PhD, MCMI-III Manual, Second Edition (1997), at p. 6.
Ibid.
Ibid.
Id. at 7.
Ibid.
Daubert v. Merrell Dow Pharmaceuticals (92-102), 509 U.S. 579 (1993).
R. Rogers, R. Salekin and K. Sewell, “Validation of the Millon Clinical Multiaxial Inventory for Axis II Disorders: Does it Meet the Daubert Standard?,” Law and Human Behavior, Vol. 23, No. 4 (1999), at p. 438.
Id. at 429.
Id. at 430 (additional citations to studies omitted).
Id. at 429.