<p>The rising reliance on testing in American education and for licensure and certification has been accompanied by an escalation in cheating on tests at all levels. Edited by two of the foremost experts on the subject, the <i>Handbook of Quantitative Methods for Detecting Cheating on Tests</i> offers a comprehensive compendium of increasingly sophisticated data forensics used to investigate whether cheating has occurred. Written for practitioners, testing professionals, and scholars in testing, measurement, and assessment, this volume builds on the claim that statistical evidence often requires less of an inferential leap to conclude that cheating has taken place than do other, more common sources of evidence.</p><p>This handbook is organized into sections that roughly correspond to the kinds of threats to fair testing represented by different forms of cheating. In Section I, the editors outline the fundamentals and significance of cheating and introduce the common datasets to which the chapter authors' cheating-detection methods were applied. In Section II, contributors describe methods for identifying cheating in terms of improbable similarity in test responses, preknowledge and compromised test content, and test tampering. Chapters in Section III concentrate on the policy and practical implications of using quantitative detection methods. The final section offers a synthesis across the methodological chapters as well as an overall summary, conclusions, and next steps for the field.</p><p>Editors’ Introduction</p><p><strong>SECTION I – INTRODUCTION</strong></p><p><strong>Chapter 1</strong> – Exploring Cheating on Tests: The Context, the Concern, and the Challenges</p><p>Gregory J. Cizek and James A.
Wollack</p><p><strong>SECTION II – METHODOLOGIES FOR IDENTIFYING CHEATING ON TESTS</strong></p><p><strong>Section IIa – Detecting Similarity, Answer Copying, and Aberrance</strong></p><p><strong>Chapter 2</strong> – Similarity, Answer Copying, and Aberrance: Understanding the Status Quo</p><p>Cengiz Zopluoglu</p><p><strong>Chapter 3</strong> – Detecting Potential Collusion Among Individual Examinees Using Similarity Analysis</p><p>Dennis D. Maynes</p><p><strong>Chapter 4</strong> – Identifying and Investigating Aberrant Responses Using Psychometrics-Based and Machine Learning-Based Approaches</p><p>Doyoung Kim, Ada Woo, and Phil Dickison</p><p><strong>Section IIb – Detecting Preknowledge and Item Compromise</strong></p><p><strong>Chapter 5</strong> – Detecting Preknowledge and Item Compromise: Understanding the Status Quo</p><p>Carol A. Eckerly</p><p><strong>Chapter 6</strong> – Detection of Test Collusion Using Cluster Analysis</p><p>James A. Wollack and Dennis D. Maynes</p><p><strong>Chapter 7</strong> – Detecting Candidate Preknowledge and Compromised Content Using Differential Person and Item Functioning</p><p>Lisa S. O’Leary and Russell W. Smith</p><p><strong>Chapter 8</strong> – Identification of Item Preknowledge by the Methods of Information Theory and Combinatorial Optimization</p><p>Dmitry Belov</p><p><strong>Chapter 9</strong> – Using Response Time Data to Detect Compromised Items and/or People</p><p>Keith A. Boughton, Jessalyn Smith, and Hao Ren</p><p><strong>Section IIc – Detecting Unusual Gain Scores and Test Tampering</strong></p><p><strong>Chapter 10</strong> – Detecting Erasures and Unusual Gain Scores: Understanding the Status Quo</p><p>Scott Bishop and Karla Egan</p><p><strong>Chapter 11</strong> – Detecting Test Tampering at the Group Level</p><p>James A. Wollack and Carol A. Eckerly</p><p><strong>Chapter 12</strong> – A Bayesian Hierarchical Model for Detecting Aberrant Growth at the Group Level</p><p>William P.
Skorupski, Joe Fitzpatrick, and Karla Egan</p><p><strong>Chapter 13</strong> – Using Nonlinear Regression to Identify Unusual Performance Level Classification Rates</p><p>J. Michael Clark, William P. Skorupski, and Stephen Murphy</p><p><strong>Chapter 14</strong> – Detecting Unexpected Changes in Pass Rates: A Comparison of Two Statistical Approaches</p><p>Matthew Gaertner and Yuanyuan (Malena) McBride</p><p><strong>SECTION III – THEORY, PRACTICE, AND THE FUTURE OF QUANTITATIVE DETECTION METHODS</strong></p><p><strong>Chapter 15</strong> – Security Vulnerabilities Facing Next Generation Accountability Testing</p><p>Joseph A. Martineau, Daniel Jurich, Jeffrey B. Hauger, and Kristen Huff</p><p><strong>Chapter 16</strong> – Establishing Baseline Data for Incidents of Misconduct in the NextGen Assessment Environment</p><p>Deborah J. Harris and Chi-Yu Huang</p><p><strong>Chapter 17</strong> – Visual Displays of Test Fraud Data</p><p>Brett P. Foley</p><p><strong>Chapter 18</strong> – The Case for Bayesian Methods When Investigating Test Fraud</p><p>William P. Skorupski and Howard Wainer</p><p><strong>Chapter 19</strong> – When Numbers Are Not Enough: Collection and Use of Collateral Evidence to Assess the Ethics and Professionalism of Examinees Suspected of Test Fraud</p><p>Marc J. Weinstein</p><p><strong>SECTION IV – CONCLUSIONS</strong></p><p><strong>Chapter 20</strong> – What Have We Learned?</p><p>Lorin Mueller, Yu Zhang, and Steve Ferrara</p><p><strong>Chapter 21</strong> – The Future of Quantitative Methods for Detecting Cheating: Conclusions, Cautions, and Recommendations</p><p>James A. Wollack and Gregory J. Cizek</p>