The Office of Language Assessment has adopted the argument-based approach to validation (Kane, 2006, 2013; Chapelle, Enright, & Jamieson, 2008, 2010) as a framework for evaluating the interpretations and uses of ARCA scores. As a transparent research framework, this approach will guide the testing program in “…prioritizing different lines of evidence, synthesizing them to evaluate the strength of a validity argument, and gauging the progress of the validation efforts” (Xi, 2008, p. 18), and thus offers a practical methodological guideline for constructing a validity argument (Chapelle et al., 2010) for the ARCA.
Following the argument-based approach, the Office of Language Assessment has outlined the claims based on ARCA scores as an argument that specifies the inferences and supporting assumptions needed to validate the intended interpretations and uses of this assessment. The interpretation and use argument (Kane, 2013), which will guide the research and development projects undertaken to build a validity argument for the ARCA, is presented below.
1. Domain Analysis of Reading for Academic and Research Purposes
- To identify the key features of the ability to conduct academic research by reading in a secondary research language.
- To design reading assessment tasks that are relevant to, and representative of, academic research conducted by reading in a secondary research language in terms of the knowledge, skills, and abilities involved.
2. Establishing Rating Accuracy, Consistency, and Procedures
- To develop rating rubrics that are accurate and relevant for evaluating the ability to conduct academic research by reading in a secondary research language.
- To investigate raters’ use and understanding of the scoring rubrics.
- To investigate the statistical characteristics of the ARCA and its fit for criterion-referenced decisions.